Irf tool #2315
ctapipe/tools/make_irf.py (outdated):

```python
    description = "Tool to create IRF files in GAD format"

    def make_derived_columns(self, events, spectrum, target_spectrum):
        events["pointing_az"] = 0 * u.deg
```
This can be obtained nowadays from the observation block information (and is loaded if you pass `load_observation_info` to `TableLoader`).
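A minimal sketch of that suggestion, assuming the `TableLoader` traits available around the time of this PR (the input path is a placeholder):

```python
from ctapipe.io import TableLoader

# load DL2 events together with the observation-block columns
loader = TableLoader(
    "gamma_dl2.h5",
    load_dl2=True,
    load_observation_info=True,  # merges observation-block columns
)
events = loader.read_subarray_events()
# The pointing can then be read from the merged observation-block columns
# instead of being hard-coded (the exact column names depend on the
# ctapipe version).
```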
For the event weighting, we might also want the option to join in the simulation config as well, since in principle the simulated spectrum can change between runs (even if currently it does not), for example if extra simulations were run to fill in statistics at high energies only. That feature should be a separate PR, though, and may not be necessary if in the IRF tool we use the simulated particle distribution histograms. A per-run weighting could look like the sketch below.
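A hedged sketch of such per-run weighting with pyirf, where `sim_infos` (a hypothetical mapping of `obs_id` to `SimulatedEventsInfo`) and `events` are placeholder names:

```python
import astropy.units as u
import numpy as np
from pyirf.spectral import CRAB_HEGRA, PowerLaw, calculate_event_weights

# build a simulated spectrum per run, so productions with different
# configs would be weighted correctly (sim_infos is hypothetical)
obstime = 50 * u.hour
weights = np.zeros(len(events))
for obs_id in np.unique(events["obs_id"]):
    mask = events["obs_id"] == obs_id
    simulated_spectrum = PowerLaw.from_simulation(sim_infos[obs_id], obstime)
    weights[mask] = calculate_event_weights(
        events["true_energy"][mask], CRAB_HEGRA, simulated_spectrum
    )
events["weight"] = weights
```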
ctapipe/tools/make_irf.py (outdated):

```python
    )

    # scale relative sensitivity by Crab flux to get the flux sensitivity
    for s in (sens2, self.sensitivity):
```
Not sure why you also scale `sens2`, which is never used or returned. What is the difference between `self.sensitivity`, computed above, and `sens2`, which is computed during the optimization? Are they the same thing?
This is a legacy of whatever script I converted into the tool; I've just added saving `sens2` to the output to check whether there is any difference at all. For example, it might be that historically there was some difference, but over time pyirf was refactored to use the same sensitivity function everywhere, so that now there is none. A quick check could look like the sketch below.
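A sanity check along those lines, assuming both tables come out of pyirf's `calculate_sensitivity` and share the reco-energy binning (`sens2` and `sensitivity` stand in for the two tables in the tool):

```python
import numpy as np

# compare the relative-sensitivity column bin by bin; NaNs mark bins
# where no valid cut was found, so treat those as equal
same = np.allclose(
    sens2["relative_sensitivity"],
    sensitivity["relative_sensitivity"],
    equal_nan=True,
)
print("sens2 matches final sensitivity:", same)
```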
Which script are you referring to?
The current example script in pyirf only computes the sensitivity once, using `optimize_gh_cut`; see the sketch below.
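For reference, roughly how the example does it. This sketch assumes the `optimize_gh_cut` signature of that pyirf version; `signal`, `background`, `theta_cuts`, and `reco_energy_bins` are placeholders from the surrounding tool:

```python
import operator

import astropy.units as u
import numpy as np
from pyirf.cut_optimization import optimize_gh_cut

# scan g/h cut efficiencies and keep the cut with the best sensitivity;
# the returned table is the only sensitivity computed during optimization
sensitivity, gh_cuts = optimize_gh_cut(
    signal,
    background,
    reco_energy_bins=reco_energy_bins,
    gh_cut_efficiencies=np.arange(0.1, 1.0, 0.01),
    op=operator.ge,
    theta_cuts=theta_cuts,
    alpha=0.2,
    fov_offset_max=1 * u.deg,
)
```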
ctapipe/tools/make_irf.py:

```python
        self.signal[self.signal["selected"]], bins=self.reco_energy_bins
    )

    background_hist = estimate_background(
```
What is the difference between this background hist and the one generated in `finish()`? One uses `estimate_background` and the other uses `background_2d`. Are these the same computation? I worry that things are computed multiple times, and it's not clear which ones end up in the output versus being used in the sensitivity.
In fact, I don't find `pyirf.sensitivity.estimate_background` in the pyirf documentation, so could it be incorrect?
That function is what pyirf uses internally, so this is probably a problem with the docs.

To be honest, I'm not sure what the difference is. It looks like `estimate_background` only does one offset bin, but also rescales from the size of the off region to the on region. Meanwhile, `background_2d` calculates background rates in each offset bin, but does not rescale to the on-region size...
`pyirf.sensitivity.estimate_background` estimates the number of background counts (n_off) for the given observation time and on-region size, in the binning used for the sensitivity computation. `pyirf.irf.background.background_2d` is the background IRF component as defined by GADF. The sketch below contrasts the two.
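A sketch contrasting the two calls, assuming the pyirf signatures at the time of this PR (`background_events`, the bin edges, and `theta_cuts` are placeholders):

```python
import astropy.units as u
from pyirf.irf import background_2d
from pyirf.sensitivity import estimate_background

# n_off counts per reco-energy bin, rescaled from the off- to the
# on-region size via alpha; this feeds the sensitivity computation
background_hist = estimate_background(
    background_events,
    reco_energy_bins=reco_energy_bins,
    theta_cuts=theta_cuts,
    alpha=0.2,
    fov_offset_min=0 * u.deg,
    fov_offset_max=1 * u.deg,
)

# GADF BKG_2D component: background rate per reco energy and FoV offset,
# normalised to solid angle and observation time, with no on/off scaling
bkg_2d = background_2d(
    background_events,
    reco_energy_bins,
    fov_offset_bins=fov_offset_bins,
    t_obs=50 * u.hour,
)
```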
> In fact, I don't find pyirf.sensitivity.estimate_background in the pyirf documentation, so could it be incorrect?

It was missing in the module `__all__`; that is fixed now in cta-observatory/pyirf#256.
ctapipe/tools/make_irf.py:

```python
        self.fov_offset_bins,
    )
)
```
For debugging and benchmarking, it might be nice to add a few more histograms to the output files (all vs energy and offset); see the sketch after this list:
- N_on and N_off counts (maybe even N_excess and significance), which can help to show where the limiting factors of background and minimum counts are taken into account.
- Background rate per particle species (the same as the `background_2d` already there, but for electrons and protons separately).
- Effective area for protons and electrons separately.
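A hedged sketch of how those extra HDUs could be filled, assuming separate selected event tables and a `SimulatedEventsInfo` per species (all names below are placeholders):

```python
import astropy.units as u
from pyirf.binning import create_histogram_table
from pyirf.irf import background_2d, effective_area_per_energy_and_fov

extra_hists = {}
for name, events, sim_info in [
    ("PROTONS", proton_events, proton_sim_info),
    ("ELECTRONS", electron_events, electron_sim_info),
]:
    selected = events[events["selected"]]
    # raw counts vs reco energy, to see where minimum-count limits bite
    extra_hists[f"COUNTS_{name}"] = create_histogram_table(
        selected, reco_energy_bins, key="reco_energy"
    )
    # per-species background rate, same layout as the combined BACKGROUND HDU
    extra_hists[f"BACKGROUND_{name}"] = background_2d(
        selected, reco_energy_bins,
        fov_offset_bins=fov_offset_bins, t_obs=50 * u.hour,
    )
    # per-species effective area from the simulated particle distribution
    extra_hists[f"EFFECTIVE_AREA_{name}"] = effective_area_per_energy_and_fov(
        selected, sim_info, true_energy_bins, fov_offset_bins
    )
```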
It occurs to me that it would be very useful to add a unit test that reads the generated IRFs using gammapy. You would have to add gammapy to the test requirements in setup.cfg, but that is fine: it's not a ctapipe dependency, but it could still be a test dependency. That way you test explicitly that the output is readable, which currently it is not, due to a mismatch in the HDU bins for the background. A sketch follows below.
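A hypothetical shape for such a test; `run_irf_tool` is a placeholder for however the tool gets invoked, and the dictionary keys assume gammapy's IRF-dict naming:

```python
from gammapy.irf import load_irf_dict_from_file

def test_irf_output_readable_by_gammapy(tmp_path):
    output_path = tmp_path / "irf.fits.gz"
    run_irf_tool(output_path)  # placeholder: produce the IRF file here
    # loading parses the GADF HDUs, so a binning mismatch in the
    # background table would fail right here
    irfs = load_irf_dict_from_file(output_path)
    assert "aeff" in irfs and "bkg" in irfs
```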
Follow-up here: #2473
Here is a first draft of a tool to generate IRF files from a set of DL2 files.

It is primarily a port of the protopipe functionality, but I've tried to also incorporate changes from lstpipe and the pyirf example scripts. In practice, I think most of the differences are about the handling of offset binning; for expediency, I followed the protopipe example on that.