From e09b1519f090c78f0d9bd748c7e93d2fa7275381 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:06:04 -0400 Subject: [PATCH 01/45] Added NICER to mission list, updated list of presentations with links --- docs/intro.rst | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/docs/intro.rst b/docs/intro.rst index bef5f86b6..0c733052f 100644 --- a/docs/intro.rst +++ b/docs/intro.rst @@ -15,7 +15,7 @@ Current Capabilities Currently implemented functionality in this library comprises: -* loading event lists from fits files of a few missions (RXTE/PCA, NuSTAR/FPM, XMM-Newton/EPIC) +* loading event lists from fits files of a few missions (RXTE/PCA, NuSTAR/FPM, XMM-Newton/EPIC, NICER/XTI) * constructing light curves from event data, various operations on light curves (e.g. addition, subtraction, joining, and truncation) * Good Time Interval operations * power spectra in Leahy, rms normalization, absolute rms and no normalization @@ -60,7 +60,8 @@ Presentations Members of the Stingray team have given a number of presentations which introduce Stingray. These include: - +- `2nd Severo Ochoa School on Statistics, Data Mining, and Machine Learning (2021) `_ +- `9th Microquasar Workshop (2021) `_ - `European Week of Astronomy and Space Science (2018) `_ - `ADASS (Astronomical Data Analysis Software and Systems; meeting 2017, proceedings 2020) `_ - `AAS 16th High-Energy Astrophysics Division meeting (2017) `_ From c1b6a70bf1807311b232eeca0ba700be5108734a Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:09:25 -0400 Subject: [PATCH 02/45] Added updated link to Microquasar slides instead of 2016 slides, and removed link to 2017 project page on timelabtechnologies. 
--- docs/contributing.rst | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/contributing.rst b/docs/contributing.rst index 752d33730..aedd2b725 100644 --- a/docs/contributing.rst +++ b/docs/contributing.rst @@ -20,7 +20,6 @@ We encourage you to get involved with Stingray in any way you can! First, read through the `README `_. Then, fork the `stingray `_ and `notebooks `_ repositories (if you need a primer on GitHub and git version control, `look here `_) and work your way through the Jupyter notebook tutorials for the main modules. Once you've familiarized yourself with the basics of Stingray, go to the `Stingray issues page `_ and try to tackle one! -Other ways to get involved are outlined on the `project ideas `_ page, along with some astrophysical background/motivation. -Finally, you can read `these slides `_ from an early talk on Stingray at the Python in Astronomy 2016 conference. +Finally, you can read `these slides `_ from a talk on Stingray in 2021 at the 9th Microquasar Workshop. For organizing and coordinating the software development, we have a Slack group and a mailing list -- please use `this link `_ for Slack or send `one of us `_ an email to join. From f32c4aa4df96e386a152a7874225f432d4b9cf1c Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:21:24 -0400 Subject: [PATCH 03/45] Removed list of previous performance improvements, since it now includes the whole changelog. --- docs/history.rst | 7 ------- 1 file changed, 7 deletions(-) diff --git a/docs/history.rst b/docs/history.rst index cbc8c463d..9ef2d6949 100644 --- a/docs/history.rst +++ b/docs/history.rst @@ -4,13 +4,6 @@ History .. include:: ../CHANGELOG.rst -Performance improvements -======================== - -Version 0.2 introduced a few performance improvements when ``Lightcurve`` objects are created. -Once the user defines either the counts per bin or the count rates, the other quantity will be evaluated _lazily_, the first time it is requested. 
-Also, we introduce a new ``low_memory`` option in ``Lightcurve``: if selected, and users define e.g. ``counts``, ``countrate`` will be calculated _every_time it is requested, and will not be stored in order to free up RAM. - Previous projects merged to Stingray ==================================== From 4df7e3e259994a5fd2847fe0d34eaae0e1a6fa79 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:23:14 -0400 Subject: [PATCH 04/45] Removed the 'possibly a GUI' and replaced it with 'a GUI', and renamed 'power users' as 'experienced coders'. --- docs/index.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/index.rst b/docs/index.rst index e3a78e92a..594354763 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -14,7 +14,7 @@ There are a number of official software packages for X-ray spectral fitting (XSP Such a widely used and standard software package does not exist for X-ray timing, so until now it has mainly been the domain of custom, proprietary software. Stingray originated during the 2016 workshop `The X-ray Spectral-Timing Revolution `_: a group of X-ray astronomers and developers decided to agree on a common platform to develop a new software package. The goals were to merge existing efforts towards a timing package in Python, following the best guidelines for modern open-source programming, thereby providing the basis for developing spectral-timing analysis tools. -This software provides an easily accessible scripting interface (possibly a GUI) and an API for power users. +This software provides an easily accessible scripting interface, a GUI, and an API for experienced coders. Stingray's ultimate goal is to provide the community with a package that eases the learning curve for advanced spectral-timing techniques, with a correct statistical framework. Further spectral-timing functionality, in particularly command line scripts based on the API defined within Stingray, is available in the package `HENDRICS `_. 
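A note between patches: patch 01 above extends the list of missions whose event lists Stingray can load, and the capabilities list it touches mentions power spectra "in Leahy, rms normalization, absolute rms and no normalization". For reviewers unfamiliar with those terms, the Leahy normalization referenced there can be sketched in plain numpy on synthetic Poisson counts (an illustrative sketch only, not Stingray's implementation):

```python
import numpy as np

# Synthetic Poisson light curve standing in for binned event data;
# a real analysis would load counts from a FITS event list instead.
rng = np.random.default_rng(0)
counts = rng.poisson(100.0, size=4096)

nphot = counts.sum()
ft = np.fft.rfft(counts)[1:]           # drop the zero-frequency (DC) term
leahy = 2.0 * np.abs(ft) ** 2 / nphot  # Leahy et al. (1983) normalization

# Pure Poisson noise gives Leahy powers distributed as chi-squared with
# 2 degrees of freedom, so they should average to ~2.
mean_power = leahy.mean()
```

The fractional rms normalization mentioned in the same list further divides these Leahy powers by the mean count rate, so that integrating the spectrum yields the squared fractional rms variability.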
From 5f6d09f7bd192ed6c0480a71577dd8866cce31c2 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:25:43 -0400 Subject: [PATCH 05/45] Added a line between dynpsd and ccf. --- docs/api.rst | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/api.rst b/docs/api.rst index 30c97e58f..8a84549f0 100644 --- a/docs/api.rst +++ b/docs/api.rst @@ -88,6 +88,8 @@ Dynamical Powerspectrum .. autoclass:: stingray.DynamicalPowerspectrum :members: :inherited-members: + +---- CrossCorrelation ---------------- From 12031e01d314aa16c4260c870eb063243706026a Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:28:36 -0400 Subject: [PATCH 06/45] Removed the full citation for the ASCL entry, since we want people to cite the two papers, but left the mention of the ASCL entry at the end. Also added two commas earlier on for better reading clarity. --- docs/citing.rst | 25 ++----------------------- 1 file changed, 2 insertions(+), 23 deletions(-) diff --git a/docs/citing.rst b/docs/citing.rst index 4443335e5..d20bae673 100644 --- a/docs/citing.rst +++ b/docs/citing.rst @@ -3,7 +3,7 @@ Citing Stingray *************** Citations are still the main currency of the academic world, and *the* best way to ensure that Stingray continues to be supported and we can continue to work on it. -If you use Stingray in data analysis leading to a publication, we ask that you cite *both* a `DOI `_ which points to the software itself *and* our papers describing the Stingray project. +If you use Stingray in data analysis leading to a publication, we ask that you cite *both* a `DOI `_, which points to the software itself, *and* our papers describing the Stingray project. DOI === @@ -17,7 +17,7 @@ If this isn't possible — for example, because you worked with an unreleased v Papers ====== -Please cite both of the following papers: +Please cite both of the following papers from ApJ and JOSS: .. 
raw:: html @@ -74,27 +74,6 @@ Please cite both of the following papers: document.body.removeChild(el); } - function copyAsclBib() { - var bibtex = `@MISC{2016ascl.soft08001H, - author = {{Huppenkothen}, Daniela and {Bachetti}, Matteo and {Stevens}, Abigail L. and {Migliari}, Simone and {Balm}, Paul}, - title = "{Stingray: Spectral-timing software}", - keywords = {Software}, - year = 2016, - month = aug, - eid = {ascl:1608.001}, - pages = {ascl:1608.001}, - archivePrefix = {ascl}, - eprint = {1608.001}, - adsurl = {https://ui.adsabs.harvard.edu/abs/2016ascl.soft08001H}, - adsnote = {Provided by the SAO/NASA Astrophysics Data System} - }`; - const el = document.createElement('textarea'); - el.value = bibtex; - document.body.appendChild(el); - el.select(); - document.execCommand('copy'); - document.body.removeChild(el); - }
    From 02fe7e9136f3391c8ba8dea797c287764d98a286 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:29:15 -0400 Subject: [PATCH 07/45] Added a period at the end of a sentence. --- docs/spectime.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/spectime.rst b/docs/spectime.rst index 647ba62c7..9f3beabec 100644 --- a/docs/spectime.rst +++ b/docs/spectime.rst @@ -2,7 +2,7 @@ Spectral timing exploration of a black hole candidate with NICER **************************************************************** In this Tutorial, we will show an example spectral timing exploration of a -black hole binary using NICER data +black hole binary using NICER data. .. toctree:: :maxdepth: 2 From fd027ca2a2e3df80205d44874558e7fcbf40780a Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:31:58 -0400 Subject: [PATCH 08/45] Updated for v1.0 proper. --- CHANGELOG.rst | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/CHANGELOG.rst b/CHANGELOG.rst index 54e9cb767..df9545679 100644 --- a/CHANGELOG.rst +++ b/CHANGELOG.rst @@ -1,9 +1,9 @@ Changelog ========= -v1.0beta (2022-02-25) +v1.0 (2022-03-28) --------------------- -TL,DR: these things will break your code with v1.0beta: +TL,DR: these things will break your code with v1.0: - Python version < 3.8 - The ``gtis`` keyword in `pulse/pulsar.py` (it is now ``gti``, without the 's') @@ -18,7 +18,7 @@ New - New infrastructure for converting ``EventList`` and ``LightCurve`` objects into Astropy ``TimeSeries`` - New infrastructure for converting most Stingray classes into Astropy ``Table`` objects, Xarray and Pandas data frames. 
- Save and load of most Stingray classes to/from many different file formats (``pickle``, ``ECSV``, ``HDF5``, ``FITS``, and all formats compatible with Astropy Table) -- Accept input ``EventList`` in ``DynamicalPowerSpectrum`` +- Accept input ``EventList`` in ``DynamicalPowerSpectrum`` - New ``stingray.fourier`` module containing the basic timing products, usable on ``numpy`` arrays, and centralizes fft import - New methods in ``Crossspectrum`` and ``Powerspectrum`` to load data from specific inputs: ``from_events``, ``from_lightcurve``, ``from_time_array``, ``from_lc_list`` (``from_time_array`` was also tested using memory-mapped event lists as inputs: useful in very large datasets) - New and improved spectral timing methods: ``ComplexCovarianceSpectrum``, ``CovarianceSpectrum``, ``LagSpectrum``, ``RmsSpectrum`` @@ -54,6 +54,8 @@ Bug fixes __ https://github.com/StingraySoftware/stingray/compare/v0.3...v1.0-beta +v1.0beta was released on 2022-02-25. + v0.3 (2021-05-31) ----------------- From f8d13c6ed8e3ab9a058a168ccdda13001eec61f4 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 15:47:43 -0400 Subject: [PATCH 09/45] Listed all the people who have contributed to Stingray! Took names from github contributors list. 
--- CREDITS.rst | 36 +++++++++++++++++++++++++++++++++++- 1 file changed, 35 insertions(+), 1 deletion(-) diff --git a/CREDITS.rst b/CREDITS.rst index 325243b93..e08a4c5dc 100644 --- a/CREDITS.rst +++ b/CREDITS.rst @@ -14,10 +14,44 @@ Stingray Project Coordinators Contributors ============ +* Riccardo Campana +* Meg Davis +* Amogh Desai +* Omar Gamal +* Nitish Garg +* Nick Gorgone +* Anurag Hota +* Ajit Jadhav +* Usman Khan +* Sambhav Kothari +* Sandeep Kumar +* Max Mahlke +* Evandro Martinez Ribeiro +* Himanshu Mishra +* Sashank Mishra +* Stuart Mumford +* paopaofi +* parkma99 +* Francesco Pisanu +* Rashmi Raj +* Haroon Rashid +* Achilles Rasquinha +* Saurav Sachidanand +* Parul Sethi +* Swapnil Sharma +* Brigitta Sipocz +* Arfon Smith +* John Swinbank +* Akash Tandon +* tappina +* Mihir Tripathi +* Ricardo Vallés Blanco +* Dhruv Vats + If you have contributed to Stingray and your name is missing, please send an email to the coordinators, or `open a pull request for this page `_ -in the `Stingray repository `_) +in the `Stingray repository `_. Acknowledgements ================ From 10571617881f96586a4ff5b3751baa6c940e6f74 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 16:03:53 -0400 Subject: [PATCH 10/45] Combined quicklook.rst and spectime.rst into this one docs page on data exploration. --- docs/dataexplo.rst | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) create mode 100644 docs/dataexplo.rst diff --git a/docs/dataexplo.rst b/docs/dataexplo.rst new file mode 100644 index 000000000..7183275c1 --- /dev/null +++ b/docs/dataexplo.rst @@ -0,0 +1,28 @@ +Data Exploration +**************** + +These notebook tutorials show some ways to explore data with +Stingray. + +A quick look at a NuSTAR observation +==================================== + +Stingray transparently loads datasets from many HEASOFT-supported missions. +In this Tutorial, we will show an example quicklook of a NuSTAR observation. + +.. 
toctree:: + :maxdepth: 2 + + notebooks/DataQuickLook/Quicklook NuSTAR data with Stingray.ipynb + + +Spectral timing exploration with NICER +====================================== + +In this Tutorial, we will show an example spectral timing exploration of a +black hole binary using NICER data. + +.. toctree:: + :maxdepth: 2 + + notebooks/Spectral Timing/Spectral Timing Exploration.ipynb From c84f6cc98cd13a34527de0a4ecec1b9ed3bf63be Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 16:04:40 -0400 Subject: [PATCH 11/45] Removed quicklook.rst and spectim.rst since they're now in dataexplo.rst. --- docs/quicklook.rst | 10 ---------- docs/spectime.rst | 10 ---------- 2 files changed, 20 deletions(-) delete mode 100644 docs/quicklook.rst delete mode 100644 docs/spectime.rst diff --git a/docs/quicklook.rst b/docs/quicklook.rst deleted file mode 100644 index 64b50eea3..000000000 --- a/docs/quicklook.rst +++ /dev/null @@ -1,10 +0,0 @@ -Quicklook of an X-ray observation -********************************* - -Stingray transparently loads datasets from many HEASOFT-supported missions. -In this Tutorial, we will show an example quicklook of a NuSTAR observation. - -.. toctree:: - :maxdepth: 2 - - notebooks/DataQuickLook/Quicklook NuSTAR data with Stingray.ipynb diff --git a/docs/spectime.rst b/docs/spectime.rst deleted file mode 100644 index 9f3beabec..000000000 --- a/docs/spectime.rst +++ /dev/null @@ -1,10 +0,0 @@ -Spectral timing exploration of a black hole candidate with NICER -**************************************************************** - -In this Tutorial, we will show an example spectral timing exploration of a -black hole binary using NICER data. - -.. toctree:: - :maxdepth: 2 - - notebooks/Spectral Timing/Spectral Timing Exploration.ipynb From 368c9b3c2e32b3739b0729e83b3c35bd72da559c Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 16:17:25 -0400 Subject: [PATCH 12/45] Added dataexplo to index, removed quicklook and spectime. 
--- docs/index.rst | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/docs/index.rst b/docs/index.rst index 594354763..5940564fa 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -33,9 +33,8 @@ Contents intro install core - quicklook + dataexplo modeling - spectime simulator pulsar deadtime From f6fe560b68d2a128900b4b82c991ba373d5e5dd5 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 16:34:06 -0400 Subject: [PATCH 13/45] Added note about nbsphinx (also needed to build docs locally). --- docs/install.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/install.rst b/docs/install.rst index 8ba8d5aa3..fe9685293 100644 --- a/docs/install.rst +++ b/docs/install.rst @@ -104,7 +104,7 @@ Documentation ------------- The documentation including tutorials is hosted `here `_. -The documentation uses `sphinx `_ to build and requires the extension sphinx-astropy `_. +The documentation uses `sphinx `_ to build and requires the extensions `sphinx-astropy `_ and `nbsphinx `_. You can build the API reference yourself by going into the ``docs`` folder within the ``stingray`` root directory and running the ``Makefile``: :: From 90c9acb38f70ebbf70825cc15be588f776064e1b Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 16:34:49 -0400 Subject: [PATCH 14/45] Corrected docs link metadata. 
--- setup.cfg | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/setup.cfg b/setup.cfg index 25496e316..7d2c95f5a 100644 --- a/setup.cfg +++ b/setup.cfg @@ -4,7 +4,7 @@ author = Stingray Developers author_email = spectraltiming-stingray@googlegroups.com license = MIT license_file = LICENSE.rst -url = https://stingray.readthedocs.io/ +url = https://docs.stingray.science description = Time Series Methods For Astronomical X-ray Data long_description = file: README.rst long_description_content_type = text/x-rst From f322c5e02fdd7ac2758c1c2c3b574b6ed595db8e Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 16:35:16 -0400 Subject: [PATCH 15/45] Added version number to metadata. --- setup.cfg | 1 + 1 file changed, 1 insertion(+) diff --git a/setup.cfg b/setup.cfg index 7d2c95f5a..7cc856c26 100644 --- a/setup.cfg +++ b/setup.cfg @@ -5,6 +5,7 @@ author_email = spectraltiming-stingray@googlegroups.com license = MIT license_file = LICENSE.rst url = https://docs.stingray.science +version = 1.0 description = Time Series Methods For Astronomical X-ray Data long_description = file: README.rst long_description_content_type = text/x-rst From 3d3f88a7de455e1f26c45f1eca18a8054273c0a9 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 16:39:45 -0400 Subject: [PATCH 16/45] Removed py3.6 and py3.7 from environments. 
---
 tox.ini | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tox.ini b/tox.ini
index 7b7030ce4..cb413dab1 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,8 +1,8 @@
 [tox]
 envlist =
-    py{36,37,38,39,310}-test{,-alldeps,-devdeps}{,-cov}
-    py{36,37,38,39,310}-test-numpy{116,117,118}
-    py{36,37,38,39,310}-test-astropy{3,4,5,lts}
+    py{38,39,310}-test{,-alldeps,-devdeps}{,-cov}
+    py{38,39,310}-test-numpy{116,117,118}
+    py{38,39,310}-test-astropy{3,4,5,lts}
     build_docs
     linkcheck
     codestyle

From 7b4df63237b1c3f87ab6b6275749f3912070807f Mon Sep 17 00:00:00 2001
From: Abbie Stevens
Date: Sat, 26 Mar 2022 18:17:11 -0400
Subject: [PATCH 17/45] More details about passing GTIs to powerspectrum
 objects (as per issue raised on github), tightened up phrasing and made
 quotes more consistent throughout, corrected many character line overhangs,
 found a few instances where the gti keyword wasn't being used and used that
 instead of events.gti or lc.gti, found a few instances of the iterable lc
 comment not matching the actual keywords (looks like it was copy-pasted
 from crossspectrum without editing).
---
 stingray/powerspectrum.py | 415 +++++++++++++++++++++-----------------
 1 file changed, 230 insertions(+), 185 deletions(-)

diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py
index e6aecf83e..775429770 100755
--- a/stingray/powerspectrum.py
+++ b/stingray/powerspectrum.py
@@ -34,12 +34,12 @@ def show_progress(a, **kwargs):
 class Powerspectrum(Crossspectrum):
     type = "powerspectrum"
     """
-    Make a :class:`Powerspectrum` (also called periodogram) from a (binned) light curve.
-    Periodograms can be normalized by either Leahy normalization, fractional rms
-    normalizaation, absolute rms normalization, or not at all.
+    Make a :class:`Powerspectrum` (also called periodogram) from a (binned)
+    light curve. Periodograms can be normalized by either Leahy normalization,
+    fractional rms normalization, absolute rms normalization, or not at all.
- You can also make an empty :class:`Powerspectrum` object to populate with your - own fourier-transformed data (this can sometimes be useful when making + You can also make an empty :class:`Powerspectrum` object to populate with + your own fourier-transformed data (this can sometimes be useful when making binned power spectra). Parameters @@ -56,7 +56,10 @@ class Powerspectrum(Crossspectrum): gti: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals. This choice overrides the GTIs in the single light curves. Use with - care! + care, especially if these GTIs have overlaps with the input + `Lightcurve` GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the `Lightcurve` object before making + the power spectrum. skip_checks: bool Skip initial checks, for speed or other reasons (you need to trust your @@ -94,9 +97,10 @@ class Powerspectrum(Crossspectrum): The total number of photons in the light curve legacy: bool - Use the legacy machinery of AveragedPowerspectrum. This might be useful to compare - with old results, and is also needed to use light curve lists as an input, to - conserve the spectra of each segment, or to use the large_data option. + Use the legacy machinery of `AveragedPowerspectrum`. This might be + useful to compare with old results, and is also needed to use light + curve lists as an input, to conserve the spectra of each segment, or to + use the large_data option. """ def __init__(self, data=None, norm="frac", gti=None, @@ -168,10 +172,10 @@ def compute_rms(self, min_freq, max_freq, white_noise_offset=0.): Parameters ---------- min_freq: float - The lower frequency bound for the calculation + The lower frequency bound for the calculation. max_freq: float - The upper frequency bound for the calculation + The upper frequency bound for the calculation. 
Other parameters ---------------- @@ -185,10 +189,10 @@ def compute_rms(self, min_freq, max_freq, white_noise_offset=0.): ------- rms: float The fractional rms amplitude contained between ``min_freq`` and - ``max_freq`` + ``max_freq``. rms_err: float - The error on the fractional rms amplitude + The error on the fractional rms amplitude. """ minind = self.freq.searchsorted(min_freq) @@ -233,7 +237,7 @@ def _rms_error(self, powers): Returns ------- delta_rms: float - The error on the fractional rms amplitude + The error on the fractional rms amplitude. """ nphots = self.nphots p_err = scipy.stats.chi2(2.0 * self.m).var() * powers / self.m / nphots @@ -270,9 +274,9 @@ def classical_significances(self, threshold=1, trial_correction=False): the power spectrum, where index is the numerical index of the power in question. If a ``threshold`` is set, then only powers with p-values *below* that threshold with their respective indices. If - ``trial_correction`` is set to ``True``, then the threshold will be corrected - for the number of trials (frequencies) in the power spectrum before - being used. + ``trial_correction`` is set to ``True``, then the threshold will be + corrected for the number of trials (frequencies) in the power spectrum + before being used. Parameters ---------- @@ -324,9 +328,11 @@ def classical_significances(self, threshold=1, trial_correction=False): return pvals def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95): - """Upper limit on a sinusoidal modulation. + """ + Upper limit on a sinusoidal modulation. - To understand the meaning of this amplitude: if the modulation is described by: + To understand the meaning of this amplitude: if the modulation is + described by: ..math:: p = \overline{p} (1 + a * \sin(x)) @@ -336,12 +342,14 @@ def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95): ..math:: p = \overline{p} (1 + \sum_l a_l * \sin(lx)) a is equivalent to :math:`\sqrt(\sum_l a_l^2)`. 
- See `stingray.stats.power_upper_limit`, `stingray.stats.amplitude_upper_limit` + See `stingray.stats.power_upper_limit`, + `stingray.stats.amplitude_upper_limit` for more information. - The formula used to calculate the upper limit assumes the Leahy normalization. - If the periodogram is in another normalization, we will internally convert - it to Leahy before calculating the upper limit. + The formula used to calculate the upper limit assumes the Leahy + normalization. + If the periodogram is in another normalization, we will internally + convert it to Leahy before calculating the upper limit. Parameters ---------- @@ -359,7 +367,8 @@ def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95): Returns ------- a: float - The modulation amplitude that could produce P>pmeas with 1 - c probability + The modulation amplitude that could produce P>pmeas with 1 - c + probability. Examples -------- @@ -415,22 +424,25 @@ def from_time_array(times, dt, segment_size=None, gti=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - Only relevant (and required) for AveragedPowerspectrum - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Good Time intervals. + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for `AveragedPowerspectrum`. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). 
- Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars + Silence the progress bars. + gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. """ return powerspectrum_from_time_array( @@ -453,23 +465,25 @@ def from_events(events, dt, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - Only relevant (and required) for AveragedPowerspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for `AveragedPowerspectrum`. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). 
- Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars + Silence the progress bars. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional, Good Time intervals, that get interesected with the - GTIs of the input object. + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. """ return powerspectrum_from_events( @@ -492,23 +506,25 @@ def from_lightcurve(lc, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - Only relevant (and required) for AveragedPowerspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for `AveragedPowerspectrum`. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). 
- Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars + Silence the progress bars. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional, Good Time intervals, that get interesected with the - GTIs of the input object. + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. """ return powerspectrum_from_lightcurve( @@ -522,10 +538,8 @@ def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", Parameters ---------- - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 1. If arrays, use them as counts - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 2. If arrays, use them as counts + iter_lc : iterable of `stingray.Lightcurve` objects or `np.array` + Light curves. If arrays, use them as counts. dt : float The time resolution of the light curves (sets the Nyquist frequency) @@ -533,22 +547,25 @@ def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - Only relevant (and required) for AveragedPowerspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for `AveragedPowerspectrum`. 
norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars + Silence the progress bars. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Good Time intervals. + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. """ return powerspectrum_from_lc_iterable( @@ -558,7 +575,8 @@ def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", def _initialize_from_any_input( self, data, dt=None, segment_size=None, norm="frac", silent=False, use_common_mean=True, gti=None): - """Initialize the class, trying to understand the input types. + """ + Initialize the class, trying to understand the input types. The input arguments are the same as ``__init__()``. 
Based on the type of ``data``, this method will call the appropriate @@ -630,7 +648,8 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): Parameters ---------- - data: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object + data: :class:`stingray.Lightcurve`object OR iterable of + :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object The light curve data to be Fourier-transformed. segment_size: float @@ -647,18 +666,22 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): gti: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals. This choice overrides the GTIs in the single light curves. Use with - care! + care, especially if these GTIs have overlaps with the input + `Lightcurve` GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the `Lightcurve` object before making + the power spectrum. silent : bool, default False Do not show a progress bar when generating an averaged cross spectrum. - Useful for the batch execution of many spectra + Useful for the batch execution of many spectra. dt: float The time resolution of the light curve. Only needed when constructing - light curves in the case where data is of :class:EventList + light curves in the case where data is of :class:EventList. large_data : bool, default False - Use only for data larger than 10**7 data points!! Uses zarr and dask for computation. + Use only for data larger than 10**7 data points!! Uses zarr and dask + for computation. save_all : bool, default False Save all intermediate PDSs used for the final average. Use with care. @@ -667,19 +690,19 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): skip_checks: bool Skip initial checks, for speed or other reasons (you need to trust your - inputs!) + inputs!). 
Attributes ---------- norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` } - the normalization of the periodogram + The normalization of the periodogram. freq: numpy.ndarray - The array of mid-bin frequencies that the Fourier transform samples + The array of mid-bin frequencies that the Fourier transform samples. power: numpy.ndarray The array of normalized squared absolute values of Fourier - amplitudes + amplitudes. power_err: numpy.ndarray The uncertainties of ``power``. @@ -689,21 +712,22 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): realization (``m=1``) the error is equal to the power. df: float - The frequency resolution + The frequency resolution. m: int - The number of averaged periodograms + The number of averaged periodograms. n: int - The number of data points in the light curve + The number of data points in the light curve. nphots: float - The total number of photons in the light curve + The total number of photons in the light curve. legacy: bool - Use the legacy machinery of AveragedPowerspectrum. This might be useful to compare - with old results, and is also needed to use light curve lists as an input, to - conserve the spectra of each segment, or to use the large_data option. + Use the legacy machinery of `AveragedPowerspectrum`. This might be + useful to compare with old results, and is also needed to use light + curve lists as an input, to conserve the spectra of each segment, or to + use the large_data option. """ def __init__(self, data=None, segment_size=None, norm="frac", gti=None, @@ -815,8 +839,8 @@ def _make_segment_spectrum(self, lc, segment_size, silent=False): Parameters ---------- - lc : :class:`stingray.Lightcurve` objects\ - The input light curve + lc : :class:`stingray.Lightcurve` objects + The input light curve. segment_size : ``numpy.float`` Size of each light curve segment to use for averaging. 
@@ -824,15 +848,17 @@ def _make_segment_spectrum(self, lc, segment_size, silent=False): Other parameters ---------------- silent : bool, default False - Suppress progress bars + Suppress progress bars. Returns ------- power_all : list of :class:`Powerspectrum` objects - A list of power spectra calculated independently from each light curve segment + A list of power spectra calculated independently from each light + curve segment. nphots_all : ``numpy.ndarray`` - List containing the number of photons for all segments calculated from ``lc`` + List containing the number of photons for all segments calculated + from ``lc``. """ if not isinstance(lc, Lightcurve): raise TypeError("lc must be a Lightcurve object") @@ -895,12 +921,12 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): Parameters ---------- lc : :class:`stingray.Lightcurve` or :class:`stingray.EventList` object - The time series or event list of which the Dynamical powerspectrum is + The time series or event list of which the dynamical power spectrum is to be calculated. segment_size : float, default 1 - Length of the segment of light curve, default value is 1 (in whatever units - the ``time`` array in the :class:`Lightcurve`` object uses). + Length of the segment of light curve, default value is 1 (in whatever + units the ``time`` array in the :class:`Lightcurve`` object uses). norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }, optional, default ``frac`` The normaliation of the periodogram to be used. @@ -910,7 +936,10 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): gti: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals. This choice overrides the GTIs in the single light curves. Use with - care! + care, especially if these GTIs have overlaps with the input + `Lightcurve` GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the `Lightcurve` object before making + the power spectrum. 
Attributes ---------- @@ -923,10 +952,10 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): dyn_ps : np.ndarray The matrix of normalized squared absolute values of Fourier amplitudes. The axis are given by the ``freq`` - and ``time`` attributes + and ``time`` attributes. norm: {``leahy`` | ``frac`` | ``abs`` | ``none``} - the normalization of the periodogram + The normalization of the periodogram. freq: numpy.ndarray The array of mid-bin frequencies that the Fourier transform samples @@ -966,7 +995,7 @@ def _make_matrix(self, lc): ---------- lc : :class:`Lightcurve` object The :class:`Lightcurve` object from which to generate the dynamical - power spectrum + power spectrum. """ ps_all, _ = AveragedPowerspectrum._make_segment_spectrum( self, lc, self.segment_size) @@ -1010,9 +1039,9 @@ def rebin_frequency(self, df_new, method="sum"): Parameters ---------- df_new: float - The new frequency resolution of the Dynamical Power Spectrum. - Must be larger than the frequency resolution of the old Dynamical - Power Spectrum! + The new frequency resolution of the dynamical power spectrum. + Must be larger than the frequency resolution of the old dynamical + power spectrum! method: {``sum`` | ``mean`` | ``average``}, optional, default ``sum`` This keyword argument sets whether the counts in the new bins @@ -1033,8 +1062,8 @@ def rebin_frequency(self, df_new, method="sum"): def trace_maximum(self, min_freq=None, max_freq=None): """ - Return the indices of the maximum powers in each segment :class:`Powerspectrum` - between specified frequencies. + Return the indices of the maximum powers in each segment + :class:`Powerspectrum` between specified frequencies. Parameters ---------- @@ -1074,9 +1103,9 @@ def rebin_time(self, dt_new, method='sum'): Parameters ---------- dt_new: float - The new time resolution of the Dynamical Power Spectrum. - Must be larger than the time resolution of the old Dynamical Power - Spectrum! 
+ The new time resolution of the dynamical power spectrum. + Must be larger than the time resolution of the old dynamical power + spectrum! method: {"sum" | "mean" | "average"}, optional, default "sum" This keyword argument sets whether the counts in the new bins @@ -1110,9 +1139,11 @@ def rebin_time(self, dt_new, method='sum'): return new_dynspec_object -def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, norm="frac", - silent=False, use_common_mean=True): - """Calculate AveragedPowerspectrum from an array of event times. +def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, + norm="frac", silent=False, + use_common_mean=True): + """ + Calculate `AveragedPowerspectrum` from an array of event times. Parameters ---------- @@ -1125,21 +1156,25 @@ def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, norm=" Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged + The length, in seconds, of the light curve segments that will be + averaged. gti : [[gti0, gti1], ...] - Good Time intervals + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). 
- Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars + Silence the progress bars. Returns ------- @@ -1157,8 +1192,8 @@ def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, norm=" return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged) -def powerspectrum_from_events(events, dt, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): +def powerspectrum_from_events(events, dt, segment_size=None, gti=None, + norm="frac", silent=False, use_common_mean=True): """Calculate AveragedPowerspectrum from an event list Parameters @@ -1172,22 +1207,25 @@ def powerspectrum_from_events(events, dt, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged + The length, in seconds, of the light curve segments that will be + averaged. + gti : [[gti0, gti1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. 
`abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional, Good Time intervals, that get interesected with the - GTIs of the input object. + Silence the progress bars. Returns ------- @@ -1196,20 +1234,20 @@ def powerspectrum_from_events(events, dt, segment_size=None, norm="frac", """ return powerspectrum_from_time_array( - events.time, dt, segment_size, events.gti, norm=norm, + events.time, dt, segment_size, gti, norm=norm, silent=silent, use_common_mean=use_common_mean ) -def powerspectrum_from_lightcurve(lc, segment_size=None, norm="frac", - silent=False, use_common_mean=True, - gti=None): - """Calculate AveragedPowerspectrum from a light curve +def powerspectrum_from_lightcurve(lc, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): + """ + Calculate AveragedPowerspectrum from a light curve Parameters ---------- events : `stingray.Lightcurve` - Light curve to be analyzed + Light curve to be analyzed. 
dt : float The time resolution of the intermediate light curves (sets the Nyquist frequency) @@ -1217,22 +1255,25 @@ def powerspectrum_from_lightcurve(lc, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged + The length, in seconds, of the light curve segments that will be + averaged. + gti : [[gti0, gti1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional, Good Time intervals, that get interesected with the - GTIs of the input object. + Silence the progress bars. 
Returns ------- @@ -1247,24 +1288,24 @@ def powerspectrum_from_lightcurve(lc, segment_size=None, norm="frac", err = lc.counts_err table = avg_pds_from_events( - lc.time, lc.gti, segment_size, lc.dt, + lc.time, gti, segment_size, lc.dt, norm=norm, use_common_mean=use_common_mean, silent=silent, fluxes=lc.counts, errors=err) - return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged) + return _create_powerspectrum_from_result_table(table, + force_averaged=force_averaged) -def powerspectrum_from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): +def powerspectrum_from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, + norm="frac", silent=False, + use_common_mean=True): """Calculate AveragedCrossspectrum from two light curves Parameters ---------- - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 1. If arrays, use them as counts - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 2. If arrays, use them as counts + iter_lc : iterable of `stingray.Lightcurve` objects or `np.array` + Light curves. If arrays, use them as counts dt : float The time resolution of the light curves (sets the Nyquist frequency) @@ -1272,22 +1313,26 @@ def powerspectrum_from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float, default None - The length, in seconds, of the light curve segments that will be averaged. - If not ``None``, it will be used to check the segment size of the output. + The length, in seconds, of the light curve segments that will be + averaged. If not ``None``, it will be used to check the segment size of + the output. + gti : [[gti0, gti1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. 
If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Good Time intervals. + Silence the progress bars. Returns ------- From d32032d97b64830b1bdd174db591981aab13174f Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 18:25:23 -0400 Subject: [PATCH 18/45] Making order of optional variables consistent between the function definition and the comment. 
--- stingray/powerspectrum.py | 88 ++++++++++++++++++++------------------- 1 file changed, 46 insertions(+), 42 deletions(-) diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py index 775429770..f50c35e04 100755 --- a/stingray/powerspectrum.py +++ b/stingray/powerspectrum.py @@ -411,7 +411,8 @@ def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95): @staticmethod def from_time_array(times, dt, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): - """Calculate AveragedPowerspectrum from an array of event times. + """ + Calculate `AveragedPowerspectrum` from an array of event times. Parameters ---------- @@ -426,6 +427,11 @@ def from_time_array(times, dt, segment_size=None, gti=None, norm="frac", segment_size : float The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for `AveragedPowerspectrum`. + gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" The normalization of the periodogram. `abs` is absolute rms, `frac` is fractional rms, `leahy` is Leahy+83 normalization, and `none` is @@ -438,11 +444,6 @@ def from_time_array(times, dt, segment_size=None, gti=None, norm="frac", to calculate it on a per-segment basis. silent : bool, default False Silence the progress bars. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional Good Time intervals that get intersected with - the GTIs of the input object. Can cause errors if there are - overlaps between these GTIs and the input object GTIs. If that - happens, assign the desired GTIs to the input object. 
""" return powerspectrum_from_time_array( @@ -450,9 +451,10 @@ def from_time_array(times, dt, segment_size=None, gti=None, norm="frac", silent=silent, use_common_mean=use_common_mean) @staticmethod - def from_events(events, dt, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): - """Calculate AveragedPowerspectrum from an event list + def from_events(events, dt, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): + """ + Calculate `AveragedPowerspectrum` from an event list. Parameters ---------- @@ -467,6 +469,11 @@ def from_events(events, dt, segment_size=None, norm="frac", segment_size : float The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for `AveragedPowerspectrum`. + gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" The normalization of the periodogram. `abs` is absolute rms, `frac` is fractional rms, `leahy` is Leahy+83 normalization, and `none` is @@ -479,20 +486,15 @@ def from_events(events, dt, segment_size=None, norm="frac", to calculate it on a per-segment basis. silent : bool, default False Silence the progress bars. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional Good Time intervals that get intersected with - the GTIs of the input object. Can cause errors if there are - overlaps between these GTIs and the input object GTIs. If that - happens, assign the desired GTIs to the input object. 
""" return powerspectrum_from_events( - events, dt, segment_size=segment_size, norm=norm, - silent=silent, use_common_mean=use_common_mean, gti=gti) + events, dt, segment_size=segment_size, gti=gti, norm=norm, + silent=silent, use_common_mean=use_common_mean) @staticmethod - def from_lightcurve(lc, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): + def from_lightcurve(lc, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): """Calculate AveragedPowerspectrum from a light curve Parameters @@ -508,6 +510,11 @@ def from_lightcurve(lc, segment_size=None, norm="frac", segment_size : float The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for `AveragedPowerspectrum`. + gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" The normalization of the periodogram. `abs` is absolute rms, `frac` is fractional rms, `leahy` is Leahy+83 normalization, and `none` is @@ -520,21 +527,17 @@ def from_lightcurve(lc, segment_size=None, norm="frac", to calculate it on a per-segment basis. silent : bool, default False Silence the progress bars. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional Good Time intervals that get intersected with - the GTIs of the input object. Can cause errors if there are - overlaps between these GTIs and the input object GTIs. If that - happens, assign the desired GTIs to the input object. 
""" return powerspectrum_from_lightcurve( - lc, segment_size=segment_size, norm=norm, - silent=silent, use_common_mean=use_common_mean, gti=gti) + lc, segment_size=segment_size, gti=gti, norm=norm, + silent=silent, use_common_mean=use_common_mean) @staticmethod - def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): - """Calculate AveragedCrossspectrum from two light curves + def from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): + """ + Calculate `AveragedCrossspectrum` from two light curves. Parameters ---------- @@ -549,6 +552,11 @@ def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", segment_size : float The length, in seconds, of the light curve segments that will be averaged. Only relevant (and required) for `AveragedPowerspectrum`. + gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" The normalization of the periodogram. `abs` is absolute rms, `frac` is fractional rms, `leahy` is Leahy+83 normalization, and `none` is @@ -561,20 +569,15 @@ def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", to calculate it on a per-segment basis. silent : bool, default False Silence the progress bars. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional Good Time intervals that get intersected with - the GTIs of the input object. Can cause errors if there are - overlaps between these GTIs and the input object GTIs. If that - happens, assign the desired GTIs to the input object. 
""" return powerspectrum_from_lc_iterable( - iter_lc, dt, segment_size=segment_size, norm=norm, - silent=silent, use_common_mean=use_common_mean, gti=gti) + iter_lc, dt, segment_size=segment_size, gti=gti, norm=norm, + silent=silent, use_common_mean=use_common_mean) def _initialize_from_any_input( - self, data, dt=None, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): + self, data, dt=None, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): """ Initialize the class, trying to understand the input types. @@ -874,7 +877,8 @@ def _make_segment_spectrum(self, lc, segment_size, silent=False): check_gtis(self.gti) start_inds, end_inds = \ - bin_intervals_from_gtis(current_gtis, segment_size, lc.time, dt=lc.dt) + bin_intervals_from_gtis(current_gtis, segment_size, lc.time, + dt=lc.dt) power_all = [] nphots_all = [] @@ -912,11 +916,11 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): This class will divide a :class:`Lightcurve` object into segments of length ``segment_size``, create a power spectrum for each segment and store - all powers in a matrix as a function of both time (using the mid-point of each - segment) and frequency. + all powers in a matrix as a function of both time (using the mid-point of + each segment) and frequency. - This is often used to trace changes in period of a (quasi-)periodic signal over - time. + This is often used to trace changes in period of a (quasi-)periodic signal + over time. Parameters ---------- From 4cf770cd711408c58f03b6e89198ab3ac8ea1b90 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 18:38:41 -0400 Subject: [PATCH 19/45] More explanation of where power spectrum errors come from, adding periods to the ends of sentences, etc. 
--- stingray/powerspectrum.py | 26 ++++++++++++++------------ 1 file changed, 14 insertions(+), 12 deletions(-) diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py index f50c35e04..d164f62f7 100755 --- a/stingray/powerspectrum.py +++ b/stingray/powerspectrum.py @@ -63,38 +63,39 @@ class Powerspectrum(Crossspectrum): skip_checks: bool Skip initial checks, for speed or other reasons (you need to trust your - inputs!) + inputs!). Attributes ---------- norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` } - the normalization of the power spectrun + The normalization of the power spectrum. freq: numpy.ndarray - The array of mid-bin frequencies that the Fourier transform samples + The array of mid-bin frequencies that the Fourier transform samples. power: numpy.ndarray The array of normalized squared absolute values of Fourier - amplitudes + amplitudes. power_err: numpy.ndarray The uncertainties of ``power``. An approximation for each bin given by ``power_err= power/sqrt(m)``. Where ``m`` is the number of power averaged in each bin (by frequency - binning, or averaging power spectrum). Note that for a single - realization (``m=1``) the error is equal to the power. + binning, or averaging power spectra of segments of a light curve). + Note that for a single realization (``m=1``) the error is equal to the + power. df: float - The frequency resolution + The frequency resolution. m: int - The number of averaged powers in each bin + The number of averaged powers in each bin. n: int - The number of data points in the light curve + The number of data points in the light curve. nphots: float - The total number of photons in the light curve + The total number of photons in the light curve. legacy: bool Use the legacy machinery of `AveragedPowerspectrum`. This might be @@ -711,8 +712,9 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): The uncertainties of ``power``. An approximation for each bin given by ``power_err= power/sqrt(m)``. 
Where ``m`` is the number of power averaged in each bin (by frequency - binning, or averaging powerspectrum). Note that for a single - realization (``m=1``) the error is equal to the power. + binning, or averaging power spectra of segments of a light curve). + Note that for a single realization (``m=1``) the error is equal to the + power. df: float The frequency resolution. From eaab162ab3c1b2533baad6c1e8e303798414b262 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 18:39:41 -0400 Subject: [PATCH 20/45] Adding clarifying GTI comments and averaged cross spectrum error comments to the docstrings, putting periods at the end of sentences and making lines wrap at the set character length. --- stingray/crossspectrum.py | 85 +++++++++++++++++++++++++-------------- 1 file changed, 55 insertions(+), 30 deletions(-) diff --git a/stingray/crossspectrum.py b/stingray/crossspectrum.py index 9ed00d7e4..c1fb8b22f 100644 --- a/stingray/crossspectrum.py +++ b/stingray/crossspectrum.py @@ -503,7 +503,10 @@ class Crossspectrum(StingrayObject): ---------------- gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the input + `Lightcurve` GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the `Lightcurve` objects before making + the cross spectrum. lc1: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects For backwards compatibility only. Like ``data1``, but no @@ -1402,10 +1405,14 @@ def from_time_array( Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged. - Only relevant (and required) for AveragedCrossspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for `AveragedCrossspectrum`. gti : [[gti0, gti1], ...] 
- Common Good Time intervals + Good Time intervals. Defaults to the common GTIs from the two input + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. norm : str, default "frac" The normalization of the periodogram. "abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the @@ -1485,7 +1492,10 @@ def from_events( the real part gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. """ return crossspectrum_from_events( @@ -1544,7 +1554,10 @@ def from_lightcurve( the real part gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. """ return crossspectrum_from_lightcurve( lc1, @@ -1605,7 +1618,10 @@ def from_lc_iterable( the real part gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. 
""" return crossspectrum_from_lc_iterable( @@ -1706,16 +1722,18 @@ class AveragedCrossspectrum(Crossspectrum): Parameters ---------- data1: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object - A light curve from which to compute the cross spectrum. In some cases, this would - be the light curve of the wavelength/energy/frequency band of interest. + A light curve from which to compute the cross spectrum. In some cases, + this would be the light curve of the wavelength/energy/frequency band + of interest. data2: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object - A second light curve to use in the cross spectrum. In some cases, this would be - the wavelength/energy/frequency reference band to compare the band of interest with. + A second light curve to use in the cross spectrum. In some cases, this + would be the wavelength/energy/frequency reference band to compare the + band of interest with. segment_size: float - The size of each segment to average. Note that if the total - duration of each :class:`Lightcurve` object in ``lc1`` or ``lc2`` is not an + The size of each segment to average. Note that if the total duration of + each :class:`Lightcurve` object in ``lc1`` or ``lc2`` is not an integer multiple of the ``segment_size``, then any fraction left-over at the end of the time series will be lost. Otherwise you introduce artifacts. @@ -1727,7 +1745,10 @@ class AveragedCrossspectrum(Crossspectrum): ---------------- gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. dt : float The time resolution of the light curve. 
Only needed when constructing @@ -1767,53 +1788,57 @@ class AveragedCrossspectrum(Crossspectrum): inputs!) use_common_mean: bool - Averaged Cross spectra are normalized in two possible ways: one is by normalizing + Averaged cross spectra are normalized in two possible ways: one is by normalizing each of the single spectra that get averaged, the other is by normalizing after the averaging. If `use_common_mean` is selected, the spectrum will be normalized after the average. legacy: bool - Use the legacy machinery of AveragedCrossspectrum. This might be useful to compare - with old results, and is also needed to use light curve lists as an input, to - conserve the spectra of each segment, or to use the large_data option. + Use the legacy machinery of `AveragedCrossspectrum`. This might be + useful to compare with old results, and is also needed to use light + curve lists as an input, to conserve the spectra of each segment, or + to use the large_data option. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. Attributes ---------- freq: numpy.ndarray - The array of mid-bin frequencies that the Fourier transform samples + The array of mid-bin frequencies that the Fourier transform samples. power: numpy.ndarray - The array of cross spectra + The array of cross spectra. power_err: numpy.ndarray The uncertainties of ``power``. An approximation for each bin given by ``power_err= power/sqrt(m)``. Where ``m`` is the number of power averaged in each bin (by frequency - binning, or averaging powerspectrum). Note that for a single - realization (``m=1``) the error is equal to the power. + binning, or averaging power spectra of segments of a light curve). 
+ Note that for a single realization (``m=1``) the error is equal to the + power. df: float - The frequency resolution + The frequency resolution. m: int - The number of averaged cross spectra + The number of averaged cross spectra. n: int - The number of time bins per segment of light curve + The number of time bins per segment of light curve. nphots1: float - The total number of photons in the first (interest) light curve + The total number of photons in the first (interest) light curve. nphots2: float - The total number of photons in the second (reference) light curve + The total number of photons in the second (reference) light curve. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Good Time intervals. Defaults to the common GTIs from the two input - objects + Good Time intervals. """ def __init__( From c9b0b827813c7accf92f327181f923f3df5c685f Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 18:59:56 -0400 Subject: [PATCH 21/45] Added more detail to some docstrings, improved PEP8 compliance. --- stingray/gti.py | 227 ++++++++++++++++++++++++++---------------------- 1 file changed, 123 insertions(+), 104 deletions(-) diff --git a/stingray/gti.py b/stingray/gti.py index 1ea26aa42..956de537c 100644 --- a/stingray/gti.py +++ b/stingray/gti.py @@ -25,7 +25,7 @@ def gti_len(gti): - """Deprecated. Use get_total_gti_length.""" + """Deprecated, will be removed in version 2.0. Use get_total_gti_length.""" warnings.warn("This function is deprecated. Use get_total_gti_length instead", DeprecationWarning) return get_total_gti_length(gti, minlen=0) @@ -37,12 +37,12 @@ def get_gti_lengths(gti): Parameters ---------- gti : [[gti00, gti01], [gti10, gti11], ...] - The list of good time intervals + The list of good time intervals. Returns ------- lengths : `np.ndarray` - List of GTI lengths + List of GTI lengths. Examples -------- @@ -59,14 +59,14 @@ def get_total_gti_length(gti, minlen=0): Parameters ---------- gti : [[gti00, gti01], [gti10, gti11], ...] 
- The list of good time intervals + The list of good time intervals. minlen : float - Minimum GTI length to consider + Minimum GTI length to consider. Returns ------- length : float - total exposure during GTIs + The total exposure during GTIs. Examples -------- @@ -88,11 +88,11 @@ def load_gtis(fits_file, gtistring=None): Parameters ---------- fits_file : str - File name and path for the fits file with the GTIs to be loaded + File name and path for the FITS file with the GTIs to be loaded. gtistring : str - If the name of the extension with the GTIs is not ``GTI``, the alternative - name can be set with this parameter + If the name of the FITS extension with the GTIs is not ``GTI``, the + alternative name can be set with this parameter. Returns ------- @@ -124,12 +124,12 @@ def get_gti_extensions_from_pattern(lchdulist, name_pattern="GTI"): lchdulist: `:class:astropy.io.fits.HDUList` object The full content of a FITS file. name_pattern: str - Pattern indicating all the GTI extensions + Pattern indicating all the GTI extensions. Returns ------- ext_list: list - List of GTI extension numbers whose name matches the input pattern + List of GTI extension numbers whose name matches the input pattern. Examples -------- @@ -156,7 +156,7 @@ def get_gti_extensions_from_pattern(lchdulist, name_pattern="GTI"): def hdu_contains_gti(hdu): - """Test if a given hdu contains a list of GTIs + """Test if a given FITS HDU contains a list of GTIs. Examples -------- @@ -178,17 +178,17 @@ def hdu_contains_gti(hdu): def get_gti_from_hdu(gtihdu): - """Get the GTIs from a given extension. + """Get the GTIs from a given FITS extension. Parameters ---------- gtihdu: `:class:astropy.io.fits.TableHDU` object - The GTI hdu + The GTI HDU. Returns ------- gti_list: [[gti00, gti01], [gti10, gti11], ...] - List of good time intervals + List of good time intervals. 
Examples -------- @@ -299,13 +299,13 @@ def check_gtis(gti): If GTIs have overlapping or displaced values """ if len(gti) < 1: - raise ValueError("Empty GTIs") + raise ValueError("Empty GTIs.") for g in gti: if np.size(g) != 2 or np.ndim(g) != 1: raise TypeError( - "Please check the formatting of GTIs. They need to be" - " provided as [[gti00, gti01], [gti10, gti11], ...]") + "Please check the formatting of the GTIs. They need to be" + " provided as [[gti00, gti01], [gti10, gti11], ...].") gti = np.array(gti) gti_start = gti[:, 0] @@ -313,12 +313,12 @@ def check_gtis(gti): # Check that GTIs are well-behaved if not np.all(gti_end >= gti_start): - raise ValueError('This GTI end times must be larger than ' - 'GTI start times') + raise ValueError('The GTI end times must be larger than the ' + 'GTI start times.') # Check that there are no overlaps in GTIs if not np.all(gti_start[1:] >= gti_end[:-1]): - raise ValueError('This GTI has overlaps') + raise ValueError('This GTI has overlaps.') return @@ -326,7 +326,7 @@ def check_gtis(gti): @jit(nopython=True) def create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0): # pragma: no cover """ - Compiled and fast function to create gti mask. + Compiled and fast function to create GTI mask. Parameters ---------- @@ -334,18 +334,19 @@ def create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0): # pragma: no An array of time stamps gtis : iterable of ``(start, stop)`` pairs - The list of GTIs + The list of GTIs. mask : numpy.ndarray A pre-assigned array of zeros of the same shape as ``time`` Records whether a time stamp is part of the GTIs. gti_mask : numpy.ndarray - A pre-assigned array zeros in the same shape as ``time``; records start/stop of GTIs + A pre-assigned array of zeros of the same shape as ``time``; records + start/stop of GTIs. min_length : float - An optional minimum length for the GTIs to be applied. Only GTIs longer than ``min_length`` will - be considered when creating the mask.
+ An optional minimum length for the GTIs to be applied. Only GTIs longer + than ``min_length`` will be considered when creating the mask. """ gti_el = -1 @@ -404,18 +405,19 @@ def create_gti_mask(time, gtis, safe_interval=None, min_length=0, If ``True```, return the list of new GTIs (if ``min_length > 0``) dt : float - Time resolution of the data, i.e. the interval between time stamps + Time resolution of the data, i.e. the interval between time stamps. epsilon : float - fraction of ``dt`` that is tolerated at the borders of a GTI + Fraction of ``dt`` that is tolerated at the borders of a GTI. Returns ------- mask : bool array - A mask labelling all time stamps that are included in the GTIs versus those that are not. + A mask labelling all time stamps that are included in the GTIs versus + those that are not. new_gtis : ``Nx2`` array - An array of new GTIs created by this function + An array of new GTIs created by this function. """ gtis = np.array(gtis, dtype=np.longdouble) if time.size == 0: @@ -482,15 +484,15 @@ def create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001): """Create GTI mask, allowing for non-constant ``dt``. - Assumes that no overlaps are present between GTIs + Assumes that no overlaps are present between GTIs. Parameters ---------- time : numpy.ndarray - An array of time stamps + An array of time stamps. gtis : ``[[g0_0, g0_1], [g1_0, g1_1], ...]``, float array-like - The list of GTIs + The list of GTIs. Other parameters @@ -500,25 +502,26 @@ def create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, and the end (if pair of values) of GTIs. min_length : float - An optional minimum length for the GTIs to be applied. Only GTIs longer than ``min_length`` will - be considered when creating the mask. + An optional minimum length for the GTIs to be applied. Only GTIs longer + than ``min_length`` will be considered when creating the mask. 
return_new_gtis : bool - If ``True``, return the list of new GTIs (if ``min_length > 0``) + If ``True``, return the list of new GTIs (if ``min_length > 0``). dt : float - Time resolution of the data, i.e. the interval between time stamps + Time resolution of the data, i.e. the interval between time stamps. epsilon : float - fraction of ``dt`` that is tolerated at the borders of a GTI + Fraction of ``dt`` that is tolerated at the borders of a GTI. Returns ------- mask : bool array - A mask labelling all time stamps that are included in the GTIs versus those that are not. + A mask labelling all time stamps that are included in the GTIs versus + those that are not. new_gtis : Nx2 array - An array of new GTIs created by this function + An array of new GTIs created by this function. """ check_gtis(gtis) @@ -565,7 +568,7 @@ def create_gti_from_condition(time, condition, Parameters ---------- time : array-like - Array containing time stamps + Array containing time stamps. condition : array-like An array of bools, of the same length of time. @@ -574,7 +577,7 @@ def create_gti_from_condition(time, condition, Returns ------- gtis : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` - The newly created GTIs + The newly created GTIs. Other parameters ---------------- @@ -623,7 +626,7 @@ def cross_two_gtis(gti0, gti1): Returns ------- gtis : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` - The newly created GTIs + The newly created GTIs. See Also -------- @@ -718,13 +721,14 @@ def cross_gtis(gti_list): Parameters ---------- gti_list : array-like - List of GTI arrays, each one in the usual format ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTI arrays, each one in the usual format + ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. Returns ------- gti0: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` - The newly created GTIs + The newly created GTIs. 
See Also -------- @@ -769,20 +773,20 @@ def get_btis(gtis, start_time=None, stop_time=None): Parameters ---------- gtis : iterable - A list of GTIs + A list of GTIs. start_time : float - Optional start time of the overall observation (e.g. can be earlier than the first time stamp - in ``gtis``. + Optional start time of the overall observation (e.g. can be earlier + than the first time stamp in ``gtis``). stop_time : float - Optional stop time of the overall observation (e.g. can be later than the last time stamp in - ``gtis``. + Optional stop time of the overall observation (e.g. can be later than + the last time stamp in ``gtis``). Returns ------- btis : numpy.ndarray - A list of bad time intervals + A list of bad time intervals. """ # Check GTIs if len(gtis) == 0: @@ -840,15 +844,15 @@ def check_separate(gti0, gti1): Parameters ---------- gti0: 2-d float array - List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. gti1: 2-d float array - List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. Returns ------- separate: bool - ``True`` if GTIs are mutually exclusive, ``False`` if not + ``True`` if GTIs are mutually exclusive, ``False`` if not. Examples -------- @@ -892,8 +896,8 @@ def check_separate(gti0, gti1): def join_equal_gti_boundaries(gti): - """If the start of a GTI is right at the end of another, join them. - + """ + If the start of a GTI is right at the end of another, join them. """ new_gtis = [] for l in gti: @@ -912,7 +916,8 @@ def join_equal_gti_boundaries(gti): def append_gtis(gti0, gti1): - """Union of two non-overlapping GTIs. + """ + Union of two non-overlapping GTIs. If the two GTIs "touch", this is tolerated and the touching GTIs are joined in a single one.
@@ -920,15 +925,15 @@ Parameters ---------- gti0: 2-d float array - List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. gti1: 2-d float array - List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. Returns ------- gti: 2-d float array - The newly created GTI + The newly created GTI array. Examples -------- @@ -1050,9 +1055,10 @@ def time_intervals_from_gtis(gtis, segment_size, fraction_step=1, Length of the time segments fraction_step : float - If the step is not a full ``segment_size`` but less (e.g. a moving window), - this indicates the ratio between step step and ``segment_size`` (e.g. - 0.5 means that the window shifts of half ``segment_size``) + If the step is not a full ``segment_size`` but less (e.g. a moving + window), this indicates the ratio between the step and + ``segment_size`` (e.g. ``0.5`` means that the window shifts by half + ``segment_size``). Returns ------- @@ -1093,20 +1099,20 @@ def calculate_segment_bin_start(startbin, stopbin, nbin, fraction_step=1): Parameters ---------- startbin : int - Starting bin of the interval + Starting bin of the interval. stopbin : int - Last bin of the interval + Last bin of the interval. nbin : int - number of bins in each interval + Number of bins in each interval. Other Parameters ---------------- fraction_step : float If the step is not a full ``nbin`` but less (e.g. a moving window), this indicates the ratio between the step and ``nbin`` (e.g. - ``0.5`` means that the window shifts of half ``nbin``) + ``0.5`` means that the window shifts by half ``nbin``).
Returns ------- @@ -1134,25 +1140,26 @@ def calculate_segment_bin_start(startbin, stopbin, nbin, fraction_step=1): def bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1, epsilon=0.001): - """Compute start/stop times of equal time intervals, compatible with GTIs, and map them - to the indices of an array of time stamps. + """ + Compute start/stop times of equal time intervals, compatible with GTIs, + and map them to the indices of an array of time stamps. Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI). In this case, it is necessary to specify the time array containing the times of the light curve bins. - Returns start and stop bins of the intervals to use for the PDS + Returns start and stop bins of the intervals to use for the PDS. Parameters ---------- gtis : 2-d float array - List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. segment_size : float - Length of each time segment + Length of each time segment. time : array-like - Array of time stamps + Array of time stamps. Other Parameters ---------------- @@ -1160,12 +1167,14 @@ def bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1, Time resolution of the light curve. epsilon : float, default 0.001 - The tolerance, in fraction of ``dt``, for the comparisons at the borders + The tolerance, in fraction of ``dt``, for the comparisons at the + borders. fraction_step : float - If the step is not a full ``segment_size`` but less (e.g. a moving window), - this indicates the ratio between step step and ``segment_size`` (e.g. - ``0.5`` means that the window shifts of half ``segment_size``) + If the step is not a full ``segment_size`` but less (e.g. a moving + window), this indicates the ratio between the step and + ``segment_size`` (e.g. ``0.5`` means that the window shifts by half + ``segment_size``).
Returns ------- @@ -1237,17 +1246,18 @@ def bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1, def gti_border_bins(gtis, time, dt=None, epsilon=0.001): - """Find the indices in a time array corresponding to the borders of GTIs. + """ + Find the indices in a time array corresponding to the borders of GTIs. GTIs shorter than the bin time are not returned. Parameters ---------- gtis : 2-d float array - List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. time : array-like - Array of time stamps + Array of time stamps. Other Parameters ---------------- @@ -1255,12 +1265,14 @@ def gti_border_bins(gtis, time, dt=None, epsilon=0.001): Time resolution of the light curve. epsilon : float, default 0.001 - The tolerance, in fraction of ``dt``, for the comparisons at the borders + The tolerance, in fraction of ``dt``, for the comparisons at the + borders. fraction_step : float - If the step is not a full ``segment_size`` but less (e.g. a moving window), - this indicates the ratio between step step and ``segment_size`` (e.g. - ``0.5`` means that the window shifts of half ``segment_size``) + If the step is not a full ``segment_size`` but less (e.g. a moving + window), this indicates the ratio between the step and + ``segment_size`` (e.g. ``0.5`` means that the window shifts by half + ``segment_size``). Returns ------- @@ -1328,13 +1340,15 @@ def gti_border_bins(gtis, time, dt=None, epsilon=0.001): def generate_indices_of_boundaries(times, gti, segment_size=None, dt=0): - """Get index boundaries and times from different parts of the observation. + """ + Get index boundaries and times from different parts of the observation.
It wraps around `generate_indices_of_gti_boundaries`, `generate_indices_of_segment_boundaries_binned`, and `generate_indices_of_segment_boundaries_unbinned` depending on: - + ``segment_size`` being ``None`` (give GTI boundaries, segment boundaries otherwise) + + ``segment_size`` being ``None`` (give GTI boundaries, segment boundaries + otherwise) + ``dt`` being 0 or nonzero (unevenly sampled, evenly sampled otherwise) Examples @@ -1369,17 +1383,18 @@ def generate_indices_of_boundaries(times, gti, segment_size=None, dt=0): def generate_indices_of_gti_boundaries(times, gti, dt=0): - """Get the indices of events from different GTIs of the observation. + """ + Get the indices of events from different GTIs of the observation. This is a generator, yielding the boundaries of each GTI and the - corresponding indices in the time array + corresponding indices in the time array. Parameters ---------- times : float `np.array` - Array of times + Array of times. gti : [[gti00, gti01], [gti10, gti11], ...] - good time intervals + Good time intervals. Other parameters ---------------- @@ -1389,11 +1404,11 @@ def generate_indices_of_gti_boundaries(times, gti, dt=0): Yields ------- g0: float - Start time of current GTI + Start time of current GTI. g1: float - End time of current GTI + End time of current GTI. startidx: int - Start index of the current GTI in the time array + Start index of the current GTI in the time array. stopidx: int End index of the current GTI in the time array. Note that this is larger by one, so that `time[startidx:stopidx]` returns the correct @@ -1422,25 +1437,25 @@ def generate_indices_of_segment_boundaries_unbinned(times, gti, segment_size): """Get the indices of events from different segments of the observation. This is a generator, yielding the boundaries of each segment and the - corresponding indices in the time array + corresponding indices in the time array. Parameters ---------- times : float `np.array` - Array of times + Array of times. 
gti : [[gti00, gti01], [gti10, gti11], ...] - good time intervals + Good time intervals. segment_size : float - length of segments + Length of segments. Yields ------- t0: float - Start time of current segment + Start time of current segment. t1: float - End time of current segment + End time of current segment. startidx: int - Start index of the current segment in the time array + Start index of the current segment in the time array. stopidx: int End index of the current segment in the time array. Note that this is larger by one, so that `time[startidx:stopidx]` returns the correct @@ -1476,8 +1491,10 @@ def generate_indices_of_segment_boundaries_unbinned(times, gti, segment_size): yield s, e, idx0, idx1 -def generate_indices_of_segment_boundaries_binned(times, gti, segment_size, dt=None): - """Get the indices of binned times from different segments of the observation. +def generate_indices_of_segment_boundaries_binned(times, gti, segment_size, + dt=None): + """ + Get the indices of binned times from different segments of the observation. This is a generator, yielding the boundaries of each segment and the corresponding indices in the time array @@ -1517,9 +1534,11 @@ def generate_indices_of_segment_boundaries_binned(times, gti, segment_size, dt=N """ gti = np.asarray(gti) times = np.asarray(times) - startidx, stopidx = bin_intervals_from_gtis(gti, segment_size, times, dt=dt) + startidx, stopidx = bin_intervals_from_gtis(gti, segment_size, times, + dt=dt) if dt is None: dt = 0 for idx0, idx1 in zip(startidx, stopidx): - yield times[idx0] - dt / 2, times[min(idx1, times.size - 1)] - dt / 2, idx0, idx1 + yield times[idx0] - dt / 2, times[min(idx1, times.size - 1)] - dt / 2,\ + idx0, idx1 From 2d9229f497312bb5f6935d6de819207465570b89 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 19:02:04 -0400 Subject: [PATCH 22/45] Tried to keep docstrings consistent in formatting. 
--- stingray/gti.py | 46 ++++++++++++++++++++++++++++------------------ 1 file changed, 28 insertions(+), 18 deletions(-) diff --git a/stingray/gti.py b/stingray/gti.py index 956de537c..de237592d 100644 --- a/stingray/gti.py +++ b/stingray/gti.py @@ -32,7 +32,8 @@ def gti_len(gti): def get_gti_lengths(gti): - """Calculate the length of each Good Time Interval. + """ + Calculate the length of each Good Time Interval. Parameters ---------- @@ -54,7 +55,8 @@ def get_gti_lengths(gti): def get_total_gti_length(gti, minlen=0): - """Calculate the total exposure during Good Time Intervals. + """ + Calculate the total exposure during Good Time Intervals. Parameters ---------- @@ -98,8 +100,6 @@ def load_gtis(fits_file, gtistring=None): ------- gti_list : list A list of GTI ``(start, stop)`` pairs extracted from the FITS file. - - """ gtistring = assign_value_if_none(gtistring, 'GTI') @@ -117,7 +117,8 @@ def load_gtis(fits_file, gtistring=None): def get_gti_extensions_from_pattern(lchdulist, name_pattern="GTI"): - """Gets the GTI extensions that match a given pattern. + """ + Get the GTI extensions that match a given pattern. Parameters ---------- @@ -156,7 +157,8 @@ def get_gti_extensions_from_pattern(lchdulist, name_pattern="GTI"): def hdu_contains_gti(hdu): - """Test if a given FITS HDU contains a list of GTIs. + """ + Test if a given FITS HDU contains a list of GTIs. Examples -------- @@ -178,7 +180,8 @@ def hdu_contains_gti(hdu): def get_gti_from_hdu(gtihdu): - """Get the GTIs from a given FITS extension. + """ + Get the GTIs from a given FITS extension. Parameters ---------- @@ -218,10 +221,10 @@ def get_gti_from_hdu(gtihdu): return gti_list -def get_gti_from_all_extensions( - lchdulist, accepted_gtistrings=["GTI"], det_numbers=None -): - """Intersect the GTIs from the all accepted extensions. +def get_gti_from_all_extensions(lchdulist, accepted_gtistrings=["GTI"], + det_numbers=None): + """ + Intersect the GTIs from all the accepted extensions.
Parameters ---------- @@ -278,7 +281,8 @@ def get_gti_from_all_extensions( def check_gtis(gti): - """Check if GTIs are well-behaved. + """ + Check if GTIs are well-behaved. Check that: @@ -377,7 +381,8 @@ def create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0): # pragma: no def create_gti_mask(time, gtis, safe_interval=None, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001): - """Create GTI mask. + """ + Create GTI mask. Assumes that no overlaps are present between GTIs @@ -482,7 +487,8 @@ def create_gti_mask(time, gtis, safe_interval=None, min_length=0, def create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001): - """Create GTI mask, allowing for non-constant ``dt``. + """ + Create GTI mask, allowing for non-constant ``dt``. Assumes that no overlaps are present between GTIs. @@ -563,7 +569,8 @@ def create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, def create_gti_from_condition(time, condition, safe_interval=0, dt=None): - """Create a GTI list from a time array and a boolean mask (``condition``). + """ + Create a GTI list from a time array and a boolean mask (``condition``). Parameters ---------- @@ -615,7 +622,8 @@ def create_gti_from_condition(time, condition, def cross_two_gtis(gti0, gti1): - """Extract the common intervals from two GTI lists *EXACTLY*. + """ + Extract the common intervals from two GTI lists *EXACTLY*. Parameters ---------- @@ -963,7 +971,8 @@ def append_gtis(gti0, gti1): def join_gtis(gti0, gti1): - """Union of two GTIs. + """ + Union of two GTIs. If GTIs are mutually exclusive, it calls ``append_gtis``. Otherwise we put the extremes of partially overlapping GTIs on an ideal line and look at the @@ -1041,7 +1050,8 @@ def join_gtis(gti0, gti1): def time_intervals_from_gtis(gtis, segment_size, fraction_step=1, epsilon=1e-5): - """Compute start/stop times of equal time intervals, compatible with GTIs. 
+ """ + Compute start/stop times of equal time intervals, compatible with GTIs. Used to start each FFT/PDS/cospectrum from the start of a GTI, and stop before the next gap in data (end of GTI). From 5a7ad4a0bc1f907e24b06b5e027654e2c34a787a Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 19:09:24 -0400 Subject: [PATCH 23/45] Enforced linebreaks at 79 char where made sense. Capitalized Stingray. --- CONTRIBUTING.md | 31 ++++++++++++++++++++++++------- 1 file changed, 24 insertions(+), 7 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 02ba12dfc..b7bdde339 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -4,21 +4,33 @@ > All great things have small beginnings. -Hello there! We love and appreciate every small contribution you can make to improve stingray! -We are proudly open source and believe our(yes! yours as well) work will help enhance the quality of research around the world. We want to make contributing to stingray as easy and transparent as possible, whether it's: +Hello there! We love and appreciate every small contribution you can make to +improve Stingray! We are proudly open source and believe our(yes! yours as +well) work will help enhance the quality of research around the world. We +want to make contributing to stingray as easy and transparent as possible, +whether it's: - Reporting a bug - Discussing the current state of the code - Submitting a fix - Proposing new features -A successful project is not just built by amazing programmers but by the combined, unrelenting efforts of coders, testers, reviewers, and documentation writers. There are a few guidelines that we need all contributors to follow so that we can have a chance of keeping on top of things. +A successful project is not just built by amazing programmers but by the +combined, unrelenting efforts of coders, testers, reviewers, and documentation +writers. 
There are a few guidelines that we need all contributors to follow so +that we can have a chance of keeping on top of things. ## Contribution Guidelines --- -Contributions from everyone, experienced and inexperienced, are welcome! If you don't know where to start, look at the [Open Issues](https://github.com/StingraySoftware/stingray/issues) and/or get involved in our [Slack channel](http://slack-invite.timelabtechnologies.com/) . This code is written in Python 3.8+, but in general we will follow the Astropy/Numpy minimum Python versions. Tests run at each commit during Pull Requests, so it is easy to single out points in the code that break this compatibility. +Contributions from everyone, experienced and inexperienced, are welcome! If +you don't know where to start, look at the +[Open Issues](https://github.com/StingraySoftware/stingray/issues) and/or get +involved in our [Slack channel](http://slack-invite.timelabtechnologies.com/). +This code is written in Python 3.8+, but in general we will follow the +Astropy/Numpy minimum Python versions. Tests run at each commit during Pull +Requests, so it is easy to single out points in the code that break this +compatibility. - **Branches:** - Don't use your main **branch (forked) for anything. Consider deleting your main** branch. @@ -36,7 +48,10 @@ Contributions from everyone, experienced and inexperienced, are welcome! If you ### Contribution Workflow -These, conceptually, are the steps you will follow in contributing to stingray. These steps keep work well organized, with readable history. This in turn makes it easier for project maintainers (that might be you) to see what you’ve done, and why you did it: +These, conceptually, are the steps you will follow in contributing to +Stingray. These steps keep work well organized, with readable history. This in +turn makes it easier for project maintainers (that might be you) to see what +you’ve done, and why you did it: 1.
Regularly fetch latest stingray development version `stingray/main` from GitHub. 2. Make a new feature branch. **Recommended:** Use virtual environments to work on branch. @@ -118,7 +133,9 @@ Code Reviews are super-useful: another contributor can review the code, which me --- -The testing framework used by stingray is the pytest framework with tox. To run the tests, you will need to make sure you have the pytest package (version 3.1 or later) as well as the tox tool installed. +The testing framework used by stingray is the pytest framework with tox. To +run the tests, you will need to make sure you have the pytest package (version +3.1 or later) as well as the tox tool installed. - Execute tests using the ```tox -e ``` command. - All tests should be py.test compliant: [http://pytest.org/latest/](http://pytest.org/latest/). @@ -130,4 +147,4 @@ The testing framework used by stingray is the pytest framework with tox. To run --- -- Please refer [CODE_OF_CONDUCT.md](https://github.com/StingraySoftware/stingray/blob/master/CODE_OF_CONDUCT.md) to read about the Community Guidelines +- Please refer to [CODE_OF_CONDUCT.md](https://github.com/StingraySoftware/stingray/blob/master/CODE_OF_CONDUCT.md) to read about the Community Guidelines From 6ae187d7e721e1624f075635fd333d6d3be1679b Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 19:13:40 -0400 Subject: [PATCH 24/45] Improved pep8 compliance except for the examples in the docstrings that were over 79 char. 
--- stingray/gti.py | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/stingray/gti.py b/stingray/gti.py index de237592d..fcbf479d6 100644 --- a/stingray/gti.py +++ b/stingray/gti.py @@ -19,15 +19,18 @@ 'cross_two_gtis', 'cross_gtis', 'get_btis', 'get_gti_extensions_from_pattern', 'get_gti_from_all_extensions', 'get_gti_from_hdu', 'get_gti_lengths', 'get_total_gti_length', - 'check_separate', 'append_gtis', 'join_gtis', 'generate_indices_of_gti_boundaries', + 'check_separate', 'append_gtis', 'join_gtis', + 'generate_indices_of_gti_boundaries', 'time_intervals_from_gtis', 'bin_intervals_from_gtis', - 'gti_border_bins', 'generate_indices_of_segment_boundaries_unbinned', 'generate_indices_of_segment_boundaries_binned'] + 'gti_border_bins', + 'generate_indices_of_segment_boundaries_unbinned', + 'generate_indices_of_segment_boundaries_binned'] def gti_len(gti): """Deprecated, will be removed in version 2.0. Use get_total_gti_length.""" - warnings.warn("This function is deprecated. Use get_total_gti_length instead", - DeprecationWarning) + warnings.warn("This function is deprecated. Use get_total_gti_length "\ + "instead", DeprecationWarning) return get_total_gti_length(gti, minlen=0) @@ -328,7 +331,8 @@ def check_gtis(gti): @jit(nopython=True) -def create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0): # pragma: no cover +def create_gti_mask_jit(time, gtis, mask, gti_mask, + min_length=0): # pragma: no cover """ Compiled and fast function to create GTI mask. @@ -1444,7 +1448,8 @@ def generate_indices_of_gti_boundaries(times, gti, dt=0): def generate_indices_of_segment_boundaries_unbinned(times, gti, segment_size): - """Get the indices of events from different segments of the observation. + """ + Get the indices of events from different segments of the observation. This is a generator, yielding the boundaries of each segment and the corresponding indices in the time array. 
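Reviewer note on the GTI patch above: it reflows `create_gti_mask_jit` and the other GTI helpers for line length. For readers unfamiliar with the underlying operation, here is a minimal pure-NumPy sketch of what a GTI mask does. This is a simplified illustration, not the library's Numba-compiled implementation; the `gti_mask` and `min_length` arguments visible in the hunk (per-GTI bookkeeping and minimum-length filtering) are omitted:

```python
import numpy as np

def simple_gti_mask(time, gtis):
    """Return a boolean mask that is True where each time stamp
    falls inside any Good Time Interval [start, stop)."""
    time = np.asarray(time)
    mask = np.zeros(time.size, dtype=bool)
    for start, stop in gtis:
        # A sample is "good" if it lies in at least one interval
        mask |= (time >= start) & (time < stop)
    return mask

time = np.arange(10)          # 0, 1, ..., 9
gtis = [[0, 3], [5, 8]]       # two good intervals
mask = simple_gti_mask(time, gtis)
print(time[mask])             # -> [0 1 2 5 6 7]
```

Each time stamp is kept only if it falls inside some good interval; the compiled version in the patch additionally tracks which GTI each sample belongs to, which is why it takes preallocated `mask` and `gti_mask` arrays.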
From 21b4df3770b69fca51a65daaa95d28f9c75cd2ac Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sat, 26 Mar 2022 19:50:47 -0400 Subject: [PATCH 25/45] Made docstrings more consistent with one another (like giving the same gti example, or referring to averaged power spectra vs AveragedPowerspectrum, and fixed pep8 issues. --- stingray/powerspectrum.py | 215 ++++++++++++++++++++------------------ 1 file changed, 115 insertions(+), 100 deletions(-) diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py index d164f62f7..df516ad13 100755 --- a/stingray/powerspectrum.py +++ b/stingray/powerspectrum.py @@ -34,11 +34,11 @@ def show_progress(a, **kwargs): class Powerspectrum(Crossspectrum): type = "powerspectrum" """ - Make a :class:`Powerspectrum` (also called periodogram) from a (binned) - light curve. Periodograms can be normalized by either Leahy normalization, + Make a :class:`Powerspectrum` (also called periodogram) from a (binned) + light curve. Periodograms can be normalized by either Leahy normalization, fractional rms normalization, absolute rms normalization, or not at all. - You can also make an empty :class:`Powerspectrum` object to populate with + You can also make an empty :class:`Powerspectrum` object to populate with your own fourier-transformed data (this can sometimes be useful when making binned power spectra). @@ -47,18 +47,18 @@ class Powerspectrum(Crossspectrum): data: :class:`stingray.Lightcurve` object, optional, default ``None`` The light curve data to be Fourier-transformed. - norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }, optional, default ``frac`` + norm: {"leahy" | "frac" | "abs" | "none" }, optional, default "frac" The normaliation of the power spectrum to be used. Options are - ``leahy``, ``frac``, ``abs`` and ``none``, default is ``frac``. + "leahy", "frac", "abs" and "none", default is "frac". Other Parameters ---------------- gti: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals. 
This choice overrides the GTIs in the single light curves. Use with - care, especially if these GTIs have overlaps with the input - `Lightcurve` GTIs! If you're getting errors regarding your GTIs, don't - use this and only give GTIs to the `Lightcurve` object before making + care, especially if these GTIs have overlaps with the input + object GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the input object before making the power spectrum. skip_checks: bool @@ -67,7 +67,7 @@ class Powerspectrum(Crossspectrum): Attributes ---------- - norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` } + norm: {"leahy" | "frac" | "abs" | "none" } The normalization of the power spectrum. freq: numpy.ndarray @@ -81,8 +81,8 @@ class Powerspectrum(Crossspectrum): The uncertainties of ``power``. An approximation for each bin given by ``power_err= power/sqrt(m)``. Where ``m`` is the number of power averaged in each bin (by frequency - binning, or averaging power spectra of segments of a light curve). - Note that for a single realization (``m=1``) the error is equal to the + binning, or averaging power spectra of segments of a light curve). + Note that for a single realization (``m=1``) the error is equal to the power. df: float @@ -98,10 +98,10 @@ class Powerspectrum(Crossspectrum): The total number of photons in the light curve. legacy: bool - Use the legacy machinery of `AveragedPowerspectrum`. This might be - useful to compare with old results, and is also needed to use light - curve lists as an input, to conserve the spectra of each segment, or to - use the large_data option. + Use the legacy machinery of ``AveragedPowerspectrum``. This might be + useful to compare with old results, and is also needed to use light + curve lists as an input, to conserve the spectra of each segment, or + to use the large_data option. 
""" def __init__(self, data=None, norm="frac", gti=None, @@ -136,8 +136,8 @@ def __init__(self, data=None, norm="frac", gti=None, if not legacy and data is not None: return self._initialize_from_any_input(data, dt=dt, norm=norm) - Crossspectrum.__init__(self, data1=data, data2=data, norm=norm, gti=gti, - dt=dt, skip_checks=True, legacy=legacy) + Crossspectrum.__init__(self, data1=data, data2=data, norm=norm, + gti=gti, dt=dt, skip_checks=True, legacy=legacy) self.nphots = self.nphots1 self.dt = dt @@ -148,12 +148,13 @@ def rebin(self, df=None, f=None, method="mean"): Parameters ---------- df: float - The new frequency resolution + The new frequency resolution. Other Parameters ---------------- f: float - the rebin factor. If specified, it substitutes ``df`` with ``f*self.df`` + The rebin factor. If specified, it substitutes ``df`` with + ``f*self.df``, so ``f>1`` is recommended. Returns ------- @@ -264,10 +265,11 @@ def classical_significances(self, threshold=1, trial_correction=False): 1. The power spectrum is Leahy-normalized 2. There is no source of variability in the data other than the - periodic signal to be determined with this method. This is important! - If there are other sources of (aperiodic) variability in the data, this - method will *not* produce correct results, but instead produce a large - number of spurious false positive detections! + periodic signal to be determined with this method. This is + important! If there are other sources of (aperiodic) variability in + the data, this method will *not* produce correct results, but + instead produce a large number of spurious false positive + detections! 3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time) @@ -287,18 +289,18 @@ def classical_significances(self, threshold=1, trial_correction=False): Default is ``1`` (all p-values will be reported). 
trial_correction : bool, optional, default ``False`` - A Boolean flag that sets whether the ``threshold`` will be corrected - by the number of frequencies before being applied. This decreases - the ``threshold`` (p-values need to be lower to count as significant). - Default is ``False`` (report all powers) though for any application - where `threshold`` is set to something meaningful, this should also - be applied! + A Boolean flag that sets whether the ``threshold`` will be + corrected by the number of frequencies before being applied. This + decreases the ``threshold`` (p-values need to be lower to count as + significant). Default is ``False`` (report all powers) though for + any application where `threshold`` is set to something meaningful, + this should also be applied! Returns ------- pvals : iterable - A list of ``(p-value, index)`` tuples for all powers that have p-values - lower than the threshold specified in ``threshold``. + A list of ``(p-value, index)`` tuples for all powers that have + p-values lower than the threshold specified in ``threshold``. """ if not self.norm == "leahy": @@ -312,7 +314,8 @@ def classical_significances(self, threshold=1, trial_correction=False): if np.size(self.m) == 1: # calculate p-values for all powers - # leave out zeroth power since it just encodes the number of photons! + # leave out zeroth power since it just encodes the number of + # photons! pv = pds_probability(self.power, n_summed_spectra=self.m, ntrial=ntrial) else: @@ -346,7 +349,7 @@ def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95): See `stingray.stats.power_upper_limit`, `stingray.stats.amplitude_upper_limit` for more information. - + The formula used to calculate the upper limit assumes the Leahy normalization. 
If the periodogram is in another normalization, we will internally @@ -413,22 +416,23 @@ def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95): def from_time_array(times, dt, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): """ - Calculate `AveragedPowerspectrum` from an array of event times. + Calculate an average power spectrum from an array of event times. Parameters ---------- times : `np.array` - Event arrival times + Event arrival times. dt : float The time resolution of the intermediate light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float The length, in seconds, of the light curve segments that will be - averaged. Only relevant (and required) for `AveragedPowerspectrum`. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + averaged. Only relevant (and required) for + ``AveragedPowerspectrum``. + gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -455,22 +459,23 @@ def from_time_array(times, dt, segment_size=None, gti=None, norm="frac", def from_events(events, dt, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): """ - Calculate `AveragedPowerspectrum` from an event list. + Calculate an average power spectrum from an event list. Parameters ---------- events : `stingray.EventList` - Event list to be analyzed + Event list to be analyzed. dt : float The time resolution of the intermediate light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float The length, in seconds, of the light curve segments that will be - averaged. Only relevant (and required) for `AveragedPowerspectrum`. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + averaged. 
Only relevant (and required) for + ``AveragedPowerspectrum``. + gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -496,22 +501,24 @@ def from_events(events, dt, segment_size=None, gti=None, norm="frac", @staticmethod def from_lightcurve(lc, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): - """Calculate AveragedPowerspectrum from a light curve + """ + Calculate ``AveragedPowerspectrum`` from a light curve. Parameters ---------- events : `stingray.Lightcurve` - Light curve to be analyzed + Light curve to be analyzed. dt : float The time resolution of the intermediate light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float The length, in seconds, of the light curve segments that will be - averaged. Only relevant (and required) for `AveragedPowerspectrum`. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + averaged. Only relevant (and required) for + ``AveragedPowerspectrum``. + gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -538,7 +545,8 @@ def from_lightcurve(lc, segment_size=None, gti=None, norm="frac", def from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): """ - Calculate `AveragedCrossspectrum` from two light curves. + Calculate the average power spectrum of an iterable collection of + light curves. 
Parameters ---------- @@ -552,8 +560,9 @@ def from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm="frac", ---------------- segment_size : float The length, in seconds, of the light curve segments that will be - averaged. Only relevant (and required) for `AveragedPowerspectrum`. - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] + averaged. Only relevant (and required) for + ``AveragedPowerspectrum``. + gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -652,27 +661,27 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): Parameters ---------- - data: :class:`stingray.Lightcurve`object OR iterable of + data: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object The light curve data to be Fourier-transformed. segment_size: float The size of each segment to average. Note that if the total - duration of each :class:`Lightcurve` object in lc is not an integer multiple - of the ``segment_size``, then any fraction left-over at the end of the - time series will be lost. + duration of each :class:`Lightcurve` object in lc is not an integer + multiple of the ``segment_size``, then any fraction left-over at the + end of the time series will be lost. - norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }, optional, default ``frac`` - The normaliation of the periodogram to be used. + norm: {"leahy" | "frac" | "abs" | "none" }, optional, default "frac" + The normalization of the periodogram to be used. Other Parameters ---------------- gti: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals. This choice overrides the GTIs in the single light curves. Use with - care, especially if these GTIs have overlaps with the input - `Lightcurve` GTIs! 
If you're getting errors regarding your GTIs, don't - use this and only give GTIs to the `Lightcurve` object before making + care, especially if these GTIs have overlaps with the input + object GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the input object before making the power spectrum. silent : bool, default False @@ -684,7 +693,7 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): light curves in the case where data is of :class:EventList. large_data : bool, default False - Use only for data larger than 10**7 data points!! Uses zarr and dask + Use only for data larger than 10**7 data points!! Uses zarr and dask for computation. save_all : bool, default False @@ -712,8 +721,8 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): The uncertainties of ``power``. An approximation for each bin given by ``power_err= power/sqrt(m)``. Where ``m`` is the number of power averaged in each bin (by frequency - binning, or averaging power spectra of segments of a light curve). - Note that for a single realization (``m=1``) the error is equal to the + binning, or averaging power spectra of segments of a light curve). + Note that for a single realization (``m=1``) the error is equal to the power. df: float @@ -729,8 +738,8 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): The total number of photons in the light curve. legacy: bool - Use the legacy machinery of `AveragedPowerspectrum`. This might be - useful to compare with old results, and is also needed to use light + Use the legacy machinery of `AveragedPowerspectrum`. This might be + useful to compare with old results, and is also needed to use light curve lists as an input, to conserve the spectra of each segment, or to use the large_data option. """ @@ -783,8 +792,9 @@ def __init__(self, data=None, segment_size=None, norm="frac", gti=None, # The large_data option requires the legacy interface. 
if (large_data or save_all) and not legacy: - warnings.warn("The large_data option and the save_all options are only" - "available with the legacy interface (legacy=True).") + warnings.warn("The large_data option and the save_all options are" + " only available with the legacy interface" + " (legacy=True).") legacy = True if not legacy and data is not None: @@ -918,10 +928,10 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): This class will divide a :class:`Lightcurve` object into segments of length ``segment_size``, create a power spectrum for each segment and store - all powers in a matrix as a function of both time (using the mid-point of + all powers in a matrix as a function of both time (using the mid-point of each segment) and frequency. - This is often used to trace changes in period of a (quasi-)periodic signal + This is often used to trace changes in period of a (quasi-)periodic signal over time. Parameters @@ -931,10 +941,10 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): to be calculated. segment_size : float, default 1 - Length of the segment of light curve, default value is 1 (in whatever + Length of the segment of light curve, default value is 1 (in whatever units the ``time`` array in the :class:`Lightcurve`` object uses). - norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }, optional, default ``frac`` + norm: {"leahy" | "frac" | "abs" | "none" }, optional, default "frac" The normaliation of the periodogram to be used. Other Parameters @@ -942,16 +952,16 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): gti: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals. This choice overrides the GTIs in the single light curves. Use with - care, especially if these GTIs have overlaps with the input - `Lightcurve` GTIs! 
If you're getting errors regarding your GTIs, don't - use this and only give GTIs to the `Lightcurve` object before making + care, especially if these GTIs have overlaps with the input + object GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the input object before making the power spectrum. Attributes ---------- segment_size: float The size of each segment to average. Note that if the total - duration of each Lightcurve object in lc is not an integer multiple + duration of each input object in lc is not an integer multiple of the ``segment_size``, then any fraction left-over at the end of the time series will be lost. @@ -964,13 +974,13 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): The normalization of the periodogram. freq: numpy.ndarray - The array of mid-bin frequencies that the Fourier transform samples + The array of mid-bin frequencies that the Fourier transform samples. df: float - The frequency resolution + The frequency resolution. dt: float - The time resolution + The time resolution. """ def __init__(self, lc, segment_size, norm="frac", gti=None, dt=None): @@ -1035,8 +1045,9 @@ def _make_matrix(self, lc): def rebin_frequency(self, df_new, method="sum"): """ - Rebin the Dynamic Power Spectrum to a new frequency resolution. Rebinning is - an in-place operation, i.e. will replace the existing ``dyn_ps`` attribute. + Rebin the Dynamic Power Spectrum to a new frequency resolution. + Rebinning is an in-place operation, i.e. will replace the existing + ``dyn_ps`` attribute. While the new resolution need not be an integer multiple of the previous frequency resolution, be aware that if it is not, the last @@ -1049,7 +1060,7 @@ def rebin_frequency(self, df_new, method="sum"): Must be larger than the frequency resolution of the old dynamical power spectrum! 
- method: {``sum`` | ``mean`` | ``average``}, optional, default ``sum`` + method: {"sum" | "mean" | "average"}, optional, default "sum" This keyword argument sets whether the counts in the new bins should be summed or averaged. """ @@ -1117,7 +1128,6 @@ def rebin_time(self, dt_new, method='sum'): This keyword argument sets whether the counts in the new bins should be summed or averaged. - Returns ------- time_new: numpy.ndarray @@ -1149,22 +1159,22 @@ def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): """ - Calculate `AveragedPowerspectrum` from an array of event times. + Calculate a power spectrum from an array of event times. Parameters ---------- times : `np.array` - Event arrival times + Event arrival times. dt : float The time resolution of the intermediate light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float The length, in seconds, of the light curve segments that will be - averaged. - gti : [[gti0, gti1], ...] + averaged. Only required (and used) for ``AveragedPowerspectrum``. + gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -1200,12 +1210,13 @@ def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, def powerspectrum_from_events(events, dt, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): - """Calculate AveragedPowerspectrum from an event list + """ + Calculate a power spectrum from an event list. Parameters ---------- events : `stingray.EventList` - Event list to be analyzed + Event list to be analyzed. 
dt : float The time resolution of the intermediate light curves (sets the Nyquist frequency) @@ -1214,8 +1225,8 @@ def powerspectrum_from_events(events, dt, segment_size=None, gti=None, ---------------- segment_size : float The length, in seconds, of the light curve segments that will be - averaged. - gti : [[gti0, gti1], ...] + averaged. Only required (and used) for ``AveragedPowerspectrum``. + gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -1248,7 +1259,7 @@ def powerspectrum_from_events(events, dt, segment_size=None, gti=None, def powerspectrum_from_lightcurve(lc, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): """ - Calculate AveragedPowerspectrum from a light curve + Calculate a power spectrum from a light curve Parameters ---------- @@ -1262,8 +1273,8 @@ def powerspectrum_from_lightcurve(lc, segment_size=None, gti=None, norm="frac", ---------------- segment_size : float The length, in seconds, of the light curve segments that will be - averaged. - gti : [[gti0, gti1], ...] + averaged. Only required (and used) for ``AveragedPowerspectrum``. + gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -1306,15 +1317,17 @@ def powerspectrum_from_lightcurve(lc, segment_size=None, gti=None, norm="frac", def powerspectrum_from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm="frac", silent=False, use_common_mean=True): - """Calculate AveragedCrossspectrum from two light curves + """ + Calculate an average power spectrum from an iterable collection of light + curves. 
Parameters ---------- iter_lc : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves. If arrays, use them as counts + Light curves. If arrays, use them as counts. dt : float The time resolution of the light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- @@ -1322,7 +1335,7 @@ def powerspectrum_from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, The length, in seconds, of the light curve segments that will be averaged. If not ``None``, it will be used to check the segment size of the output. - gti : [[gti0, gti1], ...] + gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` Additional, optional Good Time intervals that get intersected with the GTIs of the input object. Can cause errors if there are overlaps between these GTIs and the input object GTIs. If that @@ -1382,11 +1395,13 @@ def iterate_lc_counts(iter_lc): use_common_mean=use_common_mean, silent=silent ) - return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged) + return _create_powerspectrum_from_result_table(table, + force_averaged=force_averaged) def _create_powerspectrum_from_result_table(table, force_averaged=False): - """Copy the columns and metadata from the results of + """ + Copy the columns and metadata from the results of ``stingray.fourier.avg_pds_from_XX`` functions into `AveragedPowerspectrum` or `Powerspectrum` objects. From c9c40d5f867eeae2bb10e105c49b22134ad2a728 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 11:50:21 -0400 Subject: [PATCH 26/45] Improved PEP8 compliance with 79char lines. 
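Reviewer note: the docstrings revised in the powerspectrum patch above repeatedly refer to the "leahy", "frac", "abs" and "none" normalizations. As a reminder of what the Leahy case means, here is a self-contained sketch (an illustrative helper, not Stingray's code) computing the standard Leahy-normalized periodogram, P = 2|FFT(counts)|^2 / N_phot, from an evenly binned light curve:

```python
import numpy as np

def leahy_periodogram(counts, dt):
    """Leahy-normalized periodogram of an evenly binned light curve:
    P = 2 |FFT(counts)|^2 / N_phot, dropping the zero-frequency bin."""
    counts = np.asarray(counts, dtype=float)
    nphot = counts.sum()
    freq = np.fft.rfftfreq(counts.size, d=dt)
    power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / nphot
    return freq[1:], power[1:]   # skip the DC bin (total photon count)

# A binned light curve with a strong 1 Hz modulation on a constant rate
dt = 0.01
t = np.arange(0, 64, dt)
counts = 100 + 50 * np.sin(2 * np.pi * 1.0 * t)
freq, power = leahy_periodogram(counts, dt)
print(freq[np.argmax(power)])   # -> 1.0 (the injected frequency)
```

In this normalization, pure Poisson noise has powers distributed around 2, which is what makes the `classical_significances` method described above meaningful; "frac" and "abs" rescale the same quantity by the mean rate.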
--- stingray/tests/test_powerspectrum.py | 69 ++++++++++++++++++---------- 1 file changed, 45 insertions(+), 24 deletions(-) diff --git a/stingray/tests/test_powerspectrum.py b/stingray/tests/test_powerspectrum.py index 2a8754e65..aceae9259 100644 --- a/stingray/tests/test_powerspectrum.py +++ b/stingray/tests/test_powerspectrum.py @@ -48,11 +48,12 @@ def setup_class(cls): cls.events = EventList(times, gti=gti) cls.lc = cls.events - cls.leahy_pds = AveragedPowerspectrum( - cls.lc, segment_size=cls.segment_size, dt=cls.dt, norm="leahy", silent=True) + cls.leahy_pds = AveragedPowerspectrum(cls.lc, + segment_size=cls.segment_size, + dt=cls.dt, norm="leahy", + silent=True) - cls.leahy_pds_sng = Powerspectrum( - cls.lc, dt=cls.dt, norm="leahy") + cls.leahy_pds_sng = Powerspectrum(cls.lc, dt=cls.dt, norm="leahy") @pytest.mark.parametrize("norm", ["leahy", "frac", "abs", "none"]) def test_common_mean_gives_comparable_scatter(self, norm): @@ -75,11 +76,13 @@ def test_modulation_upper_limit(self, norm): pds.power[25] = val pds.unnorm_power[25] = unnorm_val pds_norm = pds.to_norm(norm) - assert np.isclose(pds_norm.modulation_upper_limit(2, 5, 0.99), 0.5412103, atol=1e-4) + assert np.isclose(pds_norm.modulation_upper_limit(2, 5, 0.99), + 0.5412103, atol=1e-4) def test_legacy_equivalent(self): leahy_pds = AveragedPowerspectrum( - self.lc, segment_size=self.segment_size, dt=self.dt, norm="leahy", silent=True, legacy=True) + self.lc, segment_size=self.segment_size, dt=self.dt, norm="leahy", + silent=True, legacy=True) for attr in ["power", "unnorm_power"]: assert np.allclose( getattr(leahy_pds, attr), @@ -98,7 +101,8 @@ def test_from_events_works_ps(self): def test_from_events_works_aps(self): pds_ev = AveragedPowerspectrum.from_events( - self.events, segment_size=self.segment_size, dt=self.dt, norm="leahy", silent=True) + self.events, segment_size=self.segment_size, dt=self.dt, + norm="leahy", silent=True) assert np.allclose(self.leahy_pds.power, pds_ev.power) def 
test_from_lc_iter_works(self): @@ -147,8 +151,10 @@ def iter_lc_with_errs(iter_lc): yield lc lccs = AveragedPowerspectrum.from_lc_iterable( - iter_lc_with_errs(self.events.to_lc_iter(self.dt, self.segment_size)), - segment_size=self.segment_size, dt=self.dt, norm='leahy', silent=True) + iter_lc_with_errs(self.events.to_lc_iter(self.dt, + self.segment_size)), + segment_size=self.segment_size, dt=self.dt, norm='leahy', + silent=True) power1 = lccs.power.real power2 = self.leahy_pds.power.real assert np.allclose(power1, power2, rtol=0.01) @@ -163,8 +169,10 @@ def iter_lc_with_errs(iter_lc): yield lc lccs = AveragedPowerspectrum.from_lc_iterable( - iter_lc_with_errs(self.events.to_lc_iter(self.dt, self.segment_size)), - segment_size=self.segment_size, dt=self.dt, norm='leahy', silent=True) + iter_lc_with_errs(self.events.to_lc_iter(self.dt, + self.segment_size)), + segment_size=self.segment_size, dt=self.dt, norm='leahy', + silent=True) power1 = lccs.power.real power2 = self.leahy_pds.power.real assert np.allclose(power1, power2, rtol=0.01) @@ -175,17 +183,21 @@ def iter_lc_counts_only(iter_lc): yield lc.counts lccs = AveragedPowerspectrum.from_lc_iterable( - iter_lc_counts_only(self.events.to_lc_iter(self.dt, self.segment_size)), - segment_size=self.segment_size, dt=self.dt, norm='leahy', silent=True) + iter_lc_counts_only(self.events.to_lc_iter(self.dt, + self.segment_size)), + segment_size=self.segment_size, dt=self.dt, norm='leahy', + silent=True) power1 = lccs.power.real power2 = self.leahy_pds.power.real assert np.allclose(power1, power2, rtol=0.01) def test_from_time_array_works_with_memmap(self): - with fits.open(os.path.join(datadir, "monol_testA.evt"), memmap=True) as hdul: + with fits.open(os.path.join(datadir, "monol_testA.evt"), + memmap=True) as hdul: times = hdul[1].data["TIME"] - gti = np.array([[hdul[2].data["START"][0], hdul[2].data["STOP"][0]]]) + gti = np.array([[hdul[2].data["START"][0], + hdul[2].data["STOP"][0]]]) _ = 
AveragedPowerspectrum.from_time_array( times, segment_size=128, dt=self.dt, gti=gti, norm='none', @@ -199,7 +211,8 @@ def test_from_lc_with_err_works(self, norm): lc, segment_size=self.segment_size, norm=norm, silent=True, gti=lc.gti) pds_ev = AveragedPowerspectrum.from_events( - self.events, segment_size=self.segment_size, dt=self.dt, norm=norm, silent=True, gti=self.events.gti) + self.events, segment_size=self.segment_size, dt=self.dt, norm=norm, + silent=True, gti=self.events.gti) for attr in ["power", "freq", "m", "n", "nphots", "segment_size"]: assert np.allclose(getattr(pds, attr), getattr(pds_ev, attr)) @@ -211,7 +224,8 @@ def test_init_without_segment(self): def test_init_with_nonsense_segment(self, legacy): segment_size = "foo" with pytest.raises(TypeError): - assert AveragedPowerspectrum(self.lc, segment_size, dt=self.dt, legacy=legacy) + assert AveragedPowerspectrum(self.lc, segment_size, dt=self.dt, + legacy=legacy) def test_init_with_none_segment(self): segment_size = None @@ -417,11 +431,13 @@ def test_fractional_rms_in_frac_norm_is_consistent(self): gti=[[0, 100]]) ps = Powerspectrum(lc, norm="leahy") rms_ps_l, rms_err_l = ps.compute_rms(min_freq=ps.freq[1], - max_freq=ps.freq[-1], white_noise_offset=0) + max_freq=ps.freq[-1], + white_noise_offset=0) ps = Powerspectrum(lc, norm="frac") rms_ps, rms_err = ps.compute_rms(min_freq=ps.freq[1], - max_freq=ps.freq[-1], white_noise_offset=0) + max_freq=ps.freq[-1], + white_noise_offset=0) assert np.allclose(rms_ps, rms_ps_l, atol=0.01) assert np.allclose(rms_err, rms_err_l, atol=0.01) @@ -436,11 +452,13 @@ def test_fractional_rms_in_frac_norm_is_consistent_averaged(self): ps = AveragedPowerspectrum(lc, norm="leahy", segment_size=100, silent=True) rms_ps_l, rms_err_l = ps.compute_rms(min_freq=ps.freq[1], - max_freq=ps.freq[-1], white_noise_offset=0) + max_freq=ps.freq[-1], + white_noise_offset=0) ps = AveragedPowerspectrum(lc, norm="frac", segment_size=100) rms_ps, rms_err = 
ps.compute_rms(min_freq=ps.freq[1], - max_freq=ps.freq[-1], white_noise_offset=0) + max_freq=ps.freq[-1], + white_noise_offset=0) assert np.allclose(rms_ps, rms_ps_l, atol=0.01) assert np.allclose(rms_err, rms_err_l, atol=0.01) @@ -980,7 +998,8 @@ def test_rebin_small_dt(self): segment_size = 3 with warnings.catch_warnings(): warnings.simplefilter("ignore", category=UserWarning) - dps = DynamicalPowerspectrum(self.lc_test, segment_size=segment_size) + dps = DynamicalPowerspectrum(self.lc_test, + segment_size=segment_size) with pytest.raises(ValueError): dps.rebin_time(dt_new=2.0) @@ -1028,7 +1047,8 @@ def test_rebin_time_mean_method(self): rebin_dps = np.array([[0.59722222, 0.87301587, 0.21428571]]) with warnings.catch_warnings(): warnings.simplefilter("ignore", category=UserWarning) - dps = DynamicalPowerspectrum(self.lc_test, segment_size=segment_size) + dps = DynamicalPowerspectrum(self.lc_test, + segment_size=segment_size) new_dps = dps.rebin_time(dt_new=dt_new, method='mean') assert np.allclose(new_dps.time, rebin_time) assert np.allclose(new_dps.dyn_ps, rebin_dps) @@ -1060,7 +1080,8 @@ def test_rebin_time_average_method(self): with warnings.catch_warnings(): warnings.simplefilter("ignore", category=UserWarning) - dps = DynamicalPowerspectrum(self.lc_test, segment_size=segment_size) + dps = DynamicalPowerspectrum(self.lc_test, + segment_size=segment_size) new_dps = dps.rebin_time(dt_new=dt_new, method='average') assert np.allclose(new_dps.time, rebin_time) assert np.allclose(new_dps.dyn_ps, rebin_dps) From cc837c88ae0618c260b2215242d06c616b2fceca Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 12:06:46 -0400 Subject: [PATCH 27/45] Changed another line break when looking through trying to figure out why all the tests broke. 
--- stingray/tests/test_powerspectrum.py | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/stingray/tests/test_powerspectrum.py b/stingray/tests/test_powerspectrum.py index aceae9259..c0457a7ac 100644 --- a/stingray/tests/test_powerspectrum.py +++ b/stingray/tests/test_powerspectrum.py @@ -803,10 +803,8 @@ def iter_lc(lc, n): else: i, t = t, t + t0 with pytest.warns(UserWarning) as record: - cs = AveragedPowerspectrum( - iter_lc(self.lc, 1), - segment_size=1, legacy=legacy, - gti=self.lc.gti) + cs = AveragedPowerspectrum(iter_lc(self.lc, 1), segment_size=1, + legacy=legacy, gti=self.lc.gti) message = "The averaged Power spectrum from a generator " assert np.any([message in r.message.args[0] From 7778505b336a18a3de24104e05edad4425cea48e Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 12:08:11 -0400 Subject: [PATCH 28/45] Fixed the one overhanging line break. --- stingray/tests/test_gti.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/stingray/tests/test_gti.py b/stingray/tests/test_gti.py index 550182123..ceb6a318e 100644 --- a/stingray/tests/test_gti.py +++ b/stingray/tests/test_gti.py @@ -211,8 +211,9 @@ def test_join_gtis_nonoverlapping(self): def test_join_gtis_overlapping(self): gti0 = [[0, 1], [2, 3], [4, 8]] gti1 = [[7, 8], [10, 11], [12, 13]] - assert np.all(join_gtis(gti0, gti1) == np.array([[0, 1], [2, 3], [4, 8], - [10, 11], [12, 13]])) + assert np.all(join_gtis(gti0, gti1) == np.array([[0, 1], [2, 3], + [4, 8], [10, 11], + [12, 13]])) def test_time_intervals_from_gtis(self): """Test the division of start and end times to calculate spectra.""" From 66f8db79f1db1e227df5f8d282e1ec5ca4ae228a Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 12:39:27 -0400 Subject: [PATCH 29/45] Removed any fiddlings I'd done aside from the pep8 79-char line stuff. 
--- stingray/tests/test_powerspectrum.py | 19 ++++++++++++------- 1 file changed, 12 insertions(+), 7 deletions(-) diff --git a/stingray/tests/test_powerspectrum.py b/stingray/tests/test_powerspectrum.py index c0457a7ac..6f7a71c93 100644 --- a/stingray/tests/test_powerspectrum.py +++ b/stingray/tests/test_powerspectrum.py @@ -53,7 +53,8 @@ def setup_class(cls): dt=cls.dt, norm="leahy", silent=True) - cls.leahy_pds_sng = Powerspectrum(cls.lc, dt=cls.dt, norm="leahy") + cls.leahy_pds_sng = Powerspectrum( + cls.lc, dt=cls.dt, norm="leahy") @pytest.mark.parametrize("norm", ["leahy", "frac", "abs", "none"]) def test_common_mean_gives_comparable_scatter(self, norm): @@ -80,9 +81,10 @@ def test_modulation_upper_limit(self, norm): 0.5412103, atol=1e-4) def test_legacy_equivalent(self): - leahy_pds = AveragedPowerspectrum( - self.lc, segment_size=self.segment_size, dt=self.dt, norm="leahy", - silent=True, legacy=True) + leahy_pds = AveragedPowerspectrum(self.lc, + segment_size=self.segment_size, + dt=self.dt, norm="leahy", + silent=True, legacy=True) for attr in ["power", "unnorm_power"]: assert np.allclose( getattr(leahy_pds, attr), @@ -147,7 +149,8 @@ def iter_lc_with_errs(iter_lc): # In order for error bars to be considered, # err_dist has to be gauss. 
lc.err_dist = "gauss" - lc._counts_err = np.zeros_like(lc.counts) + lc.counts.mean()**0.5 + lc._counts_err = np.zeros_like(lc.counts) + \ + lc.counts.mean()**0.5 yield lc lccs = AveragedPowerspectrum.from_lc_iterable( @@ -803,8 +806,10 @@ def iter_lc(lc, n): else: i, t = t, t + t0 with pytest.warns(UserWarning) as record: - cs = AveragedPowerspectrum(iter_lc(self.lc, 1), segment_size=1, - legacy=legacy, gti=self.lc.gti) + cs = AveragedPowerspectrum( + iter_lc(self.lc, 1), + segment_size=1, legacy=legacy, + gti=self.lc.gti) message = "The averaged Power spectrum from a generator " assert np.any([message in r.message.args[0] From 8975197b3671224500311e6a370c07e0d364bdf7 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 12:47:49 -0400 Subject: [PATCH 30/45] Put in a TODO comment where counts isn't actually used in the test. --- stingray/tests/test_gti.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/stingray/tests/test_gti.py b/stingray/tests/test_gti.py index ceb6a318e..8ce2b1b8f 100644 --- a/stingray/tests/test_gti.py +++ b/stingray/tests/test_gti.py @@ -249,7 +249,7 @@ def test_bin_intervals_from_gtis_2(self): # Simulate something *clearly* non-constant counts = np.random.poisson( 10000 + 2000 * np.sin(2 * np.pi * times)) - + # TODO: `counts` isn't actually used here. start_bins, stop_bins = bin_intervals_from_gtis(gti, 20, times) assert np.allclose(start_bins, [0, 200, 400, 600, 800]) From e8467a5b473af593c70eff60b9a566b420465573 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 12:48:43 -0400 Subject: [PATCH 31/45] If there's no GTI in powerspectrum_from_events, use the GTI from the input object. 
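The GTI fallback this patch introduces is a simple guard: prefer an explicitly passed `gti`, otherwise use the GTIs carried by the input object. A minimal sketch of the same pattern, using hypothetical stand-in names (`SimpleEvents`, `pds_from_events`) rather than stingray's real API:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SimpleEvents:
    # Hypothetical stand-in for an event-list object carrying its own GTIs.
    time: np.ndarray
    gti: np.ndarray = None


def pds_from_events(events, gti=None):
    # Mirrors the patch: prefer an explicitly passed gti, otherwise
    # fall back to the GTIs stored on the input object.
    if gti is None:
        gti = events.gti
    return gti


ev = SimpleEvents(time=np.arange(5.0), gti=np.array([[0.0, 5.0]]))
print(pds_from_events(ev))                              # falls back to ev.gti
print(pds_from_events(ev, gti=np.array([[1.0, 2.0]])))  # explicit gti wins
```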
--- stingray/powerspectrum.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py index df516ad13..f351e84b5 100755 --- a/stingray/powerspectrum.py +++ b/stingray/powerspectrum.py @@ -1249,7 +1249,8 @@ def powerspectrum_from_events(events, dt, segment_size=None, gti=None, spec : `AveragedPowerspectrum` or `Powerspectrum` The output periodogram. """ - + if gti is None: + gti = events.gti return powerspectrum_from_time_array( events.time, dt, segment_size, gti, norm=norm, silent=silent, use_common_mean=use_common_mean From c605fbea5b70e246bfc759d2388853b27e22dda9 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 12:51:42 -0400 Subject: [PATCH 32/45] If there's no GTI in powerspectrum_from_lightcurve, use the GTI from the input object. --- stingray/powerspectrum.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py index f351e84b5..2d571241e 100755 --- a/stingray/powerspectrum.py +++ b/stingray/powerspectrum.py @@ -536,7 +536,8 @@ def from_lightcurve(lc, segment_size=None, gti=None, norm="frac", silent : bool, default False Silence the progress bars. """ - + if gti is None: + gti = lc.gti return powerspectrum_from_lightcurve( lc, segment_size=segment_size, gti=gti, norm=norm, silent=silent, use_common_mean=use_common_mean) From 4410c0cc27e39cd1f4786fa7583741653d1bb40a Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 13:05:04 -0400 Subject: [PATCH 33/45] Fixed lots of overhangs and strange line breaks for pep8, added periods to sentences in comments. 
--- stingray/fourier.py | 276 +++++++++++++++++++++++++------------------- 1 file changed, 157 insertions(+), 119 deletions(-) diff --git a/stingray/fourier.py b/stingray/fourier.py index 9dc770708..5f2936c61 100644 --- a/stingray/fourier.py +++ b/stingray/fourier.py @@ -11,7 +11,8 @@ def positive_fft_bins(n_bin, include_zero=False): - """Give the range of positive frequencies of a complex FFT. + """ + Give the range of positive frequencies of a complex FFT. This assumes we are using Numpy's FFT, or something compatible with it, like ``pyfftw.interfaces.numpy_fft``, where the positive @@ -81,7 +82,8 @@ def positive_fft_bins(n_bin, include_zero=False): def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): - """Poisson (white)-noise level in a periodogram of pure counting noise. + """ + Poisson (white)-noise level in a periodogram of pure counting noise. For Leahy normalization, this is: .. math:: @@ -103,16 +105,18 @@ def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): Parameters ---------- norm : str, default "frac" - Normalization of the periodogram. One of ["abs", "frac", "leahy", "none"] + Normalization of the periodogram. One of ["abs", "frac", "leahy", + "none"]. Other Parameters ---------------- meanrate : float, default None - Mean count rate in counts/s. Needed for r.m.s. norms (``abs`` and ``frac``) + Mean count rate in counts/s. Needed for r.m.s. norms ("abs" and + "frac"). n_ph : float, default None - Total number of counts in the light curve. Needed if ``norm=="none"`` + Total number of counts in the light curve. Needed if ``norm=="none"``. backrate : float, default 0 - Background count rate in counts/s. Optional for fractional r.m.s. norm + Background count rate in counts/s. Optional for fractional r.m.s. norm. 
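The white-noise levels described in the `poisson_level` docstring are the standard Poisson results: 2 in Leahy normalization, 2·meanrate in absolute rms, 2/meanrate in fractional rms, and n_ph for the unnormalized periodogram. A minimal stand-in illustrating them; this is not stingray's `poisson_level` itself, which also handles a background count rate:

```python
def poisson_noise_level(norm, meanrate=None, n_ph=None):
    # Minimal stand-in for the levels described in the docstring above;
    # stingray's actual poisson_level also accepts a background rate.
    norm = norm.lower()
    if norm == "leahy":
        return 2.0                 # Leahy-normalized white-noise level
    if norm == "abs":
        return 2.0 * meanrate      # absolute-rms normalization
    if norm == "frac":
        return 2.0 / meanrate      # fractional-rms normalization
    if norm == "none":
        return float(n_ph)         # unnormalized periodogram
    raise ValueError(f"Unknown normalization: {norm}")


print(poisson_noise_level("leahy"))                  # 2.0
print(poisson_noise_level("frac", meanrate=100.0))   # 0.02
print(poisson_noise_level("abs", meanrate=100.0))    # 200.0
```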
Raises ------ @@ -154,7 +158,8 @@ def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): bad_input = bad_input or (norm.lower() == "none" and n_ph is None) if bad_input: - raise ValueError(f"Bad input parameters for norm {norm}: n_ph={n_ph}, meanrate={meanrate}") + raise ValueError(f"Bad input parameters for norm {norm}: n_ph={n_ph}, " + f"meanrate={meanrate}") if norm == "abs": return 2. * meanrate @@ -169,7 +174,8 @@ def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): def normalize_frac(unnorm_power, dt, n_bin, mean_flux, background_flux=0): - """Fractional rms normalization. + """ + Fractional rms normalization. ..math:: P = \frac{P_{Leahy}}{\mu} = \frac{2T}{N_{ph}^2}P_{unnorm} @@ -248,7 +254,8 @@ def normalize_frac(unnorm_power, dt, n_bin, mean_flux, background_flux=0): # norm = 2 / (n_ph * meanrate) = 2 * dt / (mean**2 * n_bin) if background_flux > 0: - power = unnorm_power * 2. * dt / ((mean_flux - background_flux) ** 2 * n_bin) + power = unnorm_power * 2. * dt / ((mean_flux - background_flux) ** 2 * + n_bin) else: # Note: this corresponds to eq. 3 in Uttley+14 power = unnorm_power * 2. * dt / (mean_flux ** 2 * n_bin) @@ -256,7 +263,8 @@ def normalize_frac(unnorm_power, dt, n_bin, mean_flux, background_flux=0): def normalize_abs(unnorm_power, dt, n_bin): - """Absolute rms normalization. + """ + Absolute rms normalization. .. math:: P = P_{frac} * \mu^2 @@ -307,7 +315,8 @@ def normalize_abs(unnorm_power, dt, n_bin): def normalize_leahy_from_variance(unnorm_power, variance, n_bin): - """Leahy+83 normalization, from the variance of the lc. + """ + Leahy+83 normalization, from the variance of the lc. .. math:: P = \frac{P_{unnorm}}{N <\delta{x}^2>} @@ -360,7 +369,8 @@ def normalize_leahy_from_variance(unnorm_power, variance, n_bin): def normalize_leahy_poisson(unnorm_power, n_ph): - """Leahy+83 normalization. + """ + Leahy+83 normalization. .. 
math:: P = \frac{2}{N_{ph}} P_{unnorm} @@ -400,8 +410,11 @@ def normalize_leahy_poisson(unnorm_power, n_ph): return unnorm_power * 2. / n_ph -def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, variance=None, background_flux=0., norm="frac", power_type="all"): - """Wrapper around all the normalize_NORM methods. +def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, + variance=None, background_flux=0., norm="frac", + power_type="all"): + """ + Wrapper around all the normalize_NORM methods. Normalize the real part of the cross spectrum to Leahy, absolute rms^2, fractional rms^2 normalization, or not at all. @@ -429,11 +442,11 @@ def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, v the unnormalized periodogram. Only relevant for Leahy normalization. variance: float - The average variance of the measurements in light curve (if a cross spectrum, - the geometrical mean of the variances in the two channels). **NOT** the - variance of the light curve, but of each flux measurement (square of - light curve error bar)! Only relevant for the Leahy normalization of - non-Poissonian data. + The average variance of the measurements in light curve (if a cross + spectrum, the geometrical mean of the variances in the two channels). + **NOT** the variance of the light curve, but of each flux measurement + (square of light curve error bar)! Only relevant for the Leahy + normalization of non-Poissonian data. norm : str One of ``leahy`` (Leahy+83), ``frac`` (fractional rms), ``abs`` @@ -477,8 +490,10 @@ def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, v raise ValueError("Unrecognized power type") -def bias_term(power1, power2, power1_noise, power2_noise, n_ave, intrinsic_coherence=1.0): - """Bias term needed to calculate the coherence. 
+def bias_term(power1, power2, power1_noise, power2_noise, n_ave, + intrinsic_coherence=1.0): + """ + Bias term needed to calculate the coherence. Introduced by Vaughan & Nowak 1997, ApJ 474, L43 @@ -513,12 +528,15 @@ def bias_term(power1, power2, power1_noise, power2_noise, n_ave, intrinsic_coher """ if n_ave > 500: return 0. * power1 - bsq = power1 * power2 - intrinsic_coherence * (power1 - power1_noise) * (power2 - power2_noise) + bsq = power1 * power2 - intrinsic_coherence * (power1 - power1_noise) * \ + (power2 - power2_noise) return bsq / n_ave -def raw_coherence(cross_power, power1, power2, power1_noise, power2_noise, n_ave, intrinsic_coherence=1): - """Raw coherence estimations from cross and power spectra. +def raw_coherence(cross_power, power1, power2, power1_noise, power2_noise, + n_ave, intrinsic_coherence=1): + """ + Raw coherence estimations from cross and power spectra. Vaughan & Nowak 1997, ApJ 474, L43 @@ -558,8 +576,10 @@ def raw_coherence(cross_power, power1, power2, power1_noise, power2_noise, n_ave return num / den -def _estimate_intrinsic_coherence_single(cross_power, power1, power2, power1_noise, power2_noise, n_ave): - """Estimate intrinsic coherence +def _estimate_intrinsic_coherence_single(cross_power, power1, power2, + power1_noise, power2_noise, n_ave): + """ + Estimate intrinsic coherence. Use the iterative procedure from sec. 5 of @@ -588,7 +608,8 @@ def _estimate_intrinsic_coherence_single(cross_power, power1, power2, power1_noi new_coherence = 1 old_coherence = 0 count = 0 - while not np.isclose(new_coherence, old_coherence, atol=0.01) and count < 40: + while not np.isclose(new_coherence, old_coherence, atol=0.01) and \ + count < 40: old_coherence = new_coherence bsq = bias_term(power1, power2, power1_noise, power2_noise, n_ave, intrinsic_coherence=new_coherence) @@ -602,11 +623,14 @@ def _estimate_intrinsic_coherence_single(cross_power, power1, power2, power1_noi # This is the vectorized version of the function above. 
-estimate_intrinsic_coherence_vec = np.vectorize(_estimate_intrinsic_coherence_single) +estimate_intrinsic_coherence_vec = np.vectorize( + _estimate_intrinsic_coherence_single) -def estimate_intrinsic_coherence(cross_power, power1, power2, power1_noise, power2_noise, n_ave): - """Estimate intrinsic coherence +def estimate_intrinsic_coherence(cross_power, power1, power2, power1_noise, + power2_noise, n_ave): + """ + Estimate intrinsic coherence Use the iterative procedure from sec. 5 of @@ -637,8 +661,11 @@ def estimate_intrinsic_coherence(cross_power, power1, power2, power1_noise, powe return new_coherence -def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, seg_power_noise, ref_power_noise, common_ref="False"): - """Error on cross spectral quantities, From Ingram 2019. +def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, + seg_power_noise, ref_power_noise, + common_ref="False"): + """ + Error on cross spectral quantities, From Ingram 2019. Note: this is only valid for a very large number of averaged powers. Beware if n_ave < 50 or so. @@ -678,11 +705,13 @@ def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, s if n_ave < 30: warnings.warn("n_ave is below 30. 
Please note that the error bars " "on the quantities derived from the cross spectrum " - "are only reliable for a large number of averaged powers.") + "are only reliable for a large number of averaged " + "powers.") two_n_ave = 2 * n_ave if common_ref: Gsq = (cross_power * np.conj(cross_power)).real - bsq = bias_term(seg_power, ref_power, seg_power_noise, ref_power_noise, n_ave) + bsq = bias_term(seg_power, ref_power, seg_power_noise, ref_power_noise, + n_ave) frac = (Gsq - bsq) / (ref_power - ref_power_noise) power_over_2n = ref_power / two_n_ave @@ -694,8 +723,10 @@ def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, s else: PrPs = ref_power * seg_power - dRe = np.sqrt((PrPs + cross_power.real ** 2 - cross_power.imag ** 2) / two_n_ave) - dIm = np.sqrt((PrPs - cross_power.real ** 2 + cross_power.imag ** 2) / two_n_ave) + dRe = np.sqrt((PrPs + cross_power.real ** 2 - cross_power.imag ** 2) / + two_n_ave) + dIm = np.sqrt((PrPs - cross_power.real ** 2 + cross_power.imag ** 2) / + two_n_ave) gsq = raw_coherence(cross_power, seg_power, ref_power, seg_power_noise, ref_power_noise, n_ave) dphi = np.sqrt((1 - gsq) / (2 * gsq * n_ave)) @@ -705,7 +736,8 @@ def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, s def cross_to_covariance(cross_power, ref_power, ref_power_noise, delta_nu): - """Convert a cross spectrum into a covariance spectrum. + """ + Convert a cross spectrum into a covariance spectrum. Covariance: Wilkinson & Uttley 2009, MNRAS, 397, 666 @@ -735,7 +767,8 @@ def cross_to_covariance(cross_power, ref_power, ref_power_noise, delta_nu): def _which_segment_idx_fun(binned=False, dt=None): - """Select which segment index function from ``gti.py`` to use. + """ + Select which segment index function from ``gti.py`` to use. If ``binned`` is ``False``, call the unbinned function. 
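The wrapper defined in `_which_segment_idx_fun` in the next hunk, a tiny inner function that binds `dt` and forwards everything else, is a standard closure idiom, equivalent to `functools.partial`. A generic sketch with a hypothetical helper (`boundaries_binned` is a stand-in, not stingray's actual segment-boundary function):

```python
from functools import partial


def boundaries_binned(start, stop, dt):
    # Hypothetical stand-in for a binned segment-boundary helper,
    # which needs the bin width dt on top of the usual arguments.
    return int(round((stop - start) / dt))


def make_boundary_fun(dt):
    # Closure form, as in the patch: bind dt, forward everything else.
    def fun(*args, **kwargs):
        return boundaries_binned(*args, dt=dt, **kwargs)
    return fun


fun = make_boundary_fun(0.5)
print(fun(0.0, 10.0))                                  # 20
print(partial(boundaries_binned, dt=0.5)(0.0, 10.0))   # 20, equivalent
```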
@@ -751,13 +784,15 @@ def _which_segment_idx_fun(binned=False, dt=None): # Define a new function, so that we can pass the correct dt as an # argument. def fun(*args, **kwargs): - return generate_indices_of_segment_boundaries_binned(*args, dt=dt, **kwargs) + return generate_indices_of_segment_boundaries_binned(*args, dt=dt, + **kwargs) return fun def get_average_ctrate(times, gti, segment_size, counts=None): - """Calculate the average count rate during the observation. + """ + Calculate the average count rate during the observation. This function finds the same segments that the averaged periodogram functions (``avg_cs_from_iterables``, ``avg_pds_from_iterables`` etc) will @@ -810,8 +845,10 @@ def get_average_ctrate(times, gti, segment_size, counts=None): return n_ph / (n_intvs * segment_size) -def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, fluxes=None, errors=None): - """Get fluxes from different segments of the observation. +def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, + fluxes=None, errors=None): + """ + Get fluxes from different segments of the observation. 
If ``fluxes`` is ``None``, the input times are interpreted as events, and they are split into many binned series of length ``segment_size`` with @@ -850,9 +887,8 @@ def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, fluxes """ if fluxes is None and n_bin is None: - raise ValueError( - "At least one between fluxes (if light curve) and n_bin (if events) has to be set" - ) + raise ValueError("At least one between fluxes (if light curve) and " + "n_bin (if events) has to be set") dt = None binned = fluxes is not None @@ -880,14 +916,15 @@ def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, fluxes yield cts -def avg_pds_from_iterable(flux_iterable, dt, norm="frac", use_common_mean=True, silent=False): +def avg_pds_from_iterable(flux_iterable, dt, norm="frac", use_common_mean=True, + silent=False): """Calculate the average periodogram from an iterable of light curves Parameters ---------- flux_iterable : `iterable` of `np.array`s or of tuples (`np.array`, `np.array`) - Iterable providing either equal-length series of count measurements, or - of tuples (fluxes, errors). They must all be of the same length. + Iterable providing either equal-length series of count measurements, + or of tuples (fluxes, errors). They must all be of the same length. dt : float Time resolution of the light curves used to produce periodograms @@ -944,7 +981,8 @@ def local_show_progress(a): if flux is None or np.all(flux == 0): continue - # If the iterable returns the uncertainty, use it to calculate the variance. + # If the iterable returns the uncertainty, use it to calculate the + # variance. 
variance = None if isinstance(flux, tuple): flux, err = flux @@ -975,8 +1013,8 @@ def local_show_progress(a): # No need for the negative frequencies unnorm_power = unnorm_power[fgt0] - # If the user wants to normalize using the mean of the total lightcurve, - # normalize it here + # If the user wants to normalize using the mean of the total + # lightcurve, normalize it here cs_seg = unnorm_power if not use_common_mean: mean = n_ph / n_bin @@ -988,7 +1026,8 @@ def local_show_progress(a): # Accumulate the total sum cross spectrum cross = sum_if_not_none_or_initialize(cross, cs_seg) - unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, unnorm_power) + unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, + unnorm_power) n_ave += 1 @@ -1002,7 +1041,8 @@ def local_show_progress(a): common_mean = n_ph / n_bin if common_variance is not None: - # Note: the variances we summed were means, not sums. Hence M, not M*n_bin + # Note: the variances we summed were means, not sums. + # Hence M, not M*n_bin common_variance /= n_ave # Transform a sum into the average @@ -1107,8 +1147,9 @@ def avg_cs_from_iterables_quick( n_ph1 = flux1.sum() n_ph2 = flux2.sum() - # At the first loop, we define the frequency array and the range of positive - # frequency bins (after the first loop, cross will not be None nymore) + # At the first loop, we define the frequency array and the range of + # positive frequency bins (after the first loop, cross will not be + # None anymore) if unnorm_cross is None: freq = fftfreq(n_bin, dt) fgt0 = positive_fft_bins(n_bin) @@ -1128,7 +1169,8 @@ def avg_cs_from_iterables_quick( unnorm_power = unnorm_power[fgt0] # Initialize or accumulate final averaged spectrum - unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, unnorm_power) + unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, + unnorm_power) n_ave += 1 @@ -1234,28 +1276,29 @@ def avg_cs_from_iterables( results : :class:`astropy.table.Table` Table containing the following columns: 
freq : `np.array` - The frequencies + The frequencies. power : `np.array` - The normalized cross spectral powers + The normalized cross spectral powers. unnorm_power : `np.array` - The unnormalized cross spectral power + The unnormalized cross spectral power. unnorm_pds1 : `np.array` - The unnormalized auxiliary PDS from channel 1. Only returned if ``return_auxil`` is ``True`` + The unnormalized auxiliary PDS from channel 1. Only returned if + ``return_auxil`` is ``True``. unnorm_pds2 : `np.array` - The unnormalized auxiliary PDS from channel 2. Only returned if ``return_auxil`` is ``True`` + The unnormalized auxiliary PDS from channel 2. Only returned if + ``return_auxil`` is ``True``. And a number of other useful diagnostics in the metadata, including all attributes needed to allocate Crossspectrum objects, such as all the input arguments of this function (``dt``, ``segment_size``), and, e.g. n : int - the number of bins in the light curves used in each segment + The number of bins in the light curves used in each segment. m : int - the number of averaged periodograms + The number of averaged periodograms. mean : float - the mean flux (geometrical average of the mean fluxes in the two - channels) - + The mean flux (geometrical average of the mean fluxes in the two + channels). 
""" local_show_progress = show_progress @@ -1270,7 +1313,8 @@ def local_show_progress(a): sum_of_photons1 = sum_of_photons2 = 0 common_variance1 = common_variance2 = common_variance = None - for flux1, flux2 in local_show_progress(zip(flux_iterable1, flux_iterable2)): + for flux1, flux2 in local_show_progress(zip(flux_iterable1, + flux_iterable2)): if flux1 is None or flux2 is None or \ np.all(flux1 == 0) or np.all(flux2 == 0): continue @@ -1289,13 +1333,16 @@ def local_show_progress(a): if variance1 is None or variance2 is None: variance1 = variance2 = None else: - common_variance1 = sum_if_not_none_or_initialize(common_variance1, variance1) - common_variance2 = sum_if_not_none_or_initialize(common_variance2, variance2) + common_variance1 = sum_if_not_none_or_initialize(common_variance1, + variance1) + common_variance2 = sum_if_not_none_or_initialize(common_variance2, + variance2) n_bin = flux1.size - # At the first loop, we define the frequency array and the range of positive - # frequency bins (after the first loop, cross will not be None nymore) + # At the first loop, we define the frequency array and the range of + # positive frequency bins (after the first loop, cross will not be + # None anymore) if cross is None: freq = fftfreq(n_bin, dt) fgt0 = positive_fft_bins(n_bin) @@ -1322,7 +1369,8 @@ def local_show_progress(a): sum_of_photons1 += n_ph1 sum_of_photons2 += n_ph2 - # Take only positive frequencies unless the user wants the full spectrum + # Take only positive frequencies unless the user wants the full + # spectrum if not fullspec: unnorm_power = unnorm_power[fgt0] if return_auxil: @@ -1344,22 +1392,28 @@ def local_show_progress(a): variance = np.sqrt(variance1 * variance2) cs_seg = normalize_periodograms( - unnorm_power, dt, n_bin, mean, n_ph=n_ph, norm=norm, power_type=power_type, variance=variance + unnorm_power, dt, n_bin, mean, n_ph=n_ph, norm=norm, + power_type=power_type, variance=variance ) p1_seg = normalize_periodograms( - unnorm_pd1, dt, 
n_bin, mean1, n_ph=n_ph1, norm=norm, power_type=power_type, variance=variance1 + unnorm_pd1, dt, n_bin, mean1, n_ph=n_ph1, norm=norm, + power_type=power_type, variance=variance1 ) p2_seg = normalize_periodograms( - unnorm_pd2, dt, n_bin, mean2, n_ph=n_ph2, norm=norm, power_type=power_type, variance=variance2 + unnorm_pd2, dt, n_bin, mean2, n_ph=n_ph2, norm=norm, + power_type=power_type, variance=variance2 ) # Initialize or accumulate final averaged spectra cross = sum_if_not_none_or_initialize(cross, cs_seg) - unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, unnorm_power) + unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, + unnorm_power) if return_auxil: - unnorm_pds1 = sum_if_not_none_or_initialize(unnorm_pds1, unnorm_pd1) - unnorm_pds2 = sum_if_not_none_or_initialize(unnorm_pds2, unnorm_pd2) + unnorm_pds1 = sum_if_not_none_or_initialize(unnorm_pds1, + unnorm_pd1) + unnorm_pds2 = sum_if_not_none_or_initialize(unnorm_pds2, + unnorm_pd2) pds1 = sum_if_not_none_or_initialize(pds1, p1_seg) pds2 = sum_if_not_none_or_initialize(pds2, p2_seg) @@ -1460,18 +1514,12 @@ def local_show_progress(a): return results -def avg_pds_from_events( - times, - gti, - segment_size, - dt, - norm="frac", - use_common_mean=True, - silent=False, - fluxes=None, - errors=None, -): - """Calculate the average periodogram from a list of event times or a light curve. +def avg_pds_from_events(times, gti, segment_size, dt, norm="frac", + use_common_mean=True, silent=False, fluxes=None, + errors=None): + """ + Calculate the average periodogram from a list of event times or a light + curve. If the input is a light curve, the time array needs to be uniformly sampled inside GTIs (it can have gaps outside), and the fluxes need to be passed @@ -1481,13 +1529,13 @@ def avg_pds_from_events( Parameters ---------- times : float `np.array` - Array of times + Array of times. gti : [[gti00, gti01], [gti10, gti11], ...] - good time intervals + Good time intervals. 
segment_size : float - length of segments + Length of segments. dt : float - Time resolution of the light curves used to produce periodograms + Time resolution of the light curves used to produce periodograms. Other Parameters ---------------- @@ -1526,35 +1574,24 @@ def avg_pds_from_events( n_bin = np.rint(segment_size / dt).astype(int) dt = segment_size / n_bin - flux_iterable = get_flux_iterable_from_segments( - times, gti, segment_size, n_bin, fluxes=fluxes, errors=errors - ) - cross = avg_pds_from_iterable( - flux_iterable, dt, norm=norm, use_common_mean=use_common_mean, silent=silent - ) + flux_iterable = get_flux_iterable_from_segments(times, gti, segment_size, + n_bin, fluxes=fluxes, + errors=errors) + cross = avg_pds_from_iterable(flux_iterable, dt, norm=norm, + use_common_mean=use_common_mean, + silent=silent) if cross is not None: cross.meta["gti"] = gti return cross -def avg_cs_from_events( - times1, - times2, - gti, - segment_size, - dt, - norm="frac", - use_common_mean=True, - fullspec=False, - silent=False, - power_type="all", - fluxes1=None, - fluxes2=None, - errors1=None, - errors2=None, - return_auxil=False, -): - """Calculate the average cross spectrum from a list of event times or a light curve. +def avg_cs_from_events(times1, times2, gti, segment_size, dt, norm="frac", + use_common_mean=True, fullspec=False, silent=False, + power_type="all", fluxes1=None, fluxes2=None, + errors1=None, errors2=None, return_auxil=False): + """ + Calculate the average cross spectrum from a list of event times or a light + curve. 
If the input is a light curve, the time arrays need to be uniformly sampled inside GTIs (they can have gaps outside), and the fluxes need to be passed @@ -1626,7 +1663,8 @@ def avg_cs_from_events( times2, gti, segment_size, n_bin, fluxes=fluxes2, errors=errors2 ) - is_events = np.all([val is None for val in (fluxes1, fluxes2, errors1, errors2)]) + is_events = np.all([val is None for val in (fluxes1, fluxes2, errors1, + errors2)]) if (is_events and silent From 9d0e136aee3a917891714feddccaf0a8bb4a398f Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 13:13:37 -0400 Subject: [PATCH 34/45] Added gti None check in powerspectrum_from_lightcurve --- stingray/powerspectrum.py | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py index 2d571241e..b87b61647 100755 --- a/stingray/powerspectrum.py +++ b/stingray/powerspectrum.py @@ -1305,12 +1305,12 @@ def powerspectrum_from_lightcurve(lc, segment_size=None, gti=None, norm="frac", err = None if lc.err_dist == "gauss": err = lc.counts_err + if gti is None: + gti = lc.gti - table = avg_pds_from_events( - lc.time, gti, segment_size, lc.dt, - norm=norm, use_common_mean=use_common_mean, - silent=silent, - fluxes=lc.counts, errors=err) + table = avg_pds_from_events(lc.time, gti, segment_size, lc.dt, norm=norm, + use_common_mean=use_common_mean, silent=silent, + fluxes=lc.counts, errors=err) return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged) From c74149a30cd992f19b061b86693432561dabb719 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 13:14:01 -0400 Subject: [PATCH 35/45] Consistency in the three quotes being in their own line. 
---
 stingray/fourier.py | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/stingray/fourier.py b/stingray/fourier.py
index 5f2936c61..0413a6336 100644
--- a/stingray/fourier.py
+++ b/stingray/fourier.py
@@ -918,7 +918,8 @@ def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None,
 
 def avg_pds_from_iterable(flux_iterable, dt, norm="frac", use_common_mean=True,
                           silent=False):
-    """Calculate the average periodogram from an iterable of light curves
+    """
+    Calculate the average periodogram from an iterable of light curves
 
     Parameters
     ----------

From 70e8c224cae25a4d6901da003b47584eb2667277 Mon Sep 17 00:00:00 2001
From: Abbie Stevens
Date: Sun, 27 Mar 2022 13:17:43 -0400
Subject: [PATCH 36/45] Added another gti None check for events.

---
 stingray/powerspectrum.py | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py
index b87b61647..2496b6fe4 100755
--- a/stingray/powerspectrum.py
+++ b/stingray/powerspectrum.py
@@ -451,9 +451,10 @@ def from_time_array(times, dt, segment_size=None, gti=None, norm="frac",
             Silence the progress bars.
         """
 
-        return powerspectrum_from_time_array(
-            times, dt, segment_size=segment_size, gti=gti, norm=norm,
-            silent=silent, use_common_mean=use_common_mean)
+        return powerspectrum_from_time_array(times, dt,
+                                             segment_size=segment_size,
+                                             gti=gti, norm=norm, silent=silent,
+                                             use_common_mean=use_common_mean)
 
     @staticmethod
     def from_events(events, dt, segment_size=None, gti=None, norm="frac",
@@ -493,10 +494,11 @@ def from_events(events, dt, segment_size=None, gti=None, norm="frac",
         silent : bool, default False
             Silence the progress bars.
""" - - return powerspectrum_from_events( - events, dt, segment_size=segment_size, gti=gti, norm=norm, - silent=silent, use_common_mean=use_common_mean) + if gti is None: + gti = events.gti + return powerspectrum_from_events(events, dt, segment_size=segment_size, + gti=gti, norm=norm, silent=silent, + use_common_mean=use_common_mean) @staticmethod def from_lightcurve(lc, segment_size=None, gti=None, norm="frac", @@ -538,9 +540,9 @@ def from_lightcurve(lc, segment_size=None, gti=None, norm="frac", """ if gti is None: gti = lc.gti - return powerspectrum_from_lightcurve( - lc, segment_size=segment_size, gti=gti, norm=norm, - silent=silent, use_common_mean=use_common_mean) + return powerspectrum_from_lightcurve(lc, segment_size=segment_size, + gti=gti, norm=norm, silent=silent, + use_common_mean=use_common_mean) @staticmethod def from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm="frac", @@ -1390,13 +1392,9 @@ def iterate_lc_counts(iter_lc): "The inputs to `powerspectrum_from_lc_iterable`" " must be Lightcurve objects or arrays") - table = avg_pds_from_iterable( - iterate_lc_counts(iter_lc), - dt, - norm=norm, - use_common_mean=use_common_mean, - silent=silent - ) + table = avg_pds_from_iterable(iterate_lc_counts(iter_lc), dt, norm=norm, + use_common_mean=use_common_mean, + silent=silent) return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged) From 5176fdd536bedbe5e111fa1b2f53515669378e32 Mon Sep 17 00:00:00 2001 From: Abbie Stevens Date: Sun, 27 Mar 2022 14:06:38 -0400 Subject: [PATCH 37/45] Added bit on how to run a specific test file. 
---
 docs/install.rst | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/docs/install.rst b/docs/install.rst
index fe9685293..3e4c905cf 100644
--- a/docs/install.rst
+++ b/docs/install.rst
@@ -88,6 +88,11 @@ You may need to install tox first::
 
     $ pip install tox
 
+To run a specific test file (e.g., test_io.py), try::
+
+    $ cd stingray
+    $ py.test tests/test_io.py
+
 If you have installed Stingray via pip or conda, the source directory might
 not be easily accessible. Once installed, you can also run the tests using::
 
@@ -115,7 +120,7 @@ directory and running the ``Makefile``: ::
 If that doesn't work on your system, you can invoke ``sphinx-build`` itself
 from the stingray source directory: ::
 
     $ cd stingray
-    $ $ sphinx-build docs docs/_build
+    $ sphinx-build docs docs/_build
 
 The documentation should be located in ``stingray/docs/_build``. Try opening
 ``./docs/_build/index.rst`` from the stingray source directory.

From fa264a0c8e88fdc278c77da1112988ec6752829d Mon Sep 17 00:00:00 2001
From: Abbie Stevens
Date: Sun, 27 Mar 2022 14:07:21 -0400
Subject: [PATCH 38/45] Fixed a line break in a docstring it might not have
 liked, and used English instead of code in the top part of the docstring for
 from_lightcurve.

---
 stingray/powerspectrum.py | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py
index 2496b6fe4..ab9841fcb 100755
--- a/stingray/powerspectrum.py
+++ b/stingray/powerspectrum.py
@@ -504,7 +504,7 @@ def from_lightcurve(lc, segment_size=None, gti=None, norm="frac",
                     silent=False, use_common_mean=True):
         """
-        Calculate ``AveragedPowerspectrum`` from a light curve.
+        Calculate a power spectrum from a light curve.
 
         Parameters
         ----------
@@ -664,8 +664,7 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum):
 
     Parameters
     ----------
-    data: :class:`stingray.Lightcurve`object OR iterable of
-        :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object
+    data: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object
         The light curve data to be Fourier-transformed.
 
     segment_size: float
@@ -741,7 +740,7 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum):
         The total number of photons in the light curve.
 
     legacy: bool
-        Use the legacy machinery of `AveragedPowerspectrum`. This might be
+        Use the legacy machinery of ``AveragedPowerspectrum``. This might be
         useful to compare with old results, and is also needed to use light
         curve lists as an input, to conserve the spectra of each segment, or
         to use the large_data option.
@@ -785,7 +784,7 @@ def __init__(self, data=None, segment_size=None, norm="frac", gti=None,
 
         if isinstance(data, Generator):
             warnings.warn(
-                "The averaged Power spectrum from a generator of "
+                "The averaged power spectrum from a generator of "
                 "light curves pre-allocates the full list of light "
                 "curves, losing all advantage of lazy loading. If it "
                 "is important for you, use the "

From 7d7e257e1c0eb9a03a7b72b832ae81c0e0c192e9 Mon Sep 17 00:00:00 2001
From: Matteo Bachetti
Date: Tue, 29 Mar 2022 09:25:11 +0200
Subject: [PATCH 39/45] Version gets created from tags, no need to add it here.
Add jinja2 dependency
---
 setup.cfg | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/setup.cfg b/setup.cfg
index 7cc856c26..9f2e7a5c6 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -5,7 +5,6 @@ author_email = spectraltiming-stingray@googlegroups.com
 license = MIT
 license_file = LICENSE.rst
 url = https://docs.stingray.science
-version = 1.0
 description = Time Series Methods For Astronomical X-ray Data
 long_description = file: README.rst
 long_description_content_type = text/x-rst
@@ -50,6 +49,7 @@ test =
     pytest
     pytest-astropy
 docs =
+    jinja2<=3.0.0
     docutils
     sphinx-astropy
     nbsphinx>=0.8.3,!=0.8.8

From 5e1428964222dc535e6c27ff5bf4c3914d00552e Mon Sep 17 00:00:00 2001
From: Matteo Bachetti
Date: Tue, 29 Mar 2022 09:29:57 +0200
Subject: [PATCH 40/45] The modified warning needed to be also used in tests

---
 stingray/tests/test_powerspectrum.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stingray/tests/test_powerspectrum.py b/stingray/tests/test_powerspectrum.py
index 6f7a71c93..1507345cb 100644
--- a/stingray/tests/test_powerspectrum.py
+++ b/stingray/tests/test_powerspectrum.py
@@ -810,7 +810,7 @@ def iter_lc(lc, n):
                 iter_lc(self.lc, 1), segment_size=1, legacy=legacy,
                 gti=self.lc.gti)
-        message = "The averaged Power spectrum from a generator "
+        message = "The averaged power spectrum from a generator"
 
         assert np.any([message in r.message.args[0] for r in record])

From 2cb3f6d5a47bb6f2c3668b576dae757c10ec5f72 Mon Sep 17 00:00:00 2001
From: Matteo Bachetti
Date: Tue, 29 Mar 2022 14:13:40 +0200
Subject: [PATCH 41/45] Fix minor bug

---
 stingray/crossspectrum.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stingray/crossspectrum.py b/stingray/crossspectrum.py
index c1fb8b22f..147f7a511 100644
--- a/stingray/crossspectrum.py
+++ b/stingray/crossspectrum.py
@@ -2763,7 +2763,7 @@ def _create_crossspectrum_from_result_table(table, force_averaged=False):
             "Defaulting to sqrt(2 / M) in Leahy norm, rescaled to the appropriate norm."
         )
-        Nph = np.sqrt(cs.nphots1 * cs.nphots1)
+        Nph = np.sqrt(cs.nphots1 * cs.nphots2)
         default_err = np.sqrt(2 / cs.m) * Nph / 2
 
         dRe[bad] = default_err

From ce7eeb1772f6654d71395ef7ed45a31f957245fe Mon Sep 17 00:00:00 2001
From: Abbie Stevens
Date: Tue, 29 Mar 2022 12:34:28 -0400
Subject: [PATCH 42/45] Fixed with John's suggestions.

---
 CONTRIBUTING.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b7bdde339..a1910679d 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -147,4 +147,4 @@ run the tests, you will need to make sure you have the pytest package (version
 
 ---
 
-- Please refer to [CODE_OF_CONDUCT.md](https://github.com/StingraySoftware/stingray/blob/master/CODE_OF_CONDUCT.md) to read about the Community Guidelines
+- Please refer to [CODE_OF_CONDUCT.md](https://github.com/StingraySoftware/stingray/blob/main/CODE_OF_CONDUCT.md) to read about the Community Guidelines

From 0d075cd1deebb625ddd751768ffd70bb8cb385e9 Mon Sep 17 00:00:00 2001
From: Abbie Stevens
Date: Tue, 29 Mar 2022 12:34:39 -0400
Subject: [PATCH 43/45] Fixed with John's suggestions and new date.

---
 CHANGELOG.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index df9545679..0d8b872f7 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -1,7 +1,7 @@
 Changelog
 =========
 
-v1.0 (2022-03-28)
+v1.0 (2022-03-29)
 ---------------------
 
 TL,DR: these things will break your code with v1.0:
@@ -52,7 +52,7 @@ Bug fixes
 
 `Full list of changes`__
 
-__ https://github.com/StingraySoftware/stingray/compare/v0.3...v1.0-beta
+__ https://github.com/StingraySoftware/stingray/compare/v0.3...v1.0
 
 v1.0beta was released on 2022-02-25.

From c0bff66eb6bab7248f1c900d58e8c575e2159893 Mon Sep 17 00:00:00 2001
From: Abbie Stevens
Date: Tue, 29 Mar 2022 12:34:52 -0400
Subject: [PATCH 44/45] Fixed with John's suggestions.
---
 CREDITS.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/CREDITS.rst b/CREDITS.rst
index e08a4c5dc..f22a015ce 100644
--- a/CREDITS.rst
+++ b/CREDITS.rst
@@ -50,7 +50,7 @@ Contributors
 
 If you have contributed to Stingray and your name is missing, please send
 an email to the coordinators, or
-`open a pull request for this page `_
+`open a pull request for this page `_
 in the `Stingray repository `_.
 
 Acknowledgements

From 0947a283b6791510df98f05ecf3db3eb016f3df0 Mon Sep 17 00:00:00 2001
From: Abbie Stevens
Date: Tue, 29 Mar 2022 12:35:18 -0400
Subject: [PATCH 45/45] Fixed with John's suggestions.

---
 docs/citing.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/citing.rst b/docs/citing.rst
index d20bae673..e5afeaa87 100644
--- a/docs/citing.rst
+++ b/docs/citing.rst
@@ -17,7 +17,7 @@ If this isn't possible — for example, because you worked with an unreleased v
 Papers
 ======
 
-Please cite both of the following papers from ApJ and JOSS:
+Please cite both of the following papers:
 
 .. raw:: html