diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 54e9cb767..0d8b872f7 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -1,9 +1,9 @@
 Changelog
 =========
 
-v1.0beta (2022-02-25)
+v1.0 (2022-03-29)
 ---------------------
 
-TL,DR: these things will break your code with v1.0beta:
+TL;DR: these things will break your code with v1.0:
 
 - Python version < 3.8
 - The ``gtis`` keyword in `pulse/pulsar.py` (it is now ``gti``, without the 's')
@@ -18,7 +18,7 @@ New
 - New infrastructure for converting ``EventList`` and ``LightCurve`` objects into Astropy ``TimeSeries``
 - New infrastructure for converting most Stingray classes into Astropy ``Table`` objects, Xarray and Pandas data frames.
 - Save and load of most Stingray classes to/from many different file formats (``pickle``, ``ECSV``, ``HDF5``, ``FITS``, and all formats compatible with Astropy Table)
-- Accept input ``EventList`` in ``DynamicalPowerSpectrum``
+- Accept input ``EventList`` in ``DynamicalPowerSpectrum``
 - New ``stingray.fourier`` module containing the basic timing products, usable on ``numpy`` arrays, and centralizes fft import
 - New methods in ``Crossspectrum`` and ``Powerspectrum`` to load data from specific inputs: ``from_events``, ``from_lightcurve``, ``from_time_array``, ``from_lc_list`` (``from_time_array`` was also tested using memory-mapped event lists as inputs: useful in very large datasets)
 - New and improved spectral timing methods: ``ComplexCovarianceSpectrum``, ``CovarianceSpectrum``, ``LagSpectrum``, ``RmsSpectrum``
@@ -52,7 +52,9 @@ Bug fixes
 
 `Full list of changes`__
 
-__ https://github.com/StingraySoftware/stingray/compare/v0.3...v1.0-beta
+__ https://github.com/StingraySoftware/stingray/compare/v0.3...v1.0
+
+v1.0beta was released on 2022-02-25.
 
 v0.3 (2021-05-31)
 -----------------
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 02ba12dfc..a1910679d 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -4,21 +4,33 @@
 
 > All great things have small beginnings.
 
-Hello there! We love and appreciate every small contribution you can make to improve stingray!
-We are proudly open source and believe our(yes! yours as well) work will help enhance the quality of research around the world. We want to make contributing to stingray as easy and transparent as possible, whether it's:
+Hello there! We love and appreciate every small contribution you can make to
+improve Stingray! We are proudly open source and believe our (yes! yours as
+well) work will help enhance the quality of research around the world. We
+want to make contributing to Stingray as easy and transparent as possible,
+whether it's:
 
 - Reporting a bug
 - Discussing the current state of the code
 - Submitting a fix
 - Proposing new features
 
-A successful project is not just built by amazing programmers but by the combined, unrelenting efforts of coders, testers, reviewers, and documentation writers. There are a few guidelines that we need all contributors to follow so that we can have a chance of keeping on top of things.
+A successful project is not just built by amazing programmers but by the
+combined, unrelenting efforts of coders, testers, reviewers, and documentation
+writers. There are a few guidelines that we need all contributors to follow so
+that we can have a chance of keeping on top of things.
 
 ## Contribution Guidelines
 
 ---
 
-Contributions from everyone, experienced and inexperienced, are welcome! If you don't know where to start, look at the [Open Issues](https://github.com/StingraySoftware/stingray/issues) and/or get involved in our [Slack channel](http://slack-invite.timelabtechnologies.com/) . This code is written in Python 3.8+, but in general we will follow the Astropy/Numpy minimum Python versions. Tests run at each commit during Pull Requests, so it is easy to single out points in the code that break this compatibility.
+Contributions from everyone, experienced and inexperienced, are welcome! If
+you don't know where to start, look at the
+[Open Issues](https://github.com/StingraySoftware/stingray/issues) and/or get
+involved in our [Slack channel](http://slack-invite.timelabtechnologies.com/).
+This code is written in Python 3.8+, but in general we will follow the
+Astropy/Numpy minimum Python versions. Tests run at each commit during Pull
+Requests, so it is easy to single out points in the code that break this
+compatibility.
 
 - **Branches:**
   - Don't use your **main** branch (forked) for anything. Consider deleting your **main** branch.
 
@@ -36,7 +48,10 @@ Contributions from everyone, experienced and inexperienced, are welcome! If you
 
 ### Contribution Workflow
 
-These, conceptually, are the steps you will follow in contributing to stingray. These steps keep work well organized, with readable history. This in turn makes it easier for project maintainers (that might be you) to see what you’ve done, and why you did it:
+These, conceptually, are the steps you will follow in contributing to
+Stingray. These steps keep work well organized, with readable history. This in
+turn makes it easier for project maintainers (that might be you) to see what
+you’ve done, and why you did it:
 
 1. Regularly fetch latest stingray development version `stingray/main` from GitHub.
 2. Make a new feature branch. **Recommended:** Use virtual environments to work on branch.
 
@@ -118,7 +133,9 @@ Code Reviews are super-useful: another contributor can review the code, which me
 
 ---
 
-The testing framework used by stingray is the pytest framework with tox. To run the tests, you will need to make sure you have the pytest package (version 3.1 or later) as well as the tox tool installed.
+The testing framework used by stingray is the pytest framework with tox. To
+run the tests, you will need to make sure you have the pytest package (version
+3.1 or later) as well as the tox tool installed.
 
 - Execute tests using the ```tox -e ``` command.
 - All tests should be py.test compliant: [http://pytest.org/latest/](http://pytest.org/latest/).
 
@@ -130,4 +147,4 @@ The testing framework used by stingray is the pytest framework with tox.
To run --- -- Please refer [CODE_OF_CONDUCT.md](https://github.com/StingraySoftware/stingray/blob/master/CODE_OF_CONDUCT.md) to read about the Community Guidelines +- Please refer to [CODE_OF_CONDUCT.md](https://github.com/StingraySoftware/stingray/blob/main/CODE_OF_CONDUCT.md) to read about the Community Guidelines diff --git a/CREDITS.rst b/CREDITS.rst index 325243b93..f22a015ce 100644 --- a/CREDITS.rst +++ b/CREDITS.rst @@ -14,10 +14,44 @@ Stingray Project Coordinators Contributors ============ +* Riccardo Campana +* Meg Davis +* Amogh Desai +* Omar Gamal +* Nitish Garg +* Nick Gorgone +* Anurag Hota +* Ajit Jadhav +* Usman Khan +* Sambhav Kothari +* Sandeep Kumar +* Max Mahlke +* Evandro Martinez Ribeiro +* Himanshu Mishra +* Sashank Mishra +* Stuart Mumford +* paopaofi +* parkma99 +* Francesco Pisanu +* Rashmi Raj +* Haroon Rashid +* Achilles Rasquinha +* Saurav Sachidanand +* Parul Sethi +* Swapnil Sharma +* Brigitta Sipocz +* Arfon Smith +* John Swinbank +* Akash Tandon +* tappina +* Mihir Tripathi +* Ricardo Vallés Blanco +* Dhruv Vats + If you have contributed to Stingray and your name is missing, please send an email to the coordinators, or -`open a pull request for this page `_ -in the `Stingray repository `_) +`open a pull request for this page `_ +in the `Stingray repository `_. Acknowledgements ================ diff --git a/docs/api.rst b/docs/api.rst index 30c97e58f..8a84549f0 100644 --- a/docs/api.rst +++ b/docs/api.rst @@ -88,6 +88,8 @@ Dynamical Powerspectrum .. autoclass:: stingray.DynamicalPowerspectrum :members: :inherited-members: + +---- CrossCorrelation ---------------- diff --git a/docs/citing.rst b/docs/citing.rst index 4443335e5..e5afeaa87 100644 --- a/docs/citing.rst +++ b/docs/citing.rst @@ -3,7 +3,7 @@ Citing Stingray *************** Citations are still the main currency of the academic world, and *the* best way to ensure that Stingray continues to be supported and we can continue to work on it. -If you use Stingray in data analysis leading to a publication, we ask that you cite *both* a `DOI `_ which points to the software itself *and* our papers describing the Stingray project. +If you use Stingray in data analysis leading to a publication, we ask that you cite *both* a `DOI `_, which points to the software itself, *and* our papers describing the Stingray project. DOI === @@ -74,27 +74,6 @@ Please cite both of the following papers: document.body.removeChild(el); } - function copyAsclBib() { - var bibtex = `@MISC{2016ascl.soft08001H, - author = {{Huppenkothen}, Daniela and {Bachetti}, Matteo and {Stevens}, Abigail L. and {Migliari}, Simone and {Balm}, Paul}, - title = "{Stingray: Spectral-timing software}", - keywords = {Software}, - year = 2016, - month = aug, - eid = {ascl:1608.001}, - pages = {ascl:1608.001}, - archivePrefix = {ascl}, - eprint = {1608.001}, - adsurl = {https://ui.adsabs.harvard.edu/abs/2016ascl.soft08001H}, - adsnote = {Provided by the SAO/NASA Astrophysics Data System} - }`; - const el = document.createElement('textarea'); - el.value = bibtex; - document.body.appendChild(el); - el.select(); - document.execCommand('copy'); - document.body.removeChild(el); - }
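As a concrete illustration of the changelog entries above (the new ``from_events`` constructors and the Astropy ``Table`` conversion infrastructure), here is a minimal sketch of the v1.0 API. The event times below are simulated placeholders, and exact keyword names may differ slightly between releases:

```python
import numpy as np

from stingray import AveragedPowerspectrum, EventList

# Illustrative only: fake, roughly uniform event times over a 1000 s exposure.
rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 1000, size=10_000))
events = EventList(times, gti=[[0, 1000]])

# New in v1.0: build an averaged periodogram directly from an event list.
ps = AveragedPowerspectrum.from_events(
    events, dt=0.01, segment_size=100, norm="leahy"
)

# New in v1.0: most Stingray classes round-trip through Astropy Tables.
tab = ps.to_astropy_table()
```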
    diff --git a/docs/contributing.rst b/docs/contributing.rst index 752d33730..aedd2b725 100644 --- a/docs/contributing.rst +++ b/docs/contributing.rst @@ -20,7 +20,6 @@ We encourage you to get involved with Stingray in any way you can! First, read through the `README `_. Then, fork the `stingray `_ and `notebooks `_ repositories (if you need a primer on GitHub and git version control, `look here `_) and work your way through the Jupyter notebook tutorials for the main modules. Once you've familiarized yourself with the basics of Stingray, go to the `Stingray issues page `_ and try to tackle one! -Other ways to get involved are outlined on the `project ideas `_ page, along with some astrophysical background/motivation. -Finally, you can read `these slides `_ from an early talk on Stingray at the Python in Astronomy 2016 conference. +Finally, you can read `these slides `_ from a talk on Stingray in 2021 at the 9th Microquasar Workshop. For organizing and coordinating the software development, we have a Slack group and a mailing list -- please use `this link `_ for Slack or send `one of us `_ an email to join. diff --git a/docs/dataexplo.rst b/docs/dataexplo.rst new file mode 100644 index 000000000..7183275c1 --- /dev/null +++ b/docs/dataexplo.rst @@ -0,0 +1,28 @@ +Data Exploration +**************** + +These notebook tutorials show some ways to explore data with +Stingray. + +A quick look at a NuSTAR observation +==================================== + +Stingray transparently loads datasets from many HEASOFT-supported missions. +In this Tutorial, we will show an example quicklook of a NuSTAR observation. + +.. toctree:: + :maxdepth: 2 + + notebooks/DataQuickLook/Quicklook NuSTAR data with Stingray.ipynb + + +Spectral timing exploration with NICER +====================================== + +In this Tutorial, we will show an example spectral timing exploration of a +black hole binary using NICER data. + +.. toctree:: + :maxdepth: 2 + + notebooks/Spectral Timing/Spectral Timing Exploration.ipynb diff --git a/docs/history.rst b/docs/history.rst index cbc8c463d..9ef2d6949 100644 --- a/docs/history.rst +++ b/docs/history.rst @@ -4,13 +4,6 @@ History .. include:: ../CHANGELOG.rst -Performance improvements -======================== - -Version 0.2 introduced a few performance improvements when ``Lightcurve`` objects are created. -Once the user defines either the counts per bin or the count rates, the other quantity will be evaluated _lazily_, the first time it is requested. -Also, we introduce a new ``low_memory`` option in ``Lightcurve``: if selected, and users define e.g. ``counts``, ``countrate`` will be calculated _every_time it is requested, and will not be stored in order to free up RAM. - Previous projects merged to Stingray ==================================== diff --git a/docs/index.rst b/docs/index.rst index e3a78e92a..5940564fa 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -14,7 +14,7 @@ There are a number of official software packages for X-ray spectral fitting (XSP Such a widely used and standard software package does not exist for X-ray timing, so until now it has mainly been the domain of custom, proprietary software. Stingray originated during the 2016 workshop `The X-ray Spectral-Timing Revolution `_: a group of X-ray astronomers and developers decided to agree on a common platform to develop a new software package. 
The goals were to merge existing efforts towards a timing package in Python, following the best guidelines for modern open-source programming, thereby providing the basis for developing spectral-timing analysis tools.
-This software provides an easily accessible scripting interface (possibly a GUI) and an API for power users.
+This software provides an easily accessible scripting interface, a GUI, and an API for experienced coders.
 Stingray's ultimate goal is to provide the community with a package that eases the learning curve for advanced spectral-timing techniques, with a correct statistical framework.
 Further spectral-timing functionality, in particular command line scripts based on the API defined within Stingray, is available in the package `HENDRICS `_.
@@ -33,9 +33,8 @@ Contents
    intro
    install
    core
-   quicklook
+   dataexplo
    modeling
-   spectime
    simulator
    pulsar
    deadtime
diff --git a/docs/install.rst b/docs/install.rst
index 8ba8d5aa3..3e4c905cf 100644
--- a/docs/install.rst
+++ b/docs/install.rst
@@ -88,6 +88,11 @@ You may need to install tox first::
 
     $ pip install tox
 
+To run a specific test file (e.g., test_io.py), try::
+
+    $ cd stingray
+    $ py.test tests/test_io.py
+
 If you have installed Stingray via pip or conda, the source directory might not be easily accessible.
 Once installed, you can also run the tests using::
 
@@ -104,7 +109,7 @@ Documentation
 -------------
 
 The documentation including tutorials is hosted `here `_.
-The documentation uses `sphinx `_ to build and requires the extension sphinx-astropy `_.
+The documentation uses `sphinx `_ to build and requires the extensions `sphinx-astropy `_ and `nbsphinx `_.
 
 You can build the API reference yourself by going into the ``docs`` folder within the ``stingray`` root
 directory and running the ``Makefile``: ::
 
@@ -115,7 +120,7 @@ directory and running the ``Makefile``: ::
 If that doesn't work on your system, you can invoke ``sphinx-build`` itself from the stingray source directory: ::
 
     $ cd stingray
-    $ $ sphinx-build docs docs/_build
+    $ sphinx-build docs docs/_build
 
 The documentation should be located in ``stingray/docs/_build``. Try opening ``./docs/_build/index.rst`` from the stingray source directory.
diff --git a/docs/intro.rst b/docs/intro.rst
index bef5f86b6..0c733052f 100644
--- a/docs/intro.rst
+++ b/docs/intro.rst
@@ -15,7 +15,7 @@ Current Capabilities
 
 Currently implemented functionality in this library comprises:
 
-* loading event lists from fits files of a few missions (RXTE/PCA, NuSTAR/FPM, XMM-Newton/EPIC)
+* loading event lists from FITS files of a few missions (RXTE/PCA, NuSTAR/FPM, XMM-Newton/EPIC, NICER/XTI)
 * constructing light curves from event data, various operations on light curves (e.g. addition, subtraction, joining, and truncation)
 * Good Time Interval operations
 * power spectra in Leahy, rms normalization, absolute rms and no normalization
@@ -60,7 +60,8 @@ Presentations
 
 Members of the Stingray team have given a number of presentations which introduce Stingray.
These include: - +- `2nd Severo Ochoa School on Statistics, Data Mining, and Machine Learning (2021) `_ +- `9th Microquasar Workshop (2021) `_ - `European Week of Astronomy and Space Science (2018) `_ - `ADASS (Astronomical Data Analysis Software and Systems; meeting 2017, proceedings 2020) `_ - `AAS 16th High-Energy Astrophysics Division meeting (2017) `_ diff --git a/docs/quicklook.rst b/docs/quicklook.rst deleted file mode 100644 index 64b50eea3..000000000 --- a/docs/quicklook.rst +++ /dev/null @@ -1,10 +0,0 @@ -Quicklook of an X-ray observation -********************************* - -Stingray transparently loads datasets from many HEASOFT-supported missions. -In this Tutorial, we will show an example quicklook of a NuSTAR observation. - -.. toctree:: - :maxdepth: 2 - - notebooks/DataQuickLook/Quicklook NuSTAR data with Stingray.ipynb diff --git a/docs/spectime.rst b/docs/spectime.rst deleted file mode 100644 index 647ba62c7..000000000 --- a/docs/spectime.rst +++ /dev/null @@ -1,10 +0,0 @@ -Spectral timing exploration of a black hole candidate with NICER -**************************************************************** - -In this Tutorial, we will show an example spectral timing exploration of a -black hole binary using NICER data - -.. toctree:: - :maxdepth: 2 - - notebooks/Spectral Timing/Spectral Timing Exploration.ipynb diff --git a/setup.cfg b/setup.cfg index 25496e316..9f2e7a5c6 100644 --- a/setup.cfg +++ b/setup.cfg @@ -4,7 +4,7 @@ author = Stingray Developers author_email = spectraltiming-stingray@googlegroups.com license = MIT license_file = LICENSE.rst -url = https://stingray.readthedocs.io/ +url = https://docs.stingray.science description = Time Series Methods For Astronomical X-ray Data long_description = file: README.rst long_description_content_type = text/x-rst @@ -49,6 +49,7 @@ test = pytest pytest-astropy docs = + jinja2<=3.0.0 docutils sphinx-astropy nbsphinx>=0.8.3,!=0.8.8 diff --git a/stingray/crossspectrum.py b/stingray/crossspectrum.py index 9ed00d7e4..147f7a511 100644 --- a/stingray/crossspectrum.py +++ b/stingray/crossspectrum.py @@ -503,7 +503,10 @@ class Crossspectrum(StingrayObject): ---------------- gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the input + `Lightcurve` GTIs! If you're getting errors regarding your GTIs, don't + use this and only give GTIs to the `Lightcurve` objects before making + the cross spectrum. lc1: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects For backwards compatibility only. Like ``data1``, but no @@ -1402,10 +1405,14 @@ def from_time_array( Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged. - Only relevant (and required) for AveragedCrossspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for `AveragedCrossspectrum`. gti : [[gti0, gti1], ...] - Common Good Time intervals + Good Time intervals. Defaults to the common GTIs from the two input + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. norm : str, default "frac" The normalization of the periodogram. 
"abs" is absolute rms, "frac" is fractional rms, "leahy" is Leahy+83 normalization, and "none" is the @@ -1485,7 +1492,10 @@ def from_events( the real part gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. """ return crossspectrum_from_events( @@ -1544,7 +1554,10 @@ def from_lightcurve( the real part gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. """ return crossspectrum_from_lightcurve( lc1, @@ -1605,7 +1618,10 @@ def from_lc_iterable( the real part gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. """ return crossspectrum_from_lc_iterable( @@ -1706,16 +1722,18 @@ class AveragedCrossspectrum(Crossspectrum): Parameters ---------- data1: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object - A light curve from which to compute the cross spectrum. In some cases, this would - be the light curve of the wavelength/energy/frequency band of interest. + A light curve from which to compute the cross spectrum. In some cases, + this would be the light curve of the wavelength/energy/frequency band + of interest. data2: :class:`stingray.Lightcurve`object OR iterable of :class:`stingray.Lightcurve` objects OR :class:`stingray.EventList` object - A second light curve to use in the cross spectrum. In some cases, this would be - the wavelength/energy/frequency reference band to compare the band of interest with. + A second light curve to use in the cross spectrum. In some cases, this + would be the wavelength/energy/frequency reference band to compare the + band of interest with. segment_size: float - The size of each segment to average. Note that if the total - duration of each :class:`Lightcurve` object in ``lc1`` or ``lc2`` is not an + The size of each segment to average. Note that if the total duration of + each :class:`Lightcurve` object in ``lc1`` or ``lc2`` is not an integer multiple of the ``segment_size``, then any fraction left-over at the end of the time series will be lost. Otherwise you introduce artifacts. @@ -1727,7 +1745,10 @@ class AveragedCrossspectrum(Crossspectrum): ---------------- gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. dt : float The time resolution of the light curve. Only needed when constructing @@ -1767,53 +1788,57 @@ class AveragedCrossspectrum(Crossspectrum): inputs!) 
use_common_mean: bool - Averaged Cross spectra are normalized in two possible ways: one is by normalizing + Averaged cross spectra are normalized in two possible ways: one is by normalizing each of the single spectra that get averaged, the other is by normalizing after the averaging. If `use_common_mean` is selected, the spectrum will be normalized after the average. legacy: bool - Use the legacy machinery of AveragedCrossspectrum. This might be useful to compare - with old results, and is also needed to use light curve lists as an input, to - conserve the spectra of each segment, or to use the large_data option. + Use the legacy machinery of `AveragedCrossspectrum`. This might be + useful to compare with old results, and is also needed to use light + curve lists as an input, to conserve the spectra of each segment, or + to use the large_data option. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] Good Time intervals. Defaults to the common GTIs from the two input - objects + objects. Could throw errors if these GTIs have overlaps with the + input object GTIs! If you're getting errors regarding your GTIs, + don't use this and only give GTIs to the input objects before + making the cross spectrum. Attributes ---------- freq: numpy.ndarray - The array of mid-bin frequencies that the Fourier transform samples + The array of mid-bin frequencies that the Fourier transform samples. power: numpy.ndarray - The array of cross spectra + The array of cross spectra. power_err: numpy.ndarray The uncertainties of ``power``. An approximation for each bin given by ``power_err= power/sqrt(m)``. Where ``m`` is the number of power averaged in each bin (by frequency - binning, or averaging powerspectrum). Note that for a single - realization (``m=1``) the error is equal to the power. + binning, or averaging power spectra of segments of a light curve). + Note that for a single realization (``m=1``) the error is equal to the + power. df: float - The frequency resolution + The frequency resolution. m: int - The number of averaged cross spectra + The number of averaged cross spectra. n: int - The number of time bins per segment of light curve + The number of time bins per segment of light curve. nphots1: float - The total number of photons in the first (interest) light curve + The total number of photons in the first (interest) light curve. nphots2: float - The total number of photons in the second (reference) light curve + The total number of photons in the second (reference) light curve. gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Good Time intervals. Defaults to the common GTIs from the two input - objects + Good Time intervals. """ def __init__( @@ -2738,7 +2763,7 @@ def _create_crossspectrum_from_result_table(table, force_averaged=False): "Defaulting to sqrt(2 / M) in Leahy norm, rescaled to the appropriate norm." ) - Nph = np.sqrt(cs.nphots1 * cs.nphots1) + Nph = np.sqrt(cs.nphots1 * cs.nphots2) default_err = np.sqrt(2 / cs.m) * Nph / 2 dRe[bad] = default_err diff --git a/stingray/fourier.py b/stingray/fourier.py index 9dc770708..0413a6336 100644 --- a/stingray/fourier.py +++ b/stingray/fourier.py @@ -11,7 +11,8 @@ def positive_fft_bins(n_bin, include_zero=False): - """Give the range of positive frequencies of a complex FFT. + """ + Give the range of positive frequencies of a complex FFT. 
This assumes we are using Numpy's FFT, or something compatible with it, like ``pyfftw.interfaces.numpy_fft``, where the positive @@ -81,7 +82,8 @@ def positive_fft_bins(n_bin, include_zero=False): def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): - """Poisson (white)-noise level in a periodogram of pure counting noise. + """ + Poisson (white)-noise level in a periodogram of pure counting noise. For Leahy normalization, this is: .. math:: @@ -103,16 +105,18 @@ def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): Parameters ---------- norm : str, default "frac" - Normalization of the periodogram. One of ["abs", "frac", "leahy", "none"] + Normalization of the periodogram. One of ["abs", "frac", "leahy", + "none"]. Other Parameters ---------------- meanrate : float, default None - Mean count rate in counts/s. Needed for r.m.s. norms (``abs`` and ``frac``) + Mean count rate in counts/s. Needed for r.m.s. norms ("abs" and + "frac"). n_ph : float, default None - Total number of counts in the light curve. Needed if ``norm=="none"`` + Total number of counts in the light curve. Needed if ``norm=="none"``. backrate : float, default 0 - Background count rate in counts/s. Optional for fractional r.m.s. norm + Background count rate in counts/s. Optional for fractional r.m.s. norm. Raises ------ @@ -154,7 +158,8 @@ def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): bad_input = bad_input or (norm.lower() == "none" and n_ph is None) if bad_input: - raise ValueError(f"Bad input parameters for norm {norm}: n_ph={n_ph}, meanrate={meanrate}") + raise ValueError(f"Bad input parameters for norm {norm}: n_ph={n_ph}, " + f"meanrate={meanrate}") if norm == "abs": return 2. * meanrate @@ -169,7 +174,8 @@ def poisson_level(norm="frac", meanrate=None, n_ph=None, backrate=0): def normalize_frac(unnorm_power, dt, n_bin, mean_flux, background_flux=0): - """Fractional rms normalization. + """ + Fractional rms normalization. ..math:: P = \frac{P_{Leahy}}{\mu} = \frac{2T}{N_{ph}^2}P_{unnorm} @@ -248,7 +254,8 @@ def normalize_frac(unnorm_power, dt, n_bin, mean_flux, background_flux=0): # norm = 2 / (n_ph * meanrate) = 2 * dt / (mean**2 * n_bin) if background_flux > 0: - power = unnorm_power * 2. * dt / ((mean_flux - background_flux) ** 2 * n_bin) + power = unnorm_power * 2. * dt / ((mean_flux - background_flux) ** 2 * + n_bin) else: # Note: this corresponds to eq. 3 in Uttley+14 power = unnorm_power * 2. * dt / (mean_flux ** 2 * n_bin) @@ -256,7 +263,8 @@ def normalize_frac(unnorm_power, dt, n_bin, mean_flux, background_flux=0): def normalize_abs(unnorm_power, dt, n_bin): - """Absolute rms normalization. + """ + Absolute rms normalization. .. math:: P = P_{frac} * \mu^2 @@ -307,7 +315,8 @@ def normalize_abs(unnorm_power, dt, n_bin): def normalize_leahy_from_variance(unnorm_power, variance, n_bin): - """Leahy+83 normalization, from the variance of the lc. + """ + Leahy+83 normalization, from the variance of the lc. .. math:: P = \frac{P_{unnorm}}{N <\delta{x}^2>} @@ -360,7 +369,8 @@ def normalize_leahy_from_variance(unnorm_power, variance, n_bin): def normalize_leahy_poisson(unnorm_power, n_ph): - """Leahy+83 normalization. + """ + Leahy+83 normalization. .. math:: P = \frac{2}{N_{ph}} P_{unnorm} @@ -400,8 +410,11 @@ def normalize_leahy_poisson(unnorm_power, n_ph): return unnorm_power * 2. 
/ n_ph -def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, variance=None, background_flux=0., norm="frac", power_type="all"): - """Wrapper around all the normalize_NORM methods. +def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, + variance=None, background_flux=0., norm="frac", + power_type="all"): + """ + Wrapper around all the normalize_NORM methods. Normalize the real part of the cross spectrum to Leahy, absolute rms^2, fractional rms^2 normalization, or not at all. @@ -429,11 +442,11 @@ def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, v the unnormalized periodogram. Only relevant for Leahy normalization. variance: float - The average variance of the measurements in light curve (if a cross spectrum, - the geometrical mean of the variances in the two channels). **NOT** the - variance of the light curve, but of each flux measurement (square of - light curve error bar)! Only relevant for the Leahy normalization of - non-Poissonian data. + The average variance of the measurements in light curve (if a cross + spectrum, the geometrical mean of the variances in the two channels). + **NOT** the variance of the light curve, but of each flux measurement + (square of light curve error bar)! Only relevant for the Leahy + normalization of non-Poissonian data. norm : str One of ``leahy`` (Leahy+83), ``frac`` (fractional rms), ``abs`` @@ -477,8 +490,10 @@ def normalize_periodograms(unnorm_power, dt, n_bin, mean_flux=None, n_ph=None, v raise ValueError("Unrecognized power type") -def bias_term(power1, power2, power1_noise, power2_noise, n_ave, intrinsic_coherence=1.0): - """Bias term needed to calculate the coherence. +def bias_term(power1, power2, power1_noise, power2_noise, n_ave, + intrinsic_coherence=1.0): + """ + Bias term needed to calculate the coherence. Introduced by Vaughan & Nowak 1997, ApJ 474, L43 @@ -513,12 +528,15 @@ def bias_term(power1, power2, power1_noise, power2_noise, n_ave, intrinsic_coher """ if n_ave > 500: return 0. * power1 - bsq = power1 * power2 - intrinsic_coherence * (power1 - power1_noise) * (power2 - power2_noise) + bsq = power1 * power2 - intrinsic_coherence * (power1 - power1_noise) * \ + (power2 - power2_noise) return bsq / n_ave -def raw_coherence(cross_power, power1, power2, power1_noise, power2_noise, n_ave, intrinsic_coherence=1): - """Raw coherence estimations from cross and power spectra. +def raw_coherence(cross_power, power1, power2, power1_noise, power2_noise, + n_ave, intrinsic_coherence=1): + """ + Raw coherence estimations from cross and power spectra. Vaughan & Nowak 1997, ApJ 474, L43 @@ -558,8 +576,10 @@ def raw_coherence(cross_power, power1, power2, power1_noise, power2_noise, n_ave return num / den -def _estimate_intrinsic_coherence_single(cross_power, power1, power2, power1_noise, power2_noise, n_ave): - """Estimate intrinsic coherence +def _estimate_intrinsic_coherence_single(cross_power, power1, power2, + power1_noise, power2_noise, n_ave): + """ + Estimate intrinsic coherence. Use the iterative procedure from sec. 
5 of
@@ -588,7 +608,8 @@ def _estimate_intrinsic_coherence_single(cross_power, power1, power2, power1_noi
     new_coherence = 1
     old_coherence = 0
     count = 0
-    while not np.isclose(new_coherence, old_coherence, atol=0.01) and count < 40:
+    while not np.isclose(new_coherence, old_coherence, atol=0.01) and \
+            count < 40:
         old_coherence = new_coherence
         bsq = bias_term(power1, power2, power1_noise, power2_noise, n_ave,
                         intrinsic_coherence=new_coherence)
@@ -602,11 +623,14 @@ def _estimate_intrinsic_coherence_single(cross_power, power1, power2, power1_noi
 
 # This is the vectorized version of the function above.
-estimate_intrinsic_coherence_vec = np.vectorize(_estimate_intrinsic_coherence_single)
+estimate_intrinsic_coherence_vec = np.vectorize(
+    _estimate_intrinsic_coherence_single)
 
-def estimate_intrinsic_coherence(cross_power, power1, power2, power1_noise, power2_noise, n_ave):
-    """Estimate intrinsic coherence
+def estimate_intrinsic_coherence(cross_power, power1, power2, power1_noise,
+                                 power2_noise, n_ave):
+    """
+    Estimate intrinsic coherence.
 
     Use the iterative procedure from sec. 5 of
@@ -637,8 +661,11 @@ def estimate_intrinsic_coherence(cross_power, power1, power2, power1_noise, powe
     return new_coherence
 
-def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, seg_power_noise, ref_power_noise, common_ref="False"):
-    """Error on cross spectral quantities, From Ingram 2019.
+def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave,
+                                     seg_power_noise, ref_power_noise,
+                                     common_ref=False):
+    """
+    Error on cross spectral quantities, from Ingram 2019.
 
     Note: this is only valid for a very large number of averaged powers.
     Beware if n_ave < 50 or so.
@@ -678,11 +705,13 @@ def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, s
     if n_ave < 30:
         warnings.warn("n_ave is below 30. Please note that the error bars "
                       "on the quantities derived from the cross spectrum "
-                      "are only reliable for a large number of averaged powers.")
+                      "are only reliable for a large number of averaged "
+                      "powers.")
 
     two_n_ave = 2 * n_ave
     if common_ref:
         Gsq = (cross_power * np.conj(cross_power)).real
-        bsq = bias_term(seg_power, ref_power, seg_power_noise, ref_power_noise, n_ave)
+        bsq = bias_term(seg_power, ref_power, seg_power_noise, ref_power_noise,
+                        n_ave)
         frac = (Gsq - bsq) / (ref_power - ref_power_noise)
         power_over_2n = ref_power / two_n_ave
@@ -694,8 +723,10 @@ def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, s
     else:
         PrPs = ref_power * seg_power
-        dRe = np.sqrt((PrPs + cross_power.real ** 2 - cross_power.imag ** 2) / two_n_ave)
-        dIm = np.sqrt((PrPs - cross_power.real ** 2 + cross_power.imag ** 2) / two_n_ave)
+        dRe = np.sqrt((PrPs + cross_power.real ** 2 - cross_power.imag ** 2) /
+                      two_n_ave)
+        dIm = np.sqrt((PrPs - cross_power.real ** 2 + cross_power.imag ** 2) /
+                      two_n_ave)
         gsq = raw_coherence(cross_power, seg_power, ref_power, seg_power_noise,
                             ref_power_noise, n_ave)
         dphi = np.sqrt((1 - gsq) / (2 * gsq * n_ave))
@@ -705,7 +736,8 @@ def error_on_averaged_cross_spectrum(cross_power, seg_power, ref_power, n_ave, s
 
 def cross_to_covariance(cross_power, ref_power, ref_power_noise, delta_nu):
-    """Convert a cross spectrum into a covariance spectrum.
+    """
+    Convert a cross spectrum into a covariance spectrum.
Covariance: Wilkinson & Uttley 2009, MNRAS, 397, 666 @@ -735,7 +767,8 @@ def cross_to_covariance(cross_power, ref_power, ref_power_noise, delta_nu): def _which_segment_idx_fun(binned=False, dt=None): - """Select which segment index function from ``gti.py`` to use. + """ + Select which segment index function from ``gti.py`` to use. If ``binned`` is ``False``, call the unbinned function. @@ -751,13 +784,15 @@ def _which_segment_idx_fun(binned=False, dt=None): # Define a new function, so that we can pass the correct dt as an # argument. def fun(*args, **kwargs): - return generate_indices_of_segment_boundaries_binned(*args, dt=dt, **kwargs) + return generate_indices_of_segment_boundaries_binned(*args, dt=dt, + **kwargs) return fun def get_average_ctrate(times, gti, segment_size, counts=None): - """Calculate the average count rate during the observation. + """ + Calculate the average count rate during the observation. This function finds the same segments that the averaged periodogram functions (``avg_cs_from_iterables``, ``avg_pds_from_iterables`` etc) will @@ -810,8 +845,10 @@ def get_average_ctrate(times, gti, segment_size, counts=None): return n_ph / (n_intvs * segment_size) -def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, fluxes=None, errors=None): - """Get fluxes from different segments of the observation. +def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, + fluxes=None, errors=None): + """ + Get fluxes from different segments of the observation. If ``fluxes`` is ``None``, the input times are interpreted as events, and they are split into many binned series of length ``segment_size`` with @@ -850,9 +887,8 @@ def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, fluxes """ if fluxes is None and n_bin is None: - raise ValueError( - "At least one between fluxes (if light curve) and n_bin (if events) has to be set" - ) + raise ValueError("At least one between fluxes (if light curve) and " + "n_bin (if events) has to be set") dt = None binned = fluxes is not None @@ -880,14 +916,16 @@ def get_flux_iterable_from_segments(times, gti, segment_size, n_bin=None, fluxes yield cts -def avg_pds_from_iterable(flux_iterable, dt, norm="frac", use_common_mean=True, silent=False): - """Calculate the average periodogram from an iterable of light curves +def avg_pds_from_iterable(flux_iterable, dt, norm="frac", use_common_mean=True, + silent=False): + """ + Calculate the average periodogram from an iterable of light curves Parameters ---------- flux_iterable : `iterable` of `np.array`s or of tuples (`np.array`, `np.array`) - Iterable providing either equal-length series of count measurements, or - of tuples (fluxes, errors). They must all be of the same length. + Iterable providing either equal-length series of count measurements, + or of tuples (fluxes, errors). They must all be of the same length. dt : float Time resolution of the light curves used to produce periodograms @@ -944,7 +982,8 @@ def local_show_progress(a): if flux is None or np.all(flux == 0): continue - # If the iterable returns the uncertainty, use it to calculate the variance. + # If the iterable returns the uncertainty, use it to calculate the + # variance. 
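+        # (The "variance" in question is the mean of the squared flux error
+        # bars, i.e. the variance of each measurement rather than of the
+        # light curve; it is only used for the Leahy normalization of
+        # non-Poissonian data.)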
variance = None if isinstance(flux, tuple): flux, err = flux @@ -975,8 +1014,8 @@ def local_show_progress(a): # No need for the negative frequencies unnorm_power = unnorm_power[fgt0] - # If the user wants to normalize using the mean of the total lightcurve, - # normalize it here + # If the user wants to normalize using the mean of the total + # lightcurve, normalize it here cs_seg = unnorm_power if not use_common_mean: mean = n_ph / n_bin @@ -988,7 +1027,8 @@ def local_show_progress(a): # Accumulate the total sum cross spectrum cross = sum_if_not_none_or_initialize(cross, cs_seg) - unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, unnorm_power) + unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, + unnorm_power) n_ave += 1 @@ -1002,7 +1042,8 @@ def local_show_progress(a): common_mean = n_ph / n_bin if common_variance is not None: - # Note: the variances we summed were means, not sums. Hence M, not M*n_bin + # Note: the variances we summed were means, not sums. + # Hence M, not M*n_bin common_variance /= n_ave # Transform a sum into the average @@ -1107,8 +1148,9 @@ def avg_cs_from_iterables_quick( n_ph1 = flux1.sum() n_ph2 = flux2.sum() - # At the first loop, we define the frequency array and the range of positive - # frequency bins (after the first loop, cross will not be None nymore) + # At the first loop, we define the frequency array and the range of + # positive frequency bins (after the first loop, cross will not be + # None anymore) if unnorm_cross is None: freq = fftfreq(n_bin, dt) fgt0 = positive_fft_bins(n_bin) @@ -1128,7 +1170,8 @@ def avg_cs_from_iterables_quick( unnorm_power = unnorm_power[fgt0] # Initialize or accumulate final averaged spectrum - unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, unnorm_power) + unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, + unnorm_power) n_ave += 1 @@ -1234,28 +1277,29 @@ def avg_cs_from_iterables( results : :class:`astropy.table.Table` Table containing the following columns: freq : `np.array` - The frequencies + The frequencies. power : `np.array` - The normalized cross spectral powers + The normalized cross spectral powers. unnorm_power : `np.array` - The unnormalized cross spectral power + The unnormalized cross spectral power. unnorm_pds1 : `np.array` - The unnormalized auxiliary PDS from channel 1. Only returned if ``return_auxil`` is ``True`` + The unnormalized auxiliary PDS from channel 1. Only returned if + ``return_auxil`` is ``True``. unnorm_pds2 : `np.array` - The unnormalized auxiliary PDS from channel 2. Only returned if ``return_auxil`` is ``True`` + The unnormalized auxiliary PDS from channel 2. Only returned if + ``return_auxil`` is ``True``. And a number of other useful diagnostics in the metadata, including all attributes needed to allocate Crossspectrum objects, such as all the input arguments of this function (``dt``, ``segment_size``), and, e.g. n : int - the number of bins in the light curves used in each segment + The number of bins in the light curves used in each segment. m : int - the number of averaged periodograms + The number of averaged periodograms. mean : float - the mean flux (geometrical average of the mean fluxes in the two - channels) - + The mean flux (geometrical average of the mean fluxes in the two + channels). 
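+
+    Examples
+    --------
+    A minimal, illustrative call on constant unit fluxes (hypothetical data):
+
+    >>> flux1 = [np.ones(8), np.ones(8)]
+    >>> flux2 = [np.ones(8), np.ones(8)]
+    >>> res = avg_cs_from_iterables(flux1, flux2, dt=1., norm="none")
+    >>> float(res.meta["mean"])
+    1.0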
""" local_show_progress = show_progress @@ -1270,7 +1314,8 @@ def local_show_progress(a): sum_of_photons1 = sum_of_photons2 = 0 common_variance1 = common_variance2 = common_variance = None - for flux1, flux2 in local_show_progress(zip(flux_iterable1, flux_iterable2)): + for flux1, flux2 in local_show_progress(zip(flux_iterable1, + flux_iterable2)): if flux1 is None or flux2 is None or \ np.all(flux1 == 0) or np.all(flux2 == 0): continue @@ -1289,13 +1334,16 @@ def local_show_progress(a): if variance1 is None or variance2 is None: variance1 = variance2 = None else: - common_variance1 = sum_if_not_none_or_initialize(common_variance1, variance1) - common_variance2 = sum_if_not_none_or_initialize(common_variance2, variance2) + common_variance1 = sum_if_not_none_or_initialize(common_variance1, + variance1) + common_variance2 = sum_if_not_none_or_initialize(common_variance2, + variance2) n_bin = flux1.size - # At the first loop, we define the frequency array and the range of positive - # frequency bins (after the first loop, cross will not be None nymore) + # At the first loop, we define the frequency array and the range of + # positive frequency bins (after the first loop, cross will not be + # None anymore) if cross is None: freq = fftfreq(n_bin, dt) fgt0 = positive_fft_bins(n_bin) @@ -1322,7 +1370,8 @@ def local_show_progress(a): sum_of_photons1 += n_ph1 sum_of_photons2 += n_ph2 - # Take only positive frequencies unless the user wants the full spectrum + # Take only positive frequencies unless the user wants the full + # spectrum if not fullspec: unnorm_power = unnorm_power[fgt0] if return_auxil: @@ -1344,22 +1393,28 @@ def local_show_progress(a): variance = np.sqrt(variance1 * variance2) cs_seg = normalize_periodograms( - unnorm_power, dt, n_bin, mean, n_ph=n_ph, norm=norm, power_type=power_type, variance=variance + unnorm_power, dt, n_bin, mean, n_ph=n_ph, norm=norm, + power_type=power_type, variance=variance ) p1_seg = normalize_periodograms( - unnorm_pd1, dt, n_bin, mean1, n_ph=n_ph1, norm=norm, power_type=power_type, variance=variance1 + unnorm_pd1, dt, n_bin, mean1, n_ph=n_ph1, norm=norm, + power_type=power_type, variance=variance1 ) p2_seg = normalize_periodograms( - unnorm_pd2, dt, n_bin, mean2, n_ph=n_ph2, norm=norm, power_type=power_type, variance=variance2 + unnorm_pd2, dt, n_bin, mean2, n_ph=n_ph2, norm=norm, + power_type=power_type, variance=variance2 ) # Initialize or accumulate final averaged spectra cross = sum_if_not_none_or_initialize(cross, cs_seg) - unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, unnorm_power) + unnorm_cross = sum_if_not_none_or_initialize(unnorm_cross, + unnorm_power) if return_auxil: - unnorm_pds1 = sum_if_not_none_or_initialize(unnorm_pds1, unnorm_pd1) - unnorm_pds2 = sum_if_not_none_or_initialize(unnorm_pds2, unnorm_pd2) + unnorm_pds1 = sum_if_not_none_or_initialize(unnorm_pds1, + unnorm_pd1) + unnorm_pds2 = sum_if_not_none_or_initialize(unnorm_pds2, + unnorm_pd2) pds1 = sum_if_not_none_or_initialize(pds1, p1_seg) pds2 = sum_if_not_none_or_initialize(pds2, p2_seg) @@ -1460,18 +1515,12 @@ def local_show_progress(a): return results -def avg_pds_from_events( - times, - gti, - segment_size, - dt, - norm="frac", - use_common_mean=True, - silent=False, - fluxes=None, - errors=None, -): - """Calculate the average periodogram from a list of event times or a light curve. 
+def avg_pds_from_events(times, gti, segment_size, dt, norm="frac", + use_common_mean=True, silent=False, fluxes=None, + errors=None): + """ + Calculate the average periodogram from a list of event times or a light + curve. If the input is a light curve, the time array needs to be uniformly sampled inside GTIs (it can have gaps outside), and the fluxes need to be passed @@ -1481,13 +1530,13 @@ def avg_pds_from_events( Parameters ---------- times : float `np.array` - Array of times + Array of times. gti : [[gti00, gti01], [gti10, gti11], ...] - good time intervals + Good time intervals. segment_size : float - length of segments + Length of segments. dt : float - Time resolution of the light curves used to produce periodograms + Time resolution of the light curves used to produce periodograms. Other Parameters ---------------- @@ -1526,35 +1575,24 @@ def avg_pds_from_events( n_bin = np.rint(segment_size / dt).astype(int) dt = segment_size / n_bin - flux_iterable = get_flux_iterable_from_segments( - times, gti, segment_size, n_bin, fluxes=fluxes, errors=errors - ) - cross = avg_pds_from_iterable( - flux_iterable, dt, norm=norm, use_common_mean=use_common_mean, silent=silent - ) + flux_iterable = get_flux_iterable_from_segments(times, gti, segment_size, + n_bin, fluxes=fluxes, + errors=errors) + cross = avg_pds_from_iterable(flux_iterable, dt, norm=norm, + use_common_mean=use_common_mean, + silent=silent) if cross is not None: cross.meta["gti"] = gti return cross -def avg_cs_from_events( - times1, - times2, - gti, - segment_size, - dt, - norm="frac", - use_common_mean=True, - fullspec=False, - silent=False, - power_type="all", - fluxes1=None, - fluxes2=None, - errors1=None, - errors2=None, - return_auxil=False, -): - """Calculate the average cross spectrum from a list of event times or a light curve. +def avg_cs_from_events(times1, times2, gti, segment_size, dt, norm="frac", + use_common_mean=True, fullspec=False, silent=False, + power_type="all", fluxes1=None, fluxes2=None, + errors1=None, errors2=None, return_auxil=False): + """ + Calculate the average cross spectrum from a list of event times or a light + curve. If the input is a light curve, the time arrays need to be uniformly sampled inside GTIs (they can have gaps outside), and the fluxes need to be passed @@ -1626,7 +1664,8 @@ def avg_cs_from_events( times2, gti, segment_size, n_bin, fluxes=fluxes2, errors=errors2 ) - is_events = np.all([val is None for val in (fluxes1, fluxes2, errors1, errors2)]) + is_events = np.all([val is None for val in (fluxes1, fluxes2, errors1, + errors2)]) if (is_events and silent diff --git a/stingray/gti.py b/stingray/gti.py index 1ea26aa42..fcbf479d6 100644 --- a/stingray/gti.py +++ b/stingray/gti.py @@ -19,30 +19,34 @@ 'cross_two_gtis', 'cross_gtis', 'get_btis', 'get_gti_extensions_from_pattern', 'get_gti_from_all_extensions', 'get_gti_from_hdu', 'get_gti_lengths', 'get_total_gti_length', - 'check_separate', 'append_gtis', 'join_gtis', 'generate_indices_of_gti_boundaries', + 'check_separate', 'append_gtis', 'join_gtis', + 'generate_indices_of_gti_boundaries', 'time_intervals_from_gtis', 'bin_intervals_from_gtis', - 'gti_border_bins', 'generate_indices_of_segment_boundaries_unbinned', 'generate_indices_of_segment_boundaries_binned'] + 'gti_border_bins', + 'generate_indices_of_segment_boundaries_unbinned', + 'generate_indices_of_segment_boundaries_binned'] def gti_len(gti): - """Deprecated. Use get_total_gti_length.""" - warnings.warn("This function is deprecated. 
Use get_total_gti_length instead", - DeprecationWarning) + """Deprecated, will be removed in version 2.0. Use get_total_gti_length.""" + warnings.warn("This function is deprecated. Use get_total_gti_length "\ + "instead", DeprecationWarning) return get_total_gti_length(gti, minlen=0) def get_gti_lengths(gti): - """Calculate the length of each Good Time Interval. + """ + Calculate the length of each Good Time Interval. Parameters ---------- gti : [[gti00, gti01], [gti10, gti11], ...] - The list of good time intervals + The list of good time intervals. Returns ------- lengths : `np.ndarray` - List of GTI lengths + List of GTI lengths. Examples -------- @@ -54,19 +58,20 @@ def get_gti_lengths(gti): def get_total_gti_length(gti, minlen=0): - """Calculate the total exposure during Good Time Intervals. + """ + Calculate the total exposure during Good Time Intervals. Parameters ---------- gti : [[gti00, gti01], [gti10, gti11], ...] - The list of good time intervals + The list of good time intervals. minlen : float - Minimum GTI length to consider + Minimum GTI length to consider. Returns ------- length : float - total exposure during GTIs + The total exposure during GTIs. Examples -------- @@ -88,18 +93,16 @@ def load_gtis(fits_file, gtistring=None): Parameters ---------- fits_file : str - File name and path for the fits file with the GTIs to be loaded + File name and path for the FITS file with the GTIs to be loaded. gtistring : str - If the name of the extension with the GTIs is not ``GTI``, the alternative - name can be set with this parameter + If the name of the FITS extension with the GTIs is not ``GTI``, the + alternative name can be set with this parameter. Returns ------- gti_list : list A list of GTI ``(start, stop)`` pairs extracted from the FITS file. - - """ gtistring = assign_value_if_none(gtistring, 'GTI') @@ -117,19 +120,20 @@ def load_gtis(fits_file, gtistring=None): def get_gti_extensions_from_pattern(lchdulist, name_pattern="GTI"): - """Gets the GTI extensions that match a given pattern. + """ + Gets the GTI extensions that match a given pattern. Parameters ---------- lchdulist: `:class:astropy.io.fits.HDUList` object The full content of a FITS file. name_pattern: str - Pattern indicating all the GTI extensions + Pattern indicating all the GTI extensions. Returns ------- ext_list: list - List of GTI extension numbers whose name matches the input pattern + List of GTI extension numbers whose name matches the input pattern. Examples -------- @@ -156,7 +160,8 @@ def get_gti_extensions_from_pattern(lchdulist, name_pattern="GTI"): def hdu_contains_gti(hdu): - """Test if a given hdu contains a list of GTIs + """ + Test if a given FITS HDU contains a list of GTIs. Examples -------- @@ -178,17 +183,18 @@ def hdu_contains_gti(hdu): def get_gti_from_hdu(gtihdu): - """Get the GTIs from a given extension. + """ + Get the GTIs from a given FITS extension. Parameters ---------- gtihdu: `:class:astropy.io.fits.TableHDU` object - The GTI hdu + The GTI HDU. Returns ------- gti_list: [[gti00, gti01], [gti10, gti11], ...] - List of good time intervals + List of good time intervals. Examples -------- @@ -218,10 +224,10 @@ def get_gti_from_hdu(gtihdu): return gti_list -def get_gti_from_all_extensions( - lchdulist, accepted_gtistrings=["GTI"], det_numbers=None -): - """Intersect the GTIs from the all accepted extensions. +def get_gti_from_all_extensions(lchdulist, accepted_gtistrings=["GTI"], + det_numbers=None): + """ + Intersect the GTIs from the all accepted extensions. 
Parameters ---------- @@ -278,7 +284,8 @@ def get_gti_from_all_extensions( def check_gtis(gti): - """Check if GTIs are well-behaved. + """ + Check if GTIs are well-behaved. Check that: @@ -299,13 +306,13 @@ def check_gtis(gti): If GTIs have overlapping or displaced values """ if len(gti) < 1: - raise ValueError("Empty GTIs") + raise ValueError("Empty GTIs.") for g in gti: if np.size(g) != 2 or np.ndim(g) != 1: raise TypeError( - "Please check the formatting of GTIs. They need to be" - " provided as [[gti00, gti01], [gti10, gti11], ...]") + "Please check the formatting of the GTIs. They need to be" + " provided as [[gti00, gti01], [gti10, gti11], ...].") gti = np.array(gti) gti_start = gti[:, 0] @@ -313,20 +320,21 @@ def check_gtis(gti): # Check that GTIs are well-behaved if not np.all(gti_end >= gti_start): - raise ValueError('This GTI end times must be larger than ' - 'GTI start times') + raise ValueError('The GTI end times must be larger than the ' + 'GTI start times.') # Check that there are no overlaps in GTIs if not np.all(gti_start[1:] >= gti_end[:-1]): - raise ValueError('This GTI has overlaps') + raise ValueError('This GTI has overlaps.') return @jit(nopython=True) -def create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0): # pragma: no cover +def create_gti_mask_jit(time, gtis, mask, gti_mask, + min_length=0): # pragma: no cover """ - Compiled and fast function to create gti mask. + Compiled and fast function to create GTI mask. Parameters ---------- @@ -334,18 +342,19 @@ def create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0): # pragma: no An array of time stamps gtis : iterable of ``(start, stop)`` pairs - The list of GTIs + The list of GTIs. mask : numpy.ndarray A pre-assigned array of zeros of the same shape as ``time`` Records whether a time stamp is part of the GTIs. gti_mask : numpy.ndarray - A pre-assigned array zeros in the same shape as ``time``; records start/stop of GTIs + A pre-assigned array zeros in the same shape as ``time``; records + start/stop of GTIs. min_length : float - An optional minimum length for the GTIs to be applied. Only GTIs longer than ``min_length`` will - be considered when creating the mask. + An optional minimum length for the GTIs to be applied. Only GTIs longer + than ``min_length`` will be considered when creating the mask. """ gti_el = -1 @@ -376,7 +385,8 @@ def create_gti_mask_jit(time, gtis, mask, gti_mask, min_length=0): # pragma: no def create_gti_mask(time, gtis, safe_interval=None, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001): - """Create GTI mask. + """ + Create GTI mask. Assumes that no overlaps are present between GTIs @@ -404,18 +414,19 @@ def create_gti_mask(time, gtis, safe_interval=None, min_length=0, If ``True```, return the list of new GTIs (if ``min_length > 0``) dt : float - Time resolution of the data, i.e. the interval between time stamps + Time resolution of the data, i.e. the interval between time stamps. epsilon : float - fraction of ``dt`` that is tolerated at the borders of a GTI + Fraction of ``dt`` that is tolerated at the borders of a GTI. Returns ------- mask : bool array - A mask labelling all time stamps that are included in the GTIs versus those that are not. + A mask labelling all time stamps that are included in the GTIs versus + those that are not. new_gtis : ``Nx2`` array - An array of new GTIs created by this function + An array of new GTIs created by this function. 
""" gtis = np.array(gtis, dtype=np.longdouble) if time.size == 0: @@ -480,17 +491,18 @@ def create_gti_mask(time, gtis, safe_interval=None, min_length=0, def create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, return_new_gtis=False, dt=None, epsilon=0.001): - """Create GTI mask, allowing for non-constant ``dt``. + """ + Create GTI mask, allowing for non-constant ``dt``. - Assumes that no overlaps are present between GTIs + Assumes that no overlaps are present between GTIs. Parameters ---------- time : numpy.ndarray - An array of time stamps + An array of time stamps. gtis : ``[[g0_0, g0_1], [g1_0, g1_1], ...]``, float array-like - The list of GTIs + The list of GTIs. Other parameters @@ -500,25 +512,26 @@ def create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, and the end (if pair of values) of GTIs. min_length : float - An optional minimum length for the GTIs to be applied. Only GTIs longer than ``min_length`` will - be considered when creating the mask. + An optional minimum length for the GTIs to be applied. Only GTIs longer + than ``min_length`` will be considered when creating the mask. return_new_gtis : bool - If ``True``, return the list of new GTIs (if ``min_length > 0``) + If ``True``, return the list of new GTIs (if ``min_length > 0``). dt : float - Time resolution of the data, i.e. the interval between time stamps + Time resolution of the data, i.e. the interval between time stamps. epsilon : float - fraction of ``dt`` that is tolerated at the borders of a GTI + Fraction of ``dt`` that is tolerated at the borders of a GTI. Returns ------- mask : bool array - A mask labelling all time stamps that are included in the GTIs versus those that are not. + A mask labelling all time stamps that are included in the GTIs versus + those that are not. new_gtis : Nx2 array - An array of new GTIs created by this function + An array of new GTIs created by this function. """ check_gtis(gtis) @@ -560,12 +573,13 @@ def create_gti_mask_complete(time, gtis, safe_interval=0, min_length=0, def create_gti_from_condition(time, condition, safe_interval=0, dt=None): - """Create a GTI list from a time array and a boolean mask (``condition``). + """ + Create a GTI list from a time array and a boolean mask (``condition``). Parameters ---------- time : array-like - Array containing time stamps + Array containing time stamps. condition : array-like An array of bools, of the same length of time. @@ -574,7 +588,7 @@ def create_gti_from_condition(time, condition, Returns ------- gtis : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` - The newly created GTIs + The newly created GTIs. Other parameters ---------------- @@ -612,7 +626,8 @@ def create_gti_from_condition(time, condition, def cross_two_gtis(gti0, gti1): - """Extract the common intervals from two GTI lists *EXACTLY*. + """ + Extract the common intervals from two GTI lists *EXACTLY*. Parameters ---------- @@ -623,7 +638,7 @@ def cross_two_gtis(gti0, gti1): Returns ------- gtis : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` - The newly created GTIs + The newly created GTIs. See Also -------- @@ -718,13 +733,14 @@ def cross_gtis(gti_list): Parameters ---------- gti_list : array-like - List of GTI arrays, each one in the usual format ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + List of GTI arrays, each one in the usual format + ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``. Returns ------- gti0: 2-d float array ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` - The newly created GTIs + The newly created GTIs. 
See Also
--------
@@ -769,20 +785,20 @@ def get_btis(gtis, start_time=None, stop_time=None):
    Parameters
    ----------
    gtis : iterable
-        A list of GTIs
+        A list of GTIs.
    start_time : float
-        Optional start time of the overall observation (e.g. can be earlier than the first time stamp
-        in ``gtis``.
+        Optional start time of the overall observation (e.g. can be earlier
+        than the first time stamp in ``gtis``).
    stop_time : float
-        Optional stop time of the overall observation (e.g. can be later than the last time stamp in
-        ``gtis``.
+        Optional stop time of the overall observation (e.g. can be later than
+        the last time stamp in ``gtis``).
    Returns
    -------
    btis : numpy.ndarray
-        A list of bad time intervals
+        A list of bad time intervals.
    """
    # Check GTIs
    if len(gtis) == 0:
@@ -840,15 +856,15 @@ def check_separate(gti0, gti1):
    Parameters
    ----------
    gti0: 2-d float array
-        List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+        List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``.
    gti1: 2-d float array
-        List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+        List of GTIs of form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``.
    Returns
    -------
    separate: bool
-        ``True`` if GTIs are mutually exclusive, ``False`` if not
+        ``True`` if GTIs are mutually exclusive, ``False`` if not.
    Examples
    --------
@@ -892,8 +908,8 @@ def check_separate(gti0, gti1):
def join_equal_gti_boundaries(gti):
-    """If the start of a GTI is right at the end of another, join them.
-
+    """
+    If the start of a GTI is right at the end of another, join them.
    """
    new_gtis = []
    for l in gti:
@@ -912,7 +928,8 @@ def join_equal_gti_boundaries(gti):
def append_gtis(gti0, gti1):
-    """Union of two non-overlapping GTIs.
+    """
+    Union of two non-overlapping GTIs.
    If the two GTIs "touch", this is tolerated and the touching GTIs are
    joined in a single one.
@@ -920,15 +937,15 @@ def append_gtis(gti0, gti1):
    Parameters
    ----------
    gti0: 2-d float array
-        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``.
    gti1: 2-d float array
-        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``.
    Returns
    -------
    gti: 2-d float array
-        The newly created GTI
+        The newly created GTI array.
    Examples
    --------
@@ -958,7 +975,8 @@ def append_gtis(gti0, gti1):
def join_gtis(gti0, gti1):
-    """Union of two GTIs.
+    """
+    Union of two GTIs.
    If GTIs are mutually exclusive, it calls ``append_gtis``. Otherwise we put
    the extremes of partially overlapping GTIs on an ideal line and look at the
@@ -1036,7 +1054,8 @@ def join_gtis(gti0, gti1):
def time_intervals_from_gtis(gtis, segment_size, fraction_step=1,
                             epsilon=1e-5):
-    """Compute start/stop times of equal time intervals, compatible with GTIs.
+    """
+    Compute start/stop times of equal time intervals, compatible with GTIs.
    Used to start each FFT/PDS/cospectrum from the start of a GTI,
    and stop before the next gap in data (end of GTI).
@@ -1050,9 +1069,10 @@ def time_intervals_from_gtis(gtis, segment_size, fraction_step=1,
        Length of the time segments
    fraction_step : float
-        If the step is not a full ``segment_size`` but less (e.g. a moving window),
-        this indicates the ratio between step step and ``segment_size`` (e.g.
-        0.5 means that the window shifts of half ``segment_size``)
+        If the step is not a full ``segment_size`` but less (e.g.
a moving
+        window), this indicates the ratio between the step and
+        ``segment_size`` (e.g. ``0.5`` means that the window shifts by half
+        ``segment_size``).
    Returns
    -------
@@ -1093,20 +1113,20 @@ def calculate_segment_bin_start(startbin, stopbin, nbin, fraction_step=1):
    Parameters
    ----------
    startbin : int
-        Starting bin of the interval
+        Starting bin of the interval.
    stopbin : int
-        Last bin of the interval
+        Last bin of the interval.
    nbin : int
-        number of bins in each interval
+        Number of bins in each interval.
    Other Parameters
    ----------------
    fraction_step : float
        If the step is not a full ``nbin`` but less (e.g. a moving window),
        this indicates the ratio between the step and ``nbin`` (e.g.
-        ``0.5`` means that the window shifts of half ``nbin``)
+        ``0.5`` means that the window shifts by half ``nbin``).
    Returns
    -------
@@ -1134,25 +1154,26 @@ def calculate_segment_bin_start(startbin, stopbin, nbin, fraction_step=1):
def bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1,
                            epsilon=0.001):
-    """Compute start/stop times of equal time intervals, compatible with GTIs, and map them
-    to the indices of an array of time stamps.
+    """
+    Compute start/stop times of equal time intervals, compatible with GTIs,
+    and map them to the indices of an array of time stamps.
    Used to start each FFT/PDS/cospectrum from the start of a GTI,
    and stop before the next gap in data (end of GTI).
    In this case, it is necessary to specify the time array containing the
    times of the light curve bins.
-    Returns start and stop bins of the intervals to use for the PDS
+    Returns start and stop bins of the intervals to use for the PDS.
    Parameters
    ----------
    gtis : 2-d float array
-        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``.
    segment_size : float
-        Length of each time segment
+        Length of each time segment.
    time : array-like
-        Array of time stamps
+        Array of time stamps.
    Other Parameters
    ----------------
@@ -1160,12 +1181,14 @@ def bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1,
        Time resolution of the light curve.
    epsilon : float, default 0.001
-        The tolerance, in fraction of ``dt``, for the comparisons at the borders
+        The tolerance, in fraction of ``dt``, for the comparisons at the
+        borders.
    fraction_step : float
-        If the step is not a full ``segment_size`` but less (e.g. a moving window),
-        this indicates the ratio between step step and ``segment_size`` (e.g.
-        ``0.5`` means that the window shifts of half ``segment_size``)
+        If the step is not a full ``segment_size`` but less (e.g. a moving
+        window), this indicates the ratio between the step and
+        ``segment_size`` (e.g. ``0.5`` means that the window shifts by half
+        ``segment_size``).
    Returns
    -------
@@ -1237,17 +1260,18 @@ def bin_intervals_from_gtis(gtis, segment_size, time, dt=None, fraction_step=1,
def gti_border_bins(gtis, time, dt=None, epsilon=0.001):
-    """Find the indices in a time array corresponding to the borders of GTIs.
+    """
+    Find the indices in a time array corresponding to the borders of GTIs.
    GTIs shorter than the bin time are not returned.
    Parameters
    ----------
    gtis : 2-d float array
-        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+        List of GTIs of the form ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``.
    time : array-like
-        Array of time stamps
+        Array of time stamps.
Other Parameters
----------------
@@ -1255,12 +1279,14 @@ def gti_border_bins(gtis, time, dt=None, epsilon=0.001):
        Time resolution of the light curve.
    epsilon : float, default 0.001
-        The tolerance, in fraction of ``dt``, for the comparisons at the borders
+        The tolerance, in fraction of ``dt``, for the comparisons at the
+        borders.
    fraction_step : float
-        If the step is not a full ``segment_size`` but less (e.g. a moving window),
-        this indicates the ratio between step step and ``segment_size`` (e.g.
-        ``0.5`` means that the window shifts of half ``segment_size``)
+        If the step is not a full ``segment_size`` but less (e.g. a moving
+        window), this indicates the ratio between the step and
+        ``segment_size`` (e.g. ``0.5`` means that the window shifts by half
+        ``segment_size``).
    Returns
    -------
@@ -1328,13 +1354,15 @@ def gti_border_bins(gtis, time, dt=None, epsilon=0.001):
def generate_indices_of_boundaries(times, gti, segment_size=None, dt=0):
-    """Get index boundaries and times from different parts of the observation.
+    """
+    Get index boundaries and times from different parts of the observation.
    It wraps around `generate_indices_of_gti_boundaries`,
    `generate_indices_of_segment_boundaries_binned`, and
    `generate_indices_of_segment_boundaries_unbinned` depending on:
-    + ``segment_size`` being ``None`` (give GTI boundaries, segment boundaries otherwise)
+    + ``segment_size`` being ``None`` (give GTI boundaries, segment boundaries
+      otherwise)
    + ``dt`` being 0 or nonzero (unevenly sampled, evenly sampled otherwise)
    Examples
    --------
@@ -1369,17 +1397,18 @@ def generate_indices_of_boundaries(times, gti, segment_size=None, dt=0):
def generate_indices_of_gti_boundaries(times, gti, dt=0):
-    """Get the indices of events from different GTIs of the observation.
+    """
+    Get the indices of events from different GTIs of the observation.
    This is a generator, yielding the boundaries of each GTI and the
-    corresponding indices in the time array
+    corresponding indices in the time array.
    Parameters
    ----------
    times : float `np.array`
-        Array of times
+        Array of times.
    gti : [[gti00, gti01], [gti10, gti11], ...]
-        good time intervals
+        Good time intervals.
    Other parameters
    ----------------
@@ -1389,11 +1418,11 @@ def generate_indices_of_gti_boundaries(times, gti, dt=0):
    Yields
    -------
    g0: float
-        Start time of current GTI
+        Start time of current GTI.
    g1: float
-        End time of current GTI
+        End time of current GTI.
    startidx: int
-        Start index of the current GTI in the time array
+        Start index of the current GTI in the time array.
    stopidx: int
        End index of the current GTI in the time array. Note that this is
        larger by one, so that `time[startidx:stopidx]` returns the correct
@@ -1419,28 +1448,29 @@ def generate_indices_of_gti_boundaries(times, gti, dt=0):
def generate_indices_of_segment_boundaries_unbinned(times, gti, segment_size):
-    """Get the indices of events from different segments of the observation.
+    """
+    Get the indices of events from different segments of the observation.
    This is a generator, yielding the boundaries of each segment and the
-    corresponding indices in the time array
+    corresponding indices in the time array.
    Parameters
    ----------
    times : float `np.array`
-        Array of times
+        Array of times.
    gti : [[gti00, gti01], [gti10, gti11], ...]
-        good time intervals
+        Good time intervals.
    segment_size : float
-        length of segments
+        Length of segments.
    Yields
    -------
    t0: float
-        Start time of current segment
+        Start time of current segment.
t1: float
-        End time of current segment
+        End time of current segment.
    startidx: int
-        Start index of the current segment in the time array
+        Start index of the current segment in the time array.
    stopidx: int
        End index of the current segment in the time array. Note that this is
        larger by one, so that `time[startidx:stopidx]` returns the correct
@@ -1476,8 +1506,10 @@ def generate_indices_of_segment_boundaries_unbinned(times, gti, segment_size):
        yield s, e, idx0, idx1
-def generate_indices_of_segment_boundaries_binned(times, gti, segment_size, dt=None):
-    """Get the indices of binned times from different segments of the observation.
+def generate_indices_of_segment_boundaries_binned(times, gti, segment_size,
+                                                  dt=None):
+    """
+    Get the indices of binned times from different segments of the observation.
    This is a generator, yielding the boundaries of each segment and the
    corresponding indices in the time array
@@ -1517,9 +1549,11 @@ def generate_indices_of_segment_boundaries_binned(times, gti, segment_size, dt=N
    """
    gti = np.asarray(gti)
    times = np.asarray(times)
-    startidx, stopidx = bin_intervals_from_gtis(gti, segment_size, times, dt=dt)
+    startidx, stopidx = bin_intervals_from_gtis(gti, segment_size, times,
+                                                dt=dt)
    if dt is None:
        dt = 0
    for idx0, idx1 in zip(startidx, stopidx):
-        yield times[idx0] - dt / 2, times[min(idx1, times.size - 1)] - dt / 2, idx0, idx1
+        yield times[idx0] - dt / 2, times[min(idx1, times.size - 1)] - dt / 2,\
+            idx0, idx1
diff --git a/stingray/powerspectrum.py b/stingray/powerspectrum.py
index e6aecf83e..ab9841fcb 100755
--- a/stingray/powerspectrum.py
+++ b/stingray/powerspectrum.py
@@ -34,12 +34,12 @@ def show_progress(a, **kwargs):
class Powerspectrum(Crossspectrum):
    type = "powerspectrum"
    """
-    Make a :class:`Powerspectrum` (also called periodogram) from a (binned) light curve.
-    Periodograms can be normalized by either Leahy normalization, fractional rms
-    normalizaation, absolute rms normalization, or not at all.
+    Make a :class:`Powerspectrum` (also called periodogram) from a (binned)
+    light curve. Periodograms can be normalized by either Leahy normalization,
+    fractional rms normalization, absolute rms normalization, or not at all.
-    You can also make an empty :class:`Powerspectrum` object to populate with your
-    own fourier-transformed data (this can sometimes be useful when making
+    You can also make an empty :class:`Powerspectrum` object to populate with
+    your own fourier-transformed data (this can sometimes be useful when making
    binned power spectra).
    Parameters
    ----------
    data: :class:`stingray.Lightcurve` object, optional, default ``None``
        The light curve data to be Fourier-transformed.
-    norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }, optional, default ``frac``
+    norm: {"leahy" | "frac" | "abs" | "none"}, optional, default "frac"
        The normalization of the power spectrum to be used. Options are
-        ``leahy``, ``frac``, ``abs`` and ``none``, default is ``frac``.
+        "leahy", "frac", "abs" and "none", default is "frac".
    Other Parameters
    ----------------
    gti: 2-d float array
        ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals.
        This choice overrides the GTIs in the single light curves. Use with
-        care!
+        care, especially if these GTIs have overlaps with the input
+        object GTIs! If you're getting errors regarding your GTIs, don't
+        use this and only give GTIs to the input object before making
+        the power spectrum.
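        For instance (a sketch; ``ev`` stands in for an existing
        ``EventList``):

        >>> ev.gti = np.asarray([[0., 100.]])  # doctest: +SKIP
        >>> ps = Powerspectrum(ev, dt=0.01)    # doctest: +SKIP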
skip_checks: bool
        Skip initial checks, for speed or other reasons (you need to trust your
-        inputs!)
+        inputs!).
    Attributes
    ----------
-    norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }
-        the normalization of the power spectrun
+    norm: {"leahy" | "frac" | "abs" | "none" }
+        The normalization of the power spectrum.
    freq: numpy.ndarray
-        The array of mid-bin frequencies that the Fourier transform samples
+        The array of mid-bin frequencies that the Fourier transform samples.
    power: numpy.ndarray
        The array of normalized squared absolute values of Fourier
-        amplitudes
+        amplitudes.
    power_err: numpy.ndarray
        The uncertainties of ``power``.
        An approximation for each bin given by ``power_err= power/sqrt(m)``.
        Where ``m`` is the number of powers averaged in each bin (by frequency
-        binning, or averaging power spectrum). Note that for a single
-        realization (``m=1``) the error is equal to the power.
+        binning, or averaging power spectra of segments of a light curve).
+        Note that for a single realization (``m=1``) the error is equal to the
+        power.
    df: float
-        The frequency resolution
+        The frequency resolution.
    m: int
-        The number of averaged powers in each bin
+        The number of averaged powers in each bin.
    n: int
-        The number of data points in the light curve
+        The number of data points in the light curve.
    nphots: float
-        The total number of photons in the light curve
+        The total number of photons in the light curve.
    legacy: bool
-        Use the legacy machinery of AveragedPowerspectrum. This might be useful to compare
-        with old results, and is also needed to use light curve lists as an input, to
-        conserve the spectra of each segment, or to use the large_data option.
+        Use the legacy machinery of ``AveragedPowerspectrum``. This might be
+        useful to compare with old results, and is also needed to use light
+        curve lists as an input, to conserve the spectra of each segment, or
+        to use the large_data option.
    """
    def __init__(self, data=None, norm="frac", gti=None,
@@ -131,8 +136,8 @@ def __init__(self, data=None, norm="frac", gti=None,
        if not legacy and data is not None:
            return self._initialize_from_any_input(data, dt=dt, norm=norm)
-        Crossspectrum.__init__(self, data1=data, data2=data, norm=norm, gti=gti,
-                               dt=dt, skip_checks=True, legacy=legacy)
+        Crossspectrum.__init__(self, data1=data, data2=data, norm=norm,
+                               gti=gti, dt=dt, skip_checks=True, legacy=legacy)
        self.nphots = self.nphots1
        self.dt = dt
@@ -143,12 +148,13 @@ def rebin(self, df=None, f=None, method="mean"):
        Parameters
        ----------
        df: float
-            The new frequency resolution
+            The new frequency resolution.
        Other Parameters
        ----------------
        f: float
-            the rebin factor. If specified, it substitutes ``df`` with ``f*self.df``
+            The rebin factor. If specified, it substitutes ``df`` with
+            ``f*self.df``, so ``f>1`` is recommended.
        Returns
        -------
@@ -168,10 +174,10 @@ def compute_rms(self, min_freq, max_freq, white_noise_offset=0.):
        Parameters
        ----------
        min_freq: float
-            The lower frequency bound for the calculation
+            The lower frequency bound for the calculation.
        max_freq: float
-            The upper frequency bound for the calculation
+            The upper frequency bound for the calculation.
        Other parameters
        ----------------
@@ -185,10 +191,10 @@ def compute_rms(self, min_freq, max_freq, white_noise_offset=0.):
        -------
        rms: float
            The fractional rms amplitude contained between ``min_freq`` and
-            ``max_freq``
+            ``max_freq``.
        rms_err: float
-            The error on the fractional rms amplitude
+            The error on the fractional rms amplitude.
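        Examples
        --------
        A sketch of the call (``ps`` is an existing ``Powerspectrum``; the
        frequency bounds are arbitrary):

        >>> rms, rms_err = ps.compute_rms(min_freq=0.1, max_freq=10.)  # doctest: +SKIP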
""" minind = self.freq.searchsorted(min_freq) @@ -233,7 +239,7 @@ def _rms_error(self, powers): Returns ------- delta_rms: float - The error on the fractional rms amplitude + The error on the fractional rms amplitude. """ nphots = self.nphots p_err = scipy.stats.chi2(2.0 * self.m).var() * powers / self.m / nphots @@ -259,10 +265,11 @@ def classical_significances(self, threshold=1, trial_correction=False): 1. The power spectrum is Leahy-normalized 2. There is no source of variability in the data other than the - periodic signal to be determined with this method. This is important! - If there are other sources of (aperiodic) variability in the data, this - method will *not* produce correct results, but instead produce a large - number of spurious false positive detections! + periodic signal to be determined with this method. This is + important! If there are other sources of (aperiodic) variability in + the data, this method will *not* produce correct results, but + instead produce a large number of spurious false positive + detections! 3. There are no significant instrumental effects changing the statistical distribution of the powers (e.g. pile-up or dead time) @@ -270,9 +277,9 @@ def classical_significances(self, threshold=1, trial_correction=False): the power spectrum, where index is the numerical index of the power in question. If a ``threshold`` is set, then only powers with p-values *below* that threshold with their respective indices. If - ``trial_correction`` is set to ``True``, then the threshold will be corrected - for the number of trials (frequencies) in the power spectrum before - being used. + ``trial_correction`` is set to ``True``, then the threshold will be + corrected for the number of trials (frequencies) in the power spectrum + before being used. Parameters ---------- @@ -282,18 +289,18 @@ def classical_significances(self, threshold=1, trial_correction=False): Default is ``1`` (all p-values will be reported). trial_correction : bool, optional, default ``False`` - A Boolean flag that sets whether the ``threshold`` will be corrected - by the number of frequencies before being applied. This decreases - the ``threshold`` (p-values need to be lower to count as significant). - Default is ``False`` (report all powers) though for any application - where `threshold`` is set to something meaningful, this should also - be applied! + A Boolean flag that sets whether the ``threshold`` will be + corrected by the number of frequencies before being applied. This + decreases the ``threshold`` (p-values need to be lower to count as + significant). Default is ``False`` (report all powers) though for + any application where `threshold`` is set to something meaningful, + this should also be applied! Returns ------- pvals : iterable - A list of ``(p-value, index)`` tuples for all powers that have p-values - lower than the threshold specified in ``threshold``. + A list of ``(p-value, index)`` tuples for all powers that have + p-values lower than the threshold specified in ``threshold``. """ if not self.norm == "leahy": @@ -307,7 +314,8 @@ def classical_significances(self, threshold=1, trial_correction=False): if np.size(self.m) == 1: # calculate p-values for all powers - # leave out zeroth power since it just encodes the number of photons! + # leave out zeroth power since it just encodes the number of + # photons! 
pv = pds_probability(self.power, n_summed_spectra=self.m,
                                 ntrial=ntrial)
        else:
@@ -324,9 +332,11 @@ def classical_significances(self, threshold=1, trial_correction=False):
        return pvals
    def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95):
-        """Upper limit on a sinusoidal modulation.
+        """
+        Upper limit on a sinusoidal modulation.
-        To understand the meaning of this amplitude: if the modulation is described by:
+        To understand the meaning of this amplitude: if the modulation is
+        described by:
        .. math:: p = \overline{p} (1 + a * \sin(x))
        If it is a sum of sinusoidal harmonics instead
        .. math:: p = \overline{p} (1 + \sum_l a_l * \sin(lx))
        a is equivalent to :math:`\sqrt{\sum_l a_l^2}`.
-        See `stingray.stats.power_upper_limit`, `stingray.stats.amplitude_upper_limit`
+        See `stingray.stats.power_upper_limit`,
+        `stingray.stats.amplitude_upper_limit`
        for more information.
-
-        The formula used to calculate the upper limit assumes the Leahy normalization.
-        If the periodogram is in another normalization, we will internally convert
-        it to Leahy before calculating the upper limit.
+
+        The formula used to calculate the upper limit assumes the Leahy
+        normalization.
+        If the periodogram is in another normalization, we will internally
+        convert it to Leahy before calculating the upper limit.
        Parameters
        ----------
@@ -359,7 +371,8 @@ def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95):
        Returns
        -------
        a: float
-            The modulation amplitude that could produce P>pmeas with 1 - c probability
+            The modulation amplitude that could produce P>pmeas with 1 - c
+            probability.
        Examples
        --------
@@ -402,130 +415,146 @@ def modulation_upper_limit(self, fmin=None, fmax=None, c=0.95):
    @staticmethod
    def from_time_array(times, dt, segment_size=None, gti=None, norm="frac",
                        silent=False, use_common_mean=True):
-        """Calculate AveragedPowerspectrum from an array of event times.
+        """
+        Calculate an average power spectrum from an array of event times.
        Parameters
        ----------
        times : `np.array`
-            Event arrival times
+            Event arrival times.
        dt : float
            The time resolution of the intermediate light curves
-            (sets the Nyquist frequency)
+            (sets the Nyquist frequency).
        Other parameters
        ----------------
        segment_size : float
-            The length, in seconds, of the light curve segments that will be averaged
-            Only relevant (and required) for AveragedPowerspectrum
-        gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
-            Good Time intervals.
+            The length, in seconds, of the light curve segments that will be
+            averaged. Only relevant (and required) for
+            ``AveragedPowerspectrum``.
+        gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+            Additional, optional Good Time intervals that get intersected with
+            the GTIs of the input object. Can cause errors if there are
+            overlaps between these GTIs and the input object GTIs. If that
+            happens, assign the desired GTIs to the input object.
        norm : str, default "frac"
-            The normalization of the periodogram. "abs" is absolute rms, "frac" is
-            fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
-            unnormalized periodogram
+            The normalization of the periodogram. `abs` is absolute rms, `frac`
+            is fractional rms, `leahy` is Leahy+83 normalization, and `none` is
+            the unnormalized periodogram.
        use_common_mean : bool, default True
-            The mean of the light curve can be estimated in each interval, or on
-            the full light curve. This gives different results (Alston+2013).
- Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars + Silence the progress bars. """ - return powerspectrum_from_time_array( - times, dt, segment_size=segment_size, gti=gti, norm=norm, - silent=silent, use_common_mean=use_common_mean) + return powerspectrum_from_time_array(times, dt, + segment_size=segment_size, + gti=gti, norm=norm, silent=silent, + use_common_mean=use_common_mean) @staticmethod - def from_events(events, dt, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): - """Calculate AveragedPowerspectrum from an event list + def from_events(events, dt, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): + """ + Calculate an average power spectrum from an event list. Parameters ---------- events : `stingray.EventList` - Event list to be analyzed + Event list to be analyzed. dt : float The time resolution of the intermediate light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - Only relevant (and required) for AveragedPowerspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for + ``AveragedPowerspectrum``. + gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional, Good Time intervals, that get interesected with the - GTIs of the input object. + Silence the progress bars. 
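        Examples
        --------
        A sketch of the intended call pattern (``events`` is an existing
        ``EventList``; ``dt`` and ``segment_size`` are invented):

        >>> ps = Powerspectrum.from_events(events, dt=0.01)  # doctest: +SKIP
        >>> aps = AveragedPowerspectrum.from_events(
        ...     events, dt=0.01, segment_size=256)  # doctest: +SKIP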
""" - - return powerspectrum_from_events( - events, dt, segment_size=segment_size, norm=norm, - silent=silent, use_common_mean=use_common_mean, gti=gti) + if gti is None: + gti = events.gti + return powerspectrum_from_events(events, dt, segment_size=segment_size, + gti=gti, norm=norm, silent=silent, + use_common_mean=use_common_mean) @staticmethod - def from_lightcurve(lc, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): - """Calculate AveragedPowerspectrum from a light curve + def from_lightcurve(lc, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): + """ + Calculate a power spectrum from a light curve. Parameters ---------- events : `stingray.Lightcurve` - Light curve to be analyzed + Light curve to be analyzed. dt : float The time resolution of the intermediate light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - Only relevant (and required) for AveragedPowerspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for + ``AveragedPowerspectrum``. + gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional, Good Time intervals, that get interesected with the - GTIs of the input object. + Silence the progress bars. """ - - return powerspectrum_from_lightcurve( - lc, segment_size=segment_size, norm=norm, - silent=silent, use_common_mean=use_common_mean, gti=gti) + if gti is None: + gti = lc.gti + return powerspectrum_from_lightcurve(lc, segment_size=segment_size, + gti=gti, norm=norm, silent=silent, + use_common_mean=use_common_mean) @staticmethod - def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): - """Calculate AveragedCrossspectrum from two light curves + def from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): + """ + Calculate the average power spectrum of an iterable collection of + light curves. 
Parameters ---------- - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 1. If arrays, use them as counts - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 2. If arrays, use them as counts + iter_lc : iterable of `stingray.Lightcurve` objects or `np.array` + Light curves. If arrays, use them as counts. dt : float The time resolution of the light curves (sets the Nyquist frequency) @@ -533,32 +562,37 @@ def from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - Only relevant (and required) for AveragedPowerspectrum + The length, in seconds, of the light curve segments that will be + averaged. Only relevant (and required) for + ``AveragedPowerspectrum``. + gti: ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Good Time intervals. + Silence the progress bars. """ return powerspectrum_from_lc_iterable( - iter_lc, dt, segment_size=segment_size, norm=norm, - silent=silent, use_common_mean=use_common_mean, gti=gti) + iter_lc, dt, segment_size=segment_size, gti=gti, norm=norm, + silent=silent, use_common_mean=use_common_mean) def _initialize_from_any_input( - self, data, dt=None, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): - """Initialize the class, trying to understand the input types. + self, data, dt=None, segment_size=None, gti=None, norm="frac", + silent=False, use_common_mean=True): + """ + Initialize the class, trying to understand the input types. The input arguments are the same as ``__init__()``. Based on the type of ``data``, this method will call the appropriate @@ -635,30 +669,34 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum): segment_size: float The size of each segment to average. Note that if the total - duration of each :class:`Lightcurve` object in lc is not an integer multiple - of the ``segment_size``, then any fraction left-over at the end of the - time series will be lost. 
+        duration of each :class:`Lightcurve` object in lc is not an integer
+        multiple of the ``segment_size``, then any fraction left-over at the
+        end of the time series will be lost.
-    norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }, optional, default ``frac``
-        The normaliation of the periodogram to be used.
+    norm: {"leahy" | "frac" | "abs" | "none"}, optional, default "frac"
+        The normalization of the periodogram to be used.
    Other Parameters
    ----------------
    gti: 2-d float array
        ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals.
        This choice overrides the GTIs in the single light curves. Use with
-        care!
+        care, especially if these GTIs have overlaps with the input
+        object GTIs! If you're getting errors regarding your GTIs, don't
+        use this and only give GTIs to the input object before making
+        the power spectrum.
    silent : bool, default False
        Do not show a progress bar when generating an averaged cross spectrum.
-        Useful for the batch execution of many spectra
+        Useful for the batch execution of many spectra.
    dt: float
        The time resolution of the light curve. Only needed when constructing
-        light curves in the case where data is of :class:EventList
+        light curves in the case where data is of :class:EventList.
    large_data : bool, default False
-        Use only for data larger than 10**7 data points!! Uses zarr and dask for computation.
+        Use only for data larger than 10**7 data points!! Uses zarr and dask
+        for computation.
    save_all : bool, default False
        Save all intermediate PDSs used for the final average. Use with care.
@@ -667,43 +705,45 @@ class AveragedPowerspectrum(AveragedCrossspectrum, Powerspectrum):
    skip_checks: bool
        Skip initial checks, for speed or other reasons (you need to trust your
-        inputs!)
+        inputs!).
    Attributes
    ----------
    norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }
-        the normalization of the periodogram
+        The normalization of the periodogram.
    freq: numpy.ndarray
-        The array of mid-bin frequencies that the Fourier transform samples
+        The array of mid-bin frequencies that the Fourier transform samples.
    power: numpy.ndarray
        The array of normalized squared absolute values of Fourier
-        amplitudes
+        amplitudes.
    power_err: numpy.ndarray
        The uncertainties of ``power``.
        An approximation for each bin given by ``power_err= power/sqrt(m)``.
        Where ``m`` is the number of powers averaged in each bin (by frequency
-        binning, or averaging powerspectrum). Note that for a single
-        realization (``m=1``) the error is equal to the power.
+        binning, or averaging power spectra of segments of a light curve).
+        Note that for a single realization (``m=1``) the error is equal to the
+        power.
    df: float
-        The frequency resolution
+        The frequency resolution.
    m: int
-        The number of averaged periodograms
+        The number of averaged periodograms.
    n: int
-        The number of data points in the light curve
+        The number of data points in the light curve.
    nphots: float
-        The total number of photons in the light curve
+        The total number of photons in the light curve.
    legacy: bool
-        Use the legacy machinery of AveragedPowerspectrum. This might be useful to compare
-        with old results, and is also needed to use light curve lists as an input, to
-        conserve the spectra of each segment, or to use the large_data option.
+        Use the legacy machinery of ``AveragedPowerspectrum``. This might be
+        useful to compare with old results, and is also needed to use light
+        curve lists as an input, to conserve the spectra of each segment, or to
+        use the large_data option.
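    Examples
    --------
    A sketch mirroring the test suite (``events`` is an existing
    ``EventList``; the numbers are invented):

    >>> aps = AveragedPowerspectrum(events, segment_size=32, dt=0.01,
    ...                             norm="leahy", silent=True)  # doctest: +SKIP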
""" def __init__(self, data=None, segment_size=None, norm="frac", gti=None, @@ -744,7 +784,7 @@ def __init__(self, data=None, segment_size=None, norm="frac", gti=None, if isinstance(data, Generator): warnings.warn( - "The averaged Power spectrum from a generator of " + "The averaged power spectrum from a generator of " "light curves pre-allocates the full list of light " "curves, losing all advantage of lazy loading. If it " "is important for you, use the " @@ -754,8 +794,9 @@ def __init__(self, data=None, segment_size=None, norm="frac", gti=None, # The large_data option requires the legacy interface. if (large_data or save_all) and not legacy: - warnings.warn("The large_data option and the save_all options are only" - "available with the legacy interface (legacy=True).") + warnings.warn("The large_data option and the save_all options are" + " only available with the legacy interface" + " (legacy=True).") legacy = True if not legacy and data is not None: @@ -815,8 +856,8 @@ def _make_segment_spectrum(self, lc, segment_size, silent=False): Parameters ---------- - lc : :class:`stingray.Lightcurve` objects\ - The input light curve + lc : :class:`stingray.Lightcurve` objects + The input light curve. segment_size : ``numpy.float`` Size of each light curve segment to use for averaging. @@ -824,15 +865,17 @@ def _make_segment_spectrum(self, lc, segment_size, silent=False): Other parameters ---------------- silent : bool, default False - Suppress progress bars + Suppress progress bars. Returns ------- power_all : list of :class:`Powerspectrum` objects - A list of power spectra calculated independently from each light curve segment + A list of power spectra calculated independently from each light + curve segment. nphots_all : ``numpy.ndarray`` - List containing the number of photons for all segments calculated from ``lc`` + List containing the number of photons for all segments calculated + from ``lc``. """ if not isinstance(lc, Lightcurve): raise TypeError("lc must be a Lightcurve object") @@ -848,7 +891,8 @@ def _make_segment_spectrum(self, lc, segment_size, silent=False): check_gtis(self.gti) start_inds, end_inds = \ - bin_intervals_from_gtis(current_gtis, segment_size, lc.time, dt=lc.dt) + bin_intervals_from_gtis(current_gtis, segment_size, lc.time, + dt=lc.dt) power_all = [] nphots_all = [] @@ -886,23 +930,23 @@ class DynamicalPowerspectrum(AveragedPowerspectrum): This class will divide a :class:`Lightcurve` object into segments of length ``segment_size``, create a power spectrum for each segment and store - all powers in a matrix as a function of both time (using the mid-point of each - segment) and frequency. + all powers in a matrix as a function of both time (using the mid-point of + each segment) and frequency. - This is often used to trace changes in period of a (quasi-)periodic signal over - time. + This is often used to trace changes in period of a (quasi-)periodic signal + over time. Parameters ---------- lc : :class:`stingray.Lightcurve` or :class:`stingray.EventList` object - The time series or event list of which the Dynamical powerspectrum is + The time series or event list of which the dynamical power spectrum is to be calculated. segment_size : float, default 1 - Length of the segment of light curve, default value is 1 (in whatever units - the ``time`` array in the :class:`Lightcurve`` object uses). + Length of the segment of light curve, default value is 1 (in whatever + units the ``time`` array in the :class:`Lightcurve`` object uses). 
-    norm: {``leahy`` | ``frac`` | ``abs`` | ``none`` }, optional, default ``frac``
+    norm: {"leahy" | "frac" | "abs" | "none"}, optional, default "frac"
        The normalization of the periodogram to be used.
    Other Parameters
@@ -910,32 +954,35 @@ class DynamicalPowerspectrum(AveragedPowerspectrum):
    gti: 2-d float array
        ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` -- Good Time intervals.
        This choice overrides the GTIs in the single light curves. Use with
-        care!
+        care, especially if these GTIs have overlaps with the input
+        object GTIs! If you're getting errors regarding your GTIs, don't
+        use this and only give GTIs to the input object before making
+        the power spectrum.
    Attributes
    ----------
    segment_size: float
        The size of each segment to average. Note that if the total
-        duration of each Lightcurve object in lc is not an integer multiple
+        duration of each input object in lc is not an integer multiple
        of the ``segment_size``, then any fraction left-over at the end of the
        time series will be lost.
    dyn_ps : np.ndarray
        The matrix of normalized squared absolute values of Fourier
        amplitudes. The axes are given by the ``freq``
-        and ``time`` attributes
+        and ``time`` attributes.
    norm: {``leahy`` | ``frac`` | ``abs`` | ``none``}
-        the normalization of the periodogram
+        The normalization of the periodogram.
    freq: numpy.ndarray
-        The array of mid-bin frequencies that the Fourier transform samples
+        The array of mid-bin frequencies that the Fourier transform samples.
    df: float
-        The frequency resolution
+        The frequency resolution.
    dt: float
-        The time resolution
+        The time resolution.
    """
    def __init__(self, lc, segment_size, norm="frac", gti=None, dt=None):
@@ -966,7 +1013,7 @@ def _make_matrix(self, lc):
        ----------
        lc : :class:`Lightcurve` object
            The :class:`Lightcurve` object from which to generate the dynamical
-            power spectrum
+            power spectrum.
        """
        ps_all, _ = AveragedPowerspectrum._make_segment_spectrum(
            self, lc, self.segment_size)
@@ -1000,8 +1047,9 @@ def _make_matrix(self, lc):
    def rebin_frequency(self, df_new, method="sum"):
        """
-        Rebin the Dynamic Power Spectrum to a new frequency resolution. Rebinning is
-        an in-place operation, i.e. will replace the existing ``dyn_ps`` attribute.
+        Rebin the dynamical power spectrum to a new frequency resolution.
+        Rebinning is an in-place operation, i.e. will replace the existing
+        ``dyn_ps`` attribute.
        While the new resolution need not be an integer multiple of the
        previous frequency resolution, be aware that if it is not, the last
@@ -1010,11 +1058,11 @@ def rebin_frequency(self, df_new, method="sum"):
        Parameters
        ----------
        df_new: float
-            The new frequency resolution of the Dynamical Power Spectrum.
-            Must be larger than the frequency resolution of the old Dynamical
-            Power Spectrum!
+            The new frequency resolution of the dynamical power spectrum.
+            Must be larger than the frequency resolution of the old dynamical
+            power spectrum!
-        method: {``sum`` | ``mean`` | ``average``}, optional, default ``sum``
+        method: {"sum" | "mean" | "average"}, optional, default "sum"
            This keyword argument sets whether the counts in the new bins
            should be summed or averaged.
@@ -1033,8 +1081,8 @@ def rebin_frequency(self, df_new, method="sum"):
    def trace_maximum(self, min_freq=None, max_freq=None):
        """
-        Return the indices of the maximum powers in each segment :class:`Powerspectrum`
-        between specified frequencies.
+        Return the indices of the maximum powers in each segment
+        :class:`Powerspectrum` between specified frequencies.
Parameters ---------- @@ -1074,15 +1122,14 @@ def rebin_time(self, dt_new, method='sum'): Parameters ---------- dt_new: float - The new time resolution of the Dynamical Power Spectrum. - Must be larger than the time resolution of the old Dynamical Power - Spectrum! + The new time resolution of the dynamical power spectrum. + Must be larger than the time resolution of the old dynamical power + spectrum! method: {"sum" | "mean" | "average"}, optional, default "sum" This keyword argument sets whether the counts in the new bins should be summed or averaged. - Returns ------- time_new: numpy.ndarray @@ -1110,36 +1157,42 @@ def rebin_time(self, dt_new, method='sum'): return new_dynspec_object -def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, norm="frac", - silent=False, use_common_mean=True): - """Calculate AveragedPowerspectrum from an array of event times. +def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, + norm="frac", silent=False, + use_common_mean=True): + """ + Calculate a power spectrum from an array of event times. Parameters ---------- times : `np.array` - Event arrival times + Event arrival times. dt : float The time resolution of the intermediate light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged - gti : [[gti0, gti1], ...] - Good Time intervals + The length, in seconds, of the light curve segments that will be + averaged. Only required (and used) for ``AveragedPowerspectrum``. + gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars + Silence the progress bars. 
Returns
    -------
@@ -1157,14 +1210,15 @@ def powerspectrum_from_time_array(times, dt, segment_size=None, gti=None, norm="
    return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged)
-def powerspectrum_from_events(events, dt, segment_size=None, norm="frac",
-                              silent=False, use_common_mean=True, gti=None):
-    """Calculate AveragedPowerspectrum from an event list
+def powerspectrum_from_events(events, dt, segment_size=None, gti=None,
+                              norm="frac", silent=False, use_common_mean=True):
+    """
+    Calculate a power spectrum from an event list.
    Parameters
    ----------
    events : `stingray.EventList`
-        Event list to be analyzed
+        Event list to be analyzed.
    dt : float
        The time resolution of the intermediate light curves
        (sets the Nyquist frequency)
@@ -1172,44 +1226,48 @@ def powerspectrum_from_events(events, dt, segment_size=None, norm="frac",
    Other parameters
    ----------------
    segment_size : float
-        The length, in seconds, of the light curve segments that will be averaged
+        The length, in seconds, of the light curve segments that will be
+        averaged. Only required (and used) for ``AveragedPowerspectrum``.
+    gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]``
+        Additional, optional Good Time intervals that get intersected with
+        the GTIs of the input object. Can cause errors if there are
+        overlaps between these GTIs and the input object GTIs. If that
+        happens, assign the desired GTIs to the input object.
    norm : str, default "frac"
-        The normalization of the periodogram. "abs" is absolute rms, "frac" is
-        fractional rms, "leahy" is Leahy+83 normalization, and "none" is the
-        unnormalized periodogram
+        The normalization of the periodogram. `abs` is absolute rms, `frac`
+        is fractional rms, `leahy` is Leahy+83 normalization, and `none` is
+        the unnormalized periodogram.
    use_common_mean : bool, default True
-        The mean of the light curve can be estimated in each interval, or on
-        the full light curve. This gives different results (Alston+2013).
-        Here we assume the mean is calculated on the full light curve, but
-        the user can set ``use_common_mean`` to False to calculate it on a
-        per-segment basis.
+        The mean of the light curve can be estimated in each interval, or
+        on the full light curve. This gives different results
+        (Alston+2013). By default, we assume the mean is calculated on the
+        full light curve, but the user can set ``use_common_mean`` to False
+        to calculate it on a per-segment basis.
    silent : bool, default False
-        Silence the progress bars
-    gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...]
-        Additional, optional, Good Time intervals, that get interesected with the
-        GTIs of the input object.
+        Silence the progress bars.
    Returns
    -------
    spec : `AveragedPowerspectrum` or `Powerspectrum`
        The output periodogram.
    """
-
+    if gti is None:
+        gti = events.gti
    return powerspectrum_from_time_array(
        events.time, dt, segment_size, gti, norm=norm,
        silent=silent, use_common_mean=use_common_mean
    )
-def powerspectrum_from_lightcurve(lc, segment_size=None, norm="frac",
-                                  silent=False, use_common_mean=True,
-                                  gti=None):
-    """Calculate AveragedPowerspectrum from a light curve
+def powerspectrum_from_lightcurve(lc, segment_size=None, gti=None, norm="frac",
+                                  silent=False, use_common_mean=True):
+    """
+    Calculate a power spectrum from a light curve.
    Parameters
    ----------
    events : `stingray.Lightcurve`
-        Light curve to be analyzed
+        Light curve to be analyzed.
dt : float The time resolution of the intermediate light curves (sets the Nyquist frequency) @@ -1217,22 +1275,25 @@ def powerspectrum_from_lightcurve(lc, segment_size=None, norm="frac", Other parameters ---------------- segment_size : float - The length, in seconds, of the light curve segments that will be averaged + The length, in seconds, of the light curve segments that will be + averaged. Only required (and used) for ``AveragedPowerspectrum``. + gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Additional, optional, Good Time intervals, that get interesected with the - GTIs of the input object. + Silence the progress bars. Returns ------- @@ -1245,49 +1306,55 @@ def powerspectrum_from_lightcurve(lc, segment_size=None, norm="frac", err = None if lc.err_dist == "gauss": err = lc.counts_err + if gti is None: + gti = lc.gti - table = avg_pds_from_events( - lc.time, lc.gti, segment_size, lc.dt, - norm=norm, use_common_mean=use_common_mean, - silent=silent, - fluxes=lc.counts, errors=err) + table = avg_pds_from_events(lc.time, gti, segment_size, lc.dt, norm=norm, + use_common_mean=use_common_mean, silent=silent, + fluxes=lc.counts, errors=err) - return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged) + return _create_powerspectrum_from_result_table(table, + force_averaged=force_averaged) -def powerspectrum_from_lc_iterable(iter_lc, dt, segment_size=None, norm="frac", - silent=False, use_common_mean=True, gti=None): - """Calculate AveragedCrossspectrum from two light curves +def powerspectrum_from_lc_iterable(iter_lc, dt, segment_size=None, gti=None, + norm="frac", silent=False, + use_common_mean=True): + """ + Calculate an average power spectrum from an iterable collection of light + curves. Parameters ---------- - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 1. If arrays, use them as counts - iter_lc1 : iterable of `stingray.Lightcurve` objects or `np.array` - Light curves from channel 2. If arrays, use them as counts + iter_lc : iterable of `stingray.Lightcurve` objects or `np.array` + Light curves. If arrays, use them as counts. 
dt : float The time resolution of the light curves - (sets the Nyquist frequency) + (sets the Nyquist frequency). Other parameters ---------------- segment_size : float, default None - The length, in seconds, of the light curve segments that will be averaged. - If not ``None``, it will be used to check the segment size of the output. + The length, in seconds, of the light curve segments that will be + averaged. If not ``None``, it will be used to check the segment size of + the output. + gti : ``[[gti0_0, gti0_1], [gti1_0, gti1_1], ...]`` + Additional, optional Good Time intervals that get intersected with + the GTIs of the input object. Can cause errors if there are + overlaps between these GTIs and the input object GTIs. If that + happens, assign the desired GTIs to the input object. norm : str, default "frac" - The normalization of the periodogram. "abs" is absolute rms, "frac" is - fractional rms, "leahy" is Leahy+83 normalization, and "none" is the - unnormalized periodogram + The normalization of the periodogram. `abs` is absolute rms, `frac` + is fractional rms, `leahy` is Leahy+83 normalization, and `none` is + the unnormalized periodogram. use_common_mean : bool, default True - The mean of the light curve can be estimated in each interval, or on - the full light curve. This gives different results (Alston+2013). - Here we assume the mean is calculated on the full light curve, but - the user can set ``use_common_mean`` to False to calculate it on a - per-segment basis. + The mean of the light curve can be estimated in each interval, or + on the full light curve. This gives different results + (Alston+2013). By default, we assume the mean is calculated on the + full light curve, but the user can set ``use_common_mean`` to False + to calculate it on a per-segment basis. silent : bool, default False - Silence the progress bars - gti: [[gti0_0, gti0_1], [gti1_0, gti1_1], ...] - Good Time intervals. + Silence the progress bars. Returns ------- @@ -1324,18 +1391,16 @@ def iterate_lc_counts(iter_lc): "The inputs to `powerspectrum_from_lc_iterable`" " must be Lightcurve objects or arrays") - table = avg_pds_from_iterable( - iterate_lc_counts(iter_lc), - dt, - norm=norm, - use_common_mean=use_common_mean, - silent=silent - ) - return _create_powerspectrum_from_result_table(table, force_averaged=force_averaged) + table = avg_pds_from_iterable(iterate_lc_counts(iter_lc), dt, norm=norm, + use_common_mean=use_common_mean, + silent=silent) + return _create_powerspectrum_from_result_table(table, + force_averaged=force_averaged) def _create_powerspectrum_from_result_table(table, force_averaged=False): - """Copy the columns and metadata from the results of + """ + Copy the columns and metadata from the results of ``stingray.fourier.avg_pds_from_XX`` functions into `AveragedPowerspectrum` or `Powerspectrum` objects. 
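Taken together, the refactored constructors above support this kind of flow
(a sketch: the import paths follow the module layout shown in this diff, and
the event times, ``dt``, and ``segment_size`` are invented):

    import numpy as np
    from stingray import EventList
    from stingray.powerspectrum import AveragedPowerspectrum

    rng = np.random.default_rng()
    times = np.sort(rng.uniform(0, 1000, 100000))
    ev = EventList(times, gti=np.asarray([[0., 1000.]]))
    # GTIs are carried by the input object, so no gti= keyword is needed here
    aps = AveragedPowerspectrum.from_events(ev, dt=1 / 256, segment_size=128,
                                            norm="frac", silent=True)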
diff --git a/stingray/tests/test_gti.py b/stingray/tests/test_gti.py
index 550182123..8ce2b1b8f 100644
--- a/stingray/tests/test_gti.py
+++ b/stingray/tests/test_gti.py
@@ -211,8 +211,9 @@ def test_join_gtis_nonoverlapping(self):
     def test_join_gtis_overlapping(self):
         gti0 = [[0, 1], [2, 3], [4, 8]]
         gti1 = [[7, 8], [10, 11], [12, 13]]
-        assert np.all(join_gtis(gti0, gti1) == np.array([[0, 1], [2, 3], [4, 8],
-                                                         [10, 11], [12, 13]]))
+        assert np.all(join_gtis(gti0, gti1) == np.array([[0, 1], [2, 3],
+                                                         [4, 8], [10, 11],
+                                                         [12, 13]]))

     def test_time_intervals_from_gtis(self):
         """Test the division of start and end times to calculate spectra."""
@@ -248,7 +249,7 @@ def test_bin_intervals_from_gtis_2(self):
         # Simulate something *clearly* non-constant
         counts = np.random.poisson(
             10000 + 2000 * np.sin(2 * np.pi * times))
-
+        # TODO: `counts` isn't actually used here.
         start_bins, stop_bins = bin_intervals_from_gtis(gti, 20, times)
         assert np.allclose(start_bins, [0, 200, 400, 600, 800])
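The overlapping-GTI test above boils down to the following behaviour, shown here as a standalone sketch mirroring the test's inputs:

```python
import numpy as np

from stingray.gti import join_gtis

gti0 = [[0, 1], [2, 3], [4, 8]]
gti1 = [[7, 8], [10, 11], [12, 13]]

# [7, 8] falls entirely inside [4, 8], so joining the two lists
# does not create a new interval there.
joined = join_gtis(gti0, gti1)
assert np.all(joined == np.array([[0, 1], [2, 3], [4, 8],
                                  [10, 11], [12, 13]]))
```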
diff --git a/stingray/tests/test_powerspectrum.py b/stingray/tests/test_powerspectrum.py
index 2a8754e65..1507345cb 100644
--- a/stingray/tests/test_powerspectrum.py
+++ b/stingray/tests/test_powerspectrum.py
@@ -48,8 +48,10 @@ def setup_class(cls):
         cls.events = EventList(times, gti=gti)

         cls.lc = cls.events
-        cls.leahy_pds = AveragedPowerspectrum(
-            cls.lc, segment_size=cls.segment_size, dt=cls.dt, norm="leahy", silent=True)
+        cls.leahy_pds = AveragedPowerspectrum(cls.lc,
+                                              segment_size=cls.segment_size,
+                                              dt=cls.dt, norm="leahy",
+                                              silent=True)

         cls.leahy_pds_sng = Powerspectrum(
             cls.lc, dt=cls.dt, norm="leahy")
@@ -75,11 +77,14 @@ def test_modulation_upper_limit(self, norm):
         pds.power[25] = val
         pds.unnorm_power[25] = unnorm_val
         pds_norm = pds.to_norm(norm)
-        assert np.isclose(pds_norm.modulation_upper_limit(2, 5, 0.99), 0.5412103, atol=1e-4)
+        assert np.isclose(pds_norm.modulation_upper_limit(2, 5, 0.99),
+                          0.5412103, atol=1e-4)

     def test_legacy_equivalent(self):
-        leahy_pds = AveragedPowerspectrum(
-            self.lc, segment_size=self.segment_size, dt=self.dt, norm="leahy", silent=True, legacy=True)
+        leahy_pds = AveragedPowerspectrum(self.lc,
+                                          segment_size=self.segment_size,
+                                          dt=self.dt, norm="leahy",
+                                          silent=True, legacy=True)
         for attr in ["power", "unnorm_power"]:
             assert np.allclose(
                 getattr(leahy_pds, attr),
@@ -98,7 +103,8 @@ def test_from_events_works_ps(self):

     def test_from_events_works_aps(self):
         pds_ev = AveragedPowerspectrum.from_events(
-            self.events, segment_size=self.segment_size, dt=self.dt, norm="leahy", silent=True)
+            self.events, segment_size=self.segment_size, dt=self.dt,
+            norm="leahy", silent=True)
         assert np.allclose(self.leahy_pds.power, pds_ev.power)

     def test_from_lc_iter_works(self):
@@ -143,12 +149,15 @@ def iter_lc_with_errs(iter_lc):
             # In order for error bars to be considered,
             # err_dist has to be gauss.
             lc.err_dist = "gauss"
-            lc._counts_err = np.zeros_like(lc.counts) + lc.counts.mean()**0.5
+            lc._counts_err = np.zeros_like(lc.counts) + \
+                lc.counts.mean()**0.5
             yield lc

         lccs = AveragedPowerspectrum.from_lc_iterable(
-            iter_lc_with_errs(self.events.to_lc_iter(self.dt, self.segment_size)),
-            segment_size=self.segment_size, dt=self.dt, norm='leahy', silent=True)
+            iter_lc_with_errs(self.events.to_lc_iter(self.dt,
+                                                     self.segment_size)),
+            segment_size=self.segment_size, dt=self.dt, norm='leahy',
+            silent=True)
         power1 = lccs.power.real
         power2 = self.leahy_pds.power.real
         assert np.allclose(power1, power2, rtol=0.01)
@@ -163,8 +172,10 @@ def iter_lc_with_errs(iter_lc):
             yield lc

         lccs = AveragedPowerspectrum.from_lc_iterable(
-            iter_lc_with_errs(self.events.to_lc_iter(self.dt, self.segment_size)),
-            segment_size=self.segment_size, dt=self.dt, norm='leahy', silent=True)
+            iter_lc_with_errs(self.events.to_lc_iter(self.dt,
+                                                     self.segment_size)),
+            segment_size=self.segment_size, dt=self.dt, norm='leahy',
+            silent=True)
         power1 = lccs.power.real
         power2 = self.leahy_pds.power.real
         assert np.allclose(power1, power2, rtol=0.01)
@@ -175,17 +186,21 @@ def iter_lc_counts_only(iter_lc):
             yield lc.counts

         lccs = AveragedPowerspectrum.from_lc_iterable(
-            iter_lc_counts_only(self.events.to_lc_iter(self.dt, self.segment_size)),
-            segment_size=self.segment_size, dt=self.dt, norm='leahy', silent=True)
+            iter_lc_counts_only(self.events.to_lc_iter(self.dt,
+                                                       self.segment_size)),
+            segment_size=self.segment_size, dt=self.dt, norm='leahy',
+            silent=True)
         power1 = lccs.power.real
         power2 = self.leahy_pds.power.real
         assert np.allclose(power1, power2, rtol=0.01)

     def test_from_time_array_works_with_memmap(self):
-        with fits.open(os.path.join(datadir, "monol_testA.evt"), memmap=True) as hdul:
+        with fits.open(os.path.join(datadir, "monol_testA.evt"),
+                       memmap=True) as hdul:
             times = hdul[1].data["TIME"]

-            gti = np.array([[hdul[2].data["START"][0], hdul[2].data["STOP"][0]]])
+            gti = np.array([[hdul[2].data["START"][0],
+                             hdul[2].data["STOP"][0]]])

             _ = AveragedPowerspectrum.from_time_array(
                 times, segment_size=128, dt=self.dt, gti=gti, norm='none',
@@ -199,7 +214,8 @@ def test_from_lc_with_err_works(self, norm):
             lc, segment_size=self.segment_size, norm=norm, silent=True,
             gti=lc.gti)
         pds_ev = AveragedPowerspectrum.from_events(
-            self.events, segment_size=self.segment_size, dt=self.dt, norm=norm, silent=True, gti=self.events.gti)
+            self.events, segment_size=self.segment_size, dt=self.dt, norm=norm,
+            silent=True, gti=self.events.gti)
         for attr in ["power", "freq", "m", "n", "nphots", "segment_size"]:
             assert np.allclose(getattr(pds, attr), getattr(pds_ev, attr))
@@ -211,7 +227,8 @@ def test_init_without_segment(self):

     def test_init_with_nonsense_segment(self, legacy):
         segment_size = "foo"
         with pytest.raises(TypeError):
-            assert AveragedPowerspectrum(self.lc, segment_size, dt=self.dt, legacy=legacy)
+            assert AveragedPowerspectrum(self.lc, segment_size, dt=self.dt,
+                                         legacy=legacy)

     def test_init_with_none_segment(self):
         segment_size = None
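The memmap test above demonstrates a pattern worth copying for very large event files. Roughly, under the assumptions that the file name is hypothetical and the EVENTS/GTI tables sit in extensions 1 and 2 (the ``dt`` value is arbitrary):

```python
import numpy as np
from astropy.io import fits

from stingray import AveragedPowerspectrum

# memmap=True keeps the photon arrival times on disk instead of in RAM.
with fits.open("large_events.evt", memmap=True) as hdul:  # hypothetical file
    times = hdul[1].data["TIME"]
    gti = np.array([[hdul[2].data["START"][0],
                     hdul[2].data["STOP"][0]]])
    # Work on the memory-mapped array while the file is still open.
    ps = AveragedPowerspectrum.from_time_array(
        times, segment_size=128, dt=1 / 64, gti=gti, norm="none")
```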
@@ -417,11 +434,13 @@ def test_fractional_rms_in_frac_norm_is_consistent(self):
                         gti=[[0, 100]])
         ps = Powerspectrum(lc, norm="leahy")
         rms_ps_l, rms_err_l = ps.compute_rms(min_freq=ps.freq[1],
-                                             max_freq=ps.freq[-1], white_noise_offset=0)
+                                             max_freq=ps.freq[-1],
+                                             white_noise_offset=0)

         ps = Powerspectrum(lc, norm="frac")
         rms_ps, rms_err = ps.compute_rms(min_freq=ps.freq[1],
-                                         max_freq=ps.freq[-1], white_noise_offset=0)
+                                         max_freq=ps.freq[-1],
+                                         white_noise_offset=0)

         assert np.allclose(rms_ps, rms_ps_l, atol=0.01)
         assert np.allclose(rms_err, rms_err_l, atol=0.01)
@@ -436,11 +455,13 @@ def test_fractional_rms_in_frac_norm_is_consistent_averaged(self):
         ps = AveragedPowerspectrum(lc, norm="leahy", segment_size=100,
                                    silent=True)
         rms_ps_l, rms_err_l = ps.compute_rms(min_freq=ps.freq[1],
-                                             max_freq=ps.freq[-1], white_noise_offset=0)
+                                             max_freq=ps.freq[-1],
+                                             white_noise_offset=0)

         ps = AveragedPowerspectrum(lc, norm="frac", segment_size=100)
         rms_ps, rms_err = ps.compute_rms(min_freq=ps.freq[1],
-                                         max_freq=ps.freq[-1], white_noise_offset=0)
+                                         max_freq=ps.freq[-1],
+                                         white_noise_offset=0)

         assert np.allclose(rms_ps, rms_ps_l, atol=0.01)
         assert np.allclose(rms_err, rms_err_l, atol=0.01)
@@ -789,7 +810,7 @@ def iter_lc(lc, n):
                 iter_lc(self.lc, 1), segment_size=1, legacy=legacy,
                 gti=self.lc.gti)

-        message = "The averaged Power spectrum from a generator "
+        message = "The averaged power spectrum from a generator"
         assert np.any([message in r.message.args[0]
                        for r in record])
@@ -980,7 +1001,8 @@ def test_rebin_small_dt(self):
         segment_size = 3
         with warnings.catch_warnings():
             warnings.simplefilter("ignore", category=UserWarning)
-            dps = DynamicalPowerspectrum(self.lc_test, segment_size=segment_size)
+            dps = DynamicalPowerspectrum(self.lc_test,
+                                         segment_size=segment_size)
         with pytest.raises(ValueError):
             dps.rebin_time(dt_new=2.0)
@@ -1028,7 +1050,8 @@ def test_rebin_time_mean_method(self):
         rebin_dps = np.array([[0.59722222, 0.87301587, 0.21428571]])
         with warnings.catch_warnings():
             warnings.simplefilter("ignore", category=UserWarning)
-            dps = DynamicalPowerspectrum(self.lc_test, segment_size=segment_size)
+            dps = DynamicalPowerspectrum(self.lc_test,
+                                         segment_size=segment_size)
         new_dps = dps.rebin_time(dt_new=dt_new, method='mean')
         assert np.allclose(new_dps.time, rebin_time)
         assert np.allclose(new_dps.dyn_ps, rebin_dps)
@@ -1060,7 +1083,8 @@ def test_rebin_time_average_method(self):

         with warnings.catch_warnings():
             warnings.simplefilter("ignore", category=UserWarning)
-            dps = DynamicalPowerspectrum(self.lc_test, segment_size=segment_size)
+            dps = DynamicalPowerspectrum(self.lc_test,
+                                         segment_size=segment_size)
         new_dps = dps.rebin_time(dt_new=dt_new, method='average')
         assert np.allclose(new_dps.time, rebin_time)
         assert np.allclose(new_dps.dyn_ps, rebin_dps)
diff --git a/tox.ini b/tox.ini
index 7b7030ce4..cb413dab1 100644
--- a/tox.ini
+++ b/tox.ini
@@ -1,8 +1,8 @@
 [tox]
 envlist =
-    py{36,37,38,39,310}-test{,-alldeps,-devdeps}{,-cov}
-    py{36,37,38,39,310}-test-numpy{116,117,118}
-    py{36,37,38,39,310}-test-astropy{3,4,5,lts}
+    py{38,39,310}-test{,-alldeps,-devdeps}{,-cov}
+    py{38,39,310}-test-numpy{116,117,118}
+    py{38,39,310}-test-astropy{3,4,5,lts}
     build_docs
     linkcheck
     codestyle
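Finally, the rms-consistency tests earlier in this patch rely on ``compute_rms``; a minimal sketch of that call on synthetic data, with all parameter values chosen arbitrarily:

```python
import numpy as np

from stingray import Lightcurve, Powerspectrum

time = np.arange(0, 100, 1 / 64)
lc = Lightcurve(time, np.random.poisson(300, time.size), gti=[[0, 100]])

ps = Powerspectrum(lc, norm="frac")
# white_noise_offset=0 skips the white-noise subtraction here,
# matching what the tests above do.
rms, rms_err = ps.compute_rms(min_freq=ps.freq[1], max_freq=ps.freq[-1],
                              white_noise_offset=0)
```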