X-PSI has been applied in the following studies. In addition to giving an idea of the scientific applications, these may also be useful for performance benchmarking and planning of computational resource consumption.
If you have used X-PSI for a project and would like it linked here, please contact the X-PSI team and/or submit a pull request on GitHub.
Early X-PSI development
X-PSI was initiated by Thomas Riley as part of his PhD work at the University of Amsterdam.
Chapter 3 of his PhD thesis (Riley 2019) provides an extended technical description of v0.1 of X-PSI and contains the results of some parameter recovery tests using synthetic data.
X-PSI has been used in several Pulse Profile Modeling analysis papers by the NICER team. These are typically published with a Zenodo repository containing data, model files, X-PSI scripts, samples and post-processing notebooks to enable full reproduction and independent analysis.
Salmi et al. 2022 (ApJ, 941, 150) used v0.7.10 of X-PSI to model NICER observations of the rotation-powered millisecond X-ray pulsar PSR J0740+6620 using NICER background estimates. See also the associated Zenodo repository.
Bogdanov et al. 2021 (ApJL, 914, L15) provides additional details of the models used for NICER Pulse Profile Modeling, reports ray-tracing cross-tests for X-PSI and other codes for very compact stars where multiple imaging is important, and reports some parameter recovery simulations for synthetic data.
Bogdanov et al. 2019 (ApJL, 887, L26) reports the results of ray-tracing cross-tests for several codes in use in the NICER team including X-PSI.
Kini et al. 2023 (MNRAS, 522, 3389) used v0.7.9 of X-PSI (with small modifications described in the paper) to model simulated RXTE observations of thermonuclear X-ray burst oscillations, ignoring time variability in the properties of the emitting regions. See also the associated Zenodo repository.
Below we give details of some of the settings used for the Riley et al. 2019 (R19) paper. Settings for later papers differ in places; see the individual papers for details.
Resource consumption: The calculations reported in R19 were performed on the Dutch national supercomputer Cartesius, mostly on nodes with Broadwell-architecture CPUs (no GPUs were used). A number of models of increasing complexity were applied. Typically ~1000 cores were used for posterior sampling, when the degree of parallelisation is weighted by time consumed. In total, the sampling problems in R19 required ~450,000 core hours.
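For rough resource planning, the reported figures imply the following aggregate wall time (this back-of-the-envelope conversion is ours, not a number reported in R19):

```python
core_hours = 450_000   # total reported for the R19 sampling problems
cores = 1_000          # typical time-weighted degree of parallelisation

wall_hours = core_hours / cores  # aggregate wall time across all runs
wall_days = wall_hours / 24

# roughly 450 hours, i.e. a little under three weeks of aggregate
# wall time at ~1000 cores (individual runs were shorter)
```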
Likelihood function settings: On compute nodes, the likelihood function evaluation times ranged from ~1 to ~3 seconds (single-threaded), depending on the model and point in parameter space. The parallelisation was purely via MPI, with all but one process receiving likelihood function evaluation requests.
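The division of labour described above (one process driving the sampler, all others serving likelihood evaluation requests) can be sketched with a toy example. The sketch below stands in a Python thread pool for MPI and a trivial Gaussian log-likelihood for the expensive X-PSI evaluation; none of the names correspond to actual X-PSI internals.

```python
from concurrent.futures import ThreadPoolExecutor

def log_likelihood(theta):
    """Stand-in for an expensive (~1-3 s) X-PSI likelihood
    evaluation: a unit Gaussian in each parameter."""
    return -0.5 * sum(t * t for t in theta)

# Parameter vectors that the sampling process might propose.
proposals = [(0.0, 0.0), (1.0, 0.0), (0.5, -0.5)]

# In the real setup, MPI worker processes receive these requests;
# here worker threads play that role, with the main thread as the
# single process that does not evaluate likelihoods itself.
with ThreadPoolExecutor(max_workers=3) as pool:
    log_likes = list(pool.map(log_likelihood, proposals))
```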
Resolution settings:
- Number of surface cells/elements per hot (closed) region: 24x24
- Number of phases (linearly spaced): 100
- Number of energies (logarithmically spaced): 175
- Number of rays (per parallel; linear in cosine of ray angle alpha): 200
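For orientation, these settings could be collected in a small configuration object like the one below. The attribute names are hypothetical, chosen for readability; they are not X-PSI keyword arguments, so consult the X-PSI documentation for the actual interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResolutionSettings:
    """Resolution settings as reported for R19.

    Hypothetical container: the field names here are illustrative
    and do not correspond to X-PSI's actual API.
    """
    cells_per_hot_region: tuple = (24, 24)  # surface cells/elements
    num_phases: int = 100                   # linearly spaced
    num_energies: int = 175                 # logarithmically spaced
    num_rays: int = 200                     # per parallel; linear in cos(alpha)

r19_settings = ResolutionSettings()
```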
Interpolation settings: Steffen splines (from GSL) were used everywhere except for the atmosphere, for which four-dimensional cubic polynomial interpolation was used.