Adaptive ADMM in Distributed Radio Interferometric Calibration
[article]
2017
arXiv
pre-print
Faruk Diblen and Hanno Spreeuw, Netherlands eScience Center, Science Park 140 (Matrix I), 1098 XG Amsterdam, The Netherlands. Abstract: Distributed radio interferometric calibration based on consensus optimization ...
arXiv:1710.05656v1
fatcat:hguxdq3bljg4vkrqyhpqoabdna
GPU acceleration of the SAGECal calibration package for the SKA
[article]
2019
arXiv
pre-print
SAGECal has been designed to find the most accurate calibration solutions for low radio frequency imaging observations, with minimum artefacts due to incomplete sky models. SAGECal is developed to handle extremely large datasets, e.g., when the number of frequency bands greatly exceeds the number of available nodes on a compute cluster. Accurate calibration solutions are derived at the expense of large computational loads, which require distributed computing and modern compute devices, such as GPUs, to decrease runtimes. In this work, we investigate if the GPU version of SAGECal scales well enough to meet the requirements for the Square Kilometre Array and we compare its performance with the CPU version.
arXiv:1910.13908v1
fatcat:cfpzinwajrbofnovgpfxnmvbcy
A Stochastic LBFGS Algorithm for Radio Interferometric Calibration
[article]
2019
arXiv
pre-print
We present a stochastic, limited-memory Broyden Fletcher Goldfarb Shanno (LBFGS) algorithm that is suitable for handling very large amounts of data. A direct application of this algorithm is radio interferometric calibration of raw data at fine time and frequency resolution. Almost all existing radio interferometric calibration algorithms assume that it is possible to fit the dataset being calibrated into memory. Therefore, the raw data is averaged in time and frequency to reduce its size by many orders of magnitude before calibration is performed. However, this averaging is detrimental for the detection of some signals of interest that have narrow bandwidth and time duration, such as fast radio bursts (FRBs). Using the proposed algorithm, it is possible to calibrate data at such a fine resolution that they cannot be entirely loaded into memory, thus preserving such signals. As an additional demonstration, we use the proposed algorithm for training deep neural networks and compare the performance against the mainstream first-order optimization algorithms that are used in deep learning.
arXiv:1904.05619v2
fatcat:fliw2aai5fa33nzx2syzl7if5a
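The abstract above describes streaming calibration-scale data through a limited-memory quasi-Newton solver so that the full dataset never has to fit in memory. Below is a minimal sketch of that streaming pattern in Python, using PyTorch's generic torch.optim.LBFGS on synthetic mini-batches; the chunk sizes, toy linear model and synthetic data are assumptions for illustration, and this is not the stochastic LBFGS variant proposed in the paper.

```python
# Minimal sketch (not the paper's algorithm): data is processed in chunks,
# and a limited-memory quasi-Newton step is taken per chunk, so the full
# dataset never has to reside in memory at once.
import torch

torch.manual_seed(0)
n_chunks, chunk_size, n_params = 20, 256, 8
true_w = torch.randn(n_params)                      # hidden "true" parameters

model = torch.nn.Linear(n_params, 1, bias=False)
opt = torch.optim.LBFGS(model.parameters(), history_size=10, max_iter=5)

for chunk in range(n_chunks):
    # In a real pipeline each chunk would be read from disk, e.g. one
    # time/frequency slab of visibilities; here it is synthesised.
    x = torch.randn(chunk_size, n_params)
    y = x @ true_w + 0.01 * torch.randn(chunk_size)

    def closure():
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x).squeeze(-1), y)
        loss.backward()
        return loss

    opt.step(closure)

print("recovered parameters (first four):", model.weight.data.squeeze()[:4])
```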
Processing Radio Astronomical Data Using the PROCESS Software Ecosystem
2020
Computing and informatics
In this paper we discuss our efforts in "unlocking" the Long Term Archive (LTA) of the LOFAR radio telescope using the software ecosystem developed in the PROCESS project. The LTA is a large (> 50 PB) archive that expands by about 7 PB per year through the ingestion of new observations. It consists of coarsely calibrated "visibilities", i.e. correlations between signals from LOFAR stations. Converting these observations into sky maps (images), which are needed for astronomy research, can be challenging due to the data sizes of the observations and the complexity and compute requirements of the software involved. Using the PROCESS software environment and testbed, we enable a simple point-and-click reduction of LOFAR observations into sky maps for users of this archive. This work was performed as part of the PROCESS project, which aims to provide generalizable open source solutions for user-friendly exascale data processing.
doi:10.31577/cai_2020_4_838
fatcat:qqbykzg77vb3nnb2cvly52vuay
Portal dosimetry in wedged beams
2015
Journal of Applied Clinical Medical Physics
Portal dosimetry using electronic portal imaging devices (EPIDs) is often applied to verify high-energy photon beam treatments. Due to the change in photon energy spectrum, the resulting dose values are, however, not very accurate in the case of wedged beams if the pixel-to-dose conversion for the situation without wedge is used. A possible solution would be to consider a wedged beam as another photon beam quality requiring separate beam modeling of the dose calculation algorithm. The aim of this study was to investigate a more practical solution: to make aSi EPID-based dosimetry models also applicable for wedged beams without an extra commissioning effort of the parameters of the model. For this purpose, two energy-dependent wedge multiplication factors have been introduced, to be applied to portal images taken with and without a patient/phantom in the beam. These wedge multiplication factors were derived from EPID and ionization chamber measurements at the EPID level for wedged and nonwedged beams, both with and without a polystyrene slab phantom in the beam. This method was verified for an EPID dosimetry model used for wedged beams at three photon beam energies (6, 10, and 18 MV) by comparing dose values reconstructed in a phantom with data provided by a treatment planning system (TPS), as a function of field size, depth, and off-axis distance. Generally good agreement, within 2%, was observed for depths between dose maximum and 15 cm. Applying the new model to EPID dose measurements performed during ten breast cancer patient treatments with wedged 6 MV photon beams showed that the average isocenter underdosage of 5.3% was reduced to 0.4%. Gamma evaluation (global 3%/3 mm) of these in vivo data showed an increase in the percentage of points with γ ≤ 1 from 60.2% to 87.4%, while the mean γ reduced from 1.01 to 0.55. It can be concluded that, for wedged beams, the multiplication of EPID pixel values with an energy-dependent correction factor provides good agreement between dose values determined by an EPID and a TPS, indicating the usefulness of such a practical solution.
doi:10.1120/jacmp.v16i3.5375
pmid:26103497
fatcat:xyl2rrymhveujibu7brqvrtaga
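The correction described is a multiplication of EPID pixel values by an energy-dependent wedge factor, with separate factors for images acquired with and without a patient/phantom in the beam, before the usual pixel-to-dose conversion. The sketch below only illustrates that bookkeeping; the factor values, gain constant and function names are invented placeholders, not the measured factors from the study.

```python
# Illustrative sketch only: apply a hypothetical energy-dependent wedge
# multiplication factor to an EPID image before pixel-to-dose conversion.
import numpy as np

# Hypothetical wedge multiplication factors per beam energy (MV),
# one for open-field images and one for images with a patient/phantom.
WEDGE_FACTORS = {
    6:  {"open_field": 1.05, "with_phantom": 1.03},
    10: {"open_field": 1.04, "with_phantom": 1.02},
    18: {"open_field": 1.03, "with_phantom": 1.02},
}

def pixels_to_dose(pixels, gain=0.01):
    """Placeholder pixel-to-dose conversion commissioned for non-wedged beams."""
    return gain * pixels

def wedged_beam_dose(pixels, energy_mv, with_phantom):
    """Correct EPID pixel values for a wedged beam, then convert to dose."""
    key = "with_phantom" if with_phantom else "open_field"
    return pixels_to_dose(pixels * WEDGE_FACTORS[energy_mv][key])

epid_image = np.full((4, 4), 1000.0)   # fake EPID pixel values
print(wedged_beam_dose(epid_image, 6, with_phantom=True)[0, 0])
```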
matchms - processing and similarity evaluation of mass spectrometry data
[article]
2020
biorxiv/medrxiv
pre-print
Mass spectrometry data is at the heart of numerous applications in the biomedical and life sciences. With the growing use of high-throughput techniques, researchers need to analyse larger and more complex datasets. In particular, through joint effort in the research community, fragmentation mass spectrometry datasets are growing in size and number. Matchms is an open-access Python package to import, process, clean, and compare mass spectrometry data (MS/MS). It allows users to implement and run an easy-to-follow, easy-to-reproduce workflow from raw mass spectra to pre- and post-processed spectral data.
doi:10.1101/2020.08.06.239244
fatcat:folnxbg2t5aqhf32pcf3mln5b4
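As a rough illustration of the kind of workflow the abstract mentions, the sketch below strings together import, cleaning and pairwise-comparison steps using the matchms Python API as documented around the time of the preprint; the input file name and the particular filter and similarity choices are assumptions, not taken from the paper.

```python
# A minimal matchms workflow sketch: import spectra, apply basic cleaning,
# then compute pairwise similarities. "spectra.mgf" and the filter/similarity
# choices are placeholders for illustration.
from matchms import calculate_scores
from matchms.importing import load_from_mgf
from matchms.filtering import default_filters, normalize_intensities
from matchms.similarity import CosineGreedy

def clean(spectrum):
    # Harmonise metadata and normalise peak intensities.
    spectrum = default_filters(spectrum)
    spectrum = normalize_intensities(spectrum)
    return spectrum

spectra = [clean(s) for s in load_from_mgf("spectra.mgf")]
spectra = [s for s in spectra if s is not None]

# All-vs-all cosine similarity; the resulting Scores object holds the
# pairwise similarity values between the cleaned spectra.
scores = calculate_scores(spectra, spectra, CosineGreedy(tolerance=0.005))
```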
The LOFAR Transients Key Project
[article]
2006
arXiv
pre-print
LOFAR, the Low Frequency Array, is a new radio telescope under construction in the Netherlands, designed to operate between 30 and 240 MHz. The Transients Key Project is one of the four Key Science Projects which comprise the core LOFAR science case. The remit of the Transients Key Project is to study variable and transient radio sources detected by LOFAR, on timescales from milliseconds to years. This will be achieved via both regular snapshot monitoring of historical and newly-discovered variables and, most radically, the development of a 'Radio Sky Monitor' which will survey a large fraction of the northern sky on a daily basis.
arXiv:astro-ph/0611298v1
fatcat:f7hvfa5dijh3za2dlh335cenrq
Data multiplexing in radio interferometric calibration
2017
Monthly notices of the Royal Astronomical Society
In addition to the steps (8), (9) and (10), we also extend (Yatawatta, Diblen & Spreeuw 2017) to update ρ_nf, which is described in Section 3.3. ...
doi:10.1093/mnras/stx3130
fatcat:vcvofjzje5gzrptptdn3q2tp6q
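The snippet refers to updating the ADMM penalty parameter ρ_nf. For orientation only, the sketch below shows the standard residual-balancing update for an ADMM penalty parameter (Boyd et al. 2011); the update rule actually used in the paper's Section 3.3 may differ, and the threshold and scaling constants here are the conventional defaults.

```python
# Generic residual-balancing update for an ADMM penalty parameter rho
# (Boyd et al. 2011); shown only to illustrate what "updating rho" means.
# The paper's own update rule (its Section 3.3) may be different.
def update_rho(rho, primal_residual, dual_residual, mu=10.0, tau=2.0):
    """Increase rho when the primal residual dominates, decrease it when
    the dual residual dominates, otherwise leave it unchanged."""
    if primal_residual > mu * dual_residual:
        return rho * tau
    if dual_residual > mu * primal_residual:
        return rho / tau
    return rho

# Example: rho grows because the primal residual is much larger.
print(update_rho(1.0, primal_residual=5.0, dual_residual=0.1))  # -> 2.0
```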
matchms - processing and similarity evaluation of mass spectrometry data
2020
Journal of Open Source Software
The LOFAR Transients Pipeline
[article]
2015
arXiv
pre-print
We separate out the components of these composite sources in a process referred to as "deblending" (Bertin and Arnouts, 1996; Spreeuw, 2010; Hancock et al., 2012). ...
Similarly, Spreeuw (2010, chapter 3) describes an elaborate series of statistical tests on the sourcefinder, which are expanded upon by Carbone et al. (in prep.). ...
arXiv:1503.01526v1
fatcat:una25urvbrghpcjl3ygcos6ev4
The LOFAR Transients Pipeline
2015
Astronomy and Computing
Predictions for Radio Emission from Extrasolar Planets
2008
Proceedings of Bursts, Pulses and Flickering: wide-field monitoring of the dynamic radio sky — PoS(Dynamic2007)
unpublished
... ambient stellar wind, as both the stellar wind and the planet can either be magnetised or unmagnetised. ...
... figure 1 (for the magnetic energy model), figure 2 (for the CME model) and figure 3 (for the kinetic energy model). ...
doi:10.22323/1.056.0007
fatcat:jc5eo5zlwrh3hcsdkvnjtcaczy
Page 670 of Astronomy and Astrophysics Vol. 312, Issue 2
[page]
1996
Astronomy and Astrophysics
... and Hanno N. ... Spreeuw
¹ Astronomical Institute Utrecht, Postbus 80000, 3508 TA Utrecht, The Netherlands; ² Astronomical Institute 'Anton Pannekoek', Kruislaan 403, 1098 SJ Amsterdam, The Netherlands
Received 10 July ...
The LOFAR Transients Key Project
2007
Proceedings of VI Microquasar Workshop: Microquasars and Beyond — PoS(MQW6)
unpublished
SAGECal Performance With Large Sky Models
2021
URSI Radio Science Letters
Hanno Spreeuw, Ben van Werkhoven, and Faruk Diblen are with Netherlands eScience Center, Science Park 140, Amsterdam, Netherlands; e-mail: h.spreeuw@esciencecenter.nl, b.vanwerkhoven@esciencecenter.nl ...
doi:10.46620/20-0026
fatcat:ussoucvtjvbq3pvrjx6zd7yrqu
Showing results 1 — 15 out of 19 results