Reply to Laing et al.: Accurate prediction of circadian time across platforms

Rosemary Braun, William L. Kath, Marta Iwanaszko, Elzbieta Kula-Eversole, Sabra M. Abbott, Kathryn J. Reid, Phyllis C. Zee, Ravi Allada
2019 Proceedings of the National Academy of Sciences of the United States of America  
TimeSignature's power is that it is highly accurate across transcriptomic platforms and experimental protocols. TimeSignature can be trained using data from a single platform/study and applied to independent data without additional processing. Demonstrating this robustness and generalizability required applying TimeSignature to as diverse a set of studies as possible (i.e., refs. 1-4). Because not all studies had melatonin data, we chose to use draw time as a proxy for melatonin phase. Laing et al. (5) criticize this methodology; however, each of the studies (1-4) selected subjects for whom circadian phase was well aligned with local time. We have confirmed that the 25% dim-light melatonin onset (DLMO25%), the time at which 25% of maximum blood melatonin is reached, is strongly correlated with local time in the studies with available melatonin data (1-3) (Fig. 1). As expected, training on local time thus accurately predicts melatonin phase (Fig. 2, oTS). More importantly, when trained on melatonin phase, TimeSignature predicts melatonin phase even more accurately than our previously published draw-time results (1), with a median absolute error of under 1:20 (Fig. 2, mTS). This demonstrates that the TimeSignature algorithm is a robust and universal method for making predictions of circular variables from transcriptomic data.

In regard to Laing et al.'s (5) second and third points, there are important distinctions between TimeSignature and differential partial least squares regression (dPLSR) (6). While TimeSignature's normalization uses the average between the time points as a reference, the method yields two separate predictions that need not be 180° (12 h) apart. In fact, TimeSignature could articulate circadian asymmetries (e.g., a well-aligned first time point but a delayed second time point). By contrast, dPLSR makes a single prediction for the midpoint of the two samples and thus could not be used to detect such effects. Although peak accuracy is achieved with samples 12 h apart, TimeSignature performs accurately even for samples <12 h apart, a significant practical consideration. Plots showing TimeSignature's superior performance in predicting melatonin phase compared with both PLSR and dPLSR are given in Fig. 2.

With respect to conormalization, we are specifically referring to combining samples from different studies (which can exhibit systematically different statistical characteristics) and then normalizing the mixed samples across study subjects. In the PLSR paper (6), data from two studies (2, 3) were mixed in the training and testing sets; quantile normalization was performed across the mixture of studies in the training set, the same normalization was applied to the validation mixture, and this was followed by z scoring across the mixture of samples. This constructs an artificial situation in which the training and application data have the same statistical characteristics (both are equivalent mixtures of the same source studies, normalized to the same reference) and generates dependencies between data from independent studies. A true test of an algorithm's performance requires validating the model on truly independent data. Hence, we trained on a subset of samples from a single study (2) and tested against independent studies (1, 3, 4), without any mixing or renormalizing across subjects. Additionally, z scoring across subjects (or any other cross-subject renormalization) creates a situation in which predictions for a given individual depend on data from other subjects, compromising interpretability and reproducibility. TimeSignature avoids this drawback.

We absolutely agree with Laing et al. (5) that progress in this field requires careful validation of any new method.
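To make concrete what a median absolute error of "under 1:20" means for a circular quantity such as melatonin phase, the following is a minimal sketch, with hypothetical numbers, of how absolute prediction error is typically computed on a 24-h clock, where errors must wrap around midnight. It is not the published TimeSignature code.

```python
import numpy as np

def circular_abs_error_hours(pred_h, true_h, period=24.0):
    """Absolute error between two clock times, respecting wrap-around:
    a prediction of 23.5 h for a true phase of 0.5 h is 1 h off, not 23 h."""
    diff = np.abs(np.asarray(pred_h, dtype=float) - np.asarray(true_h, dtype=float)) % period
    return np.minimum(diff, period - diff)

# Hypothetical predicted vs. observed melatonin-phase times (hours on a 24-h clock)
pred = np.array([22.5, 3.1, 14.0, 23.8])
obs = np.array([23.2, 2.0, 13.1, 0.9])
errors = circular_abs_error_hours(pred, obs)
print("median absolute error (h):", np.median(errors))  # 1.0 h for these made-up values
```

The same wrap-around convention applies whether predictions are referenced to local draw time or to measured melatonin phase.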
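The conormalization point can likewise be illustrated with a small sketch on simulated data. The scheme below is an illustrative assumption in the spirit of "using the average between the time points as a reference," not the published implementation; it contrasts referencing each subject's two samples to that subject's own mean with z scoring across a cohort. Only the latter makes one individual's normalized values, and hence that individual's prediction, depend on who else happens to be in the data set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated expression values: two blood draws per subject, a handful of genes.
n_subjects, n_genes = 5, 6
draw1 = rng.normal(size=(n_subjects, n_genes))
draw2 = rng.normal(size=(n_subjects, n_genes))

# Within-subject normalization (illustrative): reference each subject's two
# samples to the average of that subject's own time points. The result for
# subject i involves no other subject's data.
subject_mean = (draw1 + draw2) / 2.0
within1, within2 = draw1 - subject_mean, draw2 - subject_mean

# Cross-subject z scoring: every value is tied to cohort-wide statistics.
cohort = np.vstack([draw1, draw2])
z = (cohort - cohort.mean(axis=0)) / cohort.std(axis=0)

# Remove subject 0 and recompute both normalizations for the remaining subjects.
reduced = np.vstack([draw1[1:], draw2[1:]])
z_reduced = (reduced - reduced.mean(axis=0)) / reduced.std(axis=0)
within1_reduced = draw1[1:] - (draw1[1:] + draw2[1:]) / 2.0

# Within-subject values are unchanged; cross-subject z scores are not.
print("within-subject unchanged:", np.allclose(within1[1:], within1_reduced))                 # True
print("z scores unchanged:", np.allclose(z[1:n_subjects], z_reduced[:n_subjects - 1]))        # False
```

In the terms used above, the first scheme keeps each subject's prediction self-contained, whereas cohort-wide renormalization creates exactly the cross-subject dependencies the reply cautions against.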
doi:10.1073/pnas.1819173116