1,091 Hits in 1.3 sec

Invasive Palliative Interventions

Florian Strasser, David Blum, Daniel Bueche
2010 The Cancer Journal  
In palliative cancer care situations, invasive palliative interventions are frequently considered. The perception of invasiveness has a wide range and is subjective. A structured palliative care approach can guide decisional processes. It may contain 6 key elements: (1) multidimensional and multiprofessional assessment of patients' current priorities, (2) quality of current symptom management for the potential target intervention, (3) documentation of potential reasons to reduce symptomatic medications, (4) cautious judgment of whether patients' potential clinical benefit can be extrapolated from published evidence, (5) a decisional process for the considered intervention (e.g., the 7 P's model: priority, price, probability, prognosis, progression, prevention, preferences), and (6) agreement on the goal of the intervention before the invasive intervention. The examples of pleural effusion and parenteral nutrition are briefly discussed. Oncologists may be competent to foster patients' participation in decision making and to use available specialist palliative care competencies and those of other professions.
doi:10.1097/ppo.0b013e3181f842b3 pmid:20890144 fatcat:ceunk6mcorhhzkx353icuv3tf4

Efficient implementation of penalized regression for genetic risk prediction [article]

Florian Privé, Hugues Aschard, Michael G.B. Blum
2018 bioRxiv   pre-print
Polygenic Risk Scores (PRS) combine the information across many single-nucleotide polymorphisms (SNPs) into a score reflecting the genetic risk of developing a disease. PRS might have a major public health impact, possibly allowing for screening campaigns to identify high-genetic-risk individuals for a given disease. The "Clumping+Thresholding" (C+T) approach, which is the most common method to derive PRS, uses only univariate genome-wide association studies (GWAS) summary statistics, which makes it fast and easy to use. However, previous work showed that jointly estimating SNP effects for computing PRS has the potential to significantly improve the predictive performance of PRS as compared to C+T. In this paper, we present an efficient method to jointly estimate SNP effects, allowing for practical application of penalized logistic regression on modern datasets including hundreds of thousands of individuals. Moreover, our implementation of penalized logistic regression directly includes automatic choices for hyper-parameters. The choice of hyper-parameters for a predictive model is very important, since it can dramatically impact its predictive performance. As an example, AUC values range from less than 60% to 90% in a model with 30 causal SNPs, depending on the p-value threshold in C+T. We compare the performance of penalized logistic regression to the C+T method and to a derivation of random forests. Penalized logistic regression consistently achieves higher predictive performance than the two other methods while being very fast. We find that improvement in predictive performance is more pronounced when there are few effects located in nearby genomic regions with correlated SNPs; AUC values increase from 83% with the best prediction of C+T to 92.5% with penalized logistic regression. We confirm these results in a data analysis of a case-control study for celiac disease, where penalized logistic regression and the standard C+T method achieve AUCs of 89% and 82.5%, respectively. In conclusion, our study demonstrates that penalized logistic regression is applicable to large-scale individual-level data and can achieve more discriminative polygenic risk scores. Our implementation is publicly available in the R package bigstatsr.
doi:10.1101/403337 fatcat:duon6icswjbnnbpvdo5djpmqde
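As a rough illustration of the kind of analysis this paper describes, here is a minimal R sketch of fitting a penalized logistic regression on a memory-mapped genotype matrix with bigstatsr and bigsnpr. The file path, phenotype coding, and train/test split are hypothetical, and argument details should be checked against the package documentation.

```r
# Minimal sketch: penalized logistic regression for a polygenic score.
# Assumes PLINK files for a case-control cohort; the path is hypothetical.
library(bigsnpr)   # also loads bigstatsr

obj <- snp_attach(snp_readBed("data/celiac.bed"))  # memory-mapped genotypes
G   <- obj$genotypes                               # FBM of 0/1/2 allele counts
y01 <- obj$fam$affection - 1                       # recode PLINK 1/2 to 0/1

set.seed(1)
ind.train <- sort(sample(nrow(G), 0.8 * nrow(G)))  # hypothetical 80/20 split
ind.test  <- setdiff(rows_along(G), ind.train)

# Cross-Model Selection and Averaging picks the penalty strength internally,
# which is the "automatic choice of hyper-parameters" the abstract refers to.
mod  <- big_spLogReg(G, y01[ind.train], ind.train = ind.train)
pred <- predict(mod, G, ind.row = ind.test)        # polygenic scores
AUC(pred, y01[ind.test])                           # predictive performance
```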

Political institutions and health expenditure

Johannes Blum, Florian Dorn, Axel Heuer
2021 International Tax and Public Finance  
Florian Dorn is grateful for support from the Hanns-Seidel-Foundation. Funding: Open Access funding enabled and organized by Projekt DEAL. ... Scholars suggest employing individual measures of democracy when institutional determinants of government spending are examined (Blum 2021). We therefore ...
doi:10.1007/s10797-020-09648-9 fatcat:vpfcote5x5hclmat3mxgkkmu3u

Cachexia assessment tools

David Blum, Florian Strasser
2011 Current Opinion in Supportive and Palliative Care  
Cachexia, from the Greek words kakos hexis, meaning bad condition, was already described by Hippocrates as a common consequence of cancer and other diseases. Assessment, treatment or prevention of cachexia in cancer patients is paramount, as cachexia is associated with reduced effectiveness of anticancer treatments, increased risk of therapy-associated side effects, reduced performance status and physical function, and reduced quality of life. Cachexia shortens survival and is responsible for the death of a substantial number of cancer patients. The definition of cancer cachexia has evolved over time. International consensus now defines cancer cachexia as a multifactorial syndrome characterized by an ongoing loss of skeletal muscle mass (with or without loss of fat mass) that cannot be fully reversed by conventional nutritional support and leads to progressive functional impairment. The pathophysiology is characterized by a negative protein and energy balance driven by a variable combination of reduced food intake and abnormal metabolism. Five domains of assessment are proposed. Due to the lack of a specific cachexia assessment tool, malnutrition assessment tools are used in daily practice.
doi:10.1097/spc.0b013e32834c4a05 pmid:21986911 fatcat:hstabtipffbr7gfrwb6nvoemqq

Making the most of Clumping and Thresholding for polygenic scores [article]

Florian Privé, Bjarni J. Vilhjálmsson, Hugues Aschard, Michael G.B. Blum
2019 bioRxiv   pre-print
Polygenic prediction has the potential to contribute to precision medicine. Clumping and Thresholding (C+T) is a widely used method to derive polygenic scores. When using C+T, one usually tests several p-value thresholds to maximize the predictive ability of the derived polygenic scores. Along with this p-value threshold, we propose to tune 3 other hyper-parameters for C+T. We implement an efficient way to derive C+T scores corresponding to many different sets of hyper-parameters. For example, one can now derive thousands of different C+T scores for 300K individuals and 1M variants in less than one day. We show that tuning 4 hyper-parameters of C+T consistently improves its predictive performance in both simulations and real data applications, as compared to tuning only the p-value threshold. Using this grid of computed C+T scores, we further extend C+T with stacking. More precisely, instead of choosing one set of hyper-parameters that maximizes prediction in some training set, we propose to learn an optimal linear combination of all these C+T scores using an efficient penalized regression. We call this method Stacked Clumping and Thresholding (SCT) and show that it makes C+T more flexible. When the training set is large enough, SCT can provide much higher predictive performance than any of the C+T scores individually.
doi:10.1101/653204 fatcat:jhln2ohh4fhr3biuemw6p3huhy
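The sketch below shows the SCT workflow as exposed in the bigsnpr R package, assuming a prepared bigSNP object plus a training phenotype and row indices (y, ind.train) as in the earlier sketch; the GWAS summary statistics here are random placeholders, and argument names are simplified and should be verified against the package documentation.

```r
# Sketch of Stacked Clumping and Thresholding (SCT) with bigsnpr.
library(bigsnpr)

obj <- snp_attach("data/genotypes.rds")   # hypothetical prepared dataset
G   <- obj$genotypes
CHR <- obj$map$chromosome
POS <- obj$map$physical.pos

# Placeholders standing in for real GWAS summary statistics:
beta  <- rnorm(ncol(G))                   # hypothetical effect sizes
lpval <- -log10(runif(ncol(G)))           # hypothetical -log10 p-values

# Step 1: clumping over a grid of hyper-parameters (r2, window size, ...)
all_keep <- snp_grid_clumping(G, CHR, POS, lpS = lpval, ind.row = ind.train)

# Step 2: C+T scores for many p-value thresholds on top of that grid
multi_PRS <- snp_grid_PRS(G, all_keep, betas = beta, lpS = lpval,
                          ind.row = ind.train)

# Step 3: stack all C+T scores with an efficient penalized regression
final_mod <- snp_grid_stacking(multi_PRS, y.train = y[ind.train])
```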

Efficient Implementation of Penalized Regression for Genetic Risk Prediction

Florian Privé, Hugues Aschard, Michael G. B. Blum
2019 Genetics  
Polygenic Risk Scores (PRS) combine genotype information across many single-nucleotide polymorphisms (SNPs) to give a score reflecting the genetic risk of developing a disease. PRS might have a major impact on public health, possibly allowing for screening campaigns to identify high-genetic-risk individuals for a given disease. The "Clumping+Thresholding" (C+T) approach is the most common method to derive PRS. C+T uses only univariate genome-wide association studies (GWAS) summary statistics, which makes it fast and easy to use. However, previous work showed that jointly estimating SNP effects for computing PRS has the potential to significantly improve the predictive performance of PRS as compared to C+T. In this paper, we present an efficient method for the joint estimation of SNP effects using individual-level data, allowing for practical application of penalized logistic regression (PLR) on modern datasets including hundreds of thousands of individuals. Moreover, our implementation of PLR directly includes automatic choices for hyper-parameters. We also provide an implementation of penalized linear regression for quantitative traits. We compare the performance of PLR, C+T and a derivation of random forests using both real and simulated data. Overall, we find that PLR achieves equal or higher predictive performance than C+T in most scenarios considered, while being scalable to biobank data. In particular, we find that improvement in predictive performance is more pronounced when there are few effects located in nearby genomic regions with correlated SNPs; for instance, in simulations, AUC values increase from 83% with the best prediction of C+T to 92.5% with PLR. We confirm these results in a data analysis of a case-control study for celiac disease, where PLR and the standard C+T method achieve AUC values of 89% and 82.5%, respectively. Applying penalized linear regression to 350,000 individuals of the UK Biobank, we predict height with a larger correlation than with the best prediction of C+T (∼65% instead of ∼55%), further demonstrating its scalability and strong predictive power, even for highly polygenic traits. Moreover, using 150,000 individuals of the UK Biobank, we are able to predict breast cancer better than with C+T, fitting PLR in only a few minutes. In conclusion, this paper demonstrates the feasibility and relevance of using penalized regression for PRS computation when large individual-level datasets are available, thanks to the efficient implementation available in our R package bigstatsr.
doi:10.1534/genetics.119.302019 pmid:30808621 pmcid:PMC6499521 fatcat:j4wmmrd645atfoidhprmqajr7e
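To complement the logistic sketch given for the preprint above, here is a minimal R sketch of the quantitative-trait case using bigstatsr's penalized linear regression. It assumes a genotype FBM X, a phenotype y, a covariate matrix covar, and a train/test split defined as in the earlier sketch; all of these names are hypothetical.

```r
# Sketch: penalized linear regression for a quantitative trait (e.g. height),
# mirroring the UK Biobank analysis described above. Inputs are hypothetical.
library(bigstatsr)

mod  <- big_spLinReg(X, y[ind.train], ind.train = ind.train,
                     covar.train = covar[ind.train, ])  # e.g. sex, age, PCs
pred <- predict(mod, X, ind.row = ind.test,
                covar.row = covar[ind.test, ])
cor(pred, y[ind.test])   # predictive correlation, cf. the ~65% reported above
```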

Efficient toolkit implementing best practices for principal component analysis of population genetic data [article]

Florian Privé, Keurcien Luu, Michael G.B. Blum, John J. McGrath, Bjarni J. Vilhjálmsson
2019 bioRxiv   pre-print
Principal Component Analysis (PCA) of genetic data is routinely used to infer ancestry and control for population structure in various genetic analyses. However, conducting PCA analyses can be complicated and has several potential pitfalls. These pitfalls include (1) capturing Linkage Disequilibrium (LD) structure instead of population structure, (2) projected PCs that suffer from shrinkage bias when projecting PCA from a reference dataset to another independent dataset, (3) detecting sample outliers, and (4) uneven population sizes. In this work, we explore these potential issues when using PCA and present efficient solutions to them. Following applications to the UK Biobank and the 1000 Genomes project datasets, we make recommendations for best practices and provide efficient and user-friendly implementations of the proposed solutions in the R packages bigsnpr and bigutilsr. For example, we show that PC19 to PC40 in the UK Biobank capture LD structure. Using our automatic algorithm for removing long-range LD regions, we recover 16 PCs that capture population structure only. Therefore, we recommend using only 16-18 PCs from the UK Biobank. We provide evidence for a shrinkage bias when projecting PCs computed with data from the 1000 Genomes project. Although PC1 to PC4 suffer from only moderate shrinkage (1.01-1.09), PC5 (resp. PC10), for example, suffers from a shrinkage factor of 1.50 (resp. 3.14). We provide a fast way to project new individuals that is not affected by this shrinkage bias. We also show how to use PCA to restrict analyses to individuals of homogeneous ancestry. Overall, we believe this work will be of interest to anyone using PCA in their analyses of genetic data, as well as of other omics data.
doi:10.1101/841452 fatcat:wrjh5qwnqngvjihvzfwi77lasy
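A minimal R sketch of the first recommendation, using bigsnpr's snp_autoSVD, which iteratively removes long-range LD regions so that the resulting PCs capture population structure rather than LD. The file path is hypothetical and options are simplified; see the package documentation for the full interface.

```r
# Sketch: PCA that avoids capturing LD structure, via snp_autoSVD.
library(bigsnpr)

obj <- snp_attach("data/genotypes.rds")     # hypothetical prepared dataset
G   <- obj$genotypes

svd <- snp_autoSVD(G,
                   infos.chr = obj$map$chromosome,
                   infos.pos = obj$map$physical.pos,
                   k = 20)                  # compute 20 PCs, inspect top 16-18
PCs <- predict(svd)                         # PC scores for each individual
plot(svd, type = "scores")                  # e.g. PC1 vs PC2 by ancestry
```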

Full genome sequence of bovine alphaherpesvirus 2 (BoHV-2)

Florian Pfaff, Antonie Neubauer-Juric, Stefan Krebs, Andreas Hauser, Stefanie Singer, Helmut Blum, Bernd Hoffmann
2020 Archives of Virology  
* Florian Pfaff (florian.pfaff@fli.de); 1 Institute of Diagnostic Virology, Friedrich-Loeffler-Institut, 17493 Greifswald - Insel Riems, Germany; 2 Bavarian Health and Food Safety Authority, ...
doi:10.1007/s00705-020-04895-x pmid:33315144 fatcat:dwlos4dua5brpd7bttn45d7ply

Efficient management and analysis of large-scale genome-wide data with two R packages: bigstatsr and bigsnpr [article]

Florian Privé, Hugues Aschard, Michael G. B. Blum
2017 bioRxiv   pre-print
Genome-wide datasets produced for association studies have dramatically increased in size over the past few years, with modern datasets commonly including millions of variants measured in tens of thousands of individuals. This increase in data size is a major challenge severely slowing down genomic analyses. Specialized software for every part of the analysis pipeline has been developed to handle large genomic data. However, combining all this software into a single data analysis pipeline can be technically difficult. Here we present two R packages, bigstatsr and bigsnpr, allowing for management and analysis of large-scale genomic data to be performed within a single comprehensive framework. To address large data size, the packages use memory-mapping for accessing data matrices stored on disk instead of in RAM. To perform data pre-processing and data analysis, the packages integrate most of the tools that are commonly used, either through transparent system calls to existing software or through updated or improved implementations of existing methods. In particular, the packages implement a fast derivation of Principal Component Analysis, functions to remove SNPs in Linkage Disequilibrium, and algorithms to learn Polygenic Risk Scores on millions of SNPs. We illustrate applications of the two R packages by analyzing a case-control genomic dataset for celiac disease, performing an association study and computing Polygenic Risk Scores. Finally, we demonstrate the scalability of the R packages by analyzing a simulated genome-wide dataset including 500,000 individuals and 1 million markers on a single desktop computer.
doi:10.1101/190926 fatcat:7os6rg3onzgublwf4ntdrgz3qu
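The memory-mapping idea at the heart of both packages can be sketched in a few lines of R: a Filebacked Big Matrix (FBM) lives on disk and is paged in on demand, so it can exceed available RAM. The dimensions and file path below are hypothetical.

```r
# Sketch: a file-backed matrix plus blockwise computation with bigstatsr.
library(bigstatsr)

X <- FBM(10000, 50000, backingfile = "data/big_matrix")  # ~4 GB on disk
X[, 1] <- rnorm(nrow(X))                  # accessed like an ordinary matrix

# big_apply processes columns in blocks so memory use stays bounded
col_means <- big_apply(X, a.FUN = function(X, ind) colMeans(X[, ind]),
                       a.combine = "c")
```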

Grabbing at an angle

Nur Al-huda Hamdan, Jeffrey R. Blum, Florian Heller, Ravi Kanth Kosuru, Jan Borchers
2016 Proceedings of the 2016 ACM International Symposium on Wearable Computers - ISWC '16  
This paper investigates the pinch angle as a menu selection technique for two-dimensional foldable textile controllers. Based on the principles of marking menus, the selection of a menu item is performed by grabbing a fold at a specific angle, while changing a value is performed by rolling the fold between the fingers. In a first experiment we determined an upper bound for the number of different angles users can reliably grab into a piece of fabric on their forearm. Our results show that users can, without looking at it, reliably grab fabric on their forearm with an average accuracy between 30° and 45°, which would provide up to six different menu options selectable with the initial pinch. In a second experiment, we show that our textile sensor, Grabrics, can detect fold angles at 45° spacing with up to 85% accuracy. Our studies also found that user performance and workload are independent of the fabric types that were tested.
doi:10.1145/2971763.2971786 dblp:conf/iswc/HamdanBHKB16 fatcat:j3msqkvmivfihmpzsznldnwtdi
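As a toy illustration (not from the paper) of how a detected pinch angle might map to one of six menu items at 30° spacing, consider this small R function; the angular range and item count are assumptions based on the numbers reported above.

```r
# Toy sketch: quantize a pinch angle into one of n_items menu options.
angle_to_item <- function(angle_deg, n_items = 6, span_deg = 180) {
  bin <- floor((angle_deg %% span_deg) / (span_deg / n_items))
  bin + 1                     # menu items numbered 1..n_items
}
angle_to_item(100)            # 100 degrees falls in the fourth 30-degree bin
```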

A highly tunable dopaminergic oscillator generates ultradian rhythms of behavioral arousal

Ian D Blum, Lei Zhu, Luc Moquin, Maia V Kokoeva, Alain Gratton, Bruno Giros, Kai-Florian Storch
2014 eLife  
Blum et al. have now identified a second internal clock within the brain, which they name 'the DUO', and shown that this clock normally works in concert with the circadian clock to regulate daily patterns ...
doi:10.7554/elife.05105 pmid:25546305 pmcid:PMC4337656 fatcat:2xy3abukxzbwvbhtjilkiytdkq

Enhancing circadian clock function in cancer cells inhibits tumor growth

Silke Kiessling, Lou Beaulieu-Laroche, Ian D. Blum, Dominic Landgraf, David K. Welsh, Kai-Florian Storch, Nathalie Labrecque, Nicolas Cermakian
2017 BMC Biology  
Background: Circadian clocks control cell cycle factors, and circadian disruption promotes cancer. To address whether enhancing circadian rhythmicity in tumor cells affects cell cycle progression and reduces proliferation, we compared growth and cell cycle events of B16 melanoma cells and tumors with either a functional or dysfunctional clock. Results: We found that clock genes were suppressed in B16 cells and tumors, but treatments inducing circadian rhythmicity, such as dexamethasone, forskolin and heat shock, triggered rhythmic clock and cell cycle gene expression, which resulted in fewer cells in S phase and more in G1 phase. Accordingly, B16 proliferation in vitro and tumor growth in vivo were slowed down. Similar effects were observed in human colon carcinoma HCT-116 cells. Notably, the effects of dexamethasone were due neither to an increase in apoptosis nor to an enhancement of immune cell recruitment to the tumor. Knocking down the essential clock gene Bmal1 in B16 tumors prevented the effects of dexamethasone on tumor growth and cell cycle events. Conclusions: Here we demonstrated that the effects of dexamethasone on cell cycle and tumor growth are mediated by the tumor-intrinsic circadian clock. Thus, our work reveals that enhancing circadian clock function might represent a novel strategy to control cancer progression.
doi:10.1186/s12915-017-0349-7 pmid:28196531 pmcid:PMC5310078 fatcat:3skcdwdvfngfbbhpvkyxdrrqv4

Evolving classification systems for cancer cachexia: ready for clinical practice?

David Blum, Aurelius Omlin, Ken Fearon, Vickie Baracos, Lukas Radbruch, Stein Kaasa, Florian Strasser
2010 Supportive Care in Cancer  
Introduction: Involuntary weight loss, the defining feature of cachexia, is a common consequence of advanced cancer. Discussion: This review summarizes the current cachexia definitions and classification systems (NCCTG studies, Loprinzi et al.; PG-SGA, Ottery et al.; Cachexia Consensus Conference, Evans et al.; Cancer Cachexia Study Group, Fearon et al.; and SCRINIO Working Group, Bozzetti et al.). We describe the ongoing development of a new classification system for cancer cachexia, which is based on literature reviews and Delphi processes within the European Palliative Care Research Collaborative. The review describes the evolving understanding of the pathophysiological mechanisms of cachexia and outlines an overview of treatment options. Conclusion: In this review, an outlook on the requirements of a new decision-guiding instrument is given, and the challenges of clinical decision making in palliative care are discussed.
doi:10.1007/s00520-009-0800-6 pmid:20076976 fatcat:kp75ya444jcmzihjgvfxwwmjoe

Efficient analysis of large-scale genome-wide data with two R packages: bigstatsr and bigsnpr

Florian Privé, Hugues Aschard, Andrey Ziyatdinov, Michael G B Blum, Oliver Stegle
2018 Bioinformatics  
Motivation: Genome-wide datasets produced for association studies have dramatically increased in size over the past few years, with modern datasets commonly including millions of variants measured in tens of thousands of individuals. This increase in data size is a major challenge severely slowing down genomic analyses, leading to some software becoming obsolete and researchers having limited access to diverse analysis tools. Results: Here we present two R packages, bigstatsr and bigsnpr, allowing for the analysis of large-scale genomic data to be performed within R. To address large data size, the packages use memory-mapping for accessing data matrices stored on disk instead of in RAM. To perform data pre-processing and data analysis, the packages integrate most of the tools that are commonly used, either through transparent system calls to existing software or through updated or improved implementations of existing methods. In particular, the packages implement fast and accurate computations of principal component analysis and association studies, functions to remove single nucleotide polymorphisms in linkage disequilibrium, and algorithms to learn polygenic risk scores on millions of single nucleotide polymorphisms. We illustrate applications of the two R packages by analyzing a case-control genomic dataset for celiac disease, performing an association study and computing polygenic risk scores. Finally, we demonstrate the scalability of the R packages by analyzing a simulated genome-wide dataset including 500,000 individuals and 1 million markers on a single desktop computer.
doi:10.1093/bioinformatics/bty185 pmid:29617937 pmcid:PMC6084588 fatcat:u3237k2fkbhjfnqxptydkyrrry
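A minimal R sketch of the association-study step this abstract mentions, using bigstatsr's per-variant logistic regression; the inputs (G, y01, ind.train, PCs, CHR, POS) are assumed to be defined as in the earlier sketches and are hypothetical.

```r
# Sketch: genome-wide association scan, one logistic regression per variant.
library(bigsnpr)   # also loads bigstatsr

gwas <- big_univLogReg(G, y01[ind.train], ind.train = ind.train,
                       covar.train = PCs[ind.train, ])  # adjust for ancestry
snp_manhattan(snp_gc(gwas), CHR, POS)   # genomic control, then Manhattan plot
```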

MobilityCoins – A new currency for the multimodal urban transportation system [article]

Klaus Bogenberger, Philipp Blum, Florian Dandl, Lisa-Sophie Hamm, Allister Loder, Patrick Malcolm, Martin Margreiter, Natalie Sautter
2021 arXiv   pre-print
The MobilityCoin is a new, all-encompassing currency for managing the multimodal urban transportation system. MobilityCoins include and replace various existing transport policy instruments, while also incentivizing a shift to more sustainable modes and empowering the public to vote on infrastructure measures.
arXiv:2107.13441v2 fatcat:h2b32pvv5jfa3ka2humzj6soe4
Showing results 1 — 15 out of 1,091 results