
Efficient Sparsity Estimation via Marginal-Lasso Coding [chapter]

Tzu-Yi Hung, Jiwen Lu, Yap-Peng Tan, Shenghua Gao
2014 Lecture Notes in Computer Science  
While there are many works on sparse coding and dictionary learning, none of them has exploited the advantages of marginal regression and the lasso simultaneously to provide more efficient  ...  On the other hand, the proposed method is more robust than the conventional marginal regression based methods.  ...  This work is supported by the research grant for the Human Cyber Security Systems (HCSS) Program at the Advanced Digital Sciences Center from the Agency for Science, Technology and Research of Singapore  ... 
doi:10.1007/978-3-319-10593-2_38 fatcat:k4e636qbsfbx3m7vjbchlixdum
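The marginal-regression half of the idea above is easy to sketch: score every dictionary atom by a single inner product with the signal, then keep only the largest scores. The sketch below is a generic illustration under that reading, not the chapter's actual Marginal-Lasso algorithm; the function name and the hard top-k rule are assumptions.

```python
# Illustrative sketch only: marginal-regression sparse coding as "score
# each atom by an inner product, keep the top-k scores". The top-k rule
# and names are assumptions, not the chapter's method.

def marginal_sparse_code(signal, dictionary, k):
    """Return a length-len(dictionary) code with at most k nonzeros."""
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    # Marginal regression: one inner product per atom, no joint solve.
    scores = [dot(atom, signal) for atom in dictionary]
    top = sorted(range(len(scores)), key=lambda j: abs(scores[j]),
                 reverse=True)[:k]
    return [scores[j] if j in top else 0.0 for j in range(len(scores))]

atoms = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]
code = marginal_sparse_code([2.0, 0.1], atoms, k=1)  # only atom 0 survives
```

Because each score is a single inner product, coding costs one pass over the dictionary, which is the speed advantage marginal regression holds over iterative lasso solvers.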

Regularization after retention in ultrahigh dimensional linear regression models

Haolei Weng, Yang Feng, Xingye Qiao
2018 Statistica sinica  
The new method is shown to be model selection consistent in the ultrahigh dimensional linear regression model.  ...  To improve the finite sample performance, we then introduce a three-step version and characterize its asymptotic behavior.  ...  In the first step, both RAR and SIS-lasso calculate and rank the marginal regression coefficient estimates.  ... 
doi:10.5705/ss.202015.0413 fatcat:t5wd257xpzdararxmceoqlc2xe
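The snippet notes that both RAR and SIS-lasso begin by ranking marginal regression coefficient estimates. A minimal, generic sketch of that screening step (sure-independence-screening style; the function name and retention-by-count rule are assumptions, and the paper's retention criterion is richer):

```python
# Generic marginal screening sketch: rank features by the absolute value
# of their univariate least-squares slope and retain the top d.

def marginal_screen(X, y, d):
    """X: list of rows; returns indices of the d top-ranked features."""
    p = len(X[0])
    coefs = []
    for j in range(p):
        xj = [row[j] for row in X]
        num = sum(a * b for a, b in zip(xj, y))
        den = sum(a * a for a in xj) or 1.0  # guard all-zero columns
        coefs.append(num / den)              # univariate OLS slope
    ranked = sorted(range(p), key=lambda j: abs(coefs[j]), reverse=True)
    return ranked[:d]

X = [[1.0, 0.1, 0.0], [2.0, -0.2, 0.0], [3.0, 0.1, 0.0]]
kept = marginal_screen(X, [1.0, 2.0, 3.0], d=1)  # feature 0 dominates
```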

Accuracy and stability of solar variable selection comparison under complicated dependence structures [article]

Ning Xu, Timothy C.G. Fisher, Jian Hong
2020 arXiv   pre-print
In this paper we focus on the empirical variable-selection performance of subsample-ordered least angle regression (solar) – a novel ultrahigh dimensional redesign of lasso – on empirical data with  ...  Also, with the same computation load, solar yields substantial improvements over two lasso solvers (least-angle regression for lasso and coordinate descent) in terms of the sparsity (37-64% reduction in  ...  The variable-selection comparison among lasso, CV-en and solar confirms the advantage of solar, which we have demonstrated in the last chapter.  ... 
arXiv:2007.15614v2 fatcat:g5opbblqxjaj7oft4yxdjrkwxi
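The two lasso solvers benchmarked above are least-angle regression and coordinate descent. The heart of the coordinate-descent solver is a univariate soft-threshold update, sketched below as an illustrative toy (function names are hypothetical, and columns of X are assumed to have unit norm so the update needs no normalizing division):

```python
# Minimal cyclic coordinate-descent lasso sketch (unit-norm columns assumed).

def soft_threshold(z, lam):
    """Lasso's univariate solution: shrink z toward zero by lam."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=50):
    """Cycle through the p coefficients, soft-thresholding each in turn."""
    p = len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            xj = [row[j] for row in X]
            # Partial residual with feature j's own contribution added back.
            r = [yi - sum(b * v for b, v in zip(beta, row)) + beta[j] * xj[i]
                 for i, (yi, row) in enumerate(zip(y, X))]
            beta[j] = soft_threshold(sum(a * b for a, b in zip(xj, r)), lam)
    return beta
```

On an orthonormal design the solver reduces to soft-thresholding each least-squares coefficient: `lasso_cd([[1.0, 0.0], [0.0, 1.0]], [3.0, 0.5], 1.0)` returns `[2.0, 0.0]`.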

The Bayesian Lasso

Trevor Park, George Casella
2008 Journal of the American Statistical Association  
Slight modifications lead to Bayesian versions of other Lasso-related estimation methods, including bridge regression and a robust variant.  ...  Moreover, the structure of the hierarchical model provides both Bayesian and likelihood methods for selecting the Lasso parameter.  ...  For comparison, the least squares estimates and the Lasso estimates for two different values of λ are also shown.  ... 
doi:10.1198/016214508000000337 fatcat:behyabgeond6jj3ujkti5otmd4

Practical Issues in Screening and Variable Selection in Genome-Wide Association Analysis

Sungyeon Hong, Yongkang Kim, Taesung Park
2014 Cancer Informatics  
Penalized regressions, such as the ridge, lasso, adaptive lasso, and elastic-net regressions, are commonly used for the variable selection step.  ...  The pre-screening step selects SNPs in terms of their P-values or the absolute values of the regression coefficients in single SNP analysis.  ...  Acknowledgement The AREDS data used for the analyses described in this manuscript were obtained from the AREDS database, controlled through the database of Genotypes and Phenotypes (dbGaP) accession number  ... 
doi:10.4137/cin.s16350 pmid:25635166 pmcid:PMC4298256 fatcat:wcimuwrjbrgfbnirp6se2frbqu
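The penalized regressions named above differ mainly in how they shrink a coefficient. A generic one-predictor illustration (not the paper's GWAS pipeline; function names are assumptions) contrasts ridge's proportional shrinkage with lasso's soft-thresholding, which is what lets lasso act as a variable-selection step:

```python
# One-predictor shrinkage comparison: ridge never reaches exactly zero,
# lasso does, hence lasso selects variables.

def ols_coef(x, y):
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

def ridge_coef(x, y, lam):
    # Ridge inflates the Gram term: proportional shrinkage, never exactly 0.
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

def lasso_coef(x, y, lam):
    # Lasso soft-thresholds: small effects are set exactly to 0.
    sx2 = sum(a * a for a in x)
    z = sum(a * b for a, b in zip(x, y)) / sx2
    t = lam / sx2
    return max(z - t, 0.0) if z >= 0 else min(z + t, 0.0)

x, y = [1.0, 0.0, -1.0], [2.0, 0.0, -2.0]
# OLS slope is 2.0; ridge shrinks it, a large enough lasso penalty zeroes it.
```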

A novel algorithm for simultaneous SNP selection in high-dimensional genome-wide association studies

Verena Zuber, A Pedro Duarte Silva, Korbinian Strimmer
2012 BMC Bioinformatics  
Subsequently, we conduct a comprehensive comparison study including five advanced regression approaches (boosting, lasso, NEG, MCP, and CAR score) and a univariate approach (marginal correlation) to determine  ...  Results: We develop a novel multivariate algorithm for large scale SNP selection using CAR score regression, a promising new approach for prioritizing biomarkers.  ...  Preparation of the Genetic Analysis Workshop 17 simulated exome data set was supported in part by NIH R01 MH059490 and used sequencing data from the 1000 Genomes Project (  ... 
doi:10.1186/1471-2105-13-284 pmid:23113980 pmcid:PMC3558454 fatcat:e5zry25g6jhp7p54n5tai563e4

Sparse and Smooth Prior for Bayesian Linear Regression with Application to ETEX Data [article]

Lukas Ulrych, Vaclav Smidl
2017 arXiv   pre-print
Sparsity of the solution of a linear regression model is a common requirement, and many prior distributions have been designed for this purpose.  ...  In simulation, we show that the LS-APC prior achieves results comparable to those of the Bayesian Fused Lasso for piecewise-constant parameters and outperforms the BFL for parameters of more general shapes  ...  Figure 5: Comparison of the marginal log-likelihood of the LS-APC model for varying values of ξ.  ... 
arXiv:1706.06908v1 fatcat:cmmkz2u72fb2jati57vnvwayxa

Effect modification in anchored indirect treatment comparisons: Comments on "Matching-adjusted indirect comparisons: Application to time-to-event data" [article]

Antonio Remiro-Azócar, Anna Heath, Gianluca Baio
2021 arXiv   pre-print
and accuracy than MAIC if there are no effect modifiers in imbalance; (3) while the target estimand of the simulation study is a conditional treatment effect, MAIC targets a marginal or population-average  ...  This commentary regards a recent simulation study conducted by Aouni, Gaudel-Dedieu and Sebastien, evaluating the performance of different versions of matching-adjusted indirect comparison (MAIC) in an  ...  Peer Reviewer 1 of a previous article of the authors, 19 helped improve this article further with comments on effect modification and interaction in the Cox model.  ... 
arXiv:2012.05127v4 fatcat:q4bg3c6hjvgkpk6hz6vyy7a6sm

Marginal false discovery rate control for likelihood-based penalized regression models [article]

Ryan Miller, Patrick Breheny
2019 arXiv   pre-print
Our approach is fast, flexible and can be used with a variety of penalty functions including lasso, elastic net, MCP, and MNet.  ...  In this paper, we derive a general method for controlling the marginal false discovery rate that can be applied to any penalized likelihood-based model, such as logistic regression and Cox regression.  ...  The comparison of FDR control procedures for the lasso (mFDR, sample splitting, and covariance testing) is shown in Figure 2 .  ... 
arXiv:1710.11459v4 fatcat:iureyntdfzg4hdid6dqybnalqi

Comparison of microarray breast cancer classification using support vector machine and logistic regression with LASSO and boruta feature selection

Nursabillilah Mohd Ali, Nor Azlina Ab Aziz, Rosli Besar
2020 Indonesian Journal of Electrical Engineering and Computer Science  
Two feature selection methods, namely Boruta and LASSO, and two classifiers, SVM and LR, are studied. A breast cancer dataset from the GEO web database is adopted in this study.  ...  Despite the advancement of medical diagnostic and prognostic tools for early detection and treatment of breast cancer patients, research on the development of better and more reliable tools is still actively  ...  ACKNOWLEDGEMENTS This work is supported by Universiti Teknikal Malaysia Melaka, the Ministry of Education Malaysia, under Funding Number: KPT(BS)850320045568 through SLAB Sponsorship Awards and Multimedia  ... 
doi:10.11591/ijeecs.v20.i2.pp712-719 fatcat:za3olh3donaz3htjqbym243xcy

Temporal prediction of future state occupation in a multistate model from high-dimensional baseline covariates via pseudo-value regression

Sandipan Dutta, Susmita Datta, Somnath Datta
2016 Journal of Statistical Computation and Simulation  
least squares (PLS) or the least absolute shrinkage and selection operator (LASSO), or their variants.  ...  With the advent of high-throughput genomic and proteomic assays, a clinician may intend to use such high-dimensional covariates to make better predictions of state occupation.  ...  Acknowledgments This work was supported by the NCI/NIH under Grant CA170091-01A1. We thank the reviewers for their constructive comments that helped to improve the article.  ... 
doi:10.1080/00949655.2016.1263992 pmid:29217870 pmcid:PMC5714309 fatcat:c3po7ubqynfxvk5tkcy7bxhduy

A fast unified algorithm for solving group-lasso penalized learning problems

Yi Yang, Hui Zou
2014 Statistics and computing  
This paper concerns a class of group-lasso learning problems where the objective function is the sum of an empirical loss and the group-lasso penalty.  ...  As illustrative examples, we develop concrete algorithms for solving the group-lasso penalized least squares and several group-lasso penalized large-margin classifiers.  ...  Acknowledgments The authors thank the editor, an associate editor and two referees for their helpful comments and suggestions. This work is supported in part by NSF Grant DMS-08-46068.  ... 
doi:10.1007/s11222-014-9498-5 fatcat:fuoryqdofrayrh7k6icdvqwmae
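The operator at the core of groupwise-descent solvers for the group-lasso penalty is block soft-thresholding, which shrinks or zeroes an entire group's coefficient vector at once. An illustrative sketch of that operator (not the paper's exact groupwise update; the function name is an assumption):

```python
import math

# Block soft-thresholding: scale the group vector by max(0, 1 - lam/||v||_2).

def group_soft_threshold(v, lam):
    """Shrink the group vector v toward zero as a single block."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        # The entire group is zeroed at once: group-level selection.
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]
```

A group with norm at most λ is dropped wholesale, which is precisely the behavior the group-lasso penalty is designed to induce.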

Mas-o-menos: a simple sign averaging method for discrimination in genomic data analysis

S. D. Zhao, G. Parmigiani, C. Huttenhower, L. Waldron
2014 Bioinformatics  
Through simulations and a comprehensive analysis of gene expression datasets of ovarian tumors with survival information, we confirm that más-o-menos can match and even exceed the discrimination power  ...  of more established methods, with a significant advantage in speed, ease of implementation and interpretation, and reproducibility across clinical and technological settings.  ...  As in the simulations, we compared más-o-menos to lasso, ridge, marginal regression, the single best gene, and a randomly generated negative control.  ... 
doi:10.1093/bioinformatics/btu488 pmid:25061068 pmcid:PMC4201155 fatcat:rg3dwyqm5ne4ng7mcntyvhkapi
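The abstract describes más-o-menos as weighting every feature by only the sign of its marginal association with the outcome. A minimal sketch under that description (standardization and the survival-specific details are omitted; the function name and the covariance-based sign rule are assumptions):

```python
# Sign-averaging sketch: each feature gets weight +1 or -1, the sign of
# its marginal association with the outcome, and nothing is tuned.

def mas_o_menos(X_train, y_train, x_new):
    """Score a new sample with equal-magnitude, sign-only weights."""
    p = len(x_new)
    signs = []
    for j in range(p):
        cov = sum(row[j] * yi for row, yi in zip(X_train, y_train))
        signs.append(1.0 if cov >= 0 else -1.0)
    # No tuning parameter anywhere: the source of its speed and stability.
    return sum(s * v for s, v in zip(signs, x_new)) / p
```

The absence of any tuning parameter is what gives the method the speed, reproducibility, and ease-of-implementation advantages the abstract highlights over lasso and ridge.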

Simulation Studies as Designed Experiments: The Comparison of Penalized Regression Models in the "Large p, Small n" Setting

Elias Chaibub Neto, J. Christopher Bare, Adam A. Margolin, Gustavo Stolovitzky
2014 PLoS ONE  
We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects  ...  Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches.  ...  Acknowledgments We would like to thank the editor and the anonymous reviewers for the several comments/suggestions that greatly improved this work.  ... 
doi:10.1371/journal.pone.0107957 pmid:25289666 pmcid:PMC4188526 fatcat:l2d7345c4jfqhczp2kgjmd3v5i

Smooth sparse coding via marginal regression for learning sparse representations

Krishnakumar Balasubramanian, Kai Yu, Guy Lebanon
2016 Artificial Intelligence  
We propose and analyze a novel framework for learning sparse representations, based on two statistical techniques: kernel smoothing and marginal regression.  ...  Furthermore, we propose using marginal regression for obtaining sparse codes, which significantly improves the speed and allows one to scale to large dictionary sizes easily.  ...  ., 2012) for a statistical comparison of lasso regression and marginal regression.  ... 
doi:10.1016/j.artint.2016.04.009 fatcat:pvh6zwdxbnernf2siwka7pthjy
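One plausible reading of combining kernel smoothing with marginal regression is: compute raw marginal scores per signal, then average the scores of nearby signals under a kernel on signal distance. The sketch below follows that reading only; the Gaussian kernel, the bandwidth parameter, and all names are assumptions, not the paper's exact formulation.

```python
import math

# Illustrative "smooth sparse coding" sketch: kernel-weighted averaging
# of marginal scores across similar signals.

def smoothed_marginal_codes(signals, dictionary, bandwidth):
    def dot(a, b):
        return sum(p * q for p, q in zip(a, b))
    # Raw marginal scores: one inner product per (signal, atom) pair.
    raw = [[dot(atom, x) for atom in dictionary] for x in signals]
    codes = []
    for xi in signals:
        # Gaussian kernel weight on squared distance between signals.
        w = [math.exp(-sum((a - b) ** 2 for a, b in zip(xi, xj)) / bandwidth)
             for xj in signals]
        tot = sum(w)
        codes.append([sum(wm * rm[k] for wm, rm in zip(w, raw)) / tot
                      for k in range(len(dictionary))])
    return codes
```

With a single signal, or identical signals, the smoothing is a no-op and the raw marginal scores are returned unchanged.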