
Using groupings of static analysis alerts to identify files likely to contain field failures

Mark S. Sherriff, Sarah Smith Heckman, J. Michael Lake, Laurie A. Williams
2007 The 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering: Companion Papers - ESEC-FSE Companion '07  
Our technique uses singular value decomposition to generate groupings of static analysis alert types, which we call alert signatures, that have been historically linked to field failure-prone files in  ...  Files that have a matching alert signature are identified as having similar static analysis alert characteristics to files with known field failures in a previous release of the system.  ...  Our research goal is to provide a methodology for highlighting files that contain groups of static analysis alerts historically associated with field failures.  ... 
doi:10.1145/1295014.1295042 fatcat:pxgac6lubna3bksifsavzmxbum
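The "alert signature" idea described in the snippet above can be sketched with a small example: factor a file-by-alert-type count matrix with singular value decomposition so that alert types that co-occur across failure-prone files fall into the same grouping. This is only an illustrative sketch; the alert-type names, counts, and the 0.3 weight cutoff are invented, not taken from the paper.

```python
# Illustrative sketch (hypothetical data): grouping static analysis alert
# types via SVD, in the spirit of the "alert signatures" described above.
# Alert-type names, counts, and the weight cutoff are all invented.
import numpy as np

# Rows: files from a previous release; columns: counts per alert type.
alert_types = ["null_deref", "unused_var", "buffer_risk", "bad_cast"]
counts = np.array([
    [4, 0, 3, 1],   # file A (had a field failure)
    [5, 1, 2, 0],   # file B (had a field failure)
    [0, 6, 0, 0],   # file C (no field failure)
], dtype=float)

# SVD factors the file-by-alert-type matrix; the right singular vectors
# describe directions along which alert types co-occur across files.
U, s, Vt = np.linalg.svd(counts, full_matrices=False)

# The dominant right singular vector is one candidate grouping: alert
# types with large-magnitude weights tend to appear together (abs()
# handles the arbitrary sign of singular vectors).
signature = Vt[0]
grouped = [t for t, w in zip(alert_types, signature) if abs(w) > 0.3]
```

A new file whose alert counts project strongly onto a signature associated with past field failures would then be flagged for closer inspection.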

Using groupings of static analysis alerts to identify files likely to contain field failures

Mark S. Sherriff, Sarah Smith Heckman, J. Michael Lake, Laurie A. Williams
2007 Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering - ESEC-FSE '07  
Our technique uses singular value decomposition to generate groupings of static analysis alert types, which we call alert signatures, that have been historically linked to field failure-prone files in  ...  Files that have a matching alert signature are identified as having similar static analysis alert characteristics to files with known field failures in a previous release of the system.  ...  Our research goal is to provide a methodology for highlighting files that contain groups of static analysis alerts historically associated with field failures.  ... 
doi:10.1145/1287624.1287711 dblp:conf/sigsoft/SherriffHLW07 fatcat:4escpvc6mzh2hmzqvlkonpb2wu

Identifying fault-prone files using static analysis alerts through singular value decomposition

Mark Sherriff, Sarah Smith Heckman, Mike Lake, Laurie Williams
2007 CASCON '07: Proceedings of the 2007 conference of the center for advanced studies on Collaborative research  
In this paper, we propose a technique for leveraging field failures and historical change records to determine which sets of alerts are often associated with a field failure using singular value decomposition  ...  Static analysis tools tend to generate more alerts than a development team can reasonably examine without some form of guidance.  ...  Our research goal is to provide a methodology for highlighting files that contain groups of static analysis alerts historically associated with field failures.  ... 
doi:10.1145/1321211.1321247 dblp:conf/cascon/SherriffHLW07 fatcat:nwj6tkgc4je4fpc7wmw5zobntu

Prioritizing software security fortification through code-level metrics

Michael Gegick, Laurie Williams, Jason Osborne, Mladen Vouk
2008 Proceedings of the 4th ACM workshop on Quality of protection - QoP '08  
Using recursive partitioning, we built attack-prone prediction models with the following code-level metrics: static analysis tool alert density, code churn, and count of source lines of code.  ...  We create predictive models to identify which components are likely to have the most security risk.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.  ... 
doi:10.1145/1456362.1456370 dblp:conf/ccs/GegickWOV08 fatcat:j5jzqps5vbhxxoios5iamcffna
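The recursive-partitioning approach in the entry above repeatedly splits components on a metric threshold that best separates attack-prone from non-attack-prone groups. A minimal sketch of a single partitioning step, using Gini impurity as the split criterion, is below; the metric values and labels are made up for illustration, and the paper's actual models use more data and full trees.

```python
# Minimal sketch of one recursive-partitioning step (hypothetical data):
# pick the metric and threshold that best separate attack-prone
# components, in the spirit of the code-level-metric models above.

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(rows, labels):
    """Return (metric_index, threshold) minimizing weighted child impurity."""
    n = len(rows)
    best = (None, None, float("inf"))
    for j in range(len(rows[0])):
        for t in sorted({r[j] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[j] <= t]
            right = [y for r, y in zip(rows, labels) if r[j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best[0], best[1]

# Columns: alert density, code churn, SLOC (values invented).
components = [(0.9, 120, 4000), (0.7, 300, 2500), (0.1, 10, 900), (0.2, 40, 1200)]
attack_prone = [1, 1, 0, 0]
metric, threshold = best_split(components, attack_prone)
# On this toy data, alert density (column 0) at threshold 0.2 separates
# the attack-prone components perfectly.
```

A full recursive-partitioning model would apply `best_split` recursively to each child partition until a stopping criterion is met.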

A Model Building Process for Identifying Actionable Static Analysis Alerts

Sarah Heckman, Laurie Williams
2009 International Conference on Software Testing Verification and Validation  
Automated static analysis can identify potential source code anomalies early in the software process that could lead to field failures.  ...  We propose a process for building false positive mitigation models to classify static analysis alerts as actionable or unactionable using machine learning techniques.  ...  Introduction Automated static analysis tools can be used to identify potential source code anomalies, which we call alerts, early in the software process that could lead to field failures [9] .  ... 
doi:10.1109/icst.2009.45 dblp:conf/icst/HeckmanW09 fatcat:amptpw533jbd7dfoiqo4wit7xi

Finding patterns in static analysis alerts: improving actionable alert ranking

Quinn Hanam, Lin Tan, Reid Holmes, Patrick Lam
2014 Proceedings of the 11th Working Conference on Mining Software Repositories - MSR 2014  
High rates of unactionable alerts decrease the utility of static analysis tools in practice.  ...  For a developer inspecting the top 5% of all alerts for three sample projects, our approach is able to identify 57 of 211 actionable alerts, which is 38 more than the FindBugs priority measure.  ...  They identify 18 prior papers that provide methods of predicting actionable alerts and group what attributes were used to classify warnings as actionable or unactionable.  ... 
doi:10.1145/2597073.2597100 dblp:conf/msr/HanamTHL14 fatcat:qeln4vvumrgkvazh4emkabf724

An Empirical Investigation on the Challenges of Creating Custom Static Analysis Rules for Defect Localization [article]

Diogo Silveira Mendonça, Marcos Kalinowski
2021 arXiv pre-print
However, a proper selection and training of maintainers is needed to apply PDM effectively. Also, using a higher level of abstraction can ease static analysis rule programming for novice maintainers.  ...  The study was divided into three tasks: (i) identifying a defect pattern, (ii) programming a static analysis rule to locate instances of the pattern, and (iii) verifying the located instances.  ...  The data that should be extracted consists of a file name and line where the exception was thrown as well as the exception type and error message contained in the failure.  ... 
arXiv:2011.12886v3 fatcat:xklo2cdrsjawjlbid2blgjcxbi

On establishing a benchmark for evaluating static analysis alert prioritization and classification techniques

Sarah Heckman, Laurie Williams
2008 Proceedings of the Second ACM-IEEE international symposium on Empirical software engineering and measurement - ESEM '08  
We utilized FAULTBENCH to evaluate three versions of the AWARE adaptive ranking model to prioritize and classify static analysis alerts.  ...  Alert prioritization and classification addresses the problem in many static analysis tools of numerous alerts that are not an indication of a fault or unimportant to the developer.  ...  ACKNOWLEDGMENTS This research is funded by an IBM PhD Fellowship awarded to the first author. We would like to thank the RealSearch reading group, particularly Andy Meneely, for their feedback.  ... 
doi:10.1145/1414004.1414013 dblp:conf/esem/HeckmanW08 fatcat:t5dk4vl5ize75p3424hzknyiny

A systematic literature review of actionable alert identification techniques for automated static code analysis

Sarah Heckman, Laurie Williams
2011 Information and Software Technology  
Context: Automated static analysis (ASA) identifies potential source code anomalies early in the software development lifecycle that could lead to field failures.  ...  Techniques that identify anomalies important enough for developers to fix (actionable alerts) may increase the usefulness of ASA in practice.  ...  and • Focus on identifying if a single static analysis alert or a group of alerts are actionable or unactionable as opposed to using ASA results to identify fault-or failure-prone files.  ... 
doi:10.1016/j.infsof.2010.12.007 fatcat:bwettl5fqjczhl4svfikm7545q

Evaluation of Static Vulnerability Detection Tools with Java Cryptographic API Benchmarks [article]

Sharmin Afrose, Ya Xiao, Sazzadur Rahaman, Barton P. Miller, Danfeng Yao
2021 arXiv pre-print
Our benchmarks are useful for advancing state-of-the-art solutions in the space of misuse detection.  ...  We present their performance and comparative analysis. The ApacheCryptoAPI-Bench also examines the scalability of the tools.  ...  The probable reason is the larger number of files and lines of code Spark contains for analysis.  ... 
arXiv:2112.04037v1 fatcat:kv4jwcw2wnfulfz2yh6zjoleyi

Beyond the Hype: A Real-World Evaluation of the Impact and Cost of Machine Learning-Based Malware Detection [article]

Robert A. Bridges, Sean Oesch, Miki E. Verma, Michael D. Iannacone, Kelly M.T. Huffer, Brian Jewell, Jeff A. Nichols, Brian Weber, Justin M. Beaver, Jared M. Smith, Daniel Scofield, Craig Miles (+3 others)
2022 arXiv pre-print
To identify weaknesses, we tested each tool against 3,536 total files (2,554 or 72\% malicious, 982 or 28\% benign) of a variety of file types, including hundreds of malicious zero-days, polyglots, and  ...  We present statistical results on detection time and accuracy, consider complementary analysis (using multiple tools together), and provide two novel applications of the recent cost-benefit evaluation  ...  The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements, either expressed or implied, of the DOD, NAVWAR,  ... 
arXiv:2012.09214v3 fatcat:gzfaihar6ve7haviaxsm52uxnm

Follow Your Nose – Which Code Smells are Worth Chasing? [article]

Idan Amit, Nili Ben Ezra, Dror G. Feitelson
2021 arXiv pre-print
Files without the potentially causal smells are 50% more likely to be of high quality.  ...  The common use case of code smells assumes causality: Identify a smell, remove it, and by doing so improve the code. We empirically investigate their fitness to this use.  ...  Static analysis is a common way to implement code smells identification. CheckStyle, the code smell identification tool that we used, is also based on static analysis.  ... 
arXiv:2103.01861v1 fatcat:vv7wqkymc5cc5owvdszloygy3u

A Survey on Automated Log Analysis for Reliability Engineering [article]

Shilin He, Pinjia He, Zhuangbin Chen, Tianyi Yang, Yuxin Su, Michael R. Lyu
2021 arXiv pre-print
event templates, and how to employ logs to detect anomalies, predict failures, and facilitate diagnosis.  ...  This survey presents a detailed overview of automated log analysis research, including how to automate and assist the writing of logging statements, how to compress logs, how to parse logs into structured  ...  Manually sifting through a massive amount of logs to identify failure-relevant ones is like finding a needle in a haystack.  ... 
arXiv:2009.07237v2 fatcat:thbtfboglnglld5rr6s2gqhizi

Test Suites as a Source of Training Data for Static Analysis Alert Classifiers [article]

Lori Flynn and William Snavely and Zachary Kurtz
2021 arXiv pre-print
We propose using static analysis test suites (i.e., repositories of "benchmark" programs that are purpose-built to test coverage and precision of static analysis tools) as a novel source of training data  ...  To save on human effort to triage these alerts, a significant body of work attempts to use machine learning to classify and prioritize alerts.  ...  We propose using static analysis test suites (i.e., repositories of "benchmark" programs that are purposebuilt to test coverage and precision of static analysis tools) as a novel source of training data  ... 
arXiv:2105.03523v1 fatcat:q63xum4yvnh67dvgw64n3owbsm

Characterizing Buffer Overflow Vulnerabilities in Large C/C++ Projects

Jose D'Abruzzo Pereira, Naghmeh Ivaki, Marco Vieira
2021 IEEE Access  
Then, we run two widely used C/C++ Static Analysis Tools (SATs) (i.e., CppCheck and Flawfinder) on the vulnerable and neutral (after the vulnerability fix) versions of each code unit, showing the low effectiveness  ...  Nevertheless, most buffer overflow vulnerabilities are not detectable by vulnerability detection tools and static analysis tools (SATs).  ...  They use ODC to identify the faults and failure types detected by the three techniques studied (static analysis, inspection, and testing).  ... 
doi:10.1109/access.2021.3120349 fatcat:siplqbof2bhgzajm3vxybr4pka