2,993,500 Hits in 5.0 sec

Smaller Models, Better Generalization [article]

Mayank Sharma, Suraj Tripathi, Abhimanyu Dubey, Jayadeva, Sai Guruju, Nihal Goalla
2019 arXiv   pre-print
We observe a general trend of improving accuracy as we quantize the models.  ...  We compare various regularizations prevalent in the literature and show the superiority of our method in achieving sparser models that generalize well.  ...
arXiv:1908.11250v1 fatcat:lkqvczu4tjferlq2fbuual6jrm

Research on Complex Product Parts Matching by using Improved Taguchi Method

Pei Fengque, Tong Yifei, Yuan Minghai, Song Haojie
2021 Mechanics  
to improve convergence and increase the matching rate; generally adopts Smaller-is-better to enhance assembly accuracy and reduce interference fit and assembly cost.  ...  applies the improved Taguchi method to dimension-chain measures, using a different quality loss function for each dimension chain: the core is scored with Nominal-is-best, the non-core with the improved Smaller-is-better  ...  The model can be divided into Target-is-Best, Smaller-is-Better and Larger-is-Better [18].  ...
doi:10.5755/j02.mech.28182 fatcat:np334xc2ivdnpnqbmatz5dj64y
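In the Taguchi method, a smaller-is-better characteristic is typically scored with a signal-to-noise ratio, S/N = −10·log10 of the mean squared response, where a larger S/N is better. A minimal sketch (the function name and sample deviation values are illustrative, not taken from the paper):

```python
import math

def sn_smaller_is_better(values):
    """Taguchi signal-to-noise ratio for a smaller-is-better characteristic:
    S/N = -10 * log10(mean of squared responses). Larger S/N is better."""
    msd = sum(v * v for v in values) / len(values)  # mean squared deviation from zero
    return -10.0 * math.log10(msd)

# A dimension chain with tighter deviations scores a higher (better) S/N ratio.
tight = [0.01, 0.02, 0.015]
loose = [0.05, 0.08, 0.06]
print(sn_smaller_is_better(tight) > sn_smaller_is_better(loose))
```

Nominal-is-best and larger-is-better characteristics use analogous S/N formulas built on deviation from a target and on reciprocal squared responses, respectively.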

A comparison between Tsallis's statistics-based and generalized quasi-hyperbolic discount models in humans

Taiki Takahashi
2008 Physica A: Statistical Mechanics and its Applications  
Although a recent neuroeconomic study has proposed a dual-self discounting model (a generalized quasi-hyperbolic discounting), no study to date has examined the relationship between the q-exponential and quasi-hyperbolic  ...  quasi-hyperbolic discounting better fit individual data.  ...  The consistency parameter q was smaller than 1, and the q-exponential discount function better fit group data. Note that a smaller AIC corresponds to a better fit.  ...
doi:10.1016/j.physa.2007.09.007 fatcat:xv5xpxygzjdnjpgituz5oqxr5e
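The q-exponential discount function from Tsallis statistics, and the AIC rule the snippet invokes (smaller AIC means better fit), can be sketched as follows; the parameter values are illustrative, not estimates from the paper:

```python
def q_exponential_discount(delay, k, q, v0=1.0):
    """Tsallis q-exponential discount function:
    V(D) = v0 / (1 + (1 - q) * k * D) ** (1 / (1 - q)),  q != 1.
    As q -> 1 it recovers exponential discounting v0 * exp(-k * D);
    q < 1 captures the inconsistency of hyperbolic-like discounting."""
    return v0 / (1.0 + (1.0 - q) * k * delay) ** (1.0 / (1.0 - q))

def aic(log_likelihood, n_params):
    """Akaike information criterion: AIC = 2k - 2 log L. Smaller is better."""
    return 2 * n_params - 2 * log_likelihood

# Value of a delayed reward falls with delay; q = 0.5 here is illustrative.
print(q_exponential_discount(10.0, k=0.1, q=0.5))
```

Model comparison then reduces to computing AIC for each fitted discount function and preferring the smaller value.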

A Comparison of Word Similarity Performance Using Explanatory and Non-explanatory Texts

Lifeng Jin, William Schuler
2015 Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies  
to make these kinds of category judgments with equal or better accuracy.  ...  This paper shows that vectorial representations derived from substantially smaller explanatory text datasets such as English Wikipedia and Simple English Wikipedia preserve enough lexical semantic information  ...  At later points, it is also clear that although FW-CBOW is generally better than all the other models most of the time, the margin could be considered narrow, and furthermore it is equally as good as  ...
doi:10.3115/v1/n15-1101 dblp:conf/naacl/JinS15 fatcat:rbay7qwq7zb5tmnl7qiccfegge
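Word-similarity evaluations of this kind typically score a pair of word vectors by cosine similarity and correlate the scores with human judgments. A minimal sketch of the scoring step (the vectors here are illustrative, not from the paper's embeddings):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors; 1.0 = same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Orthogonal vectors score 0; identical directions score 1.
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))
```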

JuriBERT: A Masked-Language Model Adaptation for French Legal Text [article]

Stella Douka, Hadi Abdine, Michalis Vazirgiannis, Rajaa El Hamdani, David Restrepo Amariles
2022 arXiv   pre-print
We conclude that some specific tasks do not benefit from generic language models pre-trained on large amounts of data.  ...  We prove that domain-specific pre-trained models can perform better than their equivalent generalised ones in the legal domain.  ...  Instead of using general purpose pre-trained models that are highly skewed towards generic language, we can now pre-train models that better meet our needs and are highly adapted to specific domains, like  ... 
arXiv:2110.01485v2 fatcat:dwbxhd5acnfgli5dgittlspyzu

Page 1050 of American Society of Civil Engineers. Collected Journals Vol. 126, Issue 11 [page]

2000 American Society of Civil Engineers. Collected Journals  
For the larger runoff event, the model tended to overpredict nutrient losses slightly, whereas, for smaller events, the model generally underpredicted nutrient losses.  ...  For the four smaller storms, the model underpredicted P loss by 68—98%.  ... 

Multi-objective optimization based reverse strategy with differential evolution algorithm for constrained optimization problems

Liang Gao, Yinzhi Zhou, Xinyu Li, Quanke Pan, Wenchao Yi
2015 Expert systems with applications  
Compared with usual strategies, the novel strategy cuts off worse solutions with smaller fitness values regardless of their constraint violations.  ...  The experimental results demonstrate that MRS-DE achieves better performance on 22 classical benchmark functions compared with several state-of-the-art algorithms.  ...  solution with the smaller constraint violation; if the offspring has a smaller fitness value than gbest, then the offspring is better than its parent.  ...
doi:10.1016/j.eswa.2015.03.016 fatcat:alvnfo674zb2vefrtpt6bznhle
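The comparison rule the snippet describes can be sketched as a pairwise predicate; this assumes a minimization problem and uses our own function and parameter names, which are not from the paper:

```python
def is_better(fitness_a, violation_a, fitness_b, violation_b):
    """Pairwise comparison sketched from the abstract (minimization):
    rank by objective value first, regardless of constraint violation;
    break ties by preferring the smaller total constraint violation."""
    if fitness_a != fitness_b:
        return fitness_a < fitness_b
    return violation_a < violation_b

# An infeasible solution with a better objective still wins under this rule,
# unlike Deb's feasibility rules, which compare violations first.
print(is_better(1.0, 5.0, 2.0, 0.0))
```

The design choice is the interesting part: by ignoring violations in the primary comparison, the strategy keeps good infeasible solutions in the population and approaches the constrained optimum from the infeasible side.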

Implicit Regularization of Stochastic Gradient Descent in Natural Language Processing: Observations and Implications [article]

Deren Lei, Zichen Sun, Yijun Xiao, William Yang Wang
2018 arXiv   pre-print
We show that pure SGD tends to converge to minima that have better generalization performance in multiple natural language processing (NLP) tasks.  ...  Altogether, our work enables a deeper understanding of how implicit regularization affects deep learning models and sheds light on future study of over-parameterized models' generalization  ...  ., 2017) that F3: a smaller initialization range will lead to a better generalization effect.  ...
arXiv:1811.00659v1 fatcat:pvuy2jivlzgb5e3bz2ecd53kim

Page 54 of The Journal of Business Vol. 46, Issue 1 [page]

1973 The Journal of Business  
In addition, the FT model was alone in generally providing better results than the “same-change” autoregressive baseline. All other models were generally inferior to this baseline.  ...  As seen, the Friend-Taubman model dominates this comparison, having a smaller forecasting error than the NII model in 18 of 28 cases (one tie), smaller than the Saint Louis model in 20 of 28 cases, and  ... 

Design Space Exploration of Hybrid Quantum–Classical Neural Networks

Muhammad Kashif, Saif Al-Kuwari
2021 Electronics  
Although the classical models generalize slightly better than the hybrid variants, the generalization improvement rate of the hybrid variants is still quite comparable to the classical models.  ...  We observed that the accuracy improvement rate and model convergence are better in all hybrid variants for the majority of the experiments, and hence it is safe to say that when the amount of data is increased  ...  GeneralizationError(%) = ((TrainAccuracy − ValidationAccuracy) / TrainAccuracy) × 100    (1)  The smaller the difference between the train and validation accuracy, the better the generalization.  ...
doi:10.3390/electronics10232980 fatcat:pgxxq3vfsjb6pmjhnj65mxrwqe
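Equation (1) from the snippet is a one-liner; a minimal sketch, with accuracies expressed as fractions (the sample values are illustrative):

```python
def generalization_error_pct(train_acc, val_acc):
    """Eq. (1): (TrainAccuracy - ValidationAccuracy) / TrainAccuracy * 100.
    Smaller values mean the train/validation gap is smaller, i.e. the
    model generalizes better."""
    return (train_acc - val_acc) / train_acc * 100.0

# 90% train accuracy vs 81% validation accuracy -> 10% generalization error.
print(generalization_error_pct(0.90, 0.81))
```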

Self-supervised Contrastive Learning for Irrigation Detection in Satellite Imagery [article]

Chitra Agastya, Sirak Ghebremusse, Ian Anderson, Colorado Reed, Hossein Vahabi, Alberto Todeschini
2021 arXiv   pre-print
and 40% more generalization ability than the traditional supervised learning methods.  ...  We apply state-of-the-art self-supervised deep learning techniques to optical remote sensing data, and find that we are able to detect irrigation with up to nine times better precision, 90% better recall  ...  Distillation scores were better than those of supervised baselines for smaller data sizes on many architectures.  ... 
arXiv:2108.05484v1 fatcat:op6xujrraremnjhbswhi2vxani

Joint modelling of potentially avoidable hospitalisation for five diseases accounting for spatiotemporal effects: A case study in New South Wales, Australia

Jannah Baker, Nicole White, Kerrie Mengersen, Margaret Rolfe, Geoffrey G. Morgan, Mohammad Ali
2017 PLoS ONE  
likelihood function of observing the data given the model at iteration i.  DIC = D̄ + p_D, with D̄ = (1/N) Σᵢ₌₁ᴺ D(y, θ⁽ⁱ⁾) and D(y, θ⁽ⁱ⁾) = −2 log(L(y, θ⁽ⁱ⁾)).  A smaller value of D̄ indicates a relatively better model fit.  ...  The average log likelihood over all iterations is then reported, with higher values indicative of better model fit, relative to other models under consideration.  ...
doi:10.1371/journal.pone.0183653 pmid:28854280 pmcid:PMC5576724 fatcat:fztcbkncbbe7dcdbqbbqilwdia
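The deviance-based comparison in the snippet can be sketched from a chain of per-iteration log likelihoods; this is a generic DIC sketch with the standard p_D = D̄ − D(θ̄) effective-parameter term, not the paper's exact implementation:

```python
def deviance(log_likelihood):
    """D(theta) = -2 * log L(y | theta)."""
    return -2.0 * log_likelihood

def mean_deviance(log_likelihoods):
    """D-bar: posterior mean deviance over MCMC iterations; smaller is better."""
    return sum(deviance(ll) for ll in log_likelihoods) / len(log_likelihoods)

def dic(log_likelihoods, log_likelihood_at_posterior_mean):
    """DIC = D-bar + p_D, where p_D = D-bar - D(theta-bar) penalizes
    effective model complexity."""
    d_bar = mean_deviance(log_likelihoods)
    p_d = d_bar - deviance(log_likelihood_at_posterior_mean)
    return d_bar + p_d

# Three MCMC iterations' log likelihoods, plus the log likelihood
# evaluated at the posterior mean of the parameters (values illustrative).
print(dic([-10.0, -12.0, -11.0], -10.5))
```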

Page 336 of Educational and Psychological Measurement Vol. 71, Issue 2 [page]

2011 Educational and Psychological Measurement  
The within (Level 1) model is a factor model: both F_w1 and F_w3 generate (subject to independent measurement errors, without further mention, as  ...  The algorithm performs better for a larger Level 1 sample size than for a smaller one, in the sense that the SD becomes smaller for a larger N with the same G.  ...

Quantile Regression Approach to Model Censored Data

Sarmada Sarmada, Ferra Yanuar
2020 Science and Technology Indonesia  
Both methods were applied to generated data with sample sizes of 150, 500, and 3000.  ...  This study proves that the censored quantile regression method tends to produce smaller absolute bias and a smaller standard error than the quantile regression method for all three data sizes specified  ...  The results obtained for the three generated data sizes indicate that the parameter estimates from the CQR tend to be better than those of quantile regression.  ...
doi:10.26554/sti.2020.5.3.79-84 fatcat:2wujyty5e5auvh3yc2gmh7vctu
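Quantile regression, censored or not, estimates its coefficients by minimizing the check (pinball) loss rather than squared error. A minimal sketch of that loss (the quantile levels shown are illustrative):

```python
def pinball_loss(y_true, y_pred, tau):
    """Quantile (check/pinball) loss at quantile level tau in (0, 1):
    tau * e      if e >= 0   (under-prediction)
    (tau-1) * e  if e <  0   (over-prediction), where e = y_true - y_pred.
    Asymmetric weighting is what targets the tau-th conditional quantile."""
    e = y_true - y_pred
    return tau * e if e >= 0 else (tau - 1.0) * e

# At tau = 0.9, under-predicting is penalized 9x more than over-predicting.
print(pinball_loss(2.0, 1.0, 0.9), pinball_loss(1.0, 2.0, 0.9))
```

Censored quantile regression keeps this loss but evaluates predictions through the censoring transformation, which is what drives the smaller bias the abstract reports.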

Assessment of a Regional-Scale Weather Model for Hydrological Applications in South Korea

Yong Jung, Yuh-Lang Lin
2016 Environment and Natural Resources Research  
In this study, a regional numerical weather prediction (NWP) model known as the Weather Research and Forecasting (WRF) model was adopted to improve the quantitative precipitation forecasts  ...  Sensitivity of QPF to domain size at Sangkeug indicated that the localized smaller domain improved precipitation accuracy by 55% (IOA from 0.35 to 0.90) for 2008.  ...  Furthermore, they indicated that the WRF model generated finer-scale structures closer to realistic conditions than those in other models.  ...
doi:10.5539/enrr.v6n2p28 fatcat:6qrqibltmbfijnw7ixbv2qy47q
Showing results 1 — 15 of 2,993,500