Effect of Socio-Cultural Factors on Feeding and Growth of Children Less Than Two Years
2019
Bulletin of the National Nutrition Institute
This study was conducted on 120 healthy children less than two years of age at the Breastfeeding Counseling Clinic of the National Institute of Nutrition to study the social and cultural impact on the growth ...
half of the samples (45%) were stunted. ...
Less than half (45%) of the caregivers in our sample received any information from books, TV, or the Internet. ...
doi:10.21608/enj.2019.144746
fatcat:fzni2bl2ojaaravt4c6j6svim4
Diatom Cooccurrence Shows Less Segregation than Predicted from Niche Modeling
2016
PLoS ONE
We examined stream diatom cooccurrences in France through a national database of samples. ...
the patterns generated from a set of standard and environmentally constrained null models. ...
Acknowledgments We thank Soizic Morin, Michel Coste and Sophia Passy for their helpful comments on the attribution of guilds to the different species. ...
doi:10.1371/journal.pone.0154581
pmid:27128737
pmcid:PMC4851409
fatcat:vzjzoicikngu3ju4uwtuecxuz4
Ultrasonic Incisions Produce Less Inflammatory Mediator Response during Early Healing than Electrosurgical Incisions
2013
PLoS ONE
Samples were also assessed via histological examination. ...
Although electrosurgery (ES) has been used for many generations, newly developed ultrasonic devices (HARMONIC Blade, HB) have been shown at a macroscopic level to offer better coagulation with less thermally-induced ...
The slab was trimmed to remove the skin from the top and muscle from the bottom of the sample. ...
doi:10.1371/journal.pone.0073032
pmid:24058457
pmcid:PMC3776814
fatcat:ow7gnovpwbf47gin4jklihh5o4
A learning style classification mechanism for e-learning
2009
Computers & Education
To demonstrate its viability, the proposed mechanism is implemented on an open learning management system. ...
Adaptive learning provides adaptive learning materials, learning strategies and/or courses according to a student's learning style. ...
..., (c_m, R_m)} can be derived. In S, more than one class has the same number of samples; in this situation, the priority order of the class ranks is difficult to determine. ...
doi:10.1016/j.compedu.2009.02.008
fatcat:uq5bzhxynrgf7jfx6q7eweetqa
Monotone Learning
[article]
2022
arXiv
pre-print
The amount of training data is one of the key factors that determine the generalization capacity of learning algorithms. ...
For example, in PAC learning it implies that every learnable class admits a monotone PAC learner. This resolves questions by Viering, Mey, and Loog (2019); Viering and Loog (2021); Mhammedi (2021). ...
As the sample size increases, the pool of hypotheses to choose from will increase and the best one from a larger pool will thus have a smaller loss than from a smaller pool. ...
arXiv:2202.05246v1
fatcat:xdzlvtan3vfgri6jli5mpmqf3e
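To make the quoted intuition concrete, here is a minimal Python sketch (the toy data and threshold hypotheses are illustrative, not from the paper): as a nested pool of hypotheses grows, the best in-pool empirical loss can only shrink, whereas the paper studies when the learner's expected performance is monotone in the sample size.

    import random

    random.seed(0)
    target = 0.3                                    # assumed true threshold
    data = [random.uniform(-1, 1) for _ in range(50)]

    def loss(h):
        # 0-1 loss of the threshold classifier x > h against x > target
        return sum((x > h) != (x > target) for x in data) / len(data)

    pool = []
    for k in range(1, 6):
        pool.append(random.uniform(-1, 1))          # the pool grows by one
        best = min(pool, key=loss)
        print(f"pool size {k}: best loss = {loss(best):.2f}")  # non-increasing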
Learning to Sample: an Active Learning Framework
[article]
2019
arXiv
pre-print
samples based on an optimized integration of uncertainty and diversity. ...
This framework has two key components: a sampling model and a boosting model, which can mutually learn from each other in iterations to improve the performance of each other. ...
When a dataset has highly imbalanced classes (i.e., the number of instances from a majority class is much larger than the number of instances from a minority class), the cold start problem can be further ...
arXiv:1909.03585v1
fatcat:vyehba6x6bfwthqrnzdlj34sae
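As a rough, hypothetical sketch of the iteration described above (the fixed scoring mix and the nearest-neighbour stand-in below are illustrative; the paper learns both components):

    import math, random

    random.seed(1)
    def oracle(x):                      # assumed labeling rule for the toy task
        return int(x[0] + x[1] > 1.0)

    unlabeled = [(random.random(), random.random()) for _ in range(200)]
    labeled = []                        # (point, label) pairs

    def predict(x):
        # toy stand-in for the boosting model: mean label of 3 nearest points
        if not labeled:
            return 0.5
        near = sorted(labeled, key=lambda pl: math.dist(x, pl[0]))[:3]
        return sum(lab for _, lab in near) / len(near)

    def score(x, lam=0.5):
        unc = 1.0 - 2.0 * abs(predict(x) - 0.5)     # high near the boundary
        div = min((math.dist(x, p) for p, _ in labeled), default=1.0)
        return unc + lam * div          # fixed mix here; optimized in the paper

    for _ in range(5):
        query = max(unlabeled, key=score)           # the sampling model's pick
        unlabeled.remove(query)
        labeled.append((query, oracle(query)))      # oracle provides the label
        # (the boosting model would be retrained on `labeled` here)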
Variational Adversarial Kernel Learned Imitation Learning
2020
Proceedings of the AAAI Conference on Artificial Intelligence
Our method optimizes over a large cost-function space and is sample efficient and robust to overfitting. ...
To this end, we propose the variational adversarial kernel learned imitation learning (VAKLIL), which measures the distance using the maximum mean discrepancy with variational kernel learning. ...
... β, λ_h; Output: learned policy π. for iter = 0, 1, ... do: sample trajectories D_π = {(s_i, a_i)}_{i=1}^{M} by executing π; sample M state-action pairs from D_π^E, labeling them s_i^E, a_i^E for i = 1, ..., M ...
doi:10.1609/aaai.v34i04.6135
fatcat:566mknvbgfbk5jwmawavhlxxyi
On Weak Learning
1995
Journal of computer and system sciences (Print)
Since strong learning algorithms can be built from weak learning algorithms, our results also characterize strong learnability. ...
This paper presents relationships between weak learning, weak prediction (where the probability of being correct is slightly larger than 50%), and consistency oracles (which decide whether or not a given ...
the hypothesis class in the example following Theorem 4.2 by one. ...
doi:10.1006/jcss.1995.1044
fatcat:iv6enkyurnczldgmum4mm4isba
A kernel learning framework for domain adaptation learning
2012
Science China Information Sciences
Domain adaptation learning (DAL) methods have shown promising results by utilizing labeled samples from the source (or auxiliary) domain(s) to learn a robust classifier for the target domain, which has a few or even no labeled samples. ...
class and the other from the negative class) to construct one domain. ...
doi:10.1007/s11432-012-4611-x
fatcat:2zf6c4igefhrzd3b5lamui5xoq
Learning User's Confidence for Active Learning
2013
IEEE Transactions on Geoscience and Remote Sensing
We propose a filtering scheme based on a classifier that learns the confidence of the user in labeling, thus minimizing the queries where the user would not be able to provide a class for the pixel. ...
The capacity of a model to learn the user's confidence is studied in detail, also showing the effect of resolution in such a learning task. ...
ACKNOWLEDGEMENTS The authors would like to acknowledge M. Kanevski (University of Lausanne) for the access to the QuickBird images and M. ...
doi:10.1109/tgrs.2012.2203605
fatcat:ly4eyshgazadjaebk4dzwbf7lq
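A small hypothetical sketch of the described filtering scheme, assuming a scikit-learn-style confidence_model (all names and the threshold are illustrative):

    def filter_queries(candidates, confidence_model, threshold=0.5):
        """Keep only candidate pixels the user is predicted to be able to label."""
        keep = []
        for x in candidates:
            p_confident = confidence_model.predict_proba([x])[0][1]
            if p_confident >= threshold:            # else: drop this query
                keep.append(x)
        return keep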
Sharp Learning Bounds for Contrastive Unsupervised Representation Learning
[article]
2021
arXiv
pre-print
Contrastive unsupervised representation learning (CURL) encourages data representation to make semantically similar pairs closer than randomly drawn negative samples, which has been successful in various ...
We verify that our theory is consistent with experiments on synthetic, vision, and language datasets. ...
K negative classes contains all classes c ∈ [C]. v_K := Σ_{n=1}^{K} Σ_{m=0}^{C−1} (C−1 choose m) (−1)^m (1 − (m+1)/C)^{n−1}. ...
arXiv:2110.02501v1
fatcat:wd47p6ubmbegvp4m6wv33ndnqa
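The reconstructed v_K expression above transcribes directly into Python (the example arguments are arbitrary):

    from math import comb

    def v_K(K, C):
        # double sum over n = 1..K and m = 0..C-1, as in the formula above
        return sum(comb(C - 1, m) * (-1) ** m * (1 - (m + 1) / C) ** (n - 1)
                   for n in range(1, K + 1)
                   for m in range(C))

    print(v_K(10, 3))   # e.g. C = 3 classes, K = 10 negative samples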
Machine Learning
[chapter]
2017
Elements of Robotics
The constant N is the size of the set of samples for the learning phase, while n is the number of sensor values returned for each sample. ...
Algorithm 14.5: Classification by a perceptron (learning phase)
    float array[N, n] X                   // Set of samples
    float array[n + 1] w ← [0.1, 0.1, ...]  // Weights
    float array[n] x                      // Random sample
    integer c                             // ...
doi:10.1007/978-3-319-62533-1_14
fatcat:cblpd54r3bbdnbsfvaci5iy3yu
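A runnable Python rendering of the truncated pseudocode above; labels in {−1, +1} and the standard perceptron update are assumptions, since the chapter's full listing is cut off in this snippet:

    import random

    def train_perceptron(X, labels, epochs=100, lr=0.1):
        # X: list of N samples, each a list of n sensor values
        n = len(X[0])
        w = [0.1] * (n + 1)                 # weights, bias last, init as above
        for _ in range(epochs):
            i = random.randrange(len(X))    # pick a random sample
            x = X[i] + [1.0]                # append the bias input
            s = sum(wj * xj for wj, xj in zip(w, x))
            pred = 1 if s > 0 else -1
            if pred != labels[i]:           # misclassified: nudge the weights
                w = [wj + lr * labels[i] * xj for wj, xj in zip(w, x)]
        return w

    w = train_perceptron([[0.0, 1.0], [1.0, 0.0]], [1, -1])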
On Meta-Learning Rule Learning Heuristics
2007
Seventh IEEE International Conference on Data Mining (ICDM 2007)
The goal of this paper is to investigate to what extent a rule learning heuristic can be learned from experience. ...
Subsequently, we train regression algorithms on predicting the test set performance of a rule from its training set characteristics. ...
Accuracy and theory complexity comparison of various heuristics with training-set (p, n) and predicted (p̂, n̂) coverages (number of conditions in brackets). MAE(f, f̂) = (1/m) Σ_{i=0}^{m} |f(i) − f̂(i)|, where m ...
doi:10.1109/icdm.2007.51
dblp:conf/icdm/JanssenF07
fatcat:3q6a24uqrjd7tgqaxpkv7cqeoi
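The reconstructed MAE formula above transcribes directly into Python (keeping the snippet's convention of summing i = 0..m while dividing by m; f is the true heuristic value, f_hat the regression model's prediction):

    def mae(f, f_hat, m):
        # mean absolute error between true values f(i) and predictions f_hat(i)
        return sum(abs(f(i) - f_hat(i)) for i in range(m + 1)) / m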
EEC: Learning to Encode and Regenerate Images for Continual Learning
[article]
2021
arXiv
pre-print
The two main impediments to continual learning are catastrophic forgetting and memory limitations on the storage of data. ...
During training on a new task, reconstructed images from encoded episodes are replayed in order to avoid catastrophic forgetting. ...
Most existing class-incremental learning methods avoid this problem by storing a portion of the training samples from the earlier learned classes and retraining the model (often a neural network) on a ...
arXiv:2101.04904v4
fatcat:lgahmf4sabdtffdnse2pjztgoq
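For contrast with EEC's encoded episodes, here is a minimal sketch of the exemplar-storing baseline that the snippet describes (the capacity and structure are illustrative):

    class ReplayBuffer:
        """Keep a few raw samples per previously learned class."""
        def __init__(self, per_class=20):
            self.per_class = per_class
            self.store = {}                  # class label -> kept samples

        def add(self, samples, label):
            kept = self.store.setdefault(label, [])
            kept.extend(samples[:self.per_class - len(kept)])

        def replay(self):
            return [(s, c) for c, kept in self.store.items() for s in kept]

    # retraining on a new task would mix in: new_task_data + buffer.replay()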
Learning Cooperative Games
[article]
2016
arXiv
pre-print
Specifically, we are given m random samples of coalitions and their values, taken from some unknown cooperative game; can we predict the values of unseen coalitions? ...
This paper explores a PAC (probably approximately correct) learning model in cooperative games. ...
We say that A can properly learn a function f ∈ C from a class of functions C (C is sometimes referred to as the hypothesis class), if by observing m samples, where m can depend only on n (the representation ...
arXiv:1505.00039v2
fatcat:hdgfgtjkhjh5jnhq7v5qq36hu4
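As a toy instance of the quoted PAC setup (the additive-game hypothesis class and the least-squares recovery are illustrative choices, not the paper's method): coalitions are 0/1 vectors over n players, and m sampled coalition values suffice to fit a consistent hypothesis.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 6, 40                                   # players, sampled coalitions
    true_w = rng.uniform(0, 1, n)                  # assumed additive game
    S = rng.integers(0, 2, (m, n))                 # coalitions as 0/1 vectors
    v = S @ true_w                                 # observed coalition values
    w_hat, *_ = np.linalg.lstsq(S, v, rcond=None)  # fit a consistent hypothesis
    print(np.allclose(w_hat, true_w, atol=1e-6))   # recovered if S has full rank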