358,374 Hits in 6.4 sec

Learning to See by Looking at Noise [article]

Manel Baradad, Jonas Wulff, Tongzhou Wang, Phillip Isola, Antonio Torralba
2022 arXiv   pre-print
We also find that diversity is a key property to learn good representations. Datasets, models, and code are available at https://mbaradad.github.io/learning_with_noise.  ...  To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images.  ...  This research was also supported by a grant from the MIT-IBM Watson AI lab, and it was partially conducted using computation resources from the Satori cluster donated by IBM to MIT.  ... 
arXiv:2106.05963v3 fatcat:6hx5lqvk6ffyxl2a7nc5o6g4ze
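The entry above learns visual representations from procedurally generated images rather than photographs, with dataset diversity flagged as a key property. As a hedged illustration only, here is a minimal sketch that samples "noise images" from a 1/f^alpha spectral model with varying alpha; the spectral model and parameter ranges are assumptions for illustration, not the paper's actual generators (see the linked project page for those).

```python
# Minimal sketch, not the paper's generators: sample "noise images" whose power
# spectra fall off as 1/f^alpha, varying alpha per sample for diversity.
import numpy as np

def spectral_noise_image(size=128, alpha=2.0, rng=None):
    """One image with power spectrum ~ 1/f^alpha and random phases."""
    if rng is None:
        rng = np.random.default_rng()
    fx = np.fft.fftfreq(size)[:, None]
    fy = np.fft.fftfreq(size)[None, :]
    freq = np.hypot(fx, fy)
    freq[0, 0] = 1.0                                 # avoid dividing by zero at DC
    amplitude = freq ** (-alpha / 2.0)               # amplitude ~ f^(-alpha/2)
    phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))
    img = np.fft.ifft2(amplitude * np.exp(1j * phase)).real
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

# Diversity, highlighted in the abstract, is approximated here by varying alpha.
rng = np.random.default_rng(0)
dataset = [spectral_noise_image(alpha=rng.uniform(1.0, 3.0), rng=rng) for _ in range(16)]
```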

Toddlers' ability to map the meaning of new words in multi-talker environments

Justine Dombroski, Rochelle S. Newman
2014 Journal of the Acoustical Society of America  
Children showed similar accuracy in all three conditions, suggesting that even at a 0 dB SNR, children were successfully able to learn new words.  ...  Three groups of children aged 32-36 months were taught two new words either in quiet, or in the presence of multi-talker babble at a +5 or 0 dB signal-to-noise ratio (SNR).  ...  Do you see the coopa? Find the coopa! Coopa!"). On baseline trials, the voice did not instruct children to look at a particular object ("Wow, look at that! Do you see that? Ooooo! look!").  ... 
doi:10.1121/1.4898051 pmid:25373980 fatcat:mmhbxh7efjcanjkzc4jp6zfovy

Using Autoencoders on Differentially Private Federated Learning GANs [article]

Gregor Schram, Rui Wang, Kaitai Liang
2022 arXiv   pre-print
privacy and federated learning to GANs.  ...  Machine learning has been applied to almost all fields of computer science over the past decades.  ...  When we look at the classifier score (see Figure 5b), remembering that a GAN usually trains against a classifier, we see a notable decrease in accuracy with our implementation due to the previously described issues  ... 
arXiv:2206.12270v1 fatcat:cttbhz4645eohcpt5xzcjjwzwu

Evaluating the Robustness of Learning from Implicit Feedback [article]

Filip Radlinski, Thorsten Joachims
2006 arXiv   pre-print
The model is used to understand the effect of user behavior on the performance of a learning algorithm for ranked retrieval.  ...  In particular, we create a model of user behavior by drawing upon user studies in laboratory and real-world settings.  ...  Acknowledgments We would like to thank Laura Granka, Bing Pang, Helene Hembrooke and Geri Gay for their collaboration in the eye tracking study.  ... 
arXiv:cs/0605036v1 fatcat:2tgumb55rndgzltlfw3vip6aje

Sharper signals: how machine learning is cleaning up microscopy images

Amber Dance
2021 Nature  
"Then came the deep-learning era," says Elad. By passing the images to computers and allowing them to work out the best de-noising approach, researchers have begun to see striking results.  ...  Then, the machine-learning algorithm attempts to de-noise by predicting the centre value for that patch on the basis of surrounding pixels.  ...  "It's really the biologist, the expert, who decides what you're going to use and not use."  ... 
doi:10.1038/d41586-021-00023-0 pmid:33432178 fatcat:5yegm6nxu5g63d4bp2uxofths4
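The centre-pixel idea quoted above can be illustrated without a deep network. The sketch below fits a plain least-squares predictor of each pixel from its masked neighbourhood on a toy noisy image; the patch size, toy image, and noise level are assumptions for illustration, not the methods covered in the article.

```python
# Minimal sketch: predict each centre pixel from its masked neighbourhood with
# a plain least-squares model (a linear stand-in for the learned predictor).
import numpy as np

def extract_patches(img, k=5):
    """Return (neighbourhoods without centre, centre values) for all k x k patches."""
    r = k // 2
    patches, centres = [], []
    for y in range(r, img.shape[0] - r):
        for x in range(r, img.shape[1] - r):
            p = img[y - r:y + r + 1, x - r:x + r + 1].ravel()
            centres.append(p[k * k // 2])
            patches.append(np.delete(p, k * k // 2))   # blind spot: drop the centre
    return np.array(patches), np.array(centres)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0          # toy image
noisy = clean + rng.normal(0, 0.3, clean.shape)                # additive Gaussian noise

X, y = extract_patches(noisy)
clean_centres = extract_patches(clean)[1]
w, *_ = np.linalg.lstsq(X, y, rcond=None)      # learn centre-from-neighbours weights
denoised = X @ w                               # predicted (denoised) centre values

print("MSE noisy:   ", np.mean((y - clean_centres) ** 2))
print("MSE denoised:", np.mean((denoised - clean_centres) ** 2))
```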

Image denoising with multi-layer perceptrons, part 2: training trade-offs and analysis of their mechanisms [article]

Harold Christopher Burger, Christian J. Schuler, Stefan Harmeling
2012 arXiv   pre-print
By analysing the activation patterns of the hidden units we are able to make observations regarding the functioning principle of multi-layer perceptrons trained for image denoising.  ...  Image denoising can be described as the problem of mapping from a noisy image to a noise-free image.  ...  The feature generators look similar to those learned by other MLPs. However, the feature detectors again look somewhat different: many seem to focus on the center area of the input patch.  ... 
arXiv:1211.1552v1 fatcat:4ycbli2gjzh4lom3tpbueemu7y
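A minimal PyTorch sketch of the noisy-patch to clean-patch mapping described in the abstract; the layer sizes, synthetic patches, and noise level are illustrative assumptions rather than the paper's training setup, which uses large natural-image corpora.

```python
# Minimal sketch of an MLP trained to map noisy patches to clean patches.
import torch
import torch.nn as nn

patch = 17                                   # 17x17 patches, flattened
mlp = nn.Sequential(
    nn.Linear(patch * patch, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, patch * patch),
)
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(200):
    clean = torch.rand(64, patch * patch)            # stand-in for clean image patches
    noisy = clean + 0.1 * torch.randn_like(clean)    # additive Gaussian noise
    loss = nn.functional.mse_loss(mlp(noisy), clean) # learn the noisy -> clean mapping
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))
```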

Noise, overestimation and exploration in Deep Reinforcement Learning [article]

Rafael Stekolshchik
2020 arXiv   pre-print
In the appendix, in relation to the Hill-Climbing algorithm, we will look at one more example of noise: adaptive noise.  ...  We will discuss some statistical noise-related phenomena that were investigated by different authors in the framework of Deep Reinforcement Learning algorithms.  ...  We will look at Deep Reinforcement Learning algorithms in terms of issues related to noise. In this article, we will touch on the following algorithms: DQN, Double DQN, DDPG, TD3, Hill-Climbing.  ... 
arXiv:2006.14167v1 fatcat:hr5mh42crbaobgmjeyri4di4zu
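For the adaptive noise mentioned in the snippet, one common Hill-Climbing scheme widens the perturbation scale after failures and narrows it after improvements. The sketch below illustrates that scheme on a toy objective; the objective and the scaling factors are assumptions, not necessarily the article's exact settings.

```python
# Minimal sketch of Hill-Climbing with adaptive noise scaling.
import numpy as np

def objective(theta):
    return -np.sum((theta - 3.0) ** 2)     # maximum at theta = [3, 3, 3, 3]

rng = np.random.default_rng(0)
best = rng.normal(size=4)
best_score = objective(best)
sigma = 1.0                                # current noise scale

for step in range(200):
    candidate = best + sigma * rng.normal(size=best.shape)
    score = objective(candidate)
    if score > best_score:                 # improvement: accept and narrow the search
        best, best_score = candidate, score
        sigma = max(sigma * 0.5, 1e-3)
    else:                                  # no improvement: widen the search
        sigma = min(sigma * 2.0, 2.0)

print(best_score, sigma)
```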

The Negative Impact of Noise on Adolescents' Executive Function: An Online Study in the Context of Home-Learning During a Pandemic

Brittney Chere, Natasha Kirkham
2021 Frontiers in Psychology  
UNICEF estimates that 1.6 billion children across the world have had their education impacted by COVID-19 and have attempted to continue their learning at home.  ...  In particular, adolescents coming from noisier homes were more likely to report that they studied in a noisy room and that they were annoyed by noise when studying.  ...  Both authors contributed to manuscript revision, read, and approved the submitted version.  ... 
doi:10.3389/fpsyg.2021.715301 pmid:34630225 pmcid:PMC8492971 fatcat:4qpvkoe4mfgndpskhc6jujjpva

Deep Convolutional Generative Adversarial Network for Procedural 3D Landscape Generation Based on DEM [chapter]

Andreas Wulff-Jensen, Niclas Nerup Rant, Tobias Nordvig Møller, Jonas Aksel Billeskov
2018 Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering  
Perlin noise is especially interesting as it has been used to generate game maps in previous productions [3, 4].  ...  The diversity test showed the generated maps had a significantly greater diversity than the Perlin noise maps. Afterwards, the heightmaps were converted to 3D maps in Unity3D.  ...  The results got significantly worse when it was set to 0.002, where the network did not learn at all and continued to generate noise.  ... 
doi:10.1007/978-3-319-76908-0_9 fatcat:7arlho457zg5jjw7bkx2jcexpi
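As a rough stand-in for the Perlin-noise baseline mentioned above, the sketch below builds a fractal value-noise heightmap by summing bilinearly upsampled random grids over several octaves. This is a simplification (value noise rather than gradient Perlin noise), and the resolution and octave settings are assumptions.

```python
# Minimal sketch of a fractal value-noise heightmap (Perlin-style baseline).
import numpy as np

def value_noise_heightmap(size=128, octaves=5, persistence=0.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    heightmap = np.zeros((size, size))
    amplitude = 1.0
    for octave in range(octaves):
        cells = 2 ** (octave + 2)                    # coarse grid resolution
        grid = rng.random((cells + 1, cells + 1))
        # Bilinear upsample of the coarse grid to the full heightmap resolution.
        xs = np.linspace(0, cells, size)
        x0 = np.floor(xs).astype(int).clip(0, cells - 1)
        t = xs - x0
        rows = grid[x0] * (1 - t)[:, None] + grid[x0 + 1] * t[:, None]
        layer = rows[:, x0] * (1 - t)[None, :] + rows[:, x0 + 1] * t[None, :]
        heightmap += amplitude * layer
        amplitude *= persistence                     # finer octaves contribute less
    return (heightmap - heightmap.min()) / (heightmap.max() - heightmap.min())

terrain = value_noise_heightmap(rng=np.random.default_rng(0))
```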

Learning in Complex Environments: The Effects of Background Speech on Early Word Learning

Brianna T. M. McMillan, Jenny R. Saffran
2016 Child Development  
not when the signal-to-noise ratio (SNR) was 5 dB.  ...  Toddlers (28-to 30-month-olds; n = 26) successfully learned novel words with a 5-dB SNR when they initially heard the labels embedded in fluent speech without background noise, before they were mapped  ...  In this first look at early word learning when background speech is present, we have begun to understand some of the vulnerabilities of learning mechanisms involved in word learning.  ... 
doi:10.1111/cdev.12559 pmid:27441911 pmcid:PMC5113671 fatcat:e35kqwdjd5aabousqf653b3hwe

Robustness of different loss functions and their impact on networks learning capability [article]

Vishal Rajput
2021 arXiv   pre-print
In particular, we will look at how fast the accuracy of different models decreases when we change the pixels corresponding to the most salient gradients.  ...  Despite so many advances in the field, AI's full capability is yet to be exploited by the industry.  ...  at a noise level of 0.6, compared to how the Dice loss performs at a noise level of 0.4.  ... 
arXiv:2110.08322v2 fatcat:qzruirgisrgjbgnwfen64fep7m
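The evaluation described in the snippet (degrading the pixels with the most salient gradients and watching the prediction degrade) can be sketched as follows; the toy model, random input, and pixel-inversion rule are assumptions, not the paper's setup.

```python
# Minimal sketch of saliency-ranked pixel perturbation.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)
target = torch.tensor([3])

loss = nn.functional.cross_entropy(model(x), target)
loss.backward()
saliency = x.grad.abs().flatten()                     # per-pixel gradient magnitude

for noise_level in (0.2, 0.4, 0.6):                   # fraction of pixels perturbed
    k = int(noise_level * saliency.numel())
    top = torch.topk(saliency, k).indices
    perturbed = x.detach().clone().flatten()
    perturbed[top] = 1.0 - perturbed[top]             # invert the most salient pixels
    logits = model(perturbed.view(1, 1, 28, 28))
    print(noise_level, logits.argmax(dim=1).item())
```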

Learning to Avoid Errors in GANs by Manipulating Input Spaces [article]

Alexander B. Jung
2017 arXiv   pre-print
The core idea is to apply small changes to each noise vector in order to shift them away from areas in the input space that tend to result in errors.  ...  In this paper, we instead explore methods that enable GANs to actively avoid errors by manipulating the input space.  ...  We train each experiment for 100k batches at a learning rate of 0.0005, followed by 200k batches at a learning rate of 0.0001.  ... 
arXiv:1707.00768v1 fatcat:jt7wskom3vgxzjcxpbnr6ng7za
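One way to realize the stated idea of shifting noise vectors away from error-prone input regions is to take a few small gradient steps on the discriminator's realism score with respect to z. The sketch below is an assumption-laden illustration (toy networks, arbitrary step size), not the paper's specific manipulation methods.

```python
# Minimal sketch: nudge latent vectors uphill on the discriminator score.
import torch
import torch.nn as nn

z_dim = 32
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

def shift_noise(z, step=0.05, iters=3):
    """Move each z a few small steps toward regions D rates as more realistic."""
    z = z.clone().requires_grad_(True)
    for _ in range(iters):
        score = D(G(z)).sum()
        grad, = torch.autograd.grad(score, z)
        z = (z + step * grad).detach().requires_grad_(True)
    return z.detach()

z = torch.randn(16, z_dim)
z_shifted = shift_noise(z)
```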

Analysis of fitness noise in particle swarm optimization: From robotic learning to benchmark functions

Ezequiel Di Mario, Iñaki Navarro, Alcherio Martinoli
2014 2014 IEEE Congress on Evolutionary Computation (CEC)  
We show that the large-amplitude noise found in robotic evaluations is disruptive to the initial phases of the learning process of PSO.  ...  Population-based learning techniques have been proven to be effective in dealing with noise and are thus promising tools for the optimization of robotic controllers, which have inherently noisy performance  ...  ACKNOWLEDGEMENTS The authors would like to thank Jim Pugh for helpful discussions.  ... 
doi:10.1109/cec.2014.6900514 dblp:conf/cec/MarioNM14 fatcat:qiglp4ivnvg7xivk2oejmrabby
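A minimal particle swarm optimization loop with a noisy fitness function, mirroring the noisy evaluations discussed above; the sphere objective, noise amplitude, and PSO coefficients are illustrative assumptions rather than the paper's benchmark settings.

```python
# Minimal sketch of PSO under fitness noise.
import numpy as np

rng = np.random.default_rng(0)
dim, n_particles = 5, 20

def noisy_fitness(x, noise_amp=0.5):
    return -np.sum(x ** 2) + noise_amp * rng.normal()   # maximize; evaluations are noisy

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest_pos = pos.copy()
pbest_val = np.array([noisy_fitness(p) for p in pos])
gbest_pos = pbest_pos[pbest_val.argmax()].copy()
gbest_val = pbest_val.max()

w, c1, c2 = 0.7, 1.5, 1.5                                # inertia and attraction weights
for it in range(100):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
    pos = pos + vel
    vals = np.array([noisy_fitness(p) for p in pos])
    improved = vals > pbest_val                          # noisy comparisons can mislead
    pbest_pos[improved], pbest_val[improved] = pos[improved], vals[improved]
    if vals.max() > gbest_val:
        gbest_val, gbest_pos = vals.max(), pos[vals.argmax()].copy()

print(gbest_val, gbest_pos)
```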

A Machine Learning Approach for Non-blind Image Deconvolution

Christian J. Schuler, Harold Christopher Burger, Stefan Harmeling, Bernhard Schölkopf
2013 2013 IEEE Conference on Computer Vision and Pattern Recognition  
In a second (and arguably more difficult) step, one then needs to remove the colored noise, typically using a cleverly engineered algorithm.  ...  This step amplifies and colors the noise, and corrupts the image information.  ...  The radius of the actual PSF can be estimated by looking at the position of the first zero-frequency in Fourier domain.  ... 
doi:10.1109/cvpr.2013.142 dblp:conf/cvpr/SchulerBHS13 fatcat:olxzg4c7cbdv7o6pyuxlv4wfey
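The PSF-radius trick quoted above relies on the fact that a disk blur's transfer function first vanishes at a spatial frequency of roughly 0.61/radius (the first zero of its jinc profile). The sketch below estimates the radius of a synthetic disk blur from the first zero ring of the blurred image's spectrum; using a white-noise scene with a flat spectrum is a simplifying assumption, not the paper's procedure for natural photographs.

```python
# Minimal sketch: recover a disk-blur radius from the first spectral zero.
import numpy as np

N, true_radius = 256, 6.0
rng = np.random.default_rng(0)

yy, xx = np.mgrid[:N, :N]
dist = np.hypot(yy - N // 2, xx - N // 2)
psf = (dist <= true_radius).astype(float)
psf /= psf.sum()
scene = rng.normal(size=(N, N))                       # flat-spectrum "scene"
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))

# Radially averaged magnitude spectrum of the blurred image.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(blurred)))
ring = np.round(dist).astype(int)
profile = np.bincount(ring.ravel(), weights=spectrum.ravel()) / np.bincount(ring.ravel())

# Find where the profile first collapses, then take the local minimum nearby.
baseline = profile[2:6].mean()
k_start = next(k for k in range(2, N // 2) if profile[k] < 0.15 * baseline)
k_zero = k_start + int(np.argmin(profile[k_start:k_start + 8]))

print("estimated radius:", 0.61 * N / k_zero, "true radius:", true_radius)
```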

U-shaped motor development emerges from Goal Babbling with intrinsic motor noise

Kenichi Narioka, Jochen J. Steil
2015 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)  
We use Goal Babbling, a computational model for exploratory learning of motor skills, to model these U-shaped learning dynamics, and also the qualitative differences caused by presentation modes  ...  To this aim, we introduce developmentally plausible motor noise and adaptive learning rates in Goal Babbling and show through extensive simulation that U-shaped motor development emerges, thereby reproducing  ...  We would like to thank Marc Ernst and Marieke Rohde for intensive discussion on the role of motor noise in human skill learning.  ... 
doi:10.1109/devlrn.2015.7346115 dblp:conf/icdl-epirob/NariokaS15 fatcat:wfsl6ss6abaltgam2ltmptbmgy
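A heavily simplified sketch of goal babbling with intrinsic motor noise on a toy one-dimensional system: goals are sampled, executed through a noisy inverse-model guess, and the inverse model is regressed onto what was actually done. The forward function, the linear inverse model, and the fixed learning rate are assumptions made for illustration; the paper uses an arm model and adaptive learning rates.

```python
# Minimal sketch of goal babbling with motor noise on a toy 1-DoF system.
import numpy as np

rng = np.random.default_rng(0)
forward = np.sin                                  # toy "arm": motor command -> outcome

# Linear inverse model x -> q, updated online: q_hat = w0 + w1 * x
w = np.zeros(2)
learning_rate, motor_noise = 0.1, 0.2

for trial in range(2000):
    goal = rng.uniform(-0.9, 0.9)                 # target outcome
    q_cmd = w[0] + w[1] * goal                    # inverse-model prediction
    q_exec = q_cmd + motor_noise * rng.normal()   # intrinsic motor noise
    x_obs = forward(q_exec)                       # observed outcome
    # Regress the executed command onto the observed outcome
    # ("learn from what you actually did").
    err = (w[0] + w[1] * x_obs) - q_exec
    w -= learning_rate * err * np.array([1.0, x_obs])

goals = np.linspace(-0.9, 0.9, 5)
print(np.abs(forward(w[0] + w[1] * goals) - goals).mean())   # mean reaching error
```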
Showing results 1 — 15 out of 358,374 results