111,501 Hits in 5.2 sec

Investigating Power laws in Deep Representation Learning [article]

Arna Ghosh, Arnab Kumar Mondal, Kumar Krishna Agrawal, Blake Richards
2022 arXiv   pre-print
For visual representations, we estimate the coefficient of the power law, α, across three key attributes which influence representation learning: learning objective (supervised, SimCLR, Barlow Twins and  ...  Inspired by recent advances in theoretical machine learning and vision neuroscience, we observe that the eigenspectrum of the empirical feature covariance matrix often follows a power law.  ...  This research was enabled in part by support provided by Mila (mila.quebec/en/) and Compute Canada (www.computecanada.ca).  ... 
arXiv:2202.05808v1 fatcat:hydd2ediirhthigzlkwnr5awvq
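The coefficient α mentioned in the snippet above describes an eigenspectrum decaying as λ_i ∝ i^(−α). A minimal sketch of how such a coefficient can be estimated with a log-log fit is below; the function name, rank range, and centering choices are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Hedged sketch: estimate the power-law coefficient alpha of the eigenspectrum
# lambda_i ~ i^(-alpha) of an empirical feature covariance matrix, via least
# squares in log-log space over a chosen rank range.
def eigenspectrum_alpha(features, fit_range=(1, 100)):
    # features: (n_samples, n_dims) array of representations
    features = features - features.mean(axis=0)
    cov = features.T @ features / len(features)
    eigvals = np.linalg.eigvalsh(cov)[::-1]      # sorted descending
    lo, hi = fit_range
    log_rank = np.log(np.arange(lo, hi + 1))
    log_eig = np.log(eigvals[lo - 1:hi])
    slope, _ = np.polyfit(log_rank, log_eig, 1)  # slope of log-log line
    return -slope                                # alpha > 0 for a decaying spectrum

# Synthetic check: features whose covariance spectrum decays as i^(-1)
rng = np.random.default_rng(0)
d = 200
eigs = 1.0 / np.arange(1, d + 1)
basis, _ = np.linalg.qr(rng.normal(size=(d, d)))
X = (rng.normal(size=(5000, d)) * np.sqrt(eigs)) @ basis.T
print(eigenspectrum_alpha(X))  # should be close to 1.0
```

A direct log-log regression is only one of several estimators; fits restricted to the mid-rank region are common because the largest and smallest sample eigenvalues are the most distorted by finite-sample noise.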

Scaling Effect of Self-Supervised Speech Models

Jie Pu, Yuguang Yang, Ruirui Li, Oguz Elibol, Jasha Droppo
2021 Conference of the International Speech Communication Association  
Then the advantage of large speech models in learning effective speech representations is demonstrated in two downstream tasks: i) speaker recognition and ii) phoneme classification.  ...  In recent years, the model size of state-of-the-art deep learning systems has rapidly increased, sometimes reaching billions of parameters.  ...  Different from other self-supervised representation learning methods in speech [14, 15, 16] , which learn speech representations by predicting future frames based on past frames, Mockingjay alleviates  ... 
doi:10.21437/interspeech.2021-1935 dblp:conf/interspeech/Pu0LED21 fatcat:3pwm5mgvvvc7bhskn3kwywdamy

Scalable Geometric Deep Learning on Molecular Graphs [article]

Nathan C. Frey, Siddharth Samsi, Joseph McDonald, Lin Li, Connor W. Coley, Vijay Gadepally
2021 arXiv   pre-print
Deep learning in molecular and materials sciences is limited by the lack of integration between applied science, artificial intelligence, and high-performance computing.  ...  Here, we present LitMatter, a lightweight framework for scaling molecular deep learning methods.  ...  Introduction Many blockbuster results in deep learning are enabled by immense scale.  ... 
arXiv:2112.03364v1 fatcat:kgkmguoccbbihbq7rixuqbbyom

Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors [article]

Zijian Jiang, Jianwen Zhou, Haiping Huang
2020 arXiv   pre-print
Our study reveals that a high generalization accuracy requires a relatively fast power-law decay of the eigen-spectrum of hidden representations.  ...  Here, we establish a fundamental relationship between geometry of hidden representations (manifold perspective) and the generalization capability of the deep networks.  ...  This behavior provides an ideal candidate of deep learning to investigate the emergent properties of the layered intermediate representations after learning, without and with adversarial attacks.  ... 
arXiv:2007.02047v2 fatcat:nybnqy6kjrecxbrcb7cuwelzwm

Resolution and Relevance Trade-offs in Deep Learning [article]

Juyong Song, Matteo Marsili, Junghyo Jo
2018 arXiv   pre-print
A signature of such efficient representations is that frequency distributions follow power laws.  ...  ii) exhibit power law distributions.  ...  FIG. 1. Information processing in deep learning.  ... 
arXiv:1710.11324v2 fatcat:womweyeaifffldbx6xxehws6da

An empirical study of Conv-TasNet [article]

Berkan Kadioglu, Michael Horgan, Xiaoyu Liu, Jordi Pons, Dan Darcy, Vivek Kumar
2020 arXiv   pre-print
Conv-TasNet is a recently proposed waveform-based deep neural network that achieves state-of-the-art performance in speech source separation.  ...  In addition, we experiment with the larger and more diverse LibriTTS dataset and investigate the generalization capabilities of the studied models when trained on a much larger dataset.  ...  In certain experiments (see Section 4.2), in an effort to constrain the scale of the predicted sources from the deep encoder/decoder, we augment the objective function with a power-law term that encourages  ... 
arXiv:2002.08688v2 fatcat:5tis2mtbvzg4bjx32hkc7uakey

Scale-invariant representation of machine learning [article]

Sungyeop Lee, Junghyo Jo
2022 arXiv   pre-print
In this study, we derive the process by which these power laws can naturally arise in machine learning.  ...  We observe that the frequency of internal codes or labels follows power laws in both supervised and unsupervised learning models.  ...  This work was supported in part by the National Research Foundation of Korea (NRF) grant (Grant No. 2021R1A2C2012350) (S.L.), the New Faculty Startup Fund from Seoul National University, and the NRF grant  ... 
arXiv:2109.02914v2 fatcat:qeel6qnqa5al7gmakxd7siu7ia

On 1/n neural representation and robustness [article]

Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park
2020 arXiv   pre-print
Understanding the nature of representation in neural networks is a goal shared by neuroscience and machine learning.  ...  In this work, we investigate the latter by juxtaposing experimental results regarding the covariance spectrum of neural representations in the mouse V1 (Stringer et al) with artificial neural networks.  ...  theoretical questions came to the fore in deep learning.  ... 
arXiv:2012.04729v1 fatcat:64n46oqxdzg63ouzhkmiscdx2u

Why state-of-the-art deep learning barely works as good as a linear classifier in extreme multi-label text classification

Mohammadreza Qaraei, Sujay Khandagale, Rohit Babbar
2020 The European Symposium on Artificial Neural Networks  
Even though deep learning algorithms have surpassed linear and kernel methods for most natural language processing tasks over the last decade; recent works show that state-of-the-art deep learning methods  ...  of linear classifiers in this regime, has been ignored in many relevant recent works.  ...  The distribution of training instances among labels for the remaining four datasets also fits a power-law distribution. This distribution is shown in Figure 3 for the Amazon-670K dataset.  ... 
dblp:conf/esann/QaraeiKB20 fatcat:ovxumdkp6zf2rg3grr4n6ekpvq

Deep Learning for Law Enforcement: A Survey About Three Application Domains

Paolo Contardo, Paolo Sernani, Nicola Falcionelli, Aldo Franco Dragoni
2021 International Conference on Recent Trends and Applications in Computer Science and Information Technology  
Following the success of deep learning, many automatic data analysis techniques are becoming common also in law enforcement agencies.  ...  To this end, we present a survey about the potential impact of deep learning on three application domains, peculiar to law enforcement agencies.  ...  [62] , should be investigated in the presented application domains, to avoid the use of deep learning techniques as mere "black boxes".  ... 
dblp:conf/rtacsit/ContardoSFD21 fatcat:qiaxtc2esnhi7fhj6tayxunv7q

Machine learning meets condensed matter

Eric Howard
2020 figshare.com  
Published in The Startup  ...  and computational/data analysis methods being migrated towards the theoretical physics community. The recent predictive, representational and computational power of machine learning in processing and simulating  ...  Important open questions of fundamental interest in quantum many body systems may find their answers and insights into the powerful shallow or deep learning architectures that exhibit a complexity that  ... 
doi:10.6084/m9.figshare.12780521.v1 fatcat:dxls53n5prhrnnqfjwmfy5yove

Video Face Recognition using Deep Learning based Representation

Shahzadi Asra
2019 International Journal for Research in Applied Science and Engineering Technology  
Face recognition in video is extremely significant for law enforcement purposes; recent advances have reported high accuracies at equal error rates, though performance at lower false accept rates  ...  Smart phones and surveillance cameras have initiated research in video face identification.  ...  Deep learning can utilize big data for training deep architecture models so as to obtain more powerful features for representing faces.  ... 
doi:10.22214/ijraset.2019.3009 fatcat:lpfco63myrbqli3lfwov7j6elu

Toward Improving Attentive Neural Networks in Legal Text Processing [article]

Ha-Thanh Nguyen
2022 arXiv   pre-print
In recent years, thanks to breakthroughs in neural network techniques, especially attentive deep learning models, natural language processing has made many impressive achievements.  ...  Language models tend to grow larger and larger; without expert knowledge, though, these models can still fail in domain adaptation, especially for specialized fields like law.  ...  In recent years, language models have been a powerful approach in deep learning.  ... 
arXiv:2203.08244v1 fatcat:54rx64cucfdw3bzeujrmx5s7z4

Complex network prediction using deep learning [article]

Yoshihisa Tanaka, Ryosuke Kojima, Shoichi Ishida, Fumiyoshi Yamashita, Yasushi Okuno
2021 arXiv   pre-print
In this work, we propose a deep learning approach to this problem based on Graph Convolutional Networks for predicting networks while preserving their original structural properties.  ...  Systematic relations between multiple objects that occur in various fields can be represented as networks.  ...  The power law fitting in degree distribution was performed using the python package powerlaw (52) .  ... 
arXiv:2104.03871v1 fatcat:p2dxz7lvfrcodhitwkxkl5n5bm
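The snippet above names the python package `powerlaw` for fitting degree distributions. The core of such a fit is the maximum-likelihood exponent estimate of Clauset, Shalizi and Newman; a minimal pure-NumPy sketch of the continuous-approximation estimator (function and variable names are illustrative, and this is not the package's full procedure) is:

```python
import numpy as np

# MLE sketch for a power-law exponent (continuous approximation):
#   alpha = 1 + n / sum(ln(x_i / x_min))
# over observations x_i >= x_min. Not the `powerlaw` package itself,
# just its central estimate.
def fit_alpha(x, x_min=1.0):
    x = np.asarray(x, dtype=float)
    tail = x[x >= x_min]                 # keep only the power-law tail
    return 1.0 + len(tail) / np.log(tail / x_min).sum()

# Synthetic "degrees" from a Pareto law with exponent 2.5 (inverse transform)
rng = np.random.default_rng(0)
u = rng.random(100_000)
samples = (1.0 - u) ** (-1.0 / 1.5)      # P(X > x) = x^(-1.5)  =>  alpha = 2.5
print(fit_alpha(samples))                # ~ 2.5
```

With real degree data, `x_min` itself must also be chosen; the `powerlaw` package selects it by minimizing a Kolmogorov-Smirnov statistic, whereas here it is fixed at 1 for the synthetic sample.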

A Prediction Method of Peak Time Popularity Based on Twitter Hashtags

Hai Yu, Ying Hu, Peng Shi
2020 IEEE Access  
Thirdly, this paper designs a multi-modal based deep learning method, where the state-of-art deep learning techniques, such as multi-modal embedding and attention mechanisms, are adopted.  ...  Firstly, this paper investigates how early popularity reaches its peaks. Then, it is found that popularity tends to peak in the early stage of its evolution.  ...  We identify 3.3 million hashtags in these tweets. From Figure 1 we can see that the popularity distribution of these 3.3 million hashtags follows a power-law shape.  ... 
doi:10.1109/access.2020.2983583 fatcat:llsycscz5nc65fmu544fgkbdiu
Showing results 1 — 15 out of 111,501 results