
A Multi-task Selected Learning Approach for Solving 3D Flexible Bin Packing Problem [article]

Lu Duan, Haoyuan Hu, Yu Qian, Yu Gong, Xiaodong Zhang, Yinghui Xu, Jiangwen Wei
2019 arXiv   pre-print
tasks and significantly outperforms the single-task Pointer Network and the multi-task network without selected learning; 2) our method achieves an average 5.47% cost reduction over the well-designed greedy  ...  Because of its huge practical significance, we focus on the problem of packing cuboid-shaped items orthogonally into a least-surface-area bin.  ...  Thus the orientation task parameterized by $\theta_{or_i}$ can be trained with the following standard differentiable loss (the cross-entropy loss): $\mathcal{L}_{or_i}(\theta_{or_i}) = -\sum_{i=1}^{n} \log p(o^*_i \mid s_1, o^*_1, \ldots, s_i$  ... 
arXiv:1804.06896v3 fatcat:ldg7aohvwnhnzbjxcshxkst4u4
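The truncated formula in the snippet above is a standard sequence cross-entropy loss over predicted item orientations. A minimal sketch of that computation (the function name and probability values are hypothetical, not taken from the paper):

```python
import math

def orientation_cross_entropy(probs, target_orientations):
    """Cross-entropy summed over the item sequence:
    L = -sum_i log p(o*_i | earlier items and orientations)."""
    return -sum(math.log(p[o]) for p, o in zip(probs, target_orientations))

# Two items, six possible orthogonal orientations each; the probabilities
# stand in for the network's conditional outputs at each decoding step.
probs = [
    [0.5, 0.1, 0.1, 0.1, 0.1, 0.1],
    [0.1, 0.6, 0.1, 0.1, 0.05, 0.05],
]
targets = [0, 1]  # ground-truth orientation index o*_i for each item
loss = orientation_cross_entropy(probs, targets)
```

The loss here is simply -(log 0.5 + log 0.6); in training, the probabilities would come from the pointer-network decoder.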

Multi-Channel Convolutional Neural Networks Architecture Feeding for Effective EEG Mental Tasks Classification

Sławomir Opałka, Bartłomiej Stasiak, Dominik Szajerman, Adam Wojciechowski
2018 Sensors  
The presented solution applies a frequency-domain representation of the input data, processed by a multi-channel architecture that isolates frequency sub-bands in time windows, which enables multi-class signal classification  ...  Mental task classification is increasingly recognized as a major challenge in the field of EEG signal processing and analysis.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/s18103451 pmid:30322205 pmcid:PMC6210443 fatcat:6ikgnvdir5dg7iqwog4fa2qhee

Optimizing Task Placement and Online Scheduling for Distributed GNN Training Acceleration [article]

Ziyue Luo, Yixin Bao, Chuan Wu
2022 arXiv   pre-print
Training Graph Neural Networks (GNN) on large graphs is resource-intensive and time-consuming, mainly due to the large graph data that cannot be fit into the memory of a single machine, but have to be  ...  Unlike distributed deep neural network (DNN) training, the bottleneck in distributed GNN training lies largely in large graph data transmission for constructing mini-batches of training samples.  ...  INTRODUCTION Graph neural networks (GNN) [1] [2] generalize deep neural networks (DNN) to learning from graph-structured data and have been exploited in various domains, e.g., computer networking  ... 
arXiv:2204.11224v1 fatcat:sv5dsh77fjg6rawwzbtoqfzr2q

State-Consistency Loss for Learning Spatial Perception Tasks from Partial Labels

Mirko Nava, Luca Gambardella, Alessandro Giusti
2021 IEEE Robotics and Automation Letters  
We introduce a general approach to deal with this class of problems using an auxiliary loss enforcing the expectation that the perceived environment state should not abruptly change; then, we instantiate  ...  the approach to solve two robot perception problems: a simulated ground robot learning long-range obstacle mapping as a 400-binary-label classification task in a self-supervised way in a static environment  ...  ACKNOWLEDGMENT The authors would like to thank Luca Brena, Jérôme Guzzi, Antonio Paolillo and Daniele Palossi for their useful comments and support during the development of this work.  ... 
doi:10.1109/lra.2021.3056378 fatcat:cluta6dwhngplpetmdjqvbjfnq

Spatio-Temporal Cortical Patterns Evoked in Monkeys by a Discrimination Task

Don Krieger, Robert J. Sclabassi, Richard Coppola, Richard Nakamura
1991 Journal of Cognitive Neuroscience  
Only those functions that are actually utilized in performance of the task and that may be isolated by comparing task subsets can be mapped.  ...  For readily identifiable single trial peaks, such a histogram would be expected to be peaked about lag zero; for data with low signal/noise, the histogram would be expected to show no peak.  ... 
doi:10.1162/jocn.1991.3.3.242 pmid:23964839 fatcat:f2esdcoya5cbxbixixz3mx2j6a

Distributed and Multi-Task Learning at the Edge for Energy Efficient Radio Access Networks

Marco Miozzo, Zoraze Ali, Lorenza Giupponi, Paolo Dini
2021 IEEE Access  
The availability of big data on Radio Access Network (RAN) statistics suggests using it to improve network management through machine-learning-based Self-Organized Network (SON) functionalities.  ...  Multi-access Edge Computing can mitigate this problem; however, the machine learning solutions have to be properly designed to work efficiently in a distributed fashion.  ...  In [38], a multi-task deep ANN framework for non-orthogonal multiple access (NOMA), namely DeepNOMA, has been proposed for treating non-orthogonal transmissions as multiple distinctive but correlated tasks  ... 
doi:10.1109/access.2021.3050841 fatcat:3nvy3gbbw5dcdjl6qw5r4lttpm

Task-dependent mixed selectivity in the subiculum [article]

Debora Ledergerber, Claudia Battistin, Jan Sigurd Blackstad, Richard J. Gardner, Menno P Witter, May-Britt Moser, Yasser Roudi, Edvard I Moser
2020 bioRxiv   pre-print
Each navigational variable could be decoded with higher precision, from a similar number of neurons, in SUB than CA1.  ...  Here we estimate the tuning of simultaneously recorded CA1 and SUB cells to position, head direction, and speed.  ...  Using the previously introduced generalized linear model, we next set out to quantify the level of mixed selectivity in the network across the two task situations.  ... 
doi:10.1101/2020.06.06.129221 fatcat:chietdva7zchlhuxjv4pphufry

Learning to Place Objects onto Flat Surfaces in Upright Orientations [article]

Rhys Newbury, Kerry He, Akansel Cosgun, Tom Drummond
2020 arXiv   pre-print
At every iteration, we use a convolutional neural network to estimate the required object rotation which is executed by the robot, and then a separate convolutional neural network to estimate if the object  ...  We use two neural networks in an iterative fashion.  ...  We propose two convolutional neural networks, Placement Rotation Convolutional Neural Network (PR-CNN) and Placement Stability Convolutional Neural Network (PS-CNN) that are used in an iterative algorithm  ... 
arXiv:2004.00249v2 fatcat:v6gqzwfcebg2fe7nm6qyeaddqi

Orthogonal Over-Parameterized Training [article]

Weiyang Liu, Rongmei Lin, Zhen Liu, James M. Rehg, Liam Paull, Li Xiong, Le Song, Adrian Weller
2021 arXiv   pre-print
To achieve good generalization, how to effectively train a neural network is of great importance.  ...  The inductive bias of a neural network is largely determined by the architecture and the training algorithm.  ...  energy as a regularization in the network but do not guarantee that the hyperspherical energy can be effectively minimized (due to the existence of data fitting loss).  ... 
arXiv:2004.04690v6 fatcat:6av4jjqw2vh7nbf66ubksv3u5q

On the Preservation of Spatio-temporal Information in Machine Learning Applications [article]

Yigit Oktar, Mehmet Turkan
2020 arXiv   pre-print
In conventional machine learning applications, each data attribute is assumed to be orthogonal to others.  ...  As a result, the conventional vectorization process disrupts all of the spatio-temporal information about the order/place of data whether it be 1D, 2D, 3D, or 4D.  ...  This unsupervised convolutional decomposition of a signal can be regarded as a feature extraction method that tackles the problem of orthogonality, where the extracted features for the i th data point  ... 
arXiv:2006.08321v1 fatcat:fokwd5vtvrdipkyc2tmavaivju
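The claim in the snippet above — that vectorization disrupts spatial order — can be illustrated with row-major flattening of a 2D grid (the helper function is an illustrative sketch, not from the paper): vertically adjacent pixels land far apart in the vector, while spatially distant pixels can become vector neighbors.

```python
def flatten_index(row, col, width):
    """Position of grid cell (row, col) after row-major vectorization."""
    return row * width + col

W = 3
# Vertically adjacent pixels (0,0) and (1,0) in a 3x3 grid...
dist_spatial_neighbors = abs(flatten_index(1, 0, W) - flatten_index(0, 0, W))
# ...versus (0,2) and (1,0), far apart spatially yet adjacent in the vector.
dist_vector_neighbors = abs(flatten_index(1, 0, W) - flatten_index(0, 2, W))
```

After flattening, the true neighbors are `W` positions apart while the spurious pair differs by one, which is exactly the loss of spatio-temporal information the article discusses.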

Regularized Simple Graph Convolution (SGC) for improved interpretability of large datasets

Phuong Pho, Alexander V. Mantzaris
2020 Journal of Big Data  
Classification of data points which correspond to complex entities such as people or journal articles is an ongoing research task.  ...  The features that can be extracted are many and result in large datasets which are a challenge to process with complex machine learning methodologies.  ...  Authors' contributions Availability of data and materials The dataset, Cora, is presented in [33] and is available at https :// n/data.html.  ... 
doi:10.1186/s40537-020-00366-x fatcat:rwc3gsqk2fb6xcem4vlxd37ddu

Orthogonal Deep Features Decomposition for Age-Invariant Face Recognition [article]

Yitong Wang, Dihong Gong, Zheng Zhou, Xing Ji, Hao Wang, Zhifeng Li, Wei Liu, Tong Zhang
2018 arXiv   pre-print
Extensive experiments conducted on the three public domain face aging datasets (MORPH Album 2, CACD-VS and FG-NET) have shown the effectiveness of the proposed approach and the value of the constructed  ...  To reduce the intra-class discrepancy caused by the aging, in this paper we propose a novel approach (namely, Orthogonal Embedding CNNs, or OE-CNNs) to learn the age-invariant deep face features.  ...  Based on the proposed model, age-invariant deep features can be effectively obtained for improved AIFR performance. 2.  ... 
arXiv:1810.07599v1 fatcat:i27pjg24dbdyzhpm5kvuf2cc4i
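The orthogonal decomposition mentioned in the title above splits a deep feature vector into two factors that vary independently: its norm (radial component) and its unit direction (angular component). A minimal sketch under that reading (function name and vector are illustrative, not from the paper):

```python
import math

def decompose(feature):
    """Split a feature vector x into its norm ||x|| (radial part) and its
    unit direction x / ||x|| (angular part); x = norm * direction."""
    norm = math.sqrt(sum(v * v for v in feature))
    direction = [v / norm for v in feature]
    return norm, direction

x = [3.0, 4.0]
norm, direction = decompose(x)
```

In the age-invariant recognition setting, the idea is that one component can absorb age variation while the other carries identity, so the two can be supervised separately.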

A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models [article]

Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee
2020 arXiv   pre-print
This paper aims to provide a selective survey of the knowledge distillation (KD) framework for researchers and practitioners to take advantage of it when developing new optimized models in the deep neural network field.  ...  It is introduced as a general method of improving the accuracy of a quantized neural network, by exploiting non-uniform quantization point placement.  ... 
arXiv:2011.14554v1 fatcat:46rruchrlbgfxexsyvujsgprou

Event-related brain potentials (ERPs) in schizophrenia for tonal and phonetic oddball tasks

Jürgen Kayser, Gerard E Bruder, Craig E Tenke, Barbara K Stuart, Xavier F Amador, Jack M Gorman
2001 Biological Psychiatry  
Task-independent reductions of negativities between 80 and 280 msec after stimulus onset suggest a deficit of automatic stimulus classification in schizophrenia, which may be partly compensated by later  ...  Conclusions: The findings suggest that both patients and control subjects activated lateralized cortical networks required for pitch (right frontotemporal) and phoneme (left parietotemporal) discrimination  ...  The authors thank Paul Leite, Regan Fong, Michelle Friedman, Jennifer Bunn-Watson, and Nilabja Bhattacharya for their help in data collection. Software developed by Charles L.  ... 
doi:10.1016/s0006-3223(00)01090-8 pmid:11343680 fatcat:kmd5iyz625fmxobbtsqittkbfm

Learning Visual Context by Comparison [article]

Minchul Kim, Jongchan Park, Seil Na, Chang Min Park, Donggeun Yoo
2020 arXiv   pre-print
We show that explicit difference modeling can be very helpful in tasks that require direct comparison between locations from afar. This module can be plugged into existing deep learning models.  ...  Current methods for solving this task exploit various characteristics of the chest X-ray image, but one of the most important characteristics is still missing: the necessity of comparison between related  ...  Orthogonal Loss Weight We introduce a new hyperparameter λ to balance between the target task loss and the orthogonal loss.  ... 
arXiv:2007.07506v1 fatcat:mhwvyzjvm5edzlrzkueg4yepiy
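The hyperparameter λ described in the snippet above weights an auxiliary orthogonal loss against the target task loss. A minimal sketch of that weighted sum (function name and values are hypothetical, not from the paper):

```python
def total_loss(task_loss, orthogonal_loss, lam=0.1):
    """Combined objective: target task loss plus lam times the auxiliary
    orthogonal loss; lam is the balancing hyperparameter (lambda)."""
    return task_loss + lam * orthogonal_loss

loss = total_loss(2.0, 0.5, lam=0.1)
```

Tuning `lam` trades off how strongly the orthogonality constraint shapes the learned features relative to the main comparison task.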
Showing results 1 — 15 out of 2,968 results