24,501 Hits in 5.3 sec

Conditional Generative Neural Decoding with Structured CNN Feature Prediction

Changde Du, Changying Du, Lijie Huang, Huiguang He
The proposed SMR model can simultaneously leverage the covariance structures underlying the brain activities, the CNN features and the prediction tasks to improve the decoding accuracy and interpretability  ...  Specifically, our approach first decodes the brain activity to the multilayer intermediate features of a pretrained convolutional neural network (CNN) with a structured multi-output regression (SMR) model  ...  This result shows that simultaneously leveraging the multiple covariance structures is also important.  ... 
doi:10.1609/aaai.v34i03.5647 fatcat:tv4kahkf3zgvzcizlu6xpblj6i

A Survey on Multi-output Learning [article]

Donna Xu, Yaxin Shi, Ivor W. Tsang, Yew-Soon Ong, Chen Gong, Xiaobo Shen
2019 arXiv   pre-print
Multi-output learning aims to simultaneously predict multiple outputs given an input.  ...  Then we present the paradigm on multi-output learning, including its myriad output structures, definitions of its different sub-problems, model evaluation metrics and popular data repositories used  ...  2) Multi-target Regression: The aim of multi-target regression is to simultaneously predict multiple real-valued output variables for one instance [5], [57].  ... 
arXiv:1901.00248v2 fatcat:obc5iphzv5gj7ewut6lcqchqiy
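The multi-target regression sub-problem defined in the snippet above — predicting several real-valued output variables for one instance — can be sketched in a few lines. The data, shapes, and plain least-squares solver below are illustrative assumptions, not the survey's method:

```python
import numpy as np

# Toy multi-target regression: one input matrix X, several real-valued
# targets predicted simultaneously. All data here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))          # 100 instances, 5 features
W_true = rng.normal(size=(5, 3))       # 3 output variables
Y = X @ W_true + 0.01 * rng.normal(size=(100, 3))

# Ordinary least squares handles all targets in one solve: the
# coefficient matrix has one column of weights per output variable.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_pred = X @ W_hat
print(W_hat.shape)   # (5, 3)
```

Structured multi-output methods such as those surveyed go further by also modeling correlations among the output columns, which independent per-target solves ignore.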

Dynamic Structure Embedded Online Multiple-Output Regression for Stream Data [article]

Changsheng Li and Fan Wei and Weishan Dong and Qingshan Liu and Xiangfeng Wang and Xin Zhang
2015 arXiv   pre-print
Online multiple-output regression is an important machine learning technique for modeling, predicting, and compressing multi-dimensional correlated data streams.  ...  In this paper, we propose a novel online multiple-output regression method, called MORES, for stream data.  ...  for online multiple-output regression.  ... 
arXiv:1412.5732v2 fatcat:iqwwpn6ezjfrloxq4qkvuzxgxm

Full Quantification of Left Ventricle via Deep Multitask Learning Network Respecting Intra- and Inter-Task Relatedness [article]

Wufeng Xue, Andrea Lum, Ashley Mercado, Mark Landis, James Warrington, Shuo Li
2017 arXiv   pre-print
structure and the complexity of temporal dynamics.  ...  During the final estimation, both intra- and inter-task relatedness are modeled to enforce improvement of generalization: 1) respecting intra-task relatedness, group lasso is applied to each of the regression  ...  of z_t, and z̄_t denotes the average value of z_t across its multiple outputs.  ... 
arXiv:1706.01912v2 fatcat:cyqgaojhwrf5pgiyyoey2kwt24
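The group lasso mentioned in the snippet above is the standard penalty that shrinks or removes whole groups of regression coefficients together. This sketch shows its proximal operator in isolation — an illustrative fragment, not the paper's full estimation model:

```python
import numpy as np

def group_soft_threshold(w, lam):
    """Proximal operator of the group-lasso penalty lam * ||w||_2:
    shrinks the whole coefficient group toward zero, and zeroes it
    out entirely when the group norm falls below lam."""
    norm = np.linalg.norm(w)
    if norm <= lam:
        return np.zeros_like(w)
    return (1.0 - lam / norm) * w

# A strong group survives (uniformly shrunk); a weak group is
# eliminated as a whole, which is how group lasso selects tasks.
strong = group_soft_threshold(np.array([3.0, 4.0]), lam=1.0)  # norm 5
weak = group_soft_threshold(np.array([0.3, 0.4]), lam=1.0)    # norm 0.5
print(strong, weak)
```

Applying this operator per regression task is what lets the model keep or drop each task's coefficient block jointly rather than coefficient by coefficient.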

A Deep Learning Model for Structured Outputs with High-order Interaction [article]

Hongyu Guo, Xiaodan Zhu, Martin Renqiang Min
2015 arXiv   pre-print
However, typical classification and regression models often lack the ability of simultaneously exploring high-order interaction within input and that within output.  ...  Our current work focuses on structured output regression, which is a less explored area, although the model can be extended to handle structured label classification.  ...  and multiple output nodes; (4) the so-called multivariate multiple regression (denoted as MultivariateReg), which takes into account the correlations among the multiple targets using a matrix computation  ... 
arXiv:1504.08022v1 fatcat:b4lbibfgynej7jta3okbombrty

Machine learning architectures versus diagnostic tasks

Georgios Leontidis, Aiden Durrant
2021 Zenodo  
Machine learning architectures versus diagnostic tasks  ...  vector to two fully-connected (Multi-Layer Perceptron) output layers: a 3-neuron coordinate-regression head and a 9-neuron softmax classification head.  ...  Given the two tasks, we use a multi-task objective function to simultaneously train the network for both tasks. [Combined regression/cross-entropy objective; equation garbled in extraction.]  ... 
doi:10.5281/zenodo.5212518 fatcat:azfcowyeqfeyrobjzxekmc4vb4
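The multi-task objective described in the snippet above combines a coordinate-regression loss and a cross-entropy classification loss in one weighted sum. A minimal sketch of such a joint objective — the weighting `alpha`, the head outputs, and the exact loss terms are illustrative assumptions, not the slide's formula:

```python
import numpy as np

def multi_task_loss(coord_pred, coord_true, class_prob, class_true, alpha=0.5):
    """Weighted sum of a mean-squared coordinate-regression loss and a
    cross-entropy classification loss; alpha balances the two tasks."""
    mse = np.mean((coord_pred - coord_true) ** 2)
    eps = 1e-12  # guard against log(0)
    ce = -np.mean(class_true * np.log(class_prob + eps))
    return alpha * mse + (1.0 - alpha) * ce

# Made-up outputs for one sample: 3 predicted coordinates and a
# softmax distribution over 3 classes with a one-hot ground truth.
coords_hat = np.array([[0.1, 0.2, 0.3]])
coords = np.array([[0.0, 0.2, 0.4]])
probs = np.array([[0.1, 0.8, 0.1]])
onehot = np.array([[0.0, 1.0, 0.0]])
loss = multi_task_loss(coords_hat, coords, probs, onehot)
print(round(loss, 4))
```

Summing the two terms lets one backward pass update the shared trunk for both heads, which is the point of training the tasks simultaneously.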

Exploring Hidden Semantics in Neural Networks with Symbolic Regression [article]

Yuanzhen Luo, Qiang Lu, Xilei Hu, Jake Luo, Zhiguang Wang
2022 arXiv   pre-print
To address this need, we propose a novel symbolic regression method for neural networks (called SRNet) to discover the mathematical expressions of a NN.  ...  It then leverages a multi-chromosome NNCGP to represent hidden semantics of all layers of the NN.  ...  Training 5 MLPs is similar to the regression task except that the function "softmax" replaces their output functions.  ... 
arXiv:2204.10529v1 fatcat:j3bvss3vr5gz5gm2qxjzxr2trq

The TCGA Meta-Dataset Clinical Benchmark [article]

Mandana Samiei, Tobias Würfl, Tristan Deleu, Martin Weiss, Francis Dutil, Thomas Fevens, Geneviève Boucher, Sebastien Lemieux, Joseph Paul Cohen
2019 arXiv   pre-print
Each task represents an independent dataset. We use regression and neural network baselines for all the tasks using only 150 samples and compare their performance.  ...  Also, learning to predict multiple clinical variables using gene-expression data is an important task due to the variety of phenotypes in clinical problems and lack of samples for some of the rare variables  ...  Despite having a small number of tasks, developing this benchmark opens the door to the use of multiple datasets for few-shot learning by leveraging the use of samples from different datasets to construct  ... 
arXiv:1910.08636v1 fatcat:snd3skcbabe5tjhd7mx4klbiq4

Simultaneous Parameter Learning and Bi-Clustering for Multi-Response Models [article]

Ming Yu and Karthikeyan Natesan Ramamurthy and Addie Thompson and Aurélie Lozano
2018 arXiv   pre-print
We consider multi-response and multitask regression models, where the parameter matrix to be estimated is expected to have an unknown grouping structure.  ...  This additional structure can not only be leveraged for more accurate parameter estimation, but it also provides valuable information on the underlying data mechanisms (e.g. relationships among genotypes  ...  The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.  ... 
arXiv:1804.10961v1 fatcat:mgvuani3kzhhzlzaddhn7iv6s4

Detection and localisation of multiple in-core perturbations with neutron noise-based self-supervised domain adaptation

A. Durrant
2021 Zenodo  
• Derivation of core perturbation characteristics to classify and locate its origin. • Yet this is challenging due to the limited number of neutron detectors in Western-type reactors. • We ask, can we  ...  • A 3-neuron coordinate-regression head and a 9-neuron softmax classification head. • Given the two tasks, we use a multi-task objective function to simultaneously train the network for both tasks.  ...  of collective knowledge. 3D Densely Connected CNN (2) • To classify and localise, the network outputs a representational feature vector to two fully-connected (Multi-Layer Perceptron) output layers.  ... 
doi:10.5281/zenodo.4438588 fatcat:deua6jo5dfccfkqbmrrid6fhj4

Correlated Regression Feature Learning for Automated Right Ventricle Segmentation

Jun Chen, Heye Zhang, Weiwei Zhang, Xiuquan Du, Yanping Zhang, Shuo Li
2018 IEEE Journal of Translational Engineering in Health and Medicine  
points' coordinates of RV directly and simultaneously.  ...  Therefore, RegressionCNN can achieve optimal convolutional feature learning for accurately catching the regression features that are more correlated to the RV regression segmentation task in the training process  ...  ACKNOWLEDGMENT We thank editors and reviewers for careful reading of the manuscript.  ... 
doi:10.1109/jtehm.2018.2804947 pmid:30057864 pmcid:PMC6061487 fatcat:vpr3r6plczgp3ahuba26xk46rq

Deep Structured Learning for Facial Action Unit Intensity Estimation

Robert Walecki, Ognjen Rudovic, Vladimir Pavlovic, Bjoern Schuller, Maja Pantic
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units -AUs) that are structurally dependent.  ...  Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF.  ...  By contrast, we capture the output structure by means of a CRF graph that explicitly accounts for ordinal and non-linear relations between multiple outputs.  ... 
doi:10.1109/cvpr.2017.605 dblp:conf/cvpr/WaleckiRPSP17 fatcat:e66unqqze5dghkaj76tbdifohe

Forecasting Spatially-Distributed Urban Traffic Volumes via Multi-Target LSTM-Based Neural Network Regressor

Alessandro Crivellari, Euro Beinat
2020 Mathematics  
Specifically, we propose a multi-target deep learning regressor for simultaneous predictions of traffic volumes, in multiple entry and exit points among city neighborhoods.  ...  By leveraging a single training process for all location points, and an instant one-step volume inference for every location at each time update, our sequential modeling approach is able to grasp rapid  ...  Acknowledgments: The authors would like to thank the Austrian Science Fund (FWF) for the Open Access Funding. Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/math8122233 fatcat:bfucoeoftbg5faq62ofam5qmkm

Novel applications of multitask learning and multiple output regression to multiple genetic trait prediction

Dan He, David Kuhn, Laxmi Parida
2016 Bioinformatics  
We then adapted multitask learning algorithms and multiple output regression algorithms to solve the multitrait prediction problem.  ...  Genetic trait prediction is usually represented as linear regression models. In many cases, for the same set of samples and markers, multiple traits are observed.  ...  Multiple output regression For the multiple output regression methods, as there is only one dataset, we conducted 10-fold cross validation.  ... 
doi:10.1093/bioinformatics/btw249 pmid:27307640 pmcid:PMC4908333 fatcat:4ryqumctlbe6nnwvckafn7yxde
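The 10-fold cross-validation protocol for multiple output regression described in the snippet above can be sketched as follows. The synthetic marker/trait data and the plain least-squares fit are assumptions for illustration, not the paper's adapted algorithms:

```python
import numpy as np

# Synthetic genotype-like inputs (markers) and several traits observed
# for the same samples, evaluated with 10-fold cross-validation.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))                                      # markers
Y = X @ rng.normal(size=(8, 4)) + 0.1 * rng.normal(size=(200, 4))  # 4 traits

folds = np.array_split(rng.permutation(200), 10)
mse_per_fold = []
for k in range(10):
    test_idx = folds[k]
    train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
    # Fit all traits jointly on the training folds, score on the held-out fold.
    W, *_ = np.linalg.lstsq(X[train_idx], Y[train_idx], rcond=None)
    resid = X[test_idx] @ W - Y[test_idx]
    mse_per_fold.append(np.mean(resid ** 2))

print(len(mse_per_fold), float(np.mean(mse_per_fold)))
```

Averaging the per-fold errors gives one score per model, which is how single-dataset comparisons like the one described are typically reported.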

Semi-supervised Multi-task Learning for Semantics and Depth [article]

Yufeng Wang, Yi-Hsuan Tsai, Wei-Chih Hung, Wenrui Ding, Shuo Liu, Ming-Hsuan Yang
2021 arXiv   pre-print
To this end, we design an adversarial learning scheme in our semi-supervised training by leveraging unlabeled data to optimize all the task branches simultaneously and accomplish all tasks across datasets  ...  Typical MTL methods are jointly trained with the complete multitude of ground-truths for all tasks simultaneously.  ...  SSL methods leverage the vast amount of unlabeled data for classification and regression problems.  ... 
arXiv:2110.07197v1 fatcat:gvgaylptunfvlhpxjwslpvonvu