
COUPLING SUPERVISED AND UNSUPERVISED TECHNIQUES IN TRAINING FEED-FORWARD NETS

CRIS KOUTSOUGERAS, GEORGE PAPADOURAKIS
1992 International journal on artificial intelligence tools  
We argue that a better approach to the training of feed-forward nets is to use adaptive techniques that combine properties of both supervised and unsupervised learning.  ...  According to this formulation, the net is viewed as two coupled sub-nets, the first of which is trained by an unsupervised learning technique and the second by a supervised one.  ...  By introducing elements of unsupervised learning into training techniques of a supervised nature, better guidance and control are exercised over the formation of the internal representations.  ... 
doi:10.1142/s0218213092000120 fatcat:gtbn6jdbzbeb7bhuidjff4b75e
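The coupled two-stage scheme this abstract describes can be sketched in simplified form: an unsupervised first sub-net feeding a supervised second one. In the sketch below, k-means clustering with RBF-style activations stands in for the unsupervised stage and a least-squares readout for the supervised stage; the toy data, the `kmeans` helper, and all constants are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs with binary labels.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Stage 1 (unsupervised): k-means clustering forms the hidden representation.
def kmeans(X, k=4, iters=20):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

centers = kmeans(X)
# RBF-style hidden activations centred on the learned clusters.
H = np.exp(-((X[:, None] - centers) ** 2).sum(-1))

# Stage 2 (supervised): least-squares readout on the hidden activations.
A = np.c_[H, np.ones(len(H))]
W = np.linalg.lstsq(A, y, rcond=None)[0]
pred = (A @ W > 0.5).astype(int)
print("train accuracy:", (pred == y).mean())
```

Only the second stage ever sees the labels; the first stage shapes the internal representation from the input distribution alone, which is the division of labour the abstract argues for.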

Hybrid Network Learning

Gnanambigai Dinadayalan, P. Dinadayalan, K. Balamurugan
2011 International Journal of Computer Applications  
The experimental results show that the proposed approach attains good performance in terms of speed and efficiency.  ...  Associative memory has been expressed in terms of a Turing machine.  ...  These are feed-forward-network based and recurrent-network based.  ... 
doi:10.5120/2616-3347 fatcat:wo2fjxnyzzekvkwgtieqx5xxwm

Diving Deep into Deep Learning: History, Evolution, Types and Applications

2020 VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE  
While machine learning is busy with supervised and unsupervised methods, deep learning continues its motivation for replicating the human nervous system by incorporating advanced types of Neural Networks  ...  This work will serve as an introduction to the amazing field of deep learning and its potential use in dealing with today's large chunk of unstructured data, which could take decades for humans to comprehend  ...  CNN: a feed-forward ANN with fully connected multilayer perceptrons; supervised; fixed-size inputs and outputs. RNN: uses its internal memory to feed results back into itself; supervised.  ... 
doi:10.35940/ijitee.a4865.019320 fatcat:orn2asvoxfaxvlc5iv7kec4nm4

Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning

Chankyu Lee, Priyadarshini Panda, Gopalakrishnan Srinivasan, Kaushik Roy
2018 Frontiers in Neuroscience  
We train the deep SNNs in two phases wherein, first, convolutional kernels are pre-trained in a layer-wise manner with unsupervised learning followed by fine-tuning the synaptic weights with spike-based  ...  In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent-Plasticity (STDP), in order to better initialize the parameters in multi-layer  ...  , and the DoD Vannevar Bush Fellowship.  ... 
doi:10.3389/fnins.2018.00435 pmid:30123103 pmcid:PMC6085488 fatcat:5rgy4l2ycvawnnr46ru4xusg5e
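The STDP pre-training this abstract relies on adjusts a synapse by the timing difference between pre- and post-synaptic spikes: potentiation when the pre-synaptic neuron fires first, depression otherwise. A minimal pair-based STDP rule looks like the sketch below; the amplitudes and time constant are illustrative defaults, not the paper's exact kernels.

```python
import numpy as np

# Simplified pair-based STDP rule (illustrative constants, not the paper's).
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)     # pre before post -> potentiation
    return -a_minus * np.exp(dt / tau)        # post before pre -> depression

print(stdp_dw(5.0))   # potentiation (positive)
print(stdp_dw(-5.0))  # depression (negative)
```

In the paper's scheme such updates pre-train the convolutional kernels layer by layer without labels, before spike-based supervised fine-tuning of the whole network.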

Incorporating BERT into Neural Machine Translation [article]

Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, Tie-Yan Liu
2020 arXiv   pre-print
We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark  ...  We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and  ...  target sequence which exists in decoder only, and a feed-forward layer for non-linear transformation.  ... 
arXiv:2002.06823v1 fatcat:mai3zvjeqvcavfumeky5s3qo3i

Comparison of Classification Performance of Selected Algorithms Using Rural Development Investments Support Programme Data
Kırsal Kalkınma Yatırımlarının Desteklenmesi Programı Verileri Kullanılarak Seçilen Algoritmalarının Sınıflandırma Performanslarının Karşılaştırılması

Mehmet Ali ALAN, Cavit YEŞİLYURT, Saadettin AYDIN, Erol AYDIN
2014 Kafkas Universitesi Veteriner Fakultesi Dergisi  
It is not always possible to analyze large volumes of data via traditional statistical techniques. To handle such data, special techniques like data mining are needed.  ...  Data mining may meet these needs with both classification and clustering techniques.  ...  A multilayer feed-forward neural network consists of an input layer, one or more hidden layers, and an output layer. An example of a multilayer feed-forward network is shown by Han and Kamber [1] .  ... 
doi:10.9775/kvfd.2013.10154 fatcat:gysmvmdm7jfv7azwqfdqgzrloa
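The multilayer feed-forward architecture the snippet describes (input layer, one or more hidden layers, output layer) is just a chain of affine maps and nonlinearities evaluated front to back. A minimal NumPy sketch with one hidden layer follows; the layer sizes, random weights, and sigmoid activation are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# One hidden layer: input (3) -> hidden (5, sigmoid) -> output (1, sigmoid).
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1 + b1)      # hidden layer
    return sigmoid(h @ W2 + b2)   # output layer

out = forward(rng.normal(size=(4, 3)))  # batch of 4 inputs
print(out.shape)  # (4, 1)
```

Training such a network (e.g. by back-propagation, as in the textbook the entry cites) only adds the backward pass; the feed-forward structure itself is what is shown here.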

Medical Image Registration Using Deep Neural Networks: A Comprehensive Review [article]

Hamid Reza Boveiri, Raouf Khayami, Reza Javidan, Ali Reza MehdiZadeh
2020 arXiv   pre-print
and surveyed in detail in this comprehensive review.  ...  Key concepts, statistical analysis from different points of view, open challenges, novelties and main contributions, key enabling techniques, future directions and prospective trends are all discussed  ...  Also, a hierarchical training mechanism was used, where small patches of 13×13×13 voxels were fed to the first layer, and the first layer was trained accordingly.  ... 
arXiv:2002.03401v1 fatcat:u4utrifr2rg3bf6x6fgohyfmpy

A Review on Machine Learning Models in Injection Molding Machines

Senthil Kumaran Selvaraj, Aditya Raj, R. Rishikesh Mahadevan, Utkarsh Chadha, Velmurugan Paramasivam, Fuat Kara
2022 Advances in Materials Science and Engineering  
Conventional methods relying on the operator's expertise and defect detection techniques are ineffective in reducing defects.  ...  Some problems include the data division, collection, and preprocessing steps, such as considering the inputs, networks, and outputs, the algorithms used, the models utilized for testing and training, and performance  ...  [59] [60] [61] [62] . (1) Single-layer feed-forward network: in the single-layer feed-forward network there are two layers, the input layer and the output layer.  ... 
doi:10.1155/2022/1949061 fatcat:lzi6kpqmdzcdrbr4tagwggdtp4

CortexNet: a Generic Network Family for Robust Visual Temporal Representations [article]

Alfredo Canziani, Eugenio Culurciello
2017 arXiv   pre-print
We introduce two training schemes - the unsupervised MatchNet and weakly supervised TempoNet modes - where a network learns how to correctly anticipate a subsequent frame in a video clip or the identity  ...  In the past five years we have observed the rise of incredibly well performing feed-forward neural networks trained with supervision for vision-related tasks.  ...  Acknowledgements This project leveraged the power, speed, and quick implementation time of PyTorch for all computationally expensive operations.  ... 
arXiv:1706.02735v2 fatcat:z6mbci4of5bepjg7tnl4bzmlbe

Unsupervised Neural Network Approach to Frame Analysis of Conventional Buildings

Lácides R. Pinto, Alejandro R. Zambrano
2014 International Journal of Communications, Network and System Sciences  
The issue of choosing an appropriate neural network structure and providing structural parameters to that network for training purposes is addressed by using an unsupervised algorithm.  ...  This is achieved by training the network. The frame will deform so that all joints rotate by an angle.  ...  The linear feed-forward net has been found to be suitable for these training techniques.  ... 
doi:10.4236/ijcns.2014.77022 fatcat:5zaong4atbea5awfu65i43lcu4

DDRNet: Depth Map Denoising and Refinement for Consumer Depth Cameras Using Cascaded CNNs [chapter]

Shi Yan, Chenglei Wu, Lizhen Wang, Feng Xu, Liang An, Kaiwen Guo, Yebin Liu
2018 Lecture Notes in Computer Science  
The rendering equation is exploited in our network in an unsupervised manner. In detail, we impose an unsupervised loss based on the light transport to extract the high-frequency geometry.  ...  Thanks to the well decoupling of the low and high frequency information in the cascaded network, we achieve superior performance over the state-of-the-art techniques.  ...  The denoising net is supervised by temporally fused reference depth map, and the refinement CNN is trained in an unsupervised manner.  ... 
doi:10.1007/978-3-030-01249-6_10 fatcat:ru4ijo7novhj7gfkep5r326s4a

Wind Energy Predictions of Small-Scale Turbine Output Using Exponential Smoothing and Feed-Forward Neural Network

Zaccheus O. Olaofe
2015 International Journal of Energy Engineering  
This article presents the comparisons of energy production predictions of a small-scale 40 kW wind turbine using an exponential smoothing technique and multilayer feed-forward neural network.  ...  In addition, an energy model based on a multilayer feed-forward neural network was used to compute the energy generation of the turbine.  ...  The energy model based on the multilayer feed-forward neural network (FNN) was trained using the supervised Levenberg Marquardt back-propagation algorithm.  ... 
doi:10.5963/ijee0502002 fatcat:zqmcbquakrad7n2uybjwv4u6m4
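Simple exponential smoothing, one of the two techniques this entry compares, recursively blends each new observation with the previous smoothed value: s_t = α·x_t + (1−α)·s_{t−1}. A minimal sketch follows; the smoothing factor α and the sample kW readings are hypothetical, not taken from the paper.

```python
def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    s = [series[0]]                      # initialise with the first observation
    for x in series[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return s

power = [12.0, 15.0, 11.0, 14.0, 16.0]   # hypothetical turbine output readings (kW)
print(exponential_smoothing(power))
```

A larger α tracks recent observations more closely; a smaller α damps fluctuations more heavily, which is the trade-off such prediction studies tune.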

Unsupervised Learning of Depth, Camera Pose and Optical Flow from Monocular Video [article]

Dipan Mandal, Abhilash Jain, Sreenivas Subramoney
2022 arXiv   pre-print
Due to the nature of 3D scene geometry these three components are coupled. We leverage this fact to jointly train all the three components in an end-to-end manner.  ...  We propose DFPNet -- an unsupervised, joint learning system for monocular Depth, Optical Flow and egomotion (Camera Pose) estimation from monocular image sequences.  ...  -both supervised and unsupervised.  ... 
arXiv:2205.09821v1 fatcat:yepxjozka5b2hko6tab2ev4p7u

Adversarially learned iterative reconstruction for imaging inverse problems [article]

Subhadip Mukherjee, Ozan Öktem, Carola-Bibiane Schönlieb
2021 arXiv   pre-print
The improvement in reconstruction quality comes at the expense of higher training complexity, but, once trained, the reconstruction time remains the same as its supervised counterpart.  ...  Therefore, it is imperative to develop unsupervised learning protocols that are competitive with supervised approaches in performance.  ...  Fig. 1. Comparison of supervised and unsupervised training on the Shepp-Logan phantom. The PSNR (dB) and SSIM are indicated below the images.  ... 
arXiv:2103.16151v1 fatcat:yswemvnkmfhj3c6al7rcwttzqa

Unveiling phase transitions with machine learning [article]

Askery Canabarro, Felipe Fernandes Fanchini, André Luiz Malvezzi, Rodrigo Pereira, Rafael Chaves
2019 arXiv   pre-print
Here, we propose an alternative framework to identify quantum phase transitions, employing both unsupervised and supervised machine learning techniques.  ...  Typically, it relies on the identification of order parameters and the analysis of singularities in the free energy and its derivatives.  ...  It is a sophisticated supervised learning technique in the class of artificial feed-forward neural networks (or simply neural nets) used to approximate an unknown function f(X) by f*(X; θ), which maps  ... 
arXiv:1904.01486v1 fatcat:ufjfoonqv5dajelryk25e5l5hu
Showing results 1 — 15 out of 2,939 results