13,256 Hits in 7.0 sec

Estimating Training Data Influence by Tracing Gradient Descent [article]

Garima Pruthi, Frederick Liu, Mukund Sundararajan, Satyen Kale
2020 arXiv   pre-print
We introduce a method called TracIn that computes the influence of a training example on a prediction made by the model.  ...  It applies to any machine learning model trained using stochastic gradient descent or a variant of it, agnostic of architecture, domain and task.  ...  Practical gradient descent algorithms almost always operate with a group of training examples, i.e., a minibatch.  ... 
arXiv:2002.08484v3 fatcat:ctntn67xpzbxxolhmspuoxctai
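As a rough illustration of the idea in the abstract above, the sketch below approximates the influence of a training example on a test example by summing, over saved checkpoints, the learning-rate-weighted dot product of their loss gradients. The `loss_grad` function, the checkpoint list, and the learning rates are illustrative placeholders, not the authors' actual implementation.

```python
import numpy as np

def tracin_influence(checkpoints, learning_rates, loss_grad, z_train, z_test):
    """First-order sketch: influence of z_train on z_test as a sum over
    checkpoints of learning-rate-weighted gradient dot products."""
    influence = 0.0
    for params, lr in zip(checkpoints, learning_rates):
        g_train = loss_grad(params, z_train)   # loss gradient at the training example
        g_test = loss_grad(params, z_test)     # loss gradient at the test example
        influence += lr * float(np.dot(g_train, g_test))
    return influence
```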

FOAL: Fast Online Adaptive Learning for Cardiac Motion Estimation [article]

Hanchao Yu, Shanhui Sun, Haichao Yu, Xiao Chen, Honghui Shi, Thomas Huang, Terrence Chen
2020 arXiv   pre-print
In this context, we propose a novel fast online adaptive learning (FOAL) framework: an online gradient descent based optimizer that is optimized by a meta-learner.  ...  The results showed the superior performance of FOAL in accuracy compared to the offline-trained tracking method. On average, FOAL took only 0.4 seconds per video for online optimization.  ...  The online adaptor is a gradient descent based optimizer which itself is also optimized by a meta-learner.  ... 
arXiv:2003.04492v2 fatcat:25mo63sj7nb2df53j4rkllh5ey
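A highly simplified sketch of the online-adaptation loop described in the entry above, assuming a meta-learned step size and a differentiable loss on the incoming frames; `motion_loss_grad` and `meta_lr` are hypothetical stand-ins, not the paper's actual components.

```python
import numpy as np

def online_adapt(params, frames, motion_loss_grad, meta_lr, n_steps=3):
    """Few-step online gradient descent on test-time data, with the step
    size supplied by a separately trained meta-learner."""
    adapted = params.copy()
    for _ in range(n_steps):
        grad = motion_loss_grad(adapted, frames)  # gradient on the current video
        adapted -= meta_lr * grad                 # meta-learned step size
    return adapted
```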

Artificial Intelligence and Its Applications 2014

Yudong Zhang, Saeed Balochian, Praveen Agarwal, Vishal Bhatnagar, Orwa Jaber Housheya
2016 Mathematical Problems in Engineering  
and the gradient descent with momentum algorithm.  ...  conjugate gradient with Powell-Beale restarts, conjugate gradient with Fletcher-Reeves updates, conjugate gradient with Polak-Ribière updates, one-step secant, gradient descent, gradient descent with momentum, and gradient descent with momentum and adaptive learning rate  ... 
doi:10.1155/2016/3871575 fatcat:irj62qjsdzfu7h4fdslkgy5hny
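Since several of the training functions listed above are variants of gradient descent with momentum, here is a minimal generic sketch of that update; the quadratic objective in the usage line is purely illustrative.

```python
import numpy as np

def gd_momentum(grad_fn, w0, lr=0.1, beta=0.9, n_iter=100):
    """Plain gradient descent with momentum: v accumulates past gradients."""
    w = w0.astype(float)
    v = np.zeros_like(w)
    for _ in range(n_iter):
        v = beta * v + grad_fn(w)   # momentum accumulation
        w = w - lr * v              # parameter update
    return w

# Illustrative use: minimize f(w) = ||w||^2, whose gradient is 2w.
w_star = gd_momentum(lambda w: 2 * w, np.array([3.0, -2.0]))
```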

A Generic Framework for Tracking Using Particle Filter With Dynamic Shape Prior

Yogesh Rathi, Namrata Vaswani, Allen Tannenbaum
2007 IEEE Transactions on Image Processing  
Tracking deforming objects involves estimating the global motion of the object and its local deformations as functions of time.  ...  The PF also models image statistics such as mean and variance of the given data which can be useful in obtaining proper separation of object and background.  ...  Dambreville for fruitful discussions on various topics involving tracking.  ... 
doi:10.1109/tip.2007.894244 pmid:17491466 pmcid:PMC3654013 fatcat:2gctfjduvvgi3ecq43ofujnqum
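A bare-bones sketch of the predict/weight/resample cycle that particle-filter trackers like the one above build on; the motion and likelihood models here are generic placeholders, not the paper's shape-prior formulation.

```python
import numpy as np

def particle_filter_step(particles, weights, predict, likelihood, observation, rng):
    """One generic PF iteration: propagate particles, reweight by the
    observation likelihood, then resample proportionally to the weights."""
    particles = np.array([predict(p, rng) for p in particles])            # predict
    weights = weights * np.array([likelihood(observation, p) for p in particles])
    weights = weights / weights.sum()                                      # normalize
    idx = rng.choice(len(particles), size=len(particles), p=weights)       # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```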

Parameter Prediction Using Machine Learning in Robot-Assisted Finishing Process

Bobby K Pappachan, Tegoeh Tjahjowidodo
2020 International Journal of Mechanical Engineering and Robotics Research  
The quality and dimensions of a finished product are often dictated by the process parameters set initially.  ...  This paper presents a parameter prediction method tested successfully on data acquired from a robot-assisted deburring process.  ...  Machine learning model: The sensor data collected were used to train two different machine learning models, based on gradient descent optimisation and a neural network algorithm respectively.  ... 
doi:10.18178/ijmerr.9.3.435-440 fatcat:zjn2kikdiraipnvcjajotprlcy

Explaining Neural Matrix Factorization with Gradient Rollback [article]

Carolin Lawrence, Timo Sztyler, Mathias Niepert
2020 arXiv   pre-print
We propose gradient rollback, a general approach for influence estimation, applicable to neural models where each parameter update step during gradient descent touches a smaller number of parameters, even  ...  This establishes that gradient rollback is robustly estimating example influence.  ...  In contrast, GR tracks the changes made to the parameters during training and uses the aggregated contributions to estimate influence.  ... 
arXiv:2010.05516v4 fatcat:7evbju2yxzd77bxwbod2zcb4ty
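The last snippet above describes tracking the changes each training example makes to the parameters and aggregating them to estimate influence. The sketch below records each example's cumulative parameter delta during SGD; `loss_grad` is an illustrative placeholder rather than the authors' matrix-factorization setup.

```python
import numpy as np
from collections import defaultdict

def train_with_rollback_log(examples, w, loss_grad, lr=0.01, epochs=1):
    """SGD that also records, per training example, the total change it made
    to the parameters; the log can later be aggregated to estimate influence."""
    contributions = defaultdict(lambda: np.zeros_like(w))
    for _ in range(epochs):
        for i, z in enumerate(examples):
            delta = -lr * loss_grad(w, z)   # this example's update
            w = w + delta
            contributions[i] += delta       # aggregate its contribution
    return w, dict(contributions)
```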

Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks

Yuanfeng Li, Jiangang Deng, Qun Wu, Ying Wang
2021 International Journal of Interactive Multimedia and Artificial Intelligence  
This research aims to develop a deep gradient convolutional neural network (DGCNN) for classifying affect by using eye-tracking signals.  ...  A convolutional neural networks-based training structure was subsequently applied; the experimental dataset was acquired by an eye-tracking device by assigning four affective stimuli (nervous, calm, happy  ...  batch gradient descent (BGD), stochastic gradient descent (SGD), and mini-batch gradient descent (MGD); the evolution of gradient descent is mainly several adaptive gradient descent algorithms such as AdaGrad, RMSprop  ... 
doi:10.9781/ijimai.2021.06.002 fatcat:nl4jey3kdzhtphrxo55zzhwsvi
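As a quick reference for the gradient descent variants named in the snippet above, here is a minimal mini-batch SGD loop with an optional AdaGrad-style adaptive step; the interface (`grad_fn`, arrays `X` and `y`) is generic and not tied to the paper's model.

```python
import numpy as np

def minibatch_sgd(grad_fn, X, y, w0, lr=0.1, batch=32, epochs=5, adagrad=False):
    """Mini-batch gradient descent; if adagrad=True, scale each step by the
    root of the accumulated squared gradients (AdaGrad-style)."""
    rng = np.random.default_rng(0)
    w = w0.astype(float)
    acc = np.zeros_like(w)
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for start in range(0, len(X), batch):
            b = idx[start:start + batch]
            g = grad_fn(w, X[b], y[b])
            if adagrad:
                acc += g ** 2
                w -= lr * g / (np.sqrt(acc) + 1e-8)
            else:
                w -= lr * g
    return w
```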

Interpretable Data-Based Explanations for Fairness Debugging [article]

Romila Pradhan, Jiongli Zhu, Boris Glavic, Babak Salimi
2021 arXiv   pre-print
Specifically, we introduce the concept of causal responsibility that quantifies the extent to which intervening on training data by removing or updating subsets of it can resolve the bias.  ...  In this work, we introduce Gopher, a system that produces compact, interpretable, and causal explanations for bias or unexpected model behavior by identifying coherent subsets of the training data that  ...  While our single-step gradient descent approach can be used to estimate subset influence, this may not be a good idea for learning algorithms that use more efficient techniques than gradient descent.  ... 
arXiv:2112.09745v1 fatcat:2tfjxr7kg5f33eiovymhbbm3sq
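A rough sketch of one way to read the "single-step gradient descent" idea in the last snippet: approximate the effect of removing a subset of the training data by taking one gradient step that undoes that subset's pull on the loss, then re-evaluating a bias/fairness metric. `loss_grad` and `fairness_metric` are hypothetical stand-ins, and this is an interpretation of the snippet, not the system's actual procedure.

```python
import numpy as np

def subset_influence_one_step(w, subset, loss_grad, fairness_metric, lr=0.01):
    """First-order estimate of how removing `subset` changes a fairness metric:
    take one ascent step against the subset's mean loss gradient and compare."""
    g = np.mean([loss_grad(w, z) for z in subset], axis=0)
    w_removed = w + lr * g   # single step undoing the subset's contribution
    return fairness_metric(w_removed) - fairness_metric(w)
```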

Dense Object Reconstruction from RGBD Images with Embedded Deep Shape Representations [article]

Lan Hu, Yuchen Cao, Peng Wu, Laurent Kneip
2018 arXiv   pre-print
We demonstrate a general ability to improve mapping accuracy with respect to each modality alone, and present a successful application to real data.  ...  The traditional approach is given by a least-squares objective, which minimizes many local photometric or geometric residuals over explicitly parametrized structure and camera parameters.  ...  Experiments We evaluate our method qualitatively and quantitatively on both synthetic data and real data. We start by tuning the step size of gradient descent in a dedicated offline experiment.  ... 
arXiv:1810.04891v1 fatcat:4hfonznxw5ednngxo57fwpc6ui
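The last snippet above mentions tuning the gradient-descent step size in a dedicated offline experiment; the sketch below shows a plain grid search over candidate step sizes against a validation objective, with `run_gd` and `val_loss` as illustrative placeholders.

```python
def tune_step_size(run_gd, val_loss, candidates=(1e-3, 1e-2, 1e-1)):
    """Pick the step size whose gradient-descent run achieves the lowest
    validation loss (a simple offline sweep)."""
    results = {eta: val_loss(run_gd(eta)) for eta in candidates}
    best = min(results, key=results.get)
    return best, results
```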

Estimation of Navigation Mark Floating Based on Fractional-Order Gradient Descent with Momentum for RBF Neural Network

Qionglin Fang, Juan C. Jauregui-Correa
2021 Mathematical Problems in Engineering  
The navigation mark's meteorological, hydrological, and initial position data are taken as the input of the neural network. The neural network is trained and used to estimate the mark's position.  ...  It is effective at accelerating convergence speed and improving the performance of a gradient descent method.  ...  Algorithm 1 (FOGDM-RBF for AIS interpolation) — Input: navigation mark position data, meteorological and hydrological data as training samples and testing samples. Output: the estimated drifting position of  ... 
doi:10.1155/2021/6681651 fatcat:zfdi5lebtzbctpzmycqqfysjxi
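A minimal sketch of fitting an RBF network's output weights by gradient descent with momentum, as a generic stand-in for the training loop described above; the Gaussian basis, centres, and width are illustrative, and the paper's fractional-order modification of the gradient is not reproduced here.

```python
import numpy as np

def rbf_features(X, centres, width=1.0):
    """Gaussian radial basis features for inputs X given fixed centres."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_rbf_momentum(X, y, centres, lr=0.05, beta=0.9, n_iter=500):
    """Least-squares fit of the output weights via gradient descent with momentum."""
    Phi = rbf_features(X, centres)
    w = np.zeros(Phi.shape[1])
    v = np.zeros_like(w)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ w - y) / len(y)   # gradient of the mean squared error
        v = beta * v + grad
        w = w - lr * v
    return w
```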

Calibration-free TDOA self-localisation

Johannes Wendeberg, Fabian Höflinger, Christian Schindelhauer, Leonhard Reindl
2013 Journal of Location Based Services  
Using our algorithm, we estimate the trajectory of a moving model train and of a RC car with a precision in the range of few centimeters.  ...  It is solved using the iterative Gauss-Newton method [5] or by linear estimators [6] .  ...  Acknowledgment This work has partly been supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) within the Research Training Group 1103 (Embedded Microsystems).  ... 
doi:10.1080/17489725.2013.796410 fatcat:fhgh4asnxbc4vj2sh2bitrnxam
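The middle snippet above mentions solving the TDOA equations with the iterative Gauss-Newton method; below is a generic Gauss-Newton iteration for a nonlinear least-squares residual, with `residual` and `jacobian` supplied by the caller (the TDOA-specific forms are not reproduced here).

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20):
    """Generic Gauss-Newton: repeatedly solve the linearized least-squares
    problem J dx = -r and update the estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)                                   # residual vector
        J = jacobian(x)                                   # Jacobian of the residual
        dx, *_ = np.linalg.lstsq(J, -r, rcond=None)        # least-squares step
        x = x + dx
    return x
```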

Research on Data Processing Method of High Altitude Meteorological Parameters Based on Neural Network

Min Qiu, Yan Nan Mu, Xiu Ping Zhang, Dong Yang
2015 Open Automation and Control Systems Journal  
, establish the forecast model of high altitude meteorological data with the Fletcher-Reeves algorithm, and analyze the influence of the number of hidden-layer nodes on training.  ...  , which influence one another; therefore, it is very important to process and analyze these parameters.  ...  Such as the batch gradient descent training function, momentum batch gradient descent function, adaptive learning algorithm, resilient BP algorithm, Fletcher-Reeves algorithm, the conjugate gradient algorithm  ... 
doi:10.2174/1874444301507011597 fatcat:sm732bv62fc35hurf24ru2bdgy
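Since the Fletcher-Reeves conjugate gradient algorithm is named repeatedly above, here is a compact generic sketch of that update rule on a differentiable objective; the fixed step size stands in for the line search a full implementation would use.

```python
import numpy as np

def fletcher_reeves_cg(grad_fn, w0, lr=0.05, n_iter=200):
    """Nonlinear conjugate gradient with the Fletcher-Reeves coefficient
    beta_k = ||g_k||^2 / ||g_{k-1}||^2 (fixed step instead of a line search)."""
    w = np.asarray(w0, dtype=float)
    g = grad_fn(w)
    d = -g                                     # initial search direction
    for _ in range(n_iter):
        w = w + lr * d
        g_new = grad_fn(w)
        beta = (g_new @ g_new) / (g @ g + 1e-12)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d                      # new conjugate direction
        g = g_new
    return w
```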

Leaf Segmentation and Tracking Using Probabilistic Parametric Active Contours [chapter]

Jonas De Vylder, Daniel Ochoa, Wilfried Philips, Laury Chaerle, Dominique Van Der Straeten
2011 Lecture Notes in Computer Science  
This can be done by changing the weighting parameters of the data fit and regularization term. There is, however, no rule to set these parameters optimally for a given application.  ...  Active contours or snakes are widely used for segmentation and tracking.  ...  Jonas De Vylder is funded by the Institute for the Promotion of Innovation by Science and Technology in Flanders (IWT).  ... 
doi:10.1007/978-3-642-24136-9_7 fatcat:xw54myh7kbbpfhnukd6iyj2epi

Multi-resolution Tensor Learning for Large-Scale Spatial Data [article]

Stephan Zheng, Rose Yu, Yisong Yue
2018 arXiv   pre-print
MMT leverages the property that spatial data can be viewed at multiple resolutions, which are related by coarsening and fine-graining from one resolution to another.  ...  Using this property, MMT learns a tensor model by starting from a coarse resolution and iteratively increasing the model complexity.  ...  During training, we estimate the gradient distribution P(g_w(x, y)) by recording the empirical minibatch gradients and their statistics μ, σ, S over a rolling window of T training steps.  ... 
arXiv:1802.06825v2 fatcat:npigyaxzqvayrcsyyvqweoxt2i
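A small sketch of the rolling-window gradient statistics mentioned in the last snippet: keep the most recent T minibatch gradients and report their mean and standard deviation. The third statistic S from the snippet is not specified there and is omitted.

```python
from collections import deque
import numpy as np

class RollingGradientStats:
    """Track mean and standard deviation of minibatch gradients
    over the last `window` training steps."""
    def __init__(self, window=100):
        self.buffer = deque(maxlen=window)

    def update(self, grad):
        self.buffer.append(np.asarray(grad, dtype=float))

    def stats(self):
        g = np.stack(list(self.buffer))
        return g.mean(axis=0), g.std(axis=0)
```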

Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources [article]

Haibin Lin, Hang Zhang, Yifei Ma, Tong He, Zhi Zhang, Sheng Zha, Mu Li
2019 arXiv   pre-print
We therefore propose to smoothly adjust the learning rate over time to alleviate the influence of the noisy momentum estimation.  ...  With an increasing demand for training powers for deep learning algorithms and the rapid growth of computation resources in data centers, it is desirable to dynamically schedule different distributed deep  ...  for i = 1, …, N do in parallel: compute gradient ∇l_i based on data partition X_i; allreduce the gradients by ∇l ← Σ_{i=1}^{N} ∇l_i; update parameters by w ← w − (η/B) ∇l; end for. Table 2: the training throughput  ... 
arXiv:1904.12043v2 fatcat:dlxem5feyfhmlle2oyxxaovwta
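A toy single-process sketch of the per-iteration update reconstructed in the snippet above: per-partition gradient sums are added together (as an allreduce would do) and the parameters are updated with the learning rate divided by the total mini-batch size. The `loss_grad` interface is an assumption, and the paper's dynamic learning-rate adjustment is not reproduced here.

```python
import numpy as np

def distributed_sgd_step(w, partitions, loss_grad, eta):
    """One data-parallel SGD step: sum per-partition gradient totals
    (simulating allreduce) and apply w <- w - (eta / B) * sum_i grad_i."""
    grads = [loss_grad(w, X_i) for X_i in partitions]   # per-worker gradient sums
    total_grad = np.sum(grads, axis=0)                  # allreduce (sum over workers)
    B = sum(len(X_i) for X_i in partitions)             # total mini-batch size
    return w - (eta / B) * total_grad
```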
Showing results 1 — 15 out of 13,256 results