800,269 Hits in 5.2 sec

Approximate testing with error relative to input size

Marcos Kiwi, Frédéric Magniez, Miklos Santha
2003 Journal of Computer and System Sciences (Print)  
This work considers approximation errors whose magnitude grows with the size of the input to the program.  ...  We formalize the notion and initiate the investigation of approximate testing for arbitrary forms of the error term.  ...  ACKNOWLEDGMENT We would like to thank Stéphane Boucheron for useful discussions.  ... 
doi:10.1016/s0022-0000(03)00004-7 fatcat:dexcmsh2ofayvgabrv3cr5hbom

Automated designation of tie-points for image-to-image coregistration

R. E. Kennedy, W. B. Cohen
2003 International Journal of Remote Sensing  
For more than 1600 tests, median time needed to identify each ITP was approximately 8 s on a common image-processing computer system.  ...  We tested the software under several confounding conditions, representing image distortions, inaccurate user input, and changes between images.  ...  The authors would like to thank Karin Fassnacht and Andrew Hudak for helpful comments on the manuscript, Andy Hansen for collaboration on one of the funding projects, Greg Asner for images from Brazil,  ... 
doi:10.1080/0143116021000024249 fatcat:qgbwcc3qffhm7mndfugigiic2i

Is Nonparametric Learning Practical in Very High Dimensional Spaces?

Gregory Z. Grudic, Peter D. Lawrence
1997 International Joint Conference on Artificial Intelligence  
functions despite the presence of noise in both inputs and outputs.  ...  We propose a simple nonparametric learning algorithm to support our conclusion.  ...  With the addition of noise, the relative error approaches the theoretical limit of 0.1 (due to the 3:1 signal-to-noise ratio).  ... 
dblp:conf/ijcai/GrudicL97 fatcat:cucqfrc7jnazdmd34bdzy4ve5a

Variational training of neural network approximations of solution maps for physical models [article]

Yingzhou Li, Jianfeng Lu, Anqi Mao
2019 arXiv   pre-print
map adapted to the input data distribution.  ...  A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models.  ...  Table 4 presents the relative errors for different N_train and N_test with K = 5. The test relative error decreases as N_train increases.  ... 
arXiv:1905.02789v1 fatcat:st7m6wzlpndfnkinodxs75mcje

Prediction of hydrogen concentration in containment during severe accidents using fuzzy neural network

Dong Yeong Kim, Ju Hyun Kim, Kwae Hwan Yoo, Man Gyun Na
2015 Nuclear Engineering and Technology  
The FNN model is expected to assist operators to prevent a hydrogen explosion in severe accident situations and manage the accident properly, because it is able to predict the changes in the trend of  ...  A method using a fuzzy neural network (FNN) was applied to predict the hydrogen concentration in the containment.  ...  In cases in which the break size of the LOCAs was assumed to be predicted with a random error of < 5%, the RMS errors for the test data were approximately 1.78%, 4.96%, and 19.01% for the hot-leg and cold-leg  ... 
doi:10.1016/j.net.2014.12.004 fatcat:dv2o525zafcobkpp46yfgjygp4

Estimating training data boundaries in surrogate-based modeling

Luis E. Pineda, Benjamin J. Fregly, Raphael T. Haftka, Nestor V. Queipo
2010 Structural And Multidisciplinary Optimization  
The proposed approach: i) gives good approximations for the boundaries of the restricted input spaces, ii) exhibits reasonable error rates when classifying prediction sites as inside or outside  ...  Using surrogate models outside training data boundaries can be risky and subject to significant errors.  ...  Branin-Hoo test case: for limited sample sizes, the proposed approach exhibited good approximations to the training data boundaries, and reasonable balanced error rates when classifying prediction sites  ... 
doi:10.1007/s00158-010-0541-7 fatcat:mw4daqt4ufffrbilouz57oql5e

Multi-linearity Self-Testing with Relative Error [chapter]

Frédéric Magniez
2000 Lecture Notes in Computer Science  
We investigate self-testing programs with relative error by allowing error terms proportional to the function to be computed.  ...  In the self-testing literature for numerical computations, only absolute errors and sublinear (in the input size) errors were previously studied.  ...  We are also grateful to the anonymous referees for their remarks that greatly improved the presentation of the paper.  ... 
doi:10.1007/3-540-46541-3_25 fatcat:alwynu2dbze4foa2y7foodx2y4

Multi-Linearity Self-Testing with Relative Error

Frédéric Magniez
2004 Theory of Computing Systems  
We investigate self-testing programs with relative error by allowing error terms proportional to the function to be computed.  ...  In the self-testing literature for numerical computations, only absolute errors and sublinear (in the input size) errors were previously studied.  ...  We are also grateful to the anonymous referees for their remarks that greatly improved the presentation of the paper.  ... 
doi:10.1007/s00224-004-1125-y fatcat:a6gxocslavaj5inylr7b7midr4
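The relative-error notion in the two Magniez entries above can be illustrated with a toy self-tester (a hedged sketch only, not the paper's actual tester, which handles multilinear functions and comes with formal soundness guarantees): a program believed to compute a linear function is probed on random inputs, and the linearity identity is checked up to an error proportional to the value being computed, rather than up to a fixed absolute bound.

```python
import random

def linearity_self_test(P, sample, trials=1000, rel_tol=1e-6):
    """Probabilistic self-test of a program P that is supposed to compute a
    linear function: check P(x + y) ~= P(x) + P(y) on random inputs, allowing
    an error proportional to the computed value (relative error) instead of a
    fixed absolute bound.  Returns the observed failure rate."""
    failures = 0
    for _ in range(trials):
        x, y = sample(), sample()
        lhs = P(x + y)
        rhs = P(x) + P(y)
        # tolerance scales with the magnitude of the value being computed
        tol = rel_tol * max(abs(lhs), abs(rhs), 1.0)
        if abs(lhs - rhs) > tol:
            failures += 1
    return failures / trials

# A correct linear program passes; an affine (non-linear) one fails always.
rng = random.Random(0)
good = lambda x: 3.0 * x
bad = lambda x: 3.0 * x + 0.5          # affine offset breaks linearity
print(linearity_self_test(good, lambda: rng.uniform(-1, 1)))  # 0.0
print(linearity_self_test(bad, lambda: rng.uniform(-1, 1)))   # 1.0
```

Allowing the tolerance to scale with the output is exactly what makes the test meaningful for floating-point programs whose rounding error grows with the magnitude of the result.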

Finite-size effects and optimal test set size in linear perceptrons

D Barber, D Saad, P Sollich
1995 Journal of Physics A: Mathematical and General  
test set size.  ...  Where exact results were not tractable, a good approximation is given to the variance.  ...  That is, given a data set of fixed size, in order to calculate the optimal scheme to satisfy the above dual requirement, we minimize the error measure with respect to the test set size m to find the optimal test set size, m*.  ... 
doi:10.1088/0305-4470/28/5/018 fatcat:onryi2vb2jbrviqa4o4y27lwja

On the influence of over-parameterization in manifold based surrogates and deep neural operators [article]

Katiana Kontolati, Somdatta Goswami, Michael D. Shields, George Em Karniadakis
2022 arXiv   pre-print
We demonstrate that the performance of m-PCE and DeepONet is comparable for cases of relatively smooth input-output mappings.  ...  Furthermore, we compare the performance of the above models with another operator learning model, the Fourier Neural Operator, and show that its over-parameterization also leads to better generalization  ...  Similar to Case I, the error continuously decreases with increasing training set size for all models.  ... 
arXiv:2203.05071v1 fatcat:dpbitrk5kvdh5ilgz6funy25zy

Superfast Approximate Linear Least Squares Solution of a Highly Overdetermined Linear System of Equations [article]

Qi Luan, Victor Y. Pan
2021 arXiv   pre-print
Our extensive tests are in good accordance with this result.  ...  We propose its simple deterministic variation which computes such a solution for a random input whp and therefore computes it deterministically for a large input class.  ...  Figure captions: relative residual norm in tests with ill-conditioned random inputs; Figure 3: relative residual norm in tests with Red Wine Quality data; Figure 4: relative residual norm in tests with California  ... 
arXiv:1906.03784v9 fatcat:dooiydw3lrgrtay7jct52sowru
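The general sketch-and-solve idea behind approximate least squares for highly overdetermined systems can be shown in a few lines (a generic illustration using a Gaussian sketch; the paper's "superfast" variant uses cheaper structured sketches and proves whp guarantees for random inputs):

```python
import numpy as np

def sketch_and_solve(A, b, sketch_rows, seed=0):
    """Sketch-and-solve least squares: compress the tall m x n system with a
    random sketch S, then solve the small problem min ||S A x - S b||."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

# Highly overdetermined consistent system: the sketched solution recovers
# the true solution while only solving a 60 x 8 least-squares problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 8))
x_true = rng.standard_normal(8)
b = A @ x_true
x_hat = sketch_and_solve(A, b, sketch_rows=60)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # near zero
```

For inconsistent systems the sketched solution is only approximately optimal, with accuracy improving as `sketch_rows` grows relative to `n`.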

Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data [article]

Joel A. Tropp, Alp Yurtsever, Madeleine Udell, Volkan Cevher
2017 arXiv   pre-print
Theoretical analysis establishes that the proposed method can achieve any prescribed relative error in the Schatten 1-norm and that it exploits the spectral decay of the input matrix.  ...  The approach combines the Nyström approximation with a novel mechanism for rank truncation.  ...  Choose a psd input matrix A ∈ F^{n×n} and a target rank r. Then fix a sketch size parameter k with r ≤ k ≤ n.  ... 
arXiv:1706.05736v1 fatcat:oeye7rs7hzfexk3b3fkaldmgva
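The recipe quoted above (psd input A, target rank r, sketch size k with r ≤ k ≤ n) corresponds to a one-pass Nyström approximation followed by rank truncation. A minimal numpy sketch of that idea, omitting the stabilizing shift the paper adds for numerical robustness:

```python
import numpy as np

def nystrom_fixed_rank(A, r, k, seed=0):
    """Sketch-based fixed-rank psd approximation: form a one-pass Nystrom
    approximation from a random sketch, then truncate it to rank r."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k))        # random test matrix
    Y = A @ Omega                              # the sketch: one pass over A
    core = np.linalg.pinv(Omega.T @ Y)         # Nystrom core (Omega^T Y)^+
    A_nys = Y @ core @ Y.T                     # Nystrom approximation
    A_nys = (A_nys + A_nys.T) / 2              # symmetrize against roundoff
    vals, vecs = np.linalg.eigh(A_nys)
    top = np.argsort(vals)[::-1][:r]           # keep the top-r eigenpairs
    return (vecs[:, top] * vals[top]) @ vecs[:, top].T

# Example: psd matrix with fast spectral decay, where the method shines.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((60, 60)))
A = (Q * 2.0 ** -np.arange(60)) @ Q.T          # eigenvalues 1, 1/2, 1/4, ...
A_hat = nystrom_fixed_rank(A, r=5, k=20)
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(rel_err)                                 # small, thanks to the decay
```

The single pass over A (forming Y) is what makes the method suitable for streaming data: A itself never needs to be stored.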

Simulator-free Solution of High-Dimensional Stochastic Elliptic Partial Differential Equations using Deep Neural Networks [article]

Sharmila Karumuri, Rohit Tripathy, Ilias Bilionis, Jitesh Panchal
2019 arXiv   pre-print
This poses an insurmountable challenge to response surface modeling since the number of forward model evaluations needed to construct an accurate surrogate grows exponentially with the dimension of the  ...  We demonstrate our solver-free approach through various examples where the elliptic SPDE is subjected to different types of high-dimensional input uncertainties.  ...  Acknowledgements We would like to acknowledge support from the NSF awards #1737591 and #1728165. We would also like to acknowledge support from the Defense Ad-  ... 
arXiv:1902.05200v3 fatcat:2wdjauxkufbhfjktvzn5t6tejm

DeepSampling: Selectivity Estimation with Predicted Error and Response Time [article]

Tin Vu, Ahmed Eldawy
2020 arXiv   pre-print
The model can also be reversed to estimate the sample size that would produce a desired accuracy.  ...  This paper proposes DeepSampling, a deep-learning-based model that predicts the accuracy of a sample-based AQP algorithm, especially selectivity estimation, given the sample size, the input distribution  ...  The errors are relatively higher than in the accuracy prediction problem in Section 4.2.  ... 
arXiv:2008.06831v1 fatcat:ms2dpb35fne4vfemvmj5kxn2oq
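The sample-based selectivity estimation that DeepSampling models can be stated in a few lines (the textbook AQP baseline, not the paper's learned accuracy predictor):

```python
import random

def selectivity_sample_estimate(records, predicate, sample_size, rng):
    """Estimate the fraction of records satisfying a predicate from a
    uniform random sample; accuracy is controlled by the sample size."""
    sample = rng.sample(records, sample_size)
    return sum(1 for rec in sample if predicate(rec)) / sample_size

rng = random.Random(42)
records = list(range(100_000))
pred = lambda rec: rec < 25_000            # true selectivity: 0.25
for size in (100, 1_000, 10_000):
    est = selectivity_sample_estimate(records, pred, size, rng)
    print(size, est)                       # estimates concentrate near 0.25
```

The standard-error of this estimator shrinks like 1/sqrt(sample_size), which is precisely the accuracy-versus-cost trade-off a model like DeepSampling learns to predict.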

Autoregressive time series prediction by means of fuzzy inference systems using nonparametric residual variance estimation

Federico Montesino Pouzols, Amaury Lendasse, Angel Barriga Barros
2010 Fuzzy Sets and Systems (Print)  
Delta test based residual variance estimations are used in order to select the best subset of inputs to the fuzzy inference systems as well as the number of linguistic labels for the inputs.  ...  Concrete criteria and procedures within the proposed methodology framework are applied to a number of time series prediction problems.  ...  Training and test errors are expressed relative to the training and test errors, respectively, of fuzzy models built using the W&M and Levenberg-Marquardt methods within the proposed methodology.  ... 
doi:10.1016/j.fss.2009.10.018 fatcat:wsvmrsf3k5hfdcsv3qoqxcgdua
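The Delta test mentioned above has a standard nearest-neighbour form; a minimal sketch of that estimator (independent of the paper's fuzzy-modelling pipeline, which uses it for input and label selection):

```python
import numpy as np

def delta_test(X, y):
    """Delta test: a nearest-neighbour nonparametric estimate of the residual
    (noise) variance of y given inputs X.  For each point, take its nearest
    neighbour in input space and average half the squared output difference."""
    n = len(X)
    total = 0.0
    for i in range(n):
        d2 = np.sum((X - X[i]) ** 2, axis=1)
        d2[i] = np.inf                      # exclude the point itself
        j = np.argmin(d2)                   # nearest neighbour in input space
        total += (y[j] - y[i]) ** 2
    return total / (2 * n)                  # ~ Var(noise) for smooth targets

# Smooth target plus Gaussian noise of variance 0.01: the estimate should
# recover the noise variance without fitting any model.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(2000, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0.0, 0.1, size=2000)
print(delta_test(X, y))                     # close to 0.01
```

Because the estimate needs no trained model, it can cheaply rank candidate input subsets: the subset minimizing the Delta test is the one that best explains the output.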
Showing results 1–15 of 800,269