2,335 Hits in 5.6 sec

Comparison of Accuracy and Scalability of Gauss-Newton and Alternating Least Squares for CP Decomposition [article]

Navjot Singh, Linjian Ma, Hongru Yang, Edgar Solomonik
2020 arXiv   pre-print
In addition, we propose a regularization scheme for the Gauss-Newton method to improve convergence properties without any additional cost.  ...  In particular, we leverage a formulation that employs tensor contractions for implicit matrix-vector products within the conjugate gradient method.  ...  Navjot Singh, Linjian Ma, and Edgar Solomonik were supported by the US NSF OAC SSI program, award No. 1931258.  ... 
arXiv:1910.12331v2 fatcat:p5ql44sgw5azbe2eodhxbs4mmq
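The snippet above highlights implicit matrix-vector products, realized via tensor contractions, inside the conjugate gradient method. As an illustration only, here is a minimal matrix-free CG in Python with a toy SPD operator expressed through np.einsum; it is not the paper's CP-specific Gauss-Newton kernel.

```python
import numpy as np

def conjugate_gradient(matvec, b, x0=None, tol=1e-8, max_iter=100):
    """Matrix-free CG: only needs a function computing A @ v."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - matvec(x)
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy SPD operator expressed as a tensor contraction (einsum) instead of an
# explicit matrix product; it stands in for the implicit Gauss-Newton matvec.
rng = np.random.default_rng(0)
F = rng.standard_normal((50, 10))
matvec = lambda v: np.einsum('ik,jk,j->i', F, F, v) + 1e-3 * v   # (F F^T + lam I) v

b = rng.standard_normal(50)
x = conjugate_gradient(matvec, b)
print(np.linalg.norm(matvec(x) - b))    # residual should be ~1e-8 or smaller
```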

Automatic parameters selection in machine learning

Teresa B. Ludermir, Marcilio C.P. de Souto, Marley Vellasco
2012 Neurocomputing  
As a result, this edition provides readers with a rich selection of current research on automatic parameter selection in machine learning and related issues.  ...  Support Vector Machines (SVMs) have achieved very good performance on different learning problems.  ...  A quantum neural network (QNN) based on a novel quantum neuron node implemented as a very simple quantum circuit is proposed and investigated.  ... 
doi:10.1016/j.neucom.2011.07.008 fatcat:eb5yy4n7sfe2tnf7fqqecbbu2u

Quantum algorithms for Second-Order Cone Programming and Support Vector Machines [article]

Iordanis Kerenidis, Anupam Prakash, Dániel Szilágyi
2021 arXiv   pre-print
For the case of random SVM (support vector machine) instances of size O(n), the quantum algorithm scales as O(n^k), where the exponent k is estimated to be 2.59 using a least-squares power law.  ...  We present a quantum interior-point method (IPM) for second-order cone programming (SOCP) that runs in time Õ(n√r · ζκ/δ² · log(1/ε)), where r is the rank and n the dimension of the SOCP, δ bounds the distance  ...  Acknowledgements This work was partly supported by IdEx Université de Paris ANR-18-IDEX-0001, as well as the French National Research Agency (ANR) projects QuBIC and QuDATA.  ... 
arXiv:1908.06720v4 fatcat:llcs3twxnbbs5ckab2lc73rrj4

Quantum algorithms for Second-Order Cone Programming and Support Vector Machines

Iordanis Kerenidis, Anupam Prakash, Dániel Szilágyi
2021 Quantum  
For the case of random SVM (support vector machine) instances of size O(n), the quantum algorithm scales as O(n^k), where the exponent k is estimated to be 2.59 using a least-squares power law.  ...  We present a quantum interior-point method (IPM) for second-order cone programming (SOCP) that runs in time Õ(n√r · ζκ/δ² · log(1/ε)), where r is the rank and n the dimension of the SOCP, δ bounds the distance  ...  Acknowledgements This work was partly supported by IdEx Université de Paris ANR-18-IDEX-0001, as well as the French National Research Agency (ANR) projects QuBIC and QuDATA.  ... 
doi:10.22331/q-2021-04-08-427 fatcat:j7gyiliihneuxptgk3uuyqc6hy
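Both records above report an exponent k ≈ 2.59 obtained from a least-squares power-law fit of runtime against instance size. A minimal sketch of that fitting procedure, on hypothetical (n, runtime) data rather than the paper's benchmarks:

```python
import numpy as np

# Hypothetical (n, runtime) measurements; the paper's benchmark data are not reproduced here.
n = np.array([100, 200, 400, 800, 1600], dtype=float)
t = 3e-6 * n**2.59 * (1 + 0.05 * np.random.default_rng(1).standard_normal(5))

# Fit t ~ c * n^k by linear least squares in log-log space.
k, log_c = np.polyfit(np.log(n), np.log(t), deg=1)
print(f"estimated exponent k = {k:.2f}")
```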

Estimating the gradient and higher-order derivatives on quantum hardware [article]

Andrea Mari, Thomas R. Bromley, Nathan Killoran
2020 arXiv   pre-print
Our findings are supported by several numerical and hardware experiments, including an experimental estimation of the Hessian of a simple variational circuit and an implementation of the Newton optimizer  ...  a quantum computer.  ...  This project was supported under DARPA project HR00112090015, Backpropagating Through Quantum Computers.  ... 
arXiv:2008.06517v1 fatcat:rzaxwnluhzdjnp35jhtgtjspze
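The abstract refers to estimating gradients and Hessians of variational-circuit expectation values on hardware. A common route to such estimates is the parameter-shift rule; the sketch below applies it to a classical trigonometric stand-in for a circuit expectation, where the rule is exact. It illustrates the general technique, not necessarily the exact estimators of the paper.

```python
import numpy as np

# Classical stand-in for a variational-circuit expectation value: each
# parameter enters through sin/cos, so the parameter-shift rule is exact.
def f(theta):
    return np.cos(theta[0]) * np.sin(theta[1]) + 0.5 * np.cos(theta[1])

def shift(theta, i, s):
    t = theta.copy(); t[i] += s; return t

def psr_gradient(f, theta, s=np.pi / 2):
    # df/dtheta_i = [f(theta + s e_i) - f(theta - s e_i)] / (2 sin s)
    return np.array([(f(shift(theta, i, s)) - f(shift(theta, i, -s))) / (2 * np.sin(s))
                     for i in range(len(theta))])

def psr_hessian(f, theta, s=np.pi / 2):
    # Apply the shift rule twice, once per parameter index.
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(shift(shift(theta, i, s), j, s))
                       - f(shift(shift(theta, i, s), j, -s))
                       - f(shift(shift(theta, i, -s), j, s))
                       + f(shift(shift(theta, i, -s), j, -s))) / (2 * np.sin(s)) ** 2
    return H

theta = np.array([0.3, 1.1])
print(psr_gradient(f, theta))   # matches [-sin(t0) sin(t1), cos(t0) cos(t1) - 0.5 sin(t1)]
print(psr_hessian(f, theta))
```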

Quantum gradient descent and Newton's method for constrained polynomial optimization

Patrick Rebentrost, Maria Schuld, Leonard Wossnig, Francesco Petruccione, Seth Lloyd
2019 New Journal of Physics  
Optimization problems in disciplines such as machine learning are commonly solved with iterative methods.  ...  The required operations perform polylogarithmically in the dimension of the solution vector and exponentially in the number of iterations.  ...  MS and FP acknowledge support by the South African Research Chair Initiative of the Department of Science and Technology and National Research Foundation. SL was supported by ARO.  ... 
doi:10.1088/1367-2630/ab2a9e fatcat:l6zmjcjgnjgy7bowjwg6pgdctm
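As context for the constrained setting (quantum states are unit-normalized), a classical analogue is projected gradient descent for a homogeneous polynomial over the unit sphere. The sketch below is purely illustrative and does not reflect the quantum subroutines of the paper.

```python
import numpy as np

# Minimize the homogeneous quartic (x^T A x)(x^T B x) over the unit sphere,
# projecting back onto the sphere after each gradient step.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6)); A = A + A.T
B = rng.standard_normal((6, 6)); B = B + B.T

def cost(x):
    return (x @ A @ x) * (x @ B @ x)

def grad(x):
    return 2 * (x @ B @ x) * (A @ x) + 2 * (x @ A @ x) * (B @ x)

x = rng.standard_normal(6)
x /= np.linalg.norm(x)
for _ in range(200):
    x = x - 0.01 * grad(x)
    x /= np.linalg.norm(x)      # project back onto the sphere constraint
print(cost(x))
```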

Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

Venera Khoromskaia, Boris N. Khoromskij
2014 Computer Physics Communications  
Newton kernel.  ...  It is based on the assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing the single generating function, say the  ...  One of the first steps in development of tensor numerical methods was the 3D grid-based tensor-structured method for solution of the nonlinear Hartree-Fock equation [28, 12, 9, 11] based on the efficient  ... 
doi:10.1016/j.cpc.2014.08.015 fatcat:lbcbxmegozeljip4cgxizaglue
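The assembled-rank idea in this entry is that summing shifted separable terms over a lattice does not increase the canonical rank, because the lattice sum factorizes mode by mode. A 2D toy with a rank-1 Gaussian standing in for the rank-R Newton-kernel expansion:

```python
import numpy as np

# The sum over lattice translates of a separable (rank-1) function equals the
# outer product of 1D sums of shifted canonical vectors, so the canonical rank
# does not grow with the number of lattice sites.
x = np.linspace(-8, 8, 401)
g = lambda t: np.exp(-t**2)                     # 1D generating vector
shifts = np.arange(-3, 4)                       # 7x7 lattice of centers

# Direct summation over the lattice (49 rank-1 terms).
direct = sum(np.outer(g(x - sx), g(x - sy)) for sx in shifts for sy in shifts)

# Assembled form: sum the shifted vectors per mode first, then one outer product.
u = sum(g(x - s) for s in shifts)
assembled = np.outer(u, u)

print(np.max(np.abs(direct - assembled)))       # ~1e-13: identical, still rank 1
```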

Methods for geometry optimization of large molecules. I. An O(N^2) algorithm for solving systems of linear equations for the transformation of coordinates and forces

Ödön Farkas, H. Bernhard Schlegel
1998 Journal of Chemical Physics  
The current study is focused on a special approach to solving these sequential systems of linear equations using a method based on the update of the inverse of the symmetric matrix M_i.  ...  The most recent methods in quantum chemical geometry optimization use the computed energy and its first derivatives with an approximate second derivative matrix.  ...  [Table fragment: molecule, no. of atoms, no. of coordinates, level of theory, machine type, and CPU time in seconds during geometry optimization (regular vs. current study; first cycle and average); first row: C60, 60 atoms]  ... 
doi:10.1063/1.477393 fatcat:ufzy6ohg6vcx5amz6tqzyoebty
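The snippet mentions updating the inverse of the symmetric matrix M_i to avoid repeated O(N^3) solves. The paper's exact update formula is not reproduced in the snippet; as a generic illustration of the idea, the standard Sherman-Morrison rank-one update refreshes an inverse in O(N^2):

```python
import numpy as np

def sherman_morrison_update(M_inv, u, v):
    """Return (M + u v^T)^{-1} given M^{-1}, in O(N^2) operations."""
    Mu = M_inv @ u
    vM = v @ M_inv
    denom = 1.0 + v @ Mu
    return M_inv - np.outer(Mu, vM) / denom

rng = np.random.default_rng(3)
N = 200
M = rng.standard_normal((N, N)); M = M @ M.T + N * np.eye(N)   # symmetric positive definite
M_inv = np.linalg.inv(M)
u = rng.standard_normal(N)

# Symmetric rank-one modification M + u u^T, updated without a fresh O(N^3) inversion.
M_inv_new = sherman_morrison_update(M_inv, u, u)
print(np.linalg.norm(M_inv_new - np.linalg.inv(M + np.outer(u, u))))   # ~1e-12
```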

Prospects of tensor-based numerical modeling of the collective electrostatic potential in many-particle systems [article]

Venera Khoromskaia, Boris N. Khoromskij
2020 arXiv   pre-print
In this paper, we outline the prospects for tensor-based numerical modeling of the collective electrostatic potential on lattices and in many-particle systems of general type.  ...  We generalize the approach initially introduced for the rank-structured grid-based calculation of the collective potentials on 3D lattices [39] to the case of many-particle systems with variable charges  ...  the effective support, see far fewer canonical vectors compared with the case of the Newton kernel.  ... 
arXiv:2001.11393v1 fatcat:sgadzb4f3fdn7dv66mpxcx2kbi

Fast and Furious Convergence: Stochastic Second Order Methods under Interpolation [article]

Si Yi Meng, Sharan Vaswani, Issam Laradji, Mark Schmidt, Simon Lacoste-Julien
2020 arXiv   pre-print
Under this condition, we show that the regularized subsampled Newton method (R-SSN) achieves global linear convergence with an adaptive step-size and a constant batch-size.  ...  We empirically evaluate stochastic L-BFGS and a "Hessian-free" implementation of R-SSN for binary classification on synthetic, linearly-separable datasets and real datasets under a kernel mapping.  ...  LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011. Soham De, Abhay Yadav, David Jacobs, and Tom Goldstein.  ... 
arXiv:1910.04920v2 fatcat:jfzvxawxdrcp3ocfbnxmzh4fi4
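A schematic regularized subsampled Newton (R-SSN-style) step on logistic loss is sketched below for orientation. The subsample size, regularization, and fixed step size are illustrative choices; the paper's adaptive step-size rule is not implemented here.

```python
import numpy as np

# Schematic R-SSN step: gradient and Hessian are formed on a random subsample,
# the Hessian is regularized, and the Newton system is solved.
rng = np.random.default_rng(4)
n, d = 1000, 20
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d))

def rssn_step(w, batch_size=128, lam=1e-3, step=1.0):
    idx = rng.choice(n, batch_size, replace=False)
    Xb, yb = X[idx], y[idx]
    p = 1.0 / (1.0 + np.exp(-yb * (Xb @ w)))            # sigmoid(y * x^T w)
    g = -(Xb.T @ (yb * (1.0 - p))) / batch_size          # subsampled gradient
    D = p * (1.0 - p)
    H = (Xb.T * D) @ Xb / batch_size + lam * np.eye(d)   # regularized subsampled Hessian
    return w - step * np.linalg.solve(H, g)

w = np.zeros(d)
for _ in range(50):
    w = rssn_step(w)
print(np.mean(np.sign(X @ w) == y))    # training accuracy on the separable data
```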

Quantum optimal control using the adjoint method

Alfio Borzì
2012 Mathematics of Quantum Technologies  
In this paper, a review of recent developments in the field of optimal control of quantum systems is given with a focus on adjoint methods and their numerical implementation.  ...  fields of quantum computation and quantum communication.  ...  This work was supported in part by DFG Project "Controllability and Optimal Control of Interacting Quantum Dynamical Systems".  ... 
doi:10.2478/nsmmt-2012-0007 fatcat:hsyqlquq2jdx3cbpapbdghkima
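To make the adjoint-method idea concrete, here is a minimal discrete-time analogue: a bilinear control recursion with a terminal cost, whose gradient with respect to all controls is obtained from one backward adjoint sweep and checked against a finite difference. It is a toy, not the Schrödinger-equation setting of the paper.

```python
import numpy as np

# Discrete-time adjoint gradient for x_{k+1} = (A + u_k B) x_k with
# terminal cost J = 0.5 * ||x_N - x_target||^2.
rng = np.random.default_rng(7)
d, N = 4, 20
A = np.eye(d) + 0.05 * rng.standard_normal((d, d))
B = 0.1 * rng.standard_normal((d, d))
x0 = rng.standard_normal(d)
x_target = rng.standard_normal(d)

def forward(u):
    xs = [x0]
    for k in range(N):
        xs.append((A + u[k] * B) @ xs[-1])
    return xs

def cost_and_grad(u):
    xs = forward(u)
    lam = xs[-1] - x_target                    # adjoint variable at final time
    grad = np.zeros(N)
    for k in reversed(range(N)):
        grad[k] = lam @ (B @ xs[k])            # dJ/du_k
        lam = (A + u[k] * B).T @ lam           # backward adjoint sweep
    return 0.5 * np.sum((xs[-1] - x_target) ** 2), grad

u = np.zeros(N)
J, g = cost_and_grad(u)
eps = 1e-6; u2 = u.copy(); u2[3] += eps
print(g[3], (cost_and_grad(u2)[0] - J) / eps)  # adjoint vs. finite-difference gradient
```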

Quantum gradient descent and Newton's method for constrained polynomial optimization [article]

Patrick Rebentrost, Maria Schuld, Leonard Wossnig, Francesco Petruccione, Seth Lloyd
2018 arXiv   pre-print
Optimization problems in disciplines such as machine learning are commonly solved with iterative methods.  ...  The required operations perform polylogarithmically in the dimension of the solution vector and exponentially in the number of iterations.  ...  In machine learning and artificial intelligence, common techniques such as regression, support vector machines, and neural networks rely on optimization.  ... 
arXiv:1612.01789v4 fatcat:vz3fpkuoabe4xko7sgfcpf736a

Half-Inverse Gradients for Physical Deep Learning [article]

Patrick Schnell, Philipp Holl, Nils Thuerey
2022 arXiv   pre-print
Our method is based on a half-inversion of the Jacobian and combines principles of both classical network and physics optimizers to solve the combined optimization task.  ...  Compared to state-of-the-art neural network optimizers, our method converges more quickly and yields better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators  ...  ACKNOWLEDGEMENTS This work was supported by the ERC Consolidator Grant CoG-2019-863850 SpaTe, and by the DFG SFB-Transregio 109 DGD.  ... 
arXiv:2203.10131v1 fatcat:kctdlz6o2rf7xieud33t6esgqe
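The phrase "half-inversion of the Jacobian" suggests an SVD-based pseudo-inverse with singular values raised to the power -1/2, sitting between a gradient-descent-like scaling (power +1, i.e. J^T) and a Gauss-Newton-like inversion (power -1). The sketch below is one plausible reading for illustration, not the paper's exact update rule:

```python
import numpy as np

def half_inverse(J, eps=1e-6):
    """SVD-based 'half-inverse' of J: singular values raised to the power -1/2.
    Power +1 would reproduce J^T (gradient-descent-like scaling), power -1 a
    full pseudo-inverse (Gauss-Newton-like); -1/2 sits between the two."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_half = np.zeros_like(s)
    keep = s > eps
    s_half[keep] = s[keep] ** -0.5
    return Vt.T @ np.diag(s_half) @ U.T

# Toy usage: one update step for a residual vector r with Jacobian J.
rng = np.random.default_rng(5)
J = rng.standard_normal((30, 8))     # hypothetical Jacobian d(residual)/d(params)
r = rng.standard_normal(30)          # hypothetical residual
delta = -0.1 * half_inverse(J) @ r   # parameter update
print(delta.shape)                   # (8,)
```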

Distributed-Memory Tensor Completion for Generalized Loss Functions in Python using New Sparse Tensor Kernels [article]

Navjot Singh, Zecheng Zhang, Xiaoxiao Wu, Naijing Zhang, Siyuan Zhang, Edgar Solomonik
2021 arXiv   pre-print
Specifically, we consider alternating minimization, coordinate minimization, and a quasi-Newton (generalized Gauss-Newton) method.  ...  We provide microbenchmarking results on the Stampede2 supercomputer to demonstrate the efficiency of the new primitives and Cyclops functionality.  ...  For tensor completion, we propose a novel Newton-method-based algorithmic framework for generalized tensor completion.  ... 
arXiv:1910.02371v3 fatcat:nrxjrkde7baulfn4zdj3vmlgcm
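As a low-order analogue of the alternating-minimization solver mentioned above, the sketch below runs masked alternating least squares for rank-r matrix completion; the tensor case in the paper, with generalized losses and distributed sparse kernels, is more involved.

```python
import numpy as np

# Masked rank-r factorization X ~ U V^T, updating one factor at a time by
# solving per-row regularized least squares on the observed entries only.
rng = np.random.default_rng(6)
m, n, r = 60, 40, 3
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.3                 # observed entries

U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
lam = 1e-2

def update_rows(A, B, M, X):
    # For each row i of A: min_a ||(X_i - a B^T) restricted to M_i||^2 + lam ||a||^2
    for i in range(A.shape[0]):
        obs = M[i]
        Bo = B[obs]
        A[i] = np.linalg.solve(Bo.T @ Bo + lam * np.eye(r), Bo.T @ X[i, obs])

for _ in range(20):
    update_rows(U, V, mask, X_true)
    update_rows(V, U, mask.T, X_true.T)

err = np.linalg.norm((U @ V.T - X_true)[~mask]) / np.linalg.norm(X_true[~mask])
print(err)    # relative error on the held-out (unobserved) entries
```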

Variational Quantum Boltzmann Machines [article]

Christa Zoufal and Aurélien Lucchi and Stefan Woerner
2020 arXiv   pre-print
This work presents a novel realization approach to Quantum Boltzmann Machines (QBMs).  ...  The preparation of the required Gibbs states, as well as the evaluation of the loss function's analytic gradient, is based on Variational Quantum Imaginary Time Evolution, a technique that is typically  ...  The linear and RBF support vector machines (SVMs) are based on a linear and a radial kernel, respectively.  ... 
arXiv:2006.06004v1 fatcat:fhp4hwix7raofhh7242vtklqoa
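For scale, a QBM on a handful of qubits can be checked classically by building the Gibbs state exactly. The sketch below does that for a hypothetical 2-qubit Hamiltonian and evaluates a negative log-likelihood, using finite differences for the gradient; the paper's contribution is to obtain the Gibbs state and analytic gradients variationally on quantum hardware.

```python
import numpy as np
from scipy.linalg import expm

# Exact small-scale reference: rho = e^{-H(theta)} / Tr e^{-H(theta)} for a
# 2-qubit Hamiltonian, with outcome probabilities read off the diagonal.
Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]]); I2 = np.eye(2)
terms = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]

def gibbs_probs(theta):
    H = sum(t * P for t, P in zip(theta, terms))
    rho = expm(-H)
    rho /= np.trace(rho)
    return np.real(np.diag(rho))            # computational-basis probabilities

def nll(theta, data_counts):
    p = gibbs_probs(theta)
    return -np.sum(data_counts * np.log(p))

data_counts = np.array([40, 10, 10, 40])     # hypothetical outcome histogram
theta = np.array([0.1, 0.2, -0.3])
eps = 1e-5
grad = np.array([(nll(theta + eps * e, data_counts) - nll(theta - eps * e, data_counts)) / (2 * eps)
                 for e in np.eye(3)])        # finite-difference gradient as a classical baseline
print(nll(theta, data_counts), grad)
```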
Showing results 1 — 15 out of 2,335 results