42,846 Hits in 7.7 sec

Learning Interpretable Error Functions for Combinatorial Optimization Problem Modeling [article]

Florian Richoux, Jean-François Baffier
2021 arXiv   pre-print
This is, to the best of our knowledge, the first attempt to automatically learn error functions for hard constraints.  ...  Our method uses a variant of neural networks we named Interpretable Compositional Networks, allowing us to get interpretable results, unlike regular artificial neural networks.  ...  Compositional Networks, a variant of neural networks to get interpretable results, 3. to propose an architecture of Interpretable Compositional Network to learn error functions, and 4. to provide a proof  ... 
arXiv:2002.09811v4 fatcat:oilv5cgmcrhzfkf4ilu6z4xugq

Generative Deep Neural Networks for Inverse Materials Design Using Backpropagation and Active Learning

Chun‐Teh Chen, Grace X. Gu
2020 Advanced Science  
This ML-based inverse design approach uses backpropagation to calculate the analytical gradients of an objective function with respect to design variables.  ...  Compared to passive learning, the active learning strategy is capable of generating better designs and reducing the amount of training data by at least an order-of-magnitude in the case study on composite  ...  Keywords composites, inverse problem, machine learning, materials design, optimization algorithms  ... 
doi:10.1002/advs.201902607 pmid:32154072 pmcid:PMC7055566 fatcat:3skflywtcbhg7b37m3sdog25qu
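The backpropagation-to-the-inputs idea this abstract describes can be sketched in a few lines: freeze a differentiable surrogate model and run gradient descent on the design variables themselves rather than on the weights. Everything below (the one-neuron sigmoid surrogate, its weights, the target property value) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

# Hypothetical differentiable surrogate: property = sigmoid(w . x + b).
# Weights are illustrative placeholders, not fitted to any real data.
w = np.array([0.5, -1.0, 0.8, 0.3])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def grad_wrt_design(x, target):
    # Analytical gradient of (predict(x) - target)^2 with respect to the
    # design variables x, via the chain rule ("backpropagation to the input").
    p = predict(x)
    return 2.0 * (p - target) * p * (1.0 - p) * w

target = 0.9                 # desired property value (illustrative)
x = np.zeros(4)              # initial design
for _ in range(2000):        # gradient descent on the *design*, not the weights
    x -= 1.0 * grad_wrt_design(x, target)

print(predict(x))            # close to the 0.9 target
```

In a real inverse-design loop the surrogate would be a trained deep network and the gradients would come from automatic differentiation, but the optimization over inputs is the same.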

Automating Crystal-Structure Phase Mapping: Combining Deep Learning with Constraint Reasoning [article]

Di Chen, Yiwei Bai, Sebastian Ament, Wenting Zhao, Dan Guevarra, Lan Zhou, Bart Selman, R. Bruce van Dover, John M. Gregoire, Carla P. Gomes
2021 arXiv   pre-print
DRNets are designed with an interpretable latent space for encoding prior-knowledge domain constraints and seamlessly integrate constraint reasoning into neural network optimization.  ...  DRNets combine deep learning with constraint reasoning for incorporating scientific prior knowledge and consequently require only a modest amount of (unlabeled) data.  ...  The authors also thank Junwen Bai for assistance with running the IAFD baseline, Aniketa Shinde for photoelectrochemistry experiments, and Rich Berstein for assistance with figure generation.  ... 
arXiv:2108.09523v1 fatcat:sed2voog2nhm3iz4yp77xyqksm

Review of Learning-Assisted Power System Optimization [article]

Guangchun Ruan, Haiwang Zhong, Guanglun Zhang, Yiliu He, Xuan Wang, Tianjiao Pu
2020 arXiv   pre-print
With dramatic breakthroughs in recent years, machine learning is showing great potential to upgrade the toolbox for power system optimization.  ...  This paper pays special attention to the coordination between machine learning approaches and optimization models, and carefully evaluates how such data-driven analysis may improve the rule-based optimization  ...  Reference [86] designed a convolutional-neural-network-based classifier for faulted line localization.  ...
arXiv:2007.00210v2 fatcat:sh54n6fdk5c4pdouq6u4ois53q

Index—Volumes 1–89

1997 Artificial Intelligence  
[Garbled excerpt from the journal's cumulative subject index: scattered page references for entries such as "Bayesian networks", "Dempster's rule", "estimation of surface orientation", "skeptical reasoning with binary defaults", and "processing of constraints".]
doi:10.1016/s0004-3702(97)80122-1 fatcat:6az7xycuifaerl7kmv7l3x6rpm

Physics-Guided Deep Learning for Dynamical Systems: A Survey [article]

Rui Wang, Rose Yu
2021 arXiv   pre-print
In this paper, we provide a structured overview of existing methodologies of integrating prior physical knowledge or physics-based modeling into DL, with a special emphasis on learning dynamical systems  ...  Traditional physics-based models are sample-efficient and interpretable, but often rely on rigid assumptions.  ...  They trained the neural network with an Euler-Lagrange constraint loss function such that it learns to approximately conserve the total energy of the system.  ...
arXiv:2107.01272v4 fatcat:bpo6uvrnsffghigaxq7oaixkba
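A minimal sketch of the kind of constraint loss the excerpt mentions, assuming a harmonic-oscillator energy and a hand-picked weighting (both illustrative, not from the survey): the training loss adds a soft penalty whenever a predicted next state changes the total energy.

```python
import numpy as np

# Physics-guided loss: a data-fit term plus a soft penalty that discourages
# predictions from violating energy conservation across one time step.
def energy(state):
    q, p = state                         # position, momentum
    return 0.5 * p ** 2 + 0.5 * q ** 2   # harmonic-oscillator total energy

def physics_guided_loss(pred_next, true_next, state_now, lam=10.0):
    data_term = np.sum((pred_next - true_next) ** 2)
    # Penalize any change in total energy over the predicted step.
    physics_term = (energy(pred_next) - energy(state_now)) ** 2
    return data_term + lam * physics_term

state = np.array([1.0, 0.0])
true_next = np.array([0.995, -0.0998])   # one small true time step
conserving = physics_guided_loss(true_next, true_next, state)
violating = physics_guided_loss(np.array([1.2, 0.5]), true_next, state)
print(conserving < violating)            # the penalty favors energy-consistent predictions
```

During training this loss would be minimized over network weights; here it is only evaluated to show that an energy-violating prediction is scored worse than an energy-consistent one.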

PixelNN: Example-based Image Synthesis [article]

Aayush Bansal and Yaser Sheikh and Deva Ramanan
2017 arXiv   pre-print
collapse problem. (2) they are not interpretable, making it difficult to control the synthesized output.  ...  We demonstrate our approach for various input modalities, and for various domains ranging from human faces to cats-and-dogs to shoes and handbags.  ...  [28] propose a two-step pipeline for face hallucination where global constraints capture overall structure, and local constraints produce photorealistic local features.  ... 
arXiv:1708.05349v1 fatcat:ns6wfb5ll5bfhc222ybb7lenga

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges [article]

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
2021 arXiv   pre-print
better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial  ...  unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints  ...  Acknowledgments We thank Leonardo Lucio Custode for pointing out several useful references to Challenge 10. Thank you to David Page for providing useful references on early explainable ML.  ... 
arXiv:2103.11251v2 fatcat:52llnswt3ze5rl3zhbai5bscce

Fast and stable deep-learning predictions of material properties for solid solution alloys [article]

Massimiliano Lupo Pasini, Ying Wai Li, Junqi Yin, Jiaxin Zhang, Kipton Barros, Markus Eisenbach
2020 arXiv   pre-print
We used a simple measure based on the root-mean-squared error (RMSE) to quantify the quality of the NN models, and found that the inclusion of charge density and magnetic moment as physical constraints  ...  Our results show that once the multitasking NNs are trained, they can estimate the material properties for a specific configuration hundreds of times faster than first-principles density functional theory  ...  The objective function to be minimized in (3) interprets the constraint as a penalization term through the penalization multiplier λ.  ...
arXiv:1912.11152v3 fatcat:zxivliaeffhkbbfw74ubhvmm2m
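The penalization idea quoted in the excerpt can be sketched independently of the paper's DFT setting: a constraint g(x) ≤ 0 is folded into the objective as λ·max(0, g(x))² and the combined function is minimized directly. The one-dimensional objective and constraint below are illustrative assumptions.

```python
# Penalty formulation: minimize f(x) subject to g(x) <= 0 by minimizing
# f(x) + lam * max(0, g(x))^2 with a penalization multiplier lam.
# Objective f(x) = (x - 2)^2 and constraint g(x) = x - 1 are stand-ins.
def grad(x, lam):
    # d/dx of (x - 2)^2 + lam * max(0, x - 1)^2
    return 2.0 * (x - 2.0) + 2.0 * lam * max(0.0, x - 1.0)

lam = 100.0        # penalization multiplier lambda
x = 0.0
for _ in range(5000):
    x -= 0.001 * grad(x, lam)

# The unconstrained minimizer is x = 2; the penalty pulls the solution back
# toward the feasible boundary x = 1 (exactly x = 204/202 for lam = 100).
print(x)
```

Larger λ drives the solution closer to feasibility at the cost of a stiffer optimization problem, which is the usual trade-off with quadratic penalties.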

Reframing Neural Networks: Deep Structure in Overcomplete Representations [article]

Calvin Murdock, Simon Lucey
2021 arXiv   pre-print
As a criterion for model selection, we show correlation with generalization error on a variety of common deep network architectures such as ResNets and DenseNets.  ...  To approach this question, we introduce deep frame approximation, a unifying framework for representation learning with structured overcomplete frames.  ...  [Figure residue: (a) validation-error curves for residual and chain networks at base widths 4 and 16.]  ...
arXiv:2103.05804v1 fatcat:ram3eotgejhdzndqmkond2fj6u

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning [article]

Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney
2020 arXiv   pre-print
It is used for solving optimization problems similarly to gradient-based methods. However, it does not require the gradient, using only function evaluations.  ...  In this paper, we provide a comprehensive review of ZO optimization, with an emphasis on showing the underlying intuition, optimization principles and recent advances in convergence analysis.  ...  ZO optimization with black-box constraints. The current work on ZO optimization is restricted to black-box objective functions with white-box constraints.  ... 
arXiv:2006.06224v2 fatcat:fx624eqhifbqpp5hbd5a5cmsny
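The core mechanism this primer surveys — estimating gradients from function evaluations alone — can be sketched with a standard two-point estimator. The quadratic objective, smoothing radius, and step size below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Two-point zeroth-order gradient estimate: approximate the gradient of a
# black-box function by probing random directions u with a small smoothing
# radius mu, using only function evaluations (no analytic gradient).
def f(x):
    return np.sum((x - 1.0) ** 2)   # illustrative stand-in for a black box

def zo_grad(f, x, rng, mu=1e-4, n_dirs=20):
    g = np.zeros(x.size)
    for _ in range(n_dirs):
        u = rng.normal(size=x.size)
        # (f(x + mu*u) - f(x)) / mu estimates the directional derivative along u
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / n_dirs

rng = np.random.default_rng(1)
x = np.zeros(3)
for _ in range(500):
    x -= 0.05 * zo_grad(f, x, rng)

print(f(x))   # near 0: function evaluations alone drove x to the minimizer
```

Averaging over several random directions reduces the variance of the estimate; the query cost per step (n_dirs + 1 evaluations) is the price paid for not having the gradient.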

Automated Test Generation to Detect Individual Discrimination in AI Models [article]

Aniya Agarwal, Pranay Lohia, Seema Nagar, Kuntal Dey, Diptikalyan Saha
2018 arXiv   pre-print
Our technique combines the well-known technique of symbolic execution with local explainability for the generation of effective test cases.  ...  Measuring individual discrimination requires exhaustive testing, which is infeasible for a non-trivial system.  ...  In this paper, we present a test case generation algorithm for checking individual discrimination in AI models.  ...
arXiv:1809.03260v1 fatcat:xnwwjexkynay7a5ch2iwufwjhe
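The individual-discrimination property that the generated tests check can be stated compactly: two inputs identical except for a protected attribute must receive the same prediction. The toy model and attribute layout below are illustrative assumptions, not the paper's algorithm (which derives such test inputs via symbolic execution and local explanations).

```python
# Individual-discrimination probe: vary only the protected attribute and
# check whether the model's prediction changes.
def model(features):
    # Toy classifier that (wrongly) uses feature 0, the protected attribute.
    protected, income = features
    return 1 if income > 50 or protected == 1 else 0

def is_discriminatory(model, features, protected_idx, values):
    outputs = set()
    for v in values:
        probe = list(features)
        probe[protected_idx] = v    # change only the protected attribute
        outputs.add(model(probe))
    return len(outputs) > 1         # differing outputs => individual discrimination

print(is_discriminatory(model, [0, 40], protected_idx=0, values=[0, 1]))  # True
print(is_discriminatory(model, [0, 60], protected_idx=0, values=[0, 1]))  # False
```

The hard part, which the paper addresses, is generating the `features` vectors that expose such behavior without exhaustively enumerating the input space.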

Inversion of feedforward neural networks: algorithms and applications

C.A. Jensen, R.D. Reed, R.J. Marks, M.A. El-Sharkawi, Jae-Byung Jung, R.T. Miyamoto, G.M. Anderson, C.J. Eggen
1999 Proceedings of the IEEE  
This paper surveys existing methodologies for neural network inversion, which is illustrated by its use as a tool in query-based learning, sonar performance analysis, power system security assessment,  ...  Keywords: Feedforward neural networks, multilayer perceptron, nonlinear system inversion, constrained inversion, adaptive sonar, power system security assessment, query-based learning.  ...  The average multirun RMS error for the neural network trained via the query learning algorithm was 10.11 compared with an average of 48.53 for networks trained without query learning.  ...
doi:10.1109/5.784232 fatcat:fwyjuttudzd2zknko7k73nxkwe
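The simplest inversion scheme the survey covers runs gradient descent over the network's input while the weights stay frozen, until the output matches a target. The tiny tanh network and the target value below are illustrative, not from the paper.

```python
import numpy as np

# Gradient-based inversion of a fixed feedforward network: freeze the
# weights and optimize the *input* vector to reproduce a desired output.
W1 = np.array([[0.7, -0.4], [0.2, 0.9]])   # frozen hidden-layer weights
W2 = np.array([[0.6, -0.8]])               # frozen output-layer weights

def forward(x):
    return W2 @ np.tanh(W1 @ x)

def grad_input(x, y_target):
    # Backpropagate the squared output error through the frozen layers
    # all the way down to the input vector x.
    h = np.tanh(W1 @ x)
    dy = 2.0 * (W2 @ h - y_target)         # d(error)/d(output)
    dh = (W2.T @ dy) * (1.0 - h ** 2)      # through the tanh nonlinearity
    return W1.T @ dh                       # d(error)/d(input)

y_target = np.array([0.3])
x = np.zeros(2)
for _ in range(2000):
    x -= 0.1 * grad_input(x, y_target)

print(forward(x))   # approximately the 0.3 target
```

Because inversion is generally one-to-many, practical uses (such as the query-based learning in this paper) add constraints on the input to select among the inputs that map to the same output.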

The emergent algebraic structure of RNNs and embeddings in NLP [article]

Sean A. Cantrell
2018 arXiv   pre-print
Appealing to these results, we propose a novel class of recurrent-like neural networks and a word embedding scheme.  ...  A hyperparameter search over word embedding dimension, GRU hidden dimension, and a linear combination of the GRU outputs is performed.  ...  Cantrell, and Mark Laczin for providing useful discussions, of both linguistic and mathematical natures, as the work unfolded.  ... 
arXiv:1803.02839v1 fatcat:dh4o3hc7i5erngnd56wt6brtxa

Gradient-Based Learning Applied to Document Recognition [chapter]

2009 Intelligent Signal Processing  
Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique.  ...  A graph transformer network for reading a bank check is also described.  ...  Guyon for helpful discussions, C. Stenard and R. Higgins for providing the applications that motivated some of this work, and L. R. Rabiner and L. D. Jackel for relentless support and encouragement.  ...
doi:10.1109/9780470544976.ch9 fatcat:z6huexh62vfdvfcimjnymp4qrm
Showing results 1 — 15 out of 42,846 results