11,629 Hits in 7.4 sec

Book announcements

1993 Discrete Applied Mathematics  
General principles of neurocomputation. Semi-parallel neurocomputers. Chapter 12: Computational models (Deterministic computation. Probabilistic computation.  ...  Program testing and verification (Robustness. Instance checking. Self-testing and self-correcting programs. Comparison with the Blum-Luby-Rubinfeld model).  ... 
doi:10.1016/0166-218x(93)90109-2 fatcat:selkadhvzrfs5mcrv2jqauh5wm

Adversarial Policy Gradient for Alternating Markov Games

Chao Gao, Martin Müller, Ryan Hayward
2018 International Conference on Learning Representations  
We show that when combined with search, using a single neural net model, the resulting program consistently beats MoHex 2.0, the previous state-of-the-art computer Hex player, on board sizes from 9×9 to  ...  ., in AlphaGo, self-play REINFORCE was used to improve the neural net model after supervised learning.  ...  Another contribution we made is a multi-board-size neural net architecture; we demonstrated that a single neural net model trained on a smaller board size can effectively generalize to larger board sizes.  ... 
dblp:conf/iclr/Gao0H18 fatcat:7w4odimedndeflr6pm44yg4qbm

Latte: a language, compiler, and runtime for elegant and efficient deep neural networks

Leonard Truong, Rajkishore Barik, Ehsan Totoni, Hai Liu, Chick Markley, Armando Fox, Tatiana Shpeisman
2016 Proceedings of the 37th ACM SIGPLAN Conference on Programming Language Design and Implementation - PLDI 2016  
The Latte compiler synthesizes a program based on the user specification, applies a suite of domain-specific and general optimizations, and emits efficient machine code for heterogeneous architectures.  ...  Deep neural networks (DNNs) have undergone a surge in popularity with consistent advances in the state of the art for tasks including image recognition, natural language processing, and speech recognition  ...  program sponsored by MARCO and DARPA, and ASPIRE Lab industrial sponsors and affiliates Intel, Google, Hewlett-Packard, Huawei, LGE, NVIDIA, Oracle, and Samsung.  ... 
doi:10.1145/2908080.2908105 dblp:conf/pldi/TruongBTLMFS16 fatcat:phob6d5p4nb55cx4l6vok4y6y4

Artificial neural networks on massively parallel computer hardware

Udo Seiffert
2004 Neurocomputing  
This tutorial paper gives a survey and guides those people who are willing to go the way of a parallel implementation utilizing the most recent and accessible parallel computer hardware and software.  ...  Spending a lot of additional time and extra money to implement a particular algorithm on parallel hardware is often considered as the ultimate solution to all existing time problems for the ones -and the  ...  Acknowledgement The author would like to thank Bernd Michaelis and Tobias Czauderna for their valuable support.  ... 
doi:10.1016/j.neucom.2004.01.011 fatcat:mu6c7bugpjg2bmfqjku56xalme

An Overview of Hopfield Network and Boltzmann Machine

Saratha Sathasivam, Abdu Masanawa Sagir
2014 International journal of computational and electronics aspects in engineering  
The two well-known and commonly used types of recurrent neural networks, the Hopfield neural network and the Boltzmann machine, have different structures and characteristics.  ...  Neural networks are dynamic systems in the learning and training phases of their operations.  ...  models using a distributed knowledge representation and a massively parallel network of simple stochastic computing elements.  ... 
doi:10.26706/ijceae.1.1.20141205 fatcat:ost6kvnxzjcqrdi4qa6oww3ybi
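The abstract above contrasts Hopfield networks and Boltzmann machines only at a high level. As an illustration of the Hopfield side, here is a minimal sketch (not from the paper) of Hebbian storage and asynchronous recall; the 4-neuron pattern and iteration count are illustrative:

```python
import numpy as np

def hebbian_weights(patterns):
    """Store ±1 patterns with the Hebbian outer-product rule (zero diagonal)."""
    p = np.array(patterns, dtype=float)
    W = p.T @ p / p.shape[1]
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, state, steps=10):
    """Asynchronous updates: each neuron takes the sign of its net input."""
    s = np.array(state, dtype=float)
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

pattern = [1, -1, 1, -1]
W = hebbian_weights([pattern])
noisy = [1, -1, 1, 1]            # one bit flipped
print(hopfield_recall(W, noisy))  # recovers the stored pattern
```

A Boltzmann machine would replace the deterministic sign update with a stochastic one, which is the structural difference the abstract alludes to.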

Neural Networks for Combinatorial Optimization: A Review of More Than a Decade of Research

Kate A. Smith
1999 INFORMS journal on computing  
This article briefly summarizes the work that has been done and presents the current standing of neural networks for combinatorial optimization by considering each of the major classes of combinatorial  ...  It has been over a decade since neural networks were first applied to solve combinatorial optimization problems.  ...  Gendreau, and Dr. B. Golden for their helpful comments and suggestions.  ... 
doi:10.1287/ijoc.11.1.15 fatcat:jy5jur2pyjhndpi45argti2f54

Page 552 of Mathematical Reviews Vol. , Issue 97A [page]

1997 Mathematical Reviews  
’s computer (426-440); Wolfgang Reisig, Petri net models of distributed algorithms (441-454); E.  ...  and David Garlan, Formulations and formalisms in software architecture (307-323); Gert Smolka, The Oz programming model (324-343).  ... 

A patterned process approach to brain, consciousness, and behavior

José‐Luis Díaz
1997 Philosophical Psychology  
The architecture of brain, consciousness, and behavioral processes is shown to be formally similar in that all three may be conceived and depicted as Petri net patterned processes structured by a series  ...  A patterned process theory is derived from the isomorphic features of the models and contrasted with connectionist, dynamic system notions.  ...  Acknowledgments This paper was produced with the support of the National Autonomous University of Mexico (DGAPA: grant IN602491 and a sabbatical fellowship).  ... 
doi:10.1080/09515089708573214 fatcat:7dzebqw7urcobfubhqtwjyeb2u

HAO: Hardware-aware neural Architecture Optimization for Efficient Inference [article]

Zhen Dong, Yizhao Gao, Qijing Huang, John Wawrzynek, Hayden K.H. So, Kurt Keutzer
2021 arXiv   pre-print
Given a set of hardware resource constraints, our integer programming formulation directly outputs the optimal accelerator configuration for mapping a DNN subgraph that minimizes latency.  ...  With low computational cost, our algorithm can generate quantized networks that achieve state-of-the-art accuracy and hardware performance on Xilinx Zynq (ZU3EG) FPGA for image classification on ImageNet  ...  Our contributions are as follows: 1) We formulate the design of neural architecture, quantization, and hardware design jointly as an integer programming problem. 2) We use a subgraph-based latency model  ... 
arXiv:2104.12766v1 fatcat:wvpt6sil4zhf5dknqhv5zj76lu
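The snippet states that HAO jointly formulates architecture, quantization, and hardware design as an integer program, but does not give the formulation. As a toy stand-in, the sketch below brute-forces a per-layer bitwidth choice that minimizes total latency under a resource budget; all option tables and numbers are hypothetical, and a real formulation would use an ILP solver rather than enumeration:

```python
from itertools import product

# Hypothetical per-layer options: (bitwidth, latency_ms, resource_units).
# These numbers are illustrative, not from the paper.
layer_options = [
    [(8, 3.0, 4), (4, 2.0, 2)],   # layer 0
    [(8, 5.0, 6), (4, 3.5, 3)],   # layer 1
]
RESOURCE_BUDGET = 7

def best_config(options, budget):
    """Exhaustive search standing in for the paper's integer program:
    minimize total latency subject to a resource constraint."""
    best = None
    for combo in product(*options):
        if sum(r for _, _, r in combo) <= budget:
            lat = sum(l for _, l, _ in combo)
            if best is None or lat < best[0]:
                best = (lat, [b for b, _, _ in combo])
    return best

print(best_config(layer_options, RESOURCE_BUDGET))  # (latency, bitwidths)
```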

COLOUR IMAGE SEGMENTATION USING COMPETITIVE NEURAL NETWORK

Sowmya B, Sheelarani B
2008 International Journal on Intelligent Electronic Systems  
The activation of the node with the largest net input is set equal to 1, and the remaining nodes are set equal to 0. It works on the principle of "Winner Takes All".  ...  First, the color image of interest is read as a three-dimensional matrix. It is then converted into a two-dimensional matrix. The weight matrix is randomly initialized.  ...  This condition is known as "winner takes all". An example of a competitive neural net is MAXNET.  ... 
doi:10.18000/ijies.30025 fatcat:aprwsmgagbfhrl4udektpye23y
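The winner-takes-all rule quoted above (the node with the largest net input is set to 1, all others to 0) can be sketched in a few lines; the layer weights and input below are illustrative, not from the paper:

```python
import numpy as np

def winner_takes_all(x, weights):
    """Competitive layer: compute each node's net input and
    activate only the node with the largest value."""
    net = weights @ x           # net input of every node
    out = np.zeros_like(net)
    out[np.argmax(net)] = 1.0   # winner is set to 1, the rest stay 0
    return out

# Illustrative 3-node competitive layer over a 2-D input
W = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
x = np.array([0.2, 0.8])
print(winner_takes_all(x, W))  # → [0. 0. 1.]
```

In a MAXNET-style implementation the winner would instead emerge through iterated lateral inhibition, but the selected node is the same.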

Learning on VLSI: a general purpose digital neurochip

Duranton, Sirat
1989 International Joint Conference on Neural Networks  
We present a general-purpose digital neurochip for the resolution and the learning stages of neural algorithms. It updates neuron states and synaptic coefficients in parallel on input neurons.  ...  By choosing adapted parameters, most of the learning rules considered so far for neural networks can be programmed.  ...  Gobert and P. Martin (LEP) for fruitful discussions about the architecture of the circuit, N. Mauduit (LEP) for the first design of the circuit, J.L. Zorer and J.R.  ... 
doi:10.1109/ijcnn.1989.118451 fatcat:v7vzftp4nfcnha4d4wmi5sngkm

Net-Trim: Convex Pruning of Deep Neural Networks with Performance Guarantee [article]

Alireza Aghasi, Afshin Abdi, Nam Nguyen, Justin Romberg
2017 arXiv   pre-print
We introduce and analyze a new technique for model reduction for deep neural networks.  ...  We present both parallel and cascade versions of the algorithm.  ...  Parallel Net-Trim The parallel Net-Trim is a straightforward application of the convex program (6) to each layer in the network.  ... 
arXiv:1611.05162v4 fatcat:fle76olwwbedrel2a5ajyfzlra
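The snippet says parallel Net-Trim applies a convex program independently to each layer. As a minimal sketch of that per-layer idea, the code below uses ISTA on an l1-regularized least-squares surrogate of the layer-wise program (sparsify the weights while reproducing the layer's recorded outputs); the regularization weight, sizes, and data are illustrative, not the paper's exact formulation:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def trim_layer(X, Y, lam=0.05, iters=200):
    """Sparsify one layer's weights W so that X @ W still reproduces Y,
    via ISTA on  min_W 0.5*||X W - Y||_F^2 + lam*||W||_1."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)   # 1/L, L = spectral norm squared
    for _ in range(iters):
        grad = X.T @ (X @ W - Y)
        W = soft_threshold(W - step * grad, step * lam)
    return W

# Parallel Net-Trim applies this independently to every layer, using that
# layer's recorded inputs X and outputs Y.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
W_true = np.zeros((10, 3))
W_true[[0, 4, 7], :] = 1.0          # sparse ground-truth weights
Y = X @ W_true
W_hat = trim_layer(X, Y)
print(np.count_nonzero(np.abs(W_hat) > 1e-3))  # far fewer than 30 entries
```

Because each layer's program depends only on that layer's input/output pair, the trims can run in parallel, which is the "parallel" variant the abstract names.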

Situation-Aware Deep Reinforcement Learning Link Prediction Model for Evolving Criminal Networks

Marcus Lim, Azween Abdullah, NZ Jhanjhi, Muhammad Khurram Khan
2019 IEEE Access  
In view of this, the deep reinforcement learning (DRL) technique, which can improve the training of models with a self-generated dataset, is leveraged to construct the model.  ...  The key objective of this research is to develop a link prediction model that incorporates a fusion of metadata (i.e. environment data sources such as arrest warrants, judicial judgements, wiretap records  ...  [k] Representation learning of the metadata feature matrix, such as the number of wiretaps, arrest warrants and judicial judgements, is formulated as weights for the Metadata Fusion Neural Net.  ... 
doi:10.1109/access.2019.2961805 fatcat:5bmkwmicazfbdnbm4xhshezjsm

Fault Diagnosis of Discrete Event Systems Using Hybrid Petri Nets

R. Rangarangi Hokmabad, M. A. Badamchizadeh, S. Khanmohammadi
2012 Journal of clean energy technologies  
A new method for fault diagnosis of discrete event systems modeled by Neural Petri Nets (NPNs) is presented in this paper.  ...  Moreover, the graphical representation of the nets allows the diagnoser agent to compute off-line reduced portions of the net in order to improve the efficiency of the online computation, without a big  ...  In this paper it is shown that the programming problems to be solved by the diagnoser can be formulated on reduced portions of the net properly computed offline. II.  ... 
doi:10.7763/ijcte.2012.v4.468 fatcat:zrzwabi5abdinhlhrfwu2n6iha

A Sociological Study of the Official History of the Perceptrons Controversy

Mikel Olazaran
1996 Social Studies of Science  
Digital computers could be used (and indeed started to be used) to simulate neural nets, but the overall philosophy of the neural-net approach, as formulated mainly by Rosenblatt, favoured a brain-style, anti-von  ...  Neural-net training is usually a long (and computationally expensive) process of cycles of input feeding, output observation and weight adjustment.  ... 
doi:10.1177/030631296026003005 fatcat:4do24wctqbdbhl5caubycb2mzy