Cognitive Architecture In Mobile Music Interactions
2011
Zenodo
This paper explores how a general cognitive architecture can pragmatically facilitate the development and exploration of interactive music interfaces on a mobile platform. To this end we integrated the Soar cognitive architecture into the mobile music meta-environment urMus. We develop and demonstrate four artificial agents which use diverse learning mechanisms within two mobile music interfaces. We also include details of the computational performance of these agents, evincing that the architecture can support real-time interactivity on modern commodity hardware.
doi:10.5281/zenodo.1177993
fatcat:sxckbrkqjfeflao5dvbdqm6sti
Exploring Reinforcement Learning For Mobile Percussive Collaboration
2012
Zenodo
This paper presents a system for mobile percussive collaboration. We show that reinforcement learning can incrementally learn percussive beat patterns played by humans and supports real-time collaborative performance in the absence of one or more performers. This work leverages an existing integration between urMus and Soar and addresses multiple challenges involved in the deployment of machine-learning algorithms for mobile music expression, including tradeoffs between learning speed & quality; interface design for human collaborators; and real-time performance and improvisation.
doi:10.5281/zenodo.1178243
fatcat:xnwn27yl5vcuzidys2s2oyvwdy
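As a rough illustration of the kind of incremental beat-pattern learning this abstract describes (a minimal sketch only, with hypothetical names, not the paper's actual Soar/urMus system), one could keep per-step hit statistics for a looped bar and sample from them when a performer drops out:

```python
import random

class BeatLearner:
    """Minimal sketch: learn, step by step, how often a performer hits on each
    subdivision of a looped bar, then improvise from those statistics.
    Hypothetical illustration; the paper's system uses Soar's RL mechanism."""

    def __init__(self, steps=16):
        self.steps = steps
        self.hits = [0] * steps   # times a hit was observed at each step
        self.seen = [0] * steps   # times each step was observed at all

    def observe(self, step, hit):
        """Incrementally update statistics from a human-played pattern."""
        self.seen[step] += 1
        self.hits[step] += 1 if hit else 0

    def play(self, step):
        """Fill in for an absent performer by sampling the learned pattern."""
        if self.seen[step] == 0:
            return False  # nothing learned for this step yet
        return random.random() < self.hits[step] / self.seen[step]

# Usage: feed two observed bars, then let the learner improvise one bar.
learner = BeatLearner(steps=8)
for bar in ([1, 0, 0, 1, 0, 0, 1, 0], [1, 0, 1, 1, 0, 0, 1, 0]):
    for step, hit in enumerate(bar):
        learner.observe(step, bool(hit))
print([int(learner.play(s)) for s in range(8)])
```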
Efficiently Implementing Episodic Memory
[chapter]
2009
Lecture Notes in Computer Science
Endowing an intelligent agent with an episodic memory affords it a multitude of cognitive capabilities. However, providing efficient storage and retrieval in a task-independent episodic memory presents considerable theoretical and practical challenges. We characterize the computational issues bounding an episodic memory. We explore whether, even with intractable asymptotic growth, it is possible to develop efficient algorithms and data structures for episodic memory systems that are practical for real-world tasks. We present and evaluate formal and empirical results using Soar-EpMem: a task-independent integration of episodic memory with Soar 9, providing a baseline for graph-based, task-independent episodic memory systems.
doi:10.1007/978-3-642-02998-1_29
fatcat:c4wk55hezzdcbfa5mffldhem5m
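To make the storage/retrieval tradeoff concrete, here is a minimal, hypothetical sketch of a cue-based episodic store (not Soar-EpMem's actual graph-based design): episodes are stored as feature sets and retrieval returns the best-matching, most recent episode.

```python
class EpisodicMemory:
    """Minimal sketch of cue-based episodic retrieval.
    Each episode is a dict of feature -> value; a cue is a partial dict.
    Retrieval prefers episodes matching more cue features, ties broken by recency."""

    def __init__(self):
        self.episodes = []  # stored in temporal order

    def store(self, features):
        self.episodes.append(dict(features))

    def retrieve(self, cue):
        best, best_score = None, -1
        for ep in self.episodes:
            score = sum(1 for k, v in cue.items() if ep.get(k) == v)
            if score >= best_score:  # >= keeps the most recent on ties
                best, best_score = ep, score
            # Note: this linear scan is O(#episodes); the paper's point is that
            # practical systems need cleverer indexing than this.
        return best

em = EpisodicMemory()
em.store({"room": "kitchen", "object": "cup"})
em.store({"room": "lab", "object": "robot"})
print(em.retrieve({"room": "lab"}))  # -> {'room': 'lab', 'object': 'robot'}
```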
An Improved Three-Weight Message-Passing Algorithm
[article]
2013
arXiv
pre-print
We describe how the powerful "Divide and Concur" algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization, and introduce an improved message-passing algorithm based on ADMM/DC with three distinct weights for messages: "certain" and "no opinion" weights, as well as the standard weight used in ADMM/DC. The "certain" messages allow our improved algorithm to implement constraint propagation as a special case, while the "no opinion" messages speed convergence for some problems by making the algorithm focus only on active constraints. We describe how our three-weight version of ADMM/DC can give greatly improved performance for non-convex problems such as circle packing and solving large Sudoku puzzles, while retaining the exact performance of ADMM for convex problems. We also describe the advantages of our algorithm compared to other message-passing algorithms based upon belief propagation.
arXiv:1305.1961v1
fatcat:2llsc2iysfatpke3lewhqmkt2e
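For readers unfamiliar with ADMM-style message passing, the following minimal sketch shows plain consensus ADMM on a toy problem (minimizing a sum of squared distances to given points). It is only a hedged illustration of the message/weight structure, not the three-weight algorithm from the paper; in the three-weight variant each message would additionally carry a weight (standard, "certain", or "no opinion") that modulates how the consensus step combines local estimates.

```python
# Toy consensus ADMM: minimize sum_i (x - a_i)^2 over a shared scalar x.
# Each term i keeps a local copy x_i and a dual variable u_i; the consensus
# variable z plays the role of the "concur" / equality factor.
a = [1.0, 4.0, 7.0]   # local data (the minimizer is their mean, 4.0)
rho = 1.0             # the single standard weight; the three-weight variant
                      # would use per-message weights (0, rho, or "infinite")
x = [0.0] * len(a)
u = [0.0] * len(a)
z = 0.0

for it in range(100):
    # local proximal steps: argmin_xi (xi - a_i)^2 + (rho/2)*(xi - z + u_i)^2
    x = [(2 * ai + rho * (z - ui)) / (2 + rho) for ai, ui in zip(a, u)]
    # consensus step: with equal weights this is a plain average
    z = sum(xi + ui for xi, ui in zip(x, u)) / len(a)
    # dual updates
    u = [ui + xi - z for ui, xi in zip(u, x)]

print(round(z, 4))  # -> 4.0
```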
Proximal operators for multi-agent path planning
[article]
2015
arXiv
pre-print
the TWA requires computing proximal operators and specifying, at every iteration, what Derbinsky et al. (2013) call the outgoing weights, ρ⃗, of each proximal operator. ...
More specifically, the authors use the Alternating Direction Method of Multipliers (ADMM) and the variant introduced in Derbinsky et al. (2013) called the Three Weight Algorithm (TWA). ...
arXiv:1504.01783v1
fatcat:teyhhwgv4nfedjwipv73hyclc4
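Since both snippets revolve around proximal operators, a brief hedged reminder may help: the proximal operator of a function f with weight ρ maps an incoming estimate n to argmin_x f(x) + (ρ/2)·||x − n||². The sketch below is a hypothetical illustration (not the paper's planner) showing two common cases, a quadratic penalty and projection onto a ball.

```python
import math

def prox_quadratic(n, a, rho):
    """prox of f(x) = 0.5*(x - a)^2 with weight rho (scalar case):
    argmin_x 0.5*(x - a)^2 + (rho/2)*(x - n)^2, in closed form."""
    return (a + rho * n) / (1.0 + rho)

def prox_ball_indicator(n, center, radius):
    """prox of the indicator of a Euclidean ball is projection onto the ball,
    independent of rho: points outside are pulled back to the boundary."""
    d = math.dist(n, center)
    if d <= radius:
        return list(n)
    scale = radius / d
    return [c + scale * (ni - c) for ni, c in zip(n, center)]

print(prox_quadratic(0.0, a=2.0, rho=1.0))                 # -> 1.0
print(prox_ball_indicator((3.0, 4.0), (0.0, 0.0), 1.0))    # -> [0.6, 0.8]
```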
A message-passing algorithm for multi-agent trajectory planning
[article]
2013
arXiv
pre-print
We describe a novel approach for computing collision-free global trajectories for p agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM). Compared with existing methods, our approach is naturally parallelizable and allows for incorporating different cost functionals with only minor adjustments. We apply our method to classical challenging instances and observe that its computational requirements scale well with p for several cost functionals. We also show that a specialization of our algorithm can be used for local motion planning by solving the problem of joint optimization in velocity space.
arXiv:1311.4527v1
fatcat:uwdfmh2fvfcizcietbdzwtkbei
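One of the key proximal operators in this kind of trajectory planner enforces a minimum separation between a pair of agents at a given time step. A minimal, hedged sketch (my illustration, not the authors' implementation) is the projection of two agent positions onto the set where they are at least r apart: keep their midpoint and push both points apart along their difference direction.

```python
import math

def project_min_separation(p1, p2, r):
    """Project positions p1, p2 (2-D tuples) onto {(q1, q2): ||q1 - q2|| >= r}.
    If the agents are already far enough apart, nothing changes; otherwise each
    is moved symmetrically away from the shared midpoint to distance r/2."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    d = math.hypot(dx, dy)
    if d >= r:
        return p1, p2
    mx, my = (p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2
    if d == 0:                  # degenerate case: pick an arbitrary direction
        ux, uy = 1.0, 0.0
    else:
        ux, uy = dx / d, dy / d
    half = r / 2
    return (mx + half * ux, my + half * uy), (mx - half * ux, my - half * uy)

print(project_min_separation((0.0, 0.0), (0.5, 0.0), r=2.0))
# -> ((-0.75, 0.0), (1.25, 0.0)): the pair ends up exactly 2.0 apart around its midpoint
```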
Testing fine-grained parallelism for the ADMM on a factor-graph
[article]
2016
arXiv
pre-print
There is an ongoing effort to develop tools that apply distributed computational resources to tackle large problems or reduce the time to solve them. In this context, the Alternating Direction Method of Multipliers (ADMM) arises as a method that can exploit distributed resources like the dual ascent method and has the robustness and improved convergence of the augmented Lagrangian method. Traditional approaches to accelerate the ADMM using multiple cores are problem-specific and often require multi-core programming. By contrast, we propose a problem-independent scheme of accelerating the ADMM that does not require the user to write any parallel code. We show that this scheme, an interpretation of the ADMM as a message-passing algorithm on a factor-graph, can automatically exploit fine-grained parallelism both in GPUs and shared-memory multi-core computers and achieves significant speedup in such diverse application domains as combinatorial optimization, machine learning, and optimal control. Specifically, we obtain 10-18x speedup using a GPU, and 5-9x using multiple CPU cores, over a serial, optimized C-version of the ADMM, which is similar to the typical speedup reported for existing GPU-accelerated libraries, including cuFFT (19x), cuBLAS (17x), and cuRAND (8x).
arXiv:1603.02526v1
fatcat:v37hen7ijfhdjl2wus6hauglwi
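The factor-graph view makes the parallelism easy to picture: in each iteration every factor's proximal update depends only on its own incoming messages, so all factors can be updated at once. The toy sketch below is a hedged illustration using a Python process pool (not the authors' GPU scheme), mapping independent proximal updates over workers.

```python
from concurrent.futures import ProcessPoolExecutor

def prox_update(args):
    """One factor's local update: argmin_x (x - a)^2 + (rho/2)*(x - n)^2.
    Each factor depends only on its own (a, n), so all can run in parallel."""
    a, n, rho = args
    return (2 * a + rho * n) / (2 + rho)

if __name__ == "__main__":
    rho = 1.0
    # (local data, incoming message, weight) for eight independent factors
    factors = [(float(a), 0.0, rho) for a in range(8)]
    with ProcessPoolExecutor() as pool:     # one iteration, all factors at once
        updated = list(pool.map(prox_update, factors))
    print(updated)
```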
The Boundary Forest Algorithm for Online Supervised and Unsupervised Learning
2015
arXiv
pre-print
We describe a new instance-based learning algorithm, called the Boundary Forest (BF) algorithm, that can be used for supervised and unsupervised learning. The algorithm builds a forest of trees whose nodes store previously seen examples. It can be shown data points one at a time and updates itself incrementally, hence it is naturally online. Few instance-based algorithms have this property while being simultaneously fast, which the BF is. This is crucial for applications where one needs to respond to input data in real time. The number of children of each node is not set beforehand but obtained from the training procedure, which makes the algorithm very flexible with regard to what data manifolds it can learn. We test its generalization performance and speed on a range of benchmark datasets and detail in which settings it outperforms the state of the art. Empirically we find that training time scales as O(DN log(N)) and testing as O(D log(N)), where D is the dimensionality and N the amount of data ...
arXiv:1505.02867v1
fatcat:z2rq3vvabrbwrkyosmelehugga
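To give a feel for the instance-based, online behavior the abstract describes, here is a heavily simplified sketch of a single boundary tree (a hedged reading of the idea, not the published BF algorithm): queries greedily descend toward the nearest stored example, and a training point is only added when the tree would have mislabeled it.

```python
import math

class BoundaryTree:
    """Simplified boundary-tree sketch: nodes store (point, label)."""

    def __init__(self, point, label, max_children=None):
        self.point, self.label = point, label
        self.children = []
        self.max_children = max_children  # None = unbounded

    def query(self, x):
        """Greedy descent: move to the closest child until no child is closer."""
        node = self
        while True:
            candidates = list(node.children)
            # the current node competes with its children unless it is "full"
            if node.max_children is None or len(node.children) < node.max_children:
                candidates.append(node)
            best = min(candidates, key=lambda n: math.dist(n.point, x))
            if best is node:
                return node
            node = best

    def train(self, x, label):
        """Add (x, label) as a new leaf only if the nearest node would mispredict it."""
        node = self.query(x)
        if node.label != label:
            node.children.append(BoundaryTree(x, label, self.max_children))

# Usage: two classes on a line; the tree only stores points near the boundary.
tree = BoundaryTree((0.0,), "left")
for xv, lab in [(1.0, "left"), (4.0, "right"), (5.0, "right"), (2.9, "left")]:
    tree.train((xv,), lab)
print(tree.query((3.5,)).label)  # -> "right"
```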
Methods for Integrating Knowledge with the Three-Weight Optimization Algorithm for Hybrid Cognitive Processing
[article]
2013
arXiv
pre-print
Nate Derbinsky and José Bento and Jonathan S. ... the Three-Weight Optimization Algorithm for Hybrid Cognitive Processing ...
arXiv:1311.4064v1
fatcat:qbw7jl54kvgpnenl5rz7bvkogq
Effective and efficient forgetting of learned knowledge in Soar's working and procedural memories
2013
Cognitive Systems Research
Empirical evaluation: We extended an existing system where Soar controls a simulated mobile robot (Laird, Derbinsky, & Voigt, 2011). ...
Empirical evaluation: We extended an existing system (Laird, Derbinsky, & Tinkerhess, 2011) where Soar plays Liar's Dice, a multiplayer game of chance. ...
doi:10.1016/j.cogsys.2012.12.003
fatcat:nmtyvdjoqbepphqdf6pz665a3a
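As background for this kind of forgetting policy, cognitive architectures such as ACT-R (and Soar in related work) often use base-level activation decay: an element's activation is ln(Σ_j t_j^(-d)) over the times t_j since each of its uses, and elements are forgotten once activation falls below a threshold. The sketch below is a minimal, hedged illustration of that rule, not necessarily the exact mechanism evaluated in this paper.

```python
import math

def base_level_activation(use_times, now, decay=0.5):
    """ln of summed power-law decayed traces: ln(sum_j (now - t_j)^(-decay))."""
    return math.log(sum((now - t) ** (-decay) for t in use_times if t < now))

def should_forget(use_times, now, threshold=-2.0, decay=0.5):
    """Forget an element whose activation has decayed below the threshold."""
    return base_level_activation(use_times, now, decay) < threshold

# An element used at t=1 and t=3, checked shortly after and much later:
print(round(base_level_activation([1, 3], now=10), 3))        # -> -0.341
print(should_forget([1, 3], now=10), should_forget([1, 3], now=300))  # -> False True
```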
Scalable methods to integrate task knowledge with the Three-Weight Algorithm for hybrid cognitive processing via optimization
2014
Biologically Inspired Cognitive Architectures
... & Laird, 2009; Derbinsky et al., 2010; Derbinsky & Laird, 2013), our reasoner is able to easily maintain a real-time response rate for very large Sudoku puzzles. ...
... difference) and corresponding iteration time: even as the baseline TWA crosses 50 ms/iteration, a commonly accepted threshold for reactivity in the cognitive-architecture community (Rosenbloom, 2012; Derbinsky ...
doi:10.1016/j.bica.2014.03.007
fatcat:voctscpmvrcankeyv56k2ql56m
Testing Fine-Grained Parallelism for the ADMM on a Factor-Graph
2016
2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
There is an ongoing effort to develop tools that apply distributed computational resources to tackle large problems or reduce the time to solve them. In this context, the Alternating Direction Method of Multipliers (ADMM) arises as a method that can exploit distributed resources like the dual ascent method and has the robustness and improved convergence of the augmented Lagrangian method. Traditional approaches to accelerate the ADMM using multiple cores are problem-specific and often require multi-core programming. By contrast, we propose a problem-independent scheme of accelerating the ADMM that does not require the user to write any parallel code. We show that this scheme, an interpretation of the ADMM as a message-passing algorithm on a factor-graph, can automatically exploit fine-grained parallelism both in GPUs and shared-memory multi-core computers and achieves significant speedup in such diverse application domains as combinatorial optimization, machine learning, and optimal control. Specifically, we obtain 10-18x speedup using a GPU, and 5-9x using multiple CPU cores, over a serial, optimized C-version of the ADMM, which is similar to the typical speedup reported for existing GPU-accelerated libraries, including cuFFT (19x), cuBLAS (17x), and cuRAND (8x).
doi:10.1109/ipdpsw.2016.162
dblp:conf/ipps/HaoORDB16
fatcat:xduyssmxw5dpnbhuihiayjke2m
Using domain knowledge in coevolution and reinforcement learning to simulate a logistics enterprise
2022
Proceedings of the Genetic and Evolutionary Computation Conference Companion
We demonstrate a framework (CoEv-Soar-RL) for a logistics enterprise to improve readiness and sustainment and to reduce operational risk. CoEv-Soar-RL uses reinforcement learning and coevolutionary algorithms to improve the functions of a logistics enterprise value chain. We address: (1) holistic prediction, optimization, and simulation for logistics enterprise readiness; and (2) the uncertainty and lack of data, which require large-scale, systematic what-if scenarios to simulate potential new and unknown situations. In this paper, we perform four experiments to investigate how to integrate prediction and simulation to modify a logistics enterprise's demand models and generate synthetic data. We use general domain knowledge to design simple operators for the coevolutionary search algorithm that provide realistic solutions for the simulation of the logistics enterprise. In addition, to evaluate generated solutions we learn a surrogate model of the logistics enterprise environment from historical data with Soar reinforcement learning. From our experiments we discover, and verify with subject matter experts, novel realistic solutions for the logistics enterprise. These novel solutions perform better than the historical data and were found only when we included knowledge derived from the historical data in the coevolutionary search.
doi:10.1145/3520304.3528990
fatcat:lvzzxa7vyfc4fbmtm3yudim5ka
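As a generic, hedged illustration of the coevolution-plus-surrogate idea described above (hypothetical names and fitness functions, not the CoEv-Soar-RL framework itself), one population of candidate plans can be evolved against a population of stress scenarios, with each plan scored by a cheap surrogate instead of the full simulation:

```python
import random

def surrogate_score(plan, scenario):
    """Hypothetical stand-in for a learned surrogate of the enterprise simulator:
    a plan does well when its stock levels cover the scenario's demands."""
    return -sum(max(d - s, 0) for s, d in zip(plan, scenario))

def mutate(vec, lo=0, hi=10):
    i = random.randrange(len(vec))
    out = list(vec)
    out[i] = min(hi, max(lo, out[i] + random.choice((-1, 1))))
    return out

random.seed(0)
plans = [[random.randint(0, 10) for _ in range(4)] for _ in range(6)]
scenarios = [[random.randint(0, 10) for _ in range(4)] for _ in range(6)]

for generation in range(50):
    # plans evolve to cover scenarios; scenarios evolve to expose weak plans
    plan_fit = {i: min(surrogate_score(p, s) for s in scenarios) for i, p in enumerate(plans)}
    scen_fit = {j: -max(surrogate_score(p, s) for p in plans) for j, s in enumerate(scenarios)}
    best_plan = plans[max(plan_fit, key=plan_fit.get)]
    best_scen = scenarios[max(scen_fit, key=scen_fit.get)]
    plans = [best_plan] + [mutate(best_plan) for _ in range(len(plans) - 1)]
    scenarios = [best_scen] + [mutate(best_scen) for _ in range(len(scenarios) - 1)]

print("best plan:", best_plan)
```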
Reinforcement Learning for Modeling Large-Scale Cognitive Reasoning
2017
Proceedings of the 9th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management
Accurate, relevant, and timely combat identification (CID) enables warfighters to locate and identify critical airborne targets with high precision. The current CID process includes a wide combination of platforms, sensors, networks, and decision makers. Diversified doctrines, rules of engagement, knowledge databases, and expert systems used in the current process make decision making very complex. Furthermore, the CID decision process is still very manual, and decision makers are constantly overwhelmed with the cognitive reasoning required. Soar is a cognitive architecture that can be used to model complex reasoning, cognitive functions, and decision making for warfighting processes like the ones in a kill chain. In this paper, we present a feasibility study of Soar, and in particular the reinforcement learning (RL) module, for optimal decision making using existing expert systems and smart data. The system has the potential to scale up and automate CID decision-making to reduce the cognitive load of human operators.
doi:10.5220/0006508702330238
dblp:conf/ic3k/ZhaoMD17
fatcat:odfo3mr6lbbc3dcpiigfykpi34
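For readers unfamiliar with the RL module mentioned here, the core idea can be illustrated with a generic tabular update (a hedged sketch of standard Q-learning on a toy identification task, with hypothetical states and rewards, not Soar's actual RL-rule mechanism):

```python
import random

# Toy tabular Q-learning: states are coarse track descriptions, actions are
# identification decisions; rewards come from a hypothetical ground-truth check.
alpha, gamma, epsilon = 0.1, 0.9, 0.2
actions = ["friend", "hostile", "unknown"]
Q = {}  # (state, action) -> value

def choose(state):
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning backup."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One hypothetical training step on a single observed track:
s = s_next = "fast_low_no_iff"
a = choose(s)
update(s, a, reward=1.0 if a == "hostile" else -1.0, next_state=s_next)
print(Q)
```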
Cornhole: A Widely-Accessible AI Robotics Task
2017
Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference
ASK: As a first client of the vision dataset, two seniors in computer science developed the Automatic Score Keeper (ASK) for cornhole (Eshimkanov and Derbinsky 2017). ...
doi:10.1609/aaai.v31i1.10546
fatcat:3isrxsljljew3jroa4s4cc5fpy
Showing results 1 — 15 out of 31 results