A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is `application/pdf`.


### Using probabilistic programs as proposals [article]

2018 *arXiv* pre-print

By convergence of standard self-normalized importance sampling, $\hat{\mu}_N \xrightarrow{\text{a.s.}} \mathbb{E}_{(z,\tau_{1:K})\sim\pi(\cdot)}[f(z)] = \mathbb{E}_{z\sim\pi(\cdot)}[f(z)] = \mu$. ... If $\pi(z)$ is a target distribution, $P$ is a proposal program such that $\pi(z) > 0 \Rightarrow p(z; x) > 0$ for all $x$, and $f$ is a function with $\mathbb{E}_{z\sim\pi}[f(z)] = \mu < \infty$, then $\hat{\mu}_N \xrightarrow{\text{a.s.}} \mu$, where $\hat{\mu}_N$ is the importance sampling ...

arXiv:1801.03612v2
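The convergence property quoted in this snippet can be seen in a minimal self-normalized importance sampling sketch. The target, proposal, and test function below are illustrative assumptions (a standard normal target, a wider normal proposal), not taken from the paper; log densities are written up to additive constants, which cancel under self-normalization.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(z):
    # Unnormalized log pi(z): standard normal target (assumed for illustration)
    return -0.5 * z**2

def sample_proposal(n):
    # Proposal q(z) = N(0, 2^2), which covers the target's support
    return rng.normal(0.0, 2.0, size=n)

def log_proposal(z):
    # Unnormalized log q(z); shared constants cancel after normalization
    return -0.5 * (z / 2.0)**2

def snis_estimate(f, n):
    """Self-normalized importance sampling estimate of E_pi[f(z)]."""
    z = sample_proposal(n)
    log_w = log_target(z) - log_proposal(z)
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()                      # self-normalize the weights
    return np.sum(w * f(z))

# E_pi[z^2] = 1 for a standard normal target; the estimate converges a.s.
est = snis_estimate(lambda z: z**2, 200_000)
```

As the snippet states, the estimate converges almost surely as long as the proposal's support covers the target's and the expectation is finite.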
### Quantifying the probable approximation error of probabilistic inference programs [article]

2016 *arXiv* pre-print

... arange(num_particles)); likelihoods = map((model_trace) -> likelihood(model_trace, x), model_traces); k = categorical(normalize(likelihoods)); y = (model_traces, k); z = model_traces[k]; return (y, z) }; ... f = (x_coord) -> { a * x_coord + b }; observation_model: map((i) -> [|normal(f(x_coordinates[$i]), 0.3)|], arange(11)) ... (d) Model program (model expressions bold). Figure 1: Estimating subjective ...

arXiv:1606.00068v1
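The core step in this snippet, drawing one model trace with probability proportional to its likelihood via `categorical(normalize(likelihoods))`, can be sketched in Python. The trace and likelihood values below are illustrative stand-ins, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_particle(model_traces, likelihoods):
    """Pick one trace in proportion to its likelihood, mirroring the
    snippet's k = categorical(normalize(likelihoods)) step."""
    probs = np.asarray(likelihoods, dtype=float)
    probs /= probs.sum()                          # normalize(likelihoods)
    k = rng.choice(len(model_traces), p=probs)    # categorical draw
    return (model_traces, k), model_traces[k]     # (y, z) as in the snippet

# Illustrative usage with dummy traces and likelihood weights
y, z = select_particle(["trace_a", "trace_b", "trace_c"], [0.1, 0.7, 0.2])
```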
### Encapsulating models and approximate inference programs in probabilistic modules [article]

2017 *arXiv* pre-print

This paper introduces the probabilistic module interface, which allows encapsulation of complex probabilistic models with latent variables alongside custom stochastic approximate inference machinery, and provides a platform-agnostic abstraction barrier separating the model internals from the host probabilistic inference system. The interface can be seen as a stochastic generalization of a standard simulation and density interface for probabilistic primitives. We show that sound approximate inference algorithms can be constructed for networks of probabilistic modules, and we demonstrate that the interface can be implemented using learned stochastic inference networks and MCMC and SMC approximate inference programs.

arXiv:1612.04759v2
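The "standard simulation and density interface for probabilistic primitives" that the abstract generalizes can be illustrated with a minimal sketch. The paper's actual interface is not specified in this listing, so the class and method names below are assumptions: a primitive exposes a sampler plus an exact log density.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class NormalPrimitive:
    """Hypothetical primitive exposing the simulate/density pair that the
    probabilistic module interface stochastically generalizes."""
    mu: float = 0.0
    sigma: float = 1.0

    def simulate(self):
        # Draw a value from the primitive's distribution
        return random.gauss(self.mu, self.sigma)

    def logpdf(self, x):
        # Exact log density of x under the primitive
        z = (x - self.mu) / self.sigma
        return -0.5 * z * z - math.log(self.sigma * math.sqrt(2 * math.pi))

prim = NormalPrimitive()
x = prim.simulate()
lp = prim.logpdf(0.0)
```

A probabilistic module, per the abstract, would pair such simulation with *approximate* inference machinery rather than an exact density.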
### Probabilistic programs for inferring the goals of autonomous agents [article]

2017 *arXiv* pre-print

Tree and car 3D models in figures are from http://www.f-lohmueller.de/ [Lohmüller, 2016]. ... $f_j(z_{\pi_G(j)}))$ (2) The Metropolis-Hastings acceptance ratio is:

$$\alpha = \frac{\prod_{j\in H\cup A} p_{t_j}(z'_j; f_j(z'_{\pi_G(j)}))}{\prod_{j\in H\cup A} p_{t_j}(z_j; f_j(z_{\pi_G(j)}))} \cdot \frac{m(z_i; z')\,\prod_{j\in H} p_{t_j}(z_j; f_j(z_{\pi_G(j)}))}{m(z'_i; z)\,\prod_{j\in H} p_{t_j}(z'_j; f_j(z'_{\pi_G(j)}))} = \frac{m(z_i; z')\,\prod_{j\in A} p_{t_j}(z'_j; f_j(z'_{\pi_G(j)}))}{m(z'_i; z)\,\prod_{j\in A} p_{t_j}(z_j; f_j(z_{\pi_G(j)}))} \quad (3)$$

We illustrate Cascading Resimulation MH in Figure 2, on the task of inferring ...

arXiv:1704.04977v2
### AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms [article]

2017 *arXiv* pre-print

Approximate probabilistic inference algorithms are central to many fields. Examples include sequential Monte Carlo inference in robotics, variational inference in machine learning, and Markov chain Monte Carlo inference in statistics. A key problem faced by practitioners is measuring the accuracy of an approximate inference algorithm on a specific data set. This paper introduces the auxiliary inference divergence estimator (AIDE), an algorithm for measuring the accuracy of approximate inference algorithms. AIDE is based on the observation that inference algorithms can be treated as probabilistic models and the random variables used within the inference algorithm can be viewed as auxiliary variables. This view leads to a new estimator for the symmetric KL divergence between the approximating distributions of two inference algorithms. The paper illustrates application of AIDE to algorithms for inference in regression, hidden Markov, and Dirichlet process mixture models. The experiments show that AIDE captures the qualitative behavior of a broad class of inference algorithms and can detect failure modes of inference algorithms that are missed by standard heuristics.

arXiv:1705.07224v2
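The quantity AIDE estimates, the symmetric KL divergence between two distributions, can be computed naively by Monte Carlo when both densities are tractable. The sketch below is *not* AIDE itself (AIDE handles inference algorithms whose densities are intractable, via auxiliary variables); it only illustrates the target quantity in the easy Gaussian case, with parameters chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_normal(x, mu, sigma):
    # Exact log density of N(mu, sigma^2)
    return -0.5 * ((x - mu) / sigma)**2 - np.log(sigma * np.sqrt(2 * np.pi))

def symmetric_kl(mu1, s1, mu2, s2, n=100_000):
    """Monte Carlo estimate of KL(p||q) + KL(q||p) for two Gaussians."""
    x1 = rng.normal(mu1, s1, n)   # samples from p
    x2 = rng.normal(mu2, s2, n)   # samples from q
    kl_pq = np.mean(log_normal(x1, mu1, s1) - log_normal(x1, mu2, s2))
    kl_qp = np.mean(log_normal(x2, mu2, s2) - log_normal(x2, mu1, s1))
    return kl_pq + kl_qp

# For N(0,1) vs N(1,1), each directed KL is (mu1-mu2)^2/2 = 0.5,
# so the symmetric KL is 1.0.
est = symmetric_kl(0.0, 1.0, 1.0, 1.0)
```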
### Bringing clothing into desired configurations with limited perception

2011 *2011 IEEE International Conference on Robotics and Automation*

$$P(c_{a,g_t} \mid a, g_t) = \begin{cases} \dfrac{1}{f + \frac{1}{d}} & \text{if } c_{a,g_t} < f \\[4pt] \dfrac{1}{f + \frac{1}{d}}\, e^{-d\, c_{a,g_t}} & \text{if } c_{a,g_t} \ge f. \end{cases}$$

This distribution, shown in V. ...

doi:10.1109/icra.2011.5980327
dblp:conf/icra/Cusumano-TownerSMOA11
### Measuring the non-asymptotic convergence of sequential Monte Carlo samplers using probabilistic programming [article]

2017 *arXiv* pre-print

A key limitation of sampling algorithms for approximate inference is that it is difficult to quantify their approximation error. Widely used sampling schemes, such as sequential importance sampling with resampling and Metropolis-Hastings, produce output samples drawn from a distribution that may be far from the target posterior distribution. This paper shows how to upper-bound the symmetric KL divergence between the output distribution of a broad class of sequential Monte Carlo (SMC) samplers and their target posterior distributions, subject to assumptions about the accuracy of a separate gold-standard sampler. The proposed method applies to samplers that combine multiple particles, multinomial resampling, and rejuvenation kernels. The experiments show the technique being used to estimate bounds on the divergence of SMC samplers for posterior inference in a Bayesian linear regression model and a Dirichlet process mixture model.

arXiv:1612.02161v2
### Gen: a general-purpose probabilistic programming system with programmable inference

2019 *Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation - PLDI 2019*

The output function $f : X \times \{s : p(s; x) > 0\} \to Y$ maps an argument $x$ and an assignment $s$ of all random choices (whose probability under $p$ is non-zero) to a return value $f(x, s) \in Y$. ... We define: **Generative function.** A generative function $G$ is a tuple $G = (X, Y, f, p, q)$. ...

doi:10.1145/3314221.3314642
dblp:conf/pldi/Cusumano-Towner19
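The tuple view in this snippet, a sampler $p$ over random-choice assignments paired with an output function $f$, can be sketched minimally. This is a loose illustration of the definition quoted above, not Gen's actual API; the coin example and field names are assumptions.

```python
import random
from dataclasses import dataclass
from typing import Any, Callable, Dict

@dataclass
class GenerativeFunction:
    """Sketch of G = (X, Y, f, p, q): p draws an assignment s of random
    choices given arguments x; f maps (x, s) to a return value in Y."""
    sample_choices: Callable[[Any], Dict[str, Any]]   # s ~ p(.; x)
    output: Callable[[Any, Dict[str, Any]], Any]      # f(x, s)

    def simulate(self, x):
        # Run the function forward: sample choices, then compute the output
        s = self.sample_choices(x)
        return s, self.output(x, s)

# Illustrative coin-flip generative function: x is the heads probability
g = GenerativeFunction(
    sample_choices=lambda x: {"coin": random.random() < x},
    output=lambda x, s: 1 if s["coin"] else 0,
)
s, y = g.simulate(0.5)
```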
### Trace types and denotational semantics for sound programmable inference in probabilistic languages

2019 *Proceedings of the ACM on Programming Languages (PACMPL)*

Another approach would be to develop dynamic analyses of the KL divergence between an inference program and its target distribution, based on recently introduced Monte Carlo estimators [Cusumano-Towner ...] ... , and the addresses modified by MCMC kernels, could also be used to improve the performance of inference, by compiling trace data structures specialized for a particular inference algorithm's needs [Cusumano-Towner ...]

doi:10.1145/3371087
### Bayesian synthesis of probabilistic programs for automatic data modeling

2019 *Proceedings of the ACM on Programming Languages (PACMPL)*

$f(x) : x \in X$. ... Given time points $\{x_1, \ldots, x_n\}$, the vector of random variables $[f(x_1), \ldots, f(x_n)]$ is jointly Gaussian with mean vector $[m(x_1), \ldots, m(x_n)]$ and covariance matrix $[k(x_i, x_j)]$ ...
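The joint-Gaussian property quoted above can be demonstrated by sampling a Gaussian process at a grid of time points. The zero mean function and squared-exponential kernel below are common choices assumed for illustration, not necessarily the ones used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def se_kernel(xi, xj, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance k(x_i, x_j); an illustrative choice
    return variance * np.exp(-0.5 * ((xi - xj) / lengthscale)**2)

x = np.linspace(0.0, 5.0, 20)               # time points x_1 .. x_n
m = np.zeros_like(x)                         # mean vector [m(x_i)] = 0
K = se_kernel(x[:, None], x[None, :])        # covariance matrix [k(x_i, x_j)]

# One draw of [f(x_1), ..., f(x_n)] from the joint Gaussian N(m, K);
# a tiny jitter on the diagonal keeps the Cholesky factorization stable
f = rng.multivariate_normal(m, K + 1e-9 * np.eye(len(x)))
```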
### Towards Denotational Semantics of AD for Higher-Order, Recursive, Probabilistic Languages [article]

2021 *arXiv* pre-print

Marco F. Cusumano-Towner, Feras A. Saad, Alexander K. Lew, and Vikash K. Mansinghka. Gen: a general-purpose probabilistic programming system with programmable inference. ... Marco Cusumano-Towner, Alexander K. Lew, and Vikash K. Mansinghka. Automating involutive MCMC using probabilistic and differentiable programming. arXiv preprint arXiv:2007.09871, 2020. ...
### Sampling Prediction-Matching Examples in Neural Networks: A Probabilistic Programming Approach [article]

2020 *arXiv* pre-print

Acknowledgements: We thank Alex Lew, Marco Cusumano-Towner, Christian Muise, and Hendrik Strobelt for fruitful discussions. ... In our work, we use the state-of-the-art language Gen (Cusumano-Towner et al., 2019). ... Our technique empirically evaluates prediction level sets, or sets of a given confidence, through applying probabilistic programming (Cusumano-Towner et al., 2019). ...

arXiv:2001.03076v1
### Learning Proposals for Probabilistic Programs with Inference Combinators [article]

2021 *arXiv* pre-print

Cusumano-Towner, Marco F., Saad, Feras A., Lew, Alexander K., Mansinghka, Vikash K. "Gen: A General-Purpose Probabilistic Programming System with Programmable Inference." ... et al., 2018), Gen (Cusumano-Towner et al., 2019), Pyro, and Edward2 (Tran et al., 2016). ... In the case where the set of random variables in the proposal and target program is the same, i.e. $\mathrm{dom}(\tau_2) = \mathrm{dom}(\tau_1)$ and $q_1 = f$ is a primitive program, we can write $w_1 = \gamma_f(\tau_1; c_0, \phi)/p_f(\ldots$ ...
### Bayes-TrEx: a Bayesian Sampling Approach to Model Transparency by Example

2021 *Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference*

Acknowledgements: The authors would like to thank Alex Lew, Marco Cusumano-Towner, and Tan Zhi-Xuan for their insights on how to formulate this inference problem and use probabilistic programming effectively ... State-of-the-art approaches require a gold-standard inference algorithm (Cusumano-Towner and Mansinghka 2017) or specific posterior distribution properties, such as log-concavity (Gorham and Mackey 2015) ... We have $u_1 \mid x \sim \mathcal{N}(|f(x)_i - f(x)_j|, \sigma_1^2)$ (7), $u_2 \mid x \sim \mathcal{N}(\min(f(x)_i, f(x)_j) - \max_{k \neq i,j} f(x)_k, \sigma_2^2)$ (8), $u_1^* = 0$, $u_2^* = 0.5$ (9). $\sigma_1$ and $\sigma_2$ are hyperparameters. ...

doi:10.1609/aaai.v35i13.17361
### Causal Inference using Gaussian Processes with Structured Latent Confounders [article]

2020 *arXiv* pre-print

Acknowledgments: Thanks to Marco Cusumano-Towner, Feras Saad, Alex Lew, Cameron Freer, Rachel Paiste, Amanda Gentzel, Andy Zane, Jameson Quinn, and the anonymous reviewers for their helpful feedback and ... We implement the GP-SLC model using Gen (Cusumano-Towner et al., 2019), a probabilistic programming language with programmable inference. ... We implement both models in Gen (Cusumano-Towner et al., 2019). For both models, we use $\alpha_{\sigma_y} = 4.0$, $\beta_{\sigma_y} = 4.0$, $\mu_{(\cdot)} = 0$, $\sigma^2_\alpha = 3.0$, $\sigma^2_\beta = 1.0$, and $\sigma^2_\eta = 10.0$ as priors. ...

arXiv:2007.07127v1
*Showing results 1 — 15 out of 30 results*