364 Hits in 1.2 sec

Implicit Neural Video Compression [article]

Yunfan Zhang, Ties van Rozendaal, Johann Brehmer, Markus Nagel, Taco Cohen
2021 arXiv   pre-print
Brehmer Markus Nagel Taco S.  ...  In Proceedings of the European Conference on Computer Vision (ECCV), pages 416–431, 2018. 2 [70] Wenqi Xian, Jia-Bin Huang, Johannes Kopf, and Changil Kim.  ... 
arXiv:2112.11312v1 fatcat:ogd256n4qzfc7epkankzy4ktuy

Weakly supervised causal representation learning [article]

Johann Brehmer, Pim de Haan, Phillip Lippe, Taco Cohen
2022 arXiv   pre-print
In particular, relaxing the exact manifold in the weakly supervised data space to a "fuzzy" one renders our argument for the identifiability of noise encodings and intervention targets invalid; see Brehmer  ... 
arXiv:2203.16437v1 fatcat:3dcaetbjkrbzzgv5n2e5waulye

Simulation-based inference methods for particle physics [article]

Johann Brehmer, Kyle Cranmer
2020 arXiv   pre-print
Our predictions for particle physics processes are realized in a chain of complex simulators. They allow us to generate high-fidelity simulated data, but they are not well-suited for inference on the theory parameters with observed data. We explain why the likelihood function of high-dimensional LHC data cannot be explicitly evaluated, why this matters for data analysis, and reframe what the field has traditionally done to circumvent this problem. We then review new simulation-based inference methods that let us directly analyze high-dimensional data by combining machine learning techniques and information from the simulator. Initial studies indicate that these techniques have the potential to substantially improve the precision of LHC measurements. Finally, we discuss probabilistic programming, an emerging paradigm that lets us extend inference to the latent process of the simulator.
arXiv:2010.06439v2 fatcat:vc2wfdaq5razvjcz42xt7pim5m
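Below is a minimal, self-contained sketch of the classifier-based likelihood-ratio trick that underlies many of these simulation-based inference methods: a classifier is trained to separate samples simulated at two parameter points, and its output is converted into a likelihood-ratio estimate. The Gaussian "simulators", parameter points, and feature choices are illustrative assumptions, not taken from the paper.

```python
# Toy sketch of the classifier-based likelihood-ratio trick: train a classifier
# to separate events simulated at two theory parameter points and turn its
# output into a likelihood-ratio estimate. Distributions and features are
# illustrative, not from the paper.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Simulator" output at two parameter points theta0 and theta1
x0 = rng.normal(0.0, 1.0, size=10_000)   # samples ~ p(x | theta0)
x1 = rng.normal(0.5, 1.2, size=10_000)   # samples ~ p(x | theta1)

# Train a classifier; with (x, x^2) features the log-odds can represent the
# exact Gaussian log-ratio, which is quadratic in x.
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])
clf = LogisticRegression().fit(np.column_stack([x, x**2]), y)

# Ratio trick: for balanced classes, r(x) = p(x|theta1)/p(x|theta0) ≈ s/(1-s)
x_test = np.linspace(-3.0, 3.0, 7)
s = clf.predict_proba(np.column_stack([x_test, x_test**2]))[:, 1]
r_hat = s / (1.0 - s)

# Exact ratio for this toy example, for comparison
r_true = norm.pdf(x_test, 0.5, 1.2) / norm.pdf(x_test, 0.0, 1.0)
print(np.column_stack([x_test, r_hat, r_true]))
```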

Likelihood-free inference with an improved cross-entropy estimator [article]

Markus Stoye, Johann Brehmer, Gilles Louppe, Juan Pavez, Kyle Cranmer
2018 arXiv   pre-print
We extend recent work (Brehmer et al., 2018) that uses neural networks as surrogate models for likelihood-free inference.  ... 
arXiv:1808.00973v1 fatcat:gsprxh677fhe3e77p7ddadeswe

Back to the Formula – LHC Edition [article]

Anja Butter, Tilman Plehn, Nathalie Soybelman, Johann Brehmer
2021 arXiv   pre-print
While neural networks offer an attractive way to numerically encode functions, actual formulas remain the language of theoretical particle physics. We show how symbolic regression trained on matrix-element information provides, for instance, optimal LHC observables in an easily interpretable form. We introduce the method using the effect of a dimension-6 coefficient on associated ZH production. We then validate it for the known case of CP-violation in weak-boson-fusion Higgs production, including detector effects.
arXiv:2109.10414v2 fatcat:zawknh4dxfg6xnipl2ncnmwsfi
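The following toy sketch illustrates the flavor of symbolic regression on a numerical target: a small library of candidate closed-form observables is scored against per-event values standing in for the matrix-element-derived target, and the best-fitting formula is kept. Real symbolic-regression tools search a much larger expression space with genetic programming; all features, formulas, and the synthetic target here are illustrative assumptions.

```python
# Minimal stand-in for symbolic regression: score a small library of candidate
# closed-form observables against a numerical per-event target (a synthetic
# stand-in for a matrix-element-derived optimal observable) and keep the
# best-fitting formula. Everything below is illustrative.
import numpy as np

rng = np.random.default_rng(1)

# Toy "event" features, e.g. a transverse momentum and an azimuthal angle
pt = rng.uniform(20.0, 200.0, size=5000)
phi = rng.uniform(-np.pi, np.pi, size=5000)

# Synthetic target playing the role of the optimal observable per event
target = 1e-3 * pt * np.cos(phi) + rng.normal(0.0, 0.01, size=5000)

# Candidate formulas (name -> per-event values)
candidates = {
    "pt": pt,
    "cos(phi)": np.cos(phi),
    "pt * cos(phi)": pt * np.cos(phi),
    "pt**2": pt**2,
    "pt * sin(phi)": pt * np.sin(phi),
}

def fit_and_score(values, target):
    """Fit a single scale factor by least squares and return the residual MSE."""
    scale = np.dot(values, target) / np.dot(values, values)
    return np.mean((scale * values - target) ** 2)

scores = {name: fit_and_score(values, target) for name, values in candidates.items()}
best = min(scores, key=scores.get)
print("best formula:", best, "mse:", scores[best])
```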

Constraining effective field theories with machine learning

Johann Brehmer, Kyle Stuart Cranmer, Gilles Louppe, Juan Guillermo Pavez Sepulveda, Alexander Held
2019 Zenodo  
An important part of the LHC legacy will be precise limits on indirect effects of new physics, framed for instance in terms of an effective field theory. These measurements often involve many theory parameters and observables, which makes them challenging for traditional analysis methods. We discuss the underlying problem of "likelihood-free" inference and present powerful new analysis techniques that combine physics insights, statistical methods, and the power of machine learning. We have developed MadMiner, a new Python package that makes it straightforward to apply these techniques. In example LHC problems we show that the new approach lets us put stronger constraints on theory parameters than established methods, demonstrating its potential to improve the new physics reach of the LHC legacy measurements. While we present techniques optimized for particle physics, the likelihood-free inference formulation is much more general, and these ideas are part of a broader movement that is changing scientific inference in fields as diverse as cosmology, genetics, and epidemiology.
doi:10.5281/zenodo.3599580 fatcat:ce7ruvsi4ncqriptdzgdsoqb7i

Effective LHC measurements with matrix elements and machine learning [article]

Johann Brehmer, Kyle Cranmer, Irina Espejo, Felix Kling, Gilles Louppe, Juan Pavez
2019 arXiv   pre-print
One major challenge for the legacy measurements at the LHC is that the likelihood function is not tractable when the collected data is high-dimensional and the detector response has to be modeled. We review how different analysis strategies solve this issue, including the traditional histogram approach used in most particle physics analyses, the Matrix Element Method, Optimal Observables, and modern techniques based on neural density estimation. We then discuss powerful new inference methods that use a combination of matrix element information and machine learning to accurately estimate the likelihood function. The MadMiner package automates all necessary data-processing steps. In first studies we find that these new techniques have the potential to substantially improve the sensitivity of the LHC legacy measurements.
arXiv:1906.01578v1 fatcat:sl4s36pt7fdttmb7muzgdvtcne
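A rough sketch of the core idea of combining matrix element information with machine learning, assuming PyTorch: the simulator provides a per-event "joint" log likelihood ratio from the matrix elements, and a network regressed onto these targets approximates the intractable likelihood ratio of the observable data. The data, architecture, and hyperparameters are placeholders, not the paper's setup.

```python
# Sketch of matrix-element-augmented inference: regress a network onto
# per-event joint log likelihood ratios supplied by the simulator, so that it
# approximates the intractable log ratio of the observable data.
# Data, architecture, and hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 20_000
x = torch.randn(n, 4)                                             # observed event features (toy)
log_r_joint = (0.8 * x[:, 0] - 0.3 * x[:, 1] ** 2).unsqueeze(1)   # toy joint log-ratio targets

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), log_r_joint)   # regression on joint log-ratio
    loss.backward()
    opt.step()

# net(x) now approximates the per-event log likelihood ratio; summing over
# events gives the log-likelihood ratio used to constrain theory parameters.
with torch.no_grad():
    total_log_ratio = net(x[:100]).sum().item()
print("log r over first 100 events:", total_log_ratio)
```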

Monte-Carlo event generation for a two-Higgs-doublet model with maximal CP symmetry [article]

Johann Brehmer
2012 arXiv   pre-print
Rainer Stamen, Merle Reinhart, Antje Brehmer, Jennifer Kieselmann, Eric Wisotzky and Dr. Martin Spiegel.  ... 
arXiv:1206.7044v1 fatcat:kqds4okqefe3tmdmwffrvzscfi

Hierarchical clustering in particle physics through reinforcement learning [article]

Johann Brehmer, Sebastian Macaluso, Duccio Pappadopulo, Kyle Cranmer
2020 arXiv   pre-print
Particle physics experiments often require the reconstruction of decay patterns through a hierarchical clustering of the observed final-state particles. We show that this task can be phrased as a Markov Decision Process and adapt reinforcement learning algorithms to solve it. In particular, we show that Monte-Carlo Tree Search guided by a neural policy can construct high-quality hierarchical clusterings and outperform established greedy and beam search baselines.
arXiv:2011.08191v2 fatcat:3m6tx3ny4ndvlgrcg3wj4qyp7q
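A toy illustration of the Markov Decision Process framing, assuming simple four-momentum inputs: a state is the current set of particles, an action merges one pair, and an episode ends when a single object remains. The policy shown is a greedy baseline that always merges the pair with the smallest invariant mass; the paper instead guides the search with Monte-Carlo Tree Search and a neural policy.

```python
# MDP framing of hierarchical clustering: state = current set of four-momenta,
# action = merge one pair, episode ends with a single object. The policy below
# is a greedy baseline; the paper uses MCTS guided by a neural policy.
# All numbers are toy values.
import itertools
import numpy as np

def invariant_mass_sq(p, q):
    """Invariant mass squared of the sum of two four-momenta (E, px, py, pz)."""
    s = p + q
    return s[0] ** 2 - np.sum(s[1:] ** 2)

def greedy_cluster(particles):
    """Greedily merge the pair with the smallest pairwise invariant mass."""
    particles = [np.asarray(p, dtype=float) for p in particles]
    history = []
    while len(particles) > 1:
        i, j = min(
            itertools.combinations(range(len(particles)), 2),
            key=lambda ij: invariant_mass_sq(particles[ij[0]], particles[ij[1]]),
        )
        history.append((i, j))
        merged = particles[i] + particles[j]
        particles = [p for k, p in enumerate(particles) if k not in (i, j)] + [merged]
    return history

# Four toy final-state particles (E, px, py, pz)
toy_event = [
    [10.0, 10.0, 0.0, 0.0],
    [12.0, 11.0, 4.0, 2.0],
    [8.0, -7.0, 3.0, 2.0],
    [9.0, -8.0, -3.0, 2.0],
]
print(greedy_cluster(toy_event))
```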

The frontier of simulation-based inference [article]

Kyle Cranmer, Johann Brehmer, Gilles Louppe
2020 arXiv   pre-print
Many domains of science have developed complex simulations to describe phenomena of interest. While these simulations provide high-fidelity models, they are poorly suited for inference and lead to challenging inverse problems. We review the rapidly developing field of simulation-based inference and identify the forces giving new momentum to the field. Finally, we describe how the frontier is expanding so that a broad audience can appreciate the profound change these developments may have on science.
arXiv:1911.01429v3 fatcat:kv32pqap5ne2hkvnekcck4hxkq

MadMiner: Machine learning-based inference for particle physics [article]

Johann Brehmer, Felix Kling, Irina Espejo, Kyle Cranmer
2020 arXiv   pre-print
Precision measurements at the LHC often require analyzing high-dimensional event data for subtle kinematic signatures, which is challenging for established analysis methods. Recently, a powerful family of multivariate inference techniques that leverage both matrix element information and machine learning has been developed. This approach neither requires the reduction of high-dimensional data to summary statistics nor any simplifications to the underlying physics or detector response. In this paper we introduce MadMiner, a Python module that streamlines the steps involved in this procedure. Wrapping around MadGraph5_aMC and Pythia 8, it supports almost any physics process and model. To aid phenomenological studies, the tool also wraps around Delphes 3, though it is extendable to a full Geant4-based detector simulation. We demonstrate the use of MadMiner in an example analysis of dimension-six operators in ttH production, finding that the new techniques substantially increase the sensitivity to new physics.
arXiv:1907.10621v2 fatcat:hlgs24amf5bvnb4lupvh3bmfsq

Nonconcave Utility Maximisation in the MIMO Broadcast Channel

Johannes Brehmer, Wolfgang Utschick
2008 EURASIP Journal on Advances in Signal Processing  
The problem of determining an optimal parameter setup at the physical layer in a multiuser, multiantenna downlink is considered. An aggregate utility, which is assumed to depend on the users' rates, is used as performance metric. It is not assumed that the utility function is concave, allowing for more realistic utility models of applications with limited scalability. Due to the structure of the underlying capacity region, a two-step approach is necessary. First, an optimal rate vector is determined. Second, the optimal parameter setup is derived from the optimal rate vector. Two methods for computing an optimal rate vector are proposed. First, based on the differential manifold structure offered by the boundary of the MIMO BC capacity region, a gradient projection method on the boundary is developed. Being a local algorithm, the method converges to a rate vector which is not guaranteed to be a globally optimal solution. Second, the monotonic structure of the rate space problem is exploited to compute a globally optimal rate vector with an outer approximation algorithm. While the second method yields the global optimum, the first method is shown to provide an attractive tradeoff between utility performance and computational complexity.
doi:10.1155/2009/645041 fatcat:huaw6ckrobdhfmdx3fllxc2shy
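A toy version of the first method, assuming a drastically simplified rate region: projected gradient ascent of a nonconcave (sigmoid) aggregate utility over {r >= 0, sum(r) <= C}, which stands in for the MIMO BC capacity region boundary. As in the paper, such a local method is not guaranteed to find the global optimum; the projection used here is a crude clipping-and-rescaling stand-in, not a true Euclidean projection.

```python
# Toy projected gradient ascent: maximize a nonconcave sum of sigmoid
# utilities of the users' rates over a simplified rate region
# {r >= 0, sum(r) <= C}. This stands in for gradient projection on the
# MIMO BC capacity region boundary and only finds a local optimum.
import numpy as np

C = 10.0            # total rate budget (toy stand-in for the capacity region)
steps, lr = 500, 0.1

def utility(r):
    # Sigmoid per-user utility: nonconcave, models limited application scalability
    return np.sum(1.0 / (1.0 + np.exp(-(r - 3.0))))

def grad(r):
    s = 1.0 / (1.0 + np.exp(-(r - 3.0)))
    return s * (1.0 - s)

def project(r):
    """Map back into {r >= 0, sum(r) <= C} by clipping and radial rescaling
    (a crude stand-in for a true projection)."""
    r = np.maximum(r, 0.0)
    total = r.sum()
    return r if total <= C else r * (C / total)

r = np.array([1.0, 2.0, 3.0, 4.0])      # initial rate vector for four users
for _ in range(steps):
    r = project(r + lr * grad(r))

print("local optimum:", r, "utility:", utility(r))
```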

Instance-Adaptive Video Compression: Improving Neural Codecs by Training on the Test Set [article]

Ties van Rozendaal, Johann Brehmer, Yunfan Zhang, Reza Pourreza, Taco S. Cohen
2021 arXiv   pre-print
We introduce a video compression algorithm based on instance-adaptive learning. On each video sequence to be transmitted, we finetune a pretrained compression model. The optimal parameters are transmitted to the receiver along with the latent code. By entropy-coding the parameter updates under a suitable mixture model prior, we ensure that the network parameters can be encoded efficiently. This instance-adaptive compression algorithm is agnostic about the choice of base model and has the potential to improve any neural video codec. On the UVG, HEVC, and Xiph datasets, our codec improves a low-latency scale-space flow model by 21% to 26% BD-rate savings and a state-of-the-art B-frame model by 17% to 20%. We also demonstrate that instance-adaptive finetuning improves the robustness to domain shift. Finally, our approach reduces the capacity requirements on compression models. We show that it enables state-of-the-art performance even after reducing the network size by 72%.
arXiv:2111.10302v1 fatcat:pfwpvuryfnfmzak5mkmkbtyzma
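A minimal sketch of instance-adaptive finetuning, assuming PyTorch and a toy convolutional stand-in for the codec: the pretrained model is overfitted to one sequence while a penalty approximates the bit cost of transmitting the parameter updates. The paper uses a full neural video codec and entropy-codes quantized parameter deltas under a mixture-model prior; the Gaussian penalty below is only a proxy.

```python
# Sketch of instance-adaptive finetuning: overfit a pretrained codec to one
# video while penalizing a proxy for the bit cost of the parameter updates.
# The "codec" and the update prior are placeholders for the paper's neural
# video codec and mixture-model prior over quantized deltas.
import copy
import torch
import torch.nn as nn

pretrained = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(8, 3, 3, padding=1))   # toy stand-in codec
model = copy.deepcopy(pretrained)
frames = torch.rand(16, 3, 64, 64)                          # one toy video sequence

lam = 1e-4   # trades distortion against the bitrate of the parameter updates
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    opt.zero_grad()
    recon = model(frames)
    distortion = nn.functional.mse_loss(recon, frames)
    # Proxy for the bits needed to transmit the updates: squared deltas,
    # i.e. a zero-mean Gaussian prior on the parameter changes.
    update_rate = sum(((p - q.detach()) ** 2).sum()
                      for p, q in zip(model.parameters(), pretrained.parameters()))
    loss = distortion + lam * update_rate
    loss.backward()
    opt.step()

print("finetuned distortion:", distortion.item())
```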

Symmetry restored in dibosons at the LHC?

Johann Brehmer, JoAnne Hewett, Joachim Kopp, Thomas Rizzo, Jamie Tattersall
2015 Journal of High Energy Physics  
A number of LHC resonance search channels display an excess in the invariant mass region of 1.8-2.0 TeV. Among them is a 3.4σ excess in the fully hadronic decay of a pair of Standard Model electroweak gauge bosons, in addition to potential signals in the HW and dijet final states. We perform a model-independent cross-section fit to the results of all ATLAS and CMS searches sensitive to these final states. We then interpret these results in the context of the Left-Right Symmetric Model, based on the extended gauge group SU(2)_L × SU(2)_R × U(1), and show that a heavy right-handed gauge boson W_R can naturally explain the current measurements with just a single coupling g_R ∼ 0.4. In addition, we discuss a possible connection to dark matter.
doi:10.1007/jhep10(2015)182 fatcat:3qhkzt2dqrd7nipeqlthhfzcme

Flows for simultaneous manifold learning and density estimation [article]

Johann Brehmer, Kyle Cranmer
2020 arXiv   pre-print
(Full-text match: fragment of a results table comparing the PIE, M-flow, and Me-flow models.)  ... 
arXiv:2003.13913v3 fatcat:5vxteboyh5evndwsh2fspn3tzq
Showing results 1 — 15 out of 364 results