IA Scholar Query: Argument Interpretation Using Minimum Message Length.
https://scholar.archive.org/
Internet Archive Scholar query results feed
info@archive.org
Sat, 31 Dec 2022 00:00:00 GMT

Sorting Students, Determining Fates
https://scholar.archive.org/work/wqu2i6cwinc73bh5zptavi7mqi
Date: Sat, 31 Dec 2022 00:00:00 GMT

Descriptive Combinatorics and Distributed Algorithms
https://scholar.archive.org/work/pjgjlnfrkzd5vmd7p7c5yr66pe
In this article we shall explore a fascinating area called descriptive combinatorics and its recently discovered connections to distributed algorithms, a fundamental part of computer science that is becoming increasingly important in the modern era of decentralized computation. The interdisciplinary nature of these connections means that there is very little common background shared by the researchers who are interested in them. With this in mind, this article was written under the assumption that the reader has close to no background in either descriptive set theory or computer science. The reader will judge to what degree this endeavor was successful. The article comprises two parts. In the first part we give a brief introduction to some of the central notions and problems of descriptive combinatorics. The second part is devoted to a survey of some of the results concerning the ...
Authors: Anton Bernshteyn
Date: Sat, 01 Oct 2022 00:00:00 GMT

Crude Oil Forecasting via Events and Outlook Extraction from Commodity News
https://scholar.archive.org/work/yru4ov4fubar3lc4rlblz45ozu
The thesis is about using news events to predict crude oil prices. Its three research objectives are: (1) build an annotated crude oil dataset for event extraction, (2) train machine learning models on event extraction, and (3) use extracted events for crude oil forecasting.
Authors: Meisin Lee
Date: Sat, 01 Oct 2022 00:00:00 GMT

Enhancing the Use of Evidence by Policymakers in Indonesia
https://scholar.archive.org/work/m2ae63b66ncpfhvn47vpa7gl2u
The RTI Press mission is to disseminate information about RTI research, analytic tools, and technical expertise to a national and international audience. RTI Press publications are peer-reviewed by at least two independent substantive experts and one or more Press editors. RTI International is an independent, nonprofit research institute dedicated to improving the human condition. We combine scientific rigor and technical expertise in social and laboratory sciences, engineering, and international development to deliver solutions to the critical needs of clients worldwide.
Authors: Ishak Fatonie, Primatia Romana Wulandari, Budiati Prasetiamartati
Date: Fri, 30 Sep 2022 00:00:00 GMT

Using Defect Prediction to Improve the Bug Detection Capability of Search-Based Software Testing
https://scholar.archive.org/work/rky5vhwpebglxnnvodfp3vraza
Software systems have a direct and indirect impact on the lives of humans, animals and other living things. They need to be tested thoroughly to minimise software failures. Automated test generators, like search-based software testing (SBST) techniques, replace the tedious and expensive task of manually writing tests. Despite achieving high code coverage, current SBST techniques perform rather poorly in terms of detecting bugs. This thesis proposes novel SBST approaches guided by defect prediction and demonstrates that, to detect bugs effectively and efficiently, SBST needs to focus test generation on the areas of a program that defect prediction identifies as likely to be buggy.
Authors: Balasuriyage Anjana Visula Perera
Date: Fri, 30 Sep 2022 00:00:00 GMT

Dipole Cosmology: The Copernican Paradigm Beyond FLRW
https://scholar.archive.org/work/re5hzm66qng2hhyekdrgvudixa
We introduce the dipole cosmological principle, the idea that the Universe is a maximally Copernican cosmology, compatible with a cosmic flow. It serves as the most symmetric paradigm that generalizes the FLRW ansatz, in light of the increasingly numerous (but still tentative) hints that have emerged in the last two decades for a non-kinematic component in the CMB dipole. Einstein equations in our "dipole cosmology" are still ordinary differential equations, but instead of the two Friedmann equations, now we have four. The two new functions can be viewed as an anisotropic scale factor that breaks the isotropy group from SO(3) to U(1), and a "tilt" that captures the cosmic flow velocity. The result is an axially isotropic, tilted Bianchi V/VII_h cosmology. We assess the possibility of model building within the dipole cosmology paradigm, and discuss the dynamics of expansion rate, anisotropic shear and tilt in various examples. A key observation is that the cosmic flow (tilt) can grow even while the anisotropy (shear) dies down. Remarkably, this can happen even in an era of late-time acceleration.
Authors: Chethan Krishnan, Ranjini Mondol, M. M. Sheikh-Jabbari
Date: Thu, 29 Sep 2022 00:00:00 GMT

A Tutorial Introduction to Lattice-based Cryptography and Homomorphic Encryption
https://scholar.archive.org/work/vlqa6rnsa5d3vnpa3qeaizot6a
Why study lattice-based cryptography? There are a few ways to answer this question.
1. It is useful to have cryptosystems that are based on a variety of hard computational problems, so the different cryptosystems are not all vulnerable in the same way.
2. The computational aspects of lattice-based cryptosystems are usually simple to understand and fairly easy to implement in practice.
3. Lattice-based cryptosystems have lower encryption/decryption computational complexities compared to popular cryptosystems that are based on the integer factorisation or the discrete logarithm problems.
4. Lattice-based cryptosystems enjoy strong worst-case hardness security proofs based on approximate versions of known NP-hard lattice problems.
5. Lattice-based cryptosystems are believed to be good candidates for post-quantum cryptography, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best-known classical (non-quantum) algorithms, unlike for the integer factorisation and (elliptic curve) discrete logarithm problems.
6. Last but not least, interesting structures in lattice problems have led to significant advances in Homomorphic Encryption, a new research area with wide-ranging applications.
Authors: Yang Li, Kee Siong Ng, Michael Purcell
Date: Wed, 28 Sep 2022 00:00:00 GMT

The Network Propensity Score: Spillovers, Homophily, and Selection into Treatment
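Point 2 of the lattice-based cryptography tutorial abstract above, that these schemes are fairly easy to implement, is simple to illustrate. Below is a toy LWE-style ("learning with errors") bit encryption sketch in Python; all parameters (q = 97, n = 8, 20 public samples, error in {-1, 0, 1}) are made up for readability and are completely insecure:

```python
import random

q, n = 97, 8                         # toy modulus and dimension (insecure!)
random.seed(0)

s = [random.randrange(q) for _ in range(n)]   # secret key

def sample():
    # one LWE sample: (a, <a, s> + e mod q) with a tiny error e
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

pk = [sample() for _ in range(20)]            # public key: a batch of samples

def encrypt(m):
    # sum a small random subset of samples; embed the bit in the "high half"
    subset = random.sample(pk, 4)
    c1 = [sum(a[i] for a, _ in subset) % q for i in range(n)]
    c2 = (sum(b for _, b in subset) + m * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    d = (c2 - sum(ci * si for ci, si in zip(c1, s))) % q
    # the bit is 1 iff d is closer to q/2 than to 0 (mod q)
    return 1 if q // 4 <= d <= 3 * q // 4 else 0

assert all(decrypt(*encrypt(m)) == m for m in (0, 1) * 10)
```

Decryption works because the accumulated error (at most 4 here) stays below q/4, so rounding recovers the embedded bit; real schemes choose dimensions and error distributions so that this holds at cryptographic security levels.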
https://scholar.archive.org/work/zvprdib43na7lbhznb4zsmzhyi
I establish primitive conditions for unconfoundedness in a coherent model that features heterogeneous treatment effects, spillovers, selection-on-observables, and network formation. I identify average partial effects under minimal exchangeability conditions. If social interactions are also anonymous, I derive a three-dimensional network propensity score, characterize its support conditions, relate it to recent work on network pseudo-metrics, and study extensions. I propose a two-step semiparametric estimator for a random coefficients model which is consistent and asymptotically normal as the number and size of the networks grow. I apply my estimator to a political participation intervention in Uganda and a microfinance application in India.
Authors: Alejandro Sanchez-Becerra
Date: Wed, 28 Sep 2022 00:00:00 GMT

Improving alignment of dialogue agents via targeted human judgements
https://scholar.archive.org/work/mjeq2llswnflhb5djrmeqr27d4
We present Sparrow, an information-seeking dialogue agent trained to be more helpful, correct, and harmless compared to prompted language model baselines. We use reinforcement learning from human feedback to train our models with two new additions to help human raters judge agent behaviour. First, to make our agent more helpful and harmless, we break down the requirements for good dialogue into natural language rules the agent should follow, and ask raters about each rule separately. We demonstrate that this breakdown enables us to collect more targeted human judgements of agent behaviour and allows for more efficient rule-conditional reward models. Second, our agent provides evidence from sources supporting factual claims when collecting preference judgements over model statements. For factual questions, evidence provided by Sparrow supports the sampled response 78% of the time. Sparrow is preferred more often than baselines while being more resilient to adversarial probing by humans, violating our rules only 8% of the time when probed. Finally, we conduct extensive analyses showing that, though our model learns to follow our rules, it can exhibit distributional biases.
Authors: Amelia Glaese, Nat McAleese, Maja Trębacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, Lucy Campbell-Gillingham, Jonathan Uesato, Po-Sen Huang, Ramona Comanescu, Fan Yang, Abigail See, Sumanth Dathathri, Rory Greig, Charlie Chen, Doug Fritz, Jaume Sanchez Elias, Richard Green, Soňa Mokrá, Nicholas Fernando, Boxi Wu, Rachel Foley, Susannah Young, Iason Gabriel, William Isaac, John Mellor, Demis Hassabis, Koray Kavukcuoglu, Lisa Anne Hendricks, Geoffrey Irving
Date: Wed, 28 Sep 2022 00:00:00 GMT

TEI2022 Conference Book
https://scholar.archive.org/work/fo5eliwof5bnnn3ms3vetgbpoi
A Book of Abstracts and more for the TEI2022 Conference!
Authors: James Cummings
Date: Wed, 28 Sep 2022 00:00:00 GMT

Mathematical Components
https://scholar.archive.org/work/ahuebtxoqbcrbebz5rb2ulla4q
Mathematical Components is the name of a library of formalized mathematics for the Coq system. It covers a variety of topics, from the theory of basic data structures (e.g., numbers, lists, finite sets) to advanced results in various flavors of algebra. This library constitutes the infrastructure for the machine-checked proofs of the Four Color Theorem and of the Odd Order Theorem. This book exists to break down the barriers to entry. While there are several books covering the usage of the Coq system and the theory it is based on, the Mathematical Components library is built in an unconventional way. As a consequence, this book provides a non-standard presentation of Coq, putting upfront the formalization choices and the proof style that are the pillars of the library. This book targets two audiences. On the one hand, newcomers, even the more mathematically inclined ones, find a soft introduction to the programming language of Coq, Gallina, and the SSReflect proof language. On the other hand, experienced Coq users find a substantial account of the formalization style that made the Mathematical Components library possible.
Authors: Assia Mahboubi, Enrico Tassi
Date: Wed, 28 Sep 2022 00:00:00 GMT

Performance and limitations of the QAOA at constant levels on large sparse hypergraphs and spin glass models
https://scholar.archive.org/work/b7v7zym4obhz3dflgu2lsusski
The Quantum Approximate Optimization Algorithm (QAOA) is a general-purpose quantum algorithm designed for combinatorial optimization. We analyze its expected performance and prove concentration properties at any constant level (number of layers) on ensembles of random combinatorial optimization problems in the infinite size limit. These ensembles include mixed spin models and Max-q-XORSAT on sparse random hypergraphs. Our analysis can be understood via a saddle-point approximation of a sum-over-paths integral. This is made rigorous by proving a generalization of the multinomial theorem, which is a technical result of independent interest. We then show that the performance of the QAOA at constant levels for the pure q-spin model asymptotically matches that for Max-q-XORSAT on random sparse Erdős-Rényi hypergraphs and every large-girth regular hypergraph. Through this correspondence, we establish that the average-case value produced by the QAOA at constant levels is bounded away from optimality for pure q-spin models when q ≥ 4 is even. This limitation gives a hardness-of-approximation result for quantum algorithms in a new regime where the whole graph is seen.
Authors: Joao Basso, David Gamarnik, Song Mei, Leo Zhou
Date: Wed, 28 Sep 2022 00:00:00 GMT

Report of the Instrumentation Frontier Working Group for Snowmass 2021
https://scholar.archive.org/work/mu3xhubw4rfj7lzdipobiggxwi
Detector instrumentation is at the heart of scientific discoveries. Cutting-edge technologies enable US particle physics to play a leading role worldwide. This report summarizes the current status of instrumentation for High Energy Physics (HEP) and the challenges and needs of future experiments, and indicates high-priority research areas. The Snowmass Instrumentation Frontier studies detector technologies and the Research and Development (R&D) needed for future experiments in collider physics, neutrino physics, rare and precision physics, and at the cosmic frontier. It is divided into more or less diagonal areas with some overlap among a few of them. We lay out five high-level key messages that are geared towards ensuring the health and competitiveness of the US detector instrumentation community, and thus the entire particle physics landscape.
Authors: Phil Barbeau
Date: Wed, 28 Sep 2022 00:00:00 GMT

What Can Cryptography Do For Decentralized Mechanism Design
https://scholar.archive.org/work/hjs3xqwb2vfsjexmncaulikdyi
Recent works of Roughgarden (EC'21) and Chung and Shi (Highlights Beyond EC'22) initiate the study of a new decentralized mechanism design problem called transaction fee mechanism design (TFM). Unlike the classical mechanism design literature, in the decentralized environment even the auctioneer (i.e., the miner) can be a strategic player, and it can even collude with a subset of the users, facilitated by binding side contracts. Chung and Shi showed two main impossibility results that rule out the existence of a dream TFM. First, any TFM that provides incentive compatibility for individual users and miner-user coalitions must always have zero miner revenue, no matter whether the block size is finite or infinite. Second, assuming finite block size, no non-trivial TFM can simultaneously provide incentive compatibility for any individual user and for any miner-user coalition. In this work, we explore what new models and meaningful relaxations can allow us to circumvent the impossibility results of Chung and Shi. Besides today's model that does not employ cryptography, we introduce a new MPC-assisted model where the TFM is implemented by a joint multi-party computation (MPC) protocol among the miners. We prove several feasibility and infeasibility results for achieving strict and approximate incentive compatibility, respectively, in the plain model as well as the MPC-assisted model. We show that while cryptography is not a panacea, it indeed allows us to overcome some impossibility results pertaining to the plain model, leading to non-trivial mechanisms with useful guarantees that are otherwise impossible in the plain model. Our work is also the first to characterize the mathematical landscape of transaction fee mechanism design under approximate incentive compatibility, as well as in a cryptography-assisted model.
Authors: Elaine Shi, Hao Chung, Ke Wu
Date: Wed, 28 Sep 2022 00:00:00 GMT

Report of the Topical Group on Physics Beyond the Standard Model at Energy Frontier for Snowmass 2021
https://scholar.archive.org/work/pvmnwp55ojb23ltmretmyhvpli
This is the Snowmass2021 Energy Frontier (EF) Beyond the Standard Model (BSM) report. It combines the EF topical group reports of EF08 (Model-specific explorations), EF09 (More general explorations), and EF10 (Dark Matter at Colliders). The report includes a general introduction to BSM motivations and the comparative prospects for proposed future experiments for a broad range of potential BSM models and signatures, including compositeness, SUSY, leptoquarks, more general new bosons and fermions, long-lived particles, dark matter, charged-lepton flavor violation, and anomaly detection.
Authors: Tulika Bose, Antonio Boveia, Caterina Doglioni, Simone Pagan Griso, James Hirschauer, Elliot Lipeles, Zhen Liu, Nausheen R. Shah, Lian-Tao Wang, Kaustubh Agashe, Juliette Alimena, Sebastian Baum, Mohamed Berkat, Kevin Black, Gwen Gardner, Tony Gherghetta, Josh Greaves, Maxx Haehn, Phil C. Harris, Robert Harris, Julie Hogan, Suneth Jayawardana, Abraham Kahn, Jan Kalinowski, Simon Knapen, Ian M. Lewis, Meenakshi Narain, Katherine Pachal, Matthew Reece, Laura Reina, Tania Robens, Alessandro Tricoli, Carlos E.M. Wagner, Riley Xu, Felix Yu, Filip Zarnecki, Andreas Albert, Michael Albrow, Wolfgang Altmannshofer, Gerard Andonian, Artur Apresyan, Kétévi Adikle Assamagan, P. Azzi, Howard Baer, Avik Banerjee, Vernon Barger, Brian Batell, M. Bauer, Hugues Beauchesne, Samuel Bein, Alexander Belyaev, Ankit Beniwal, M. Berggren, Prudhvi N. Bhattiprolu, Nikita Blinov, A. Blondel, Oleg Brandt, Giacomo Cacciapaglia, Rodolfo Capdevilla, Marcela Carena, Francesco Giovanni Celiberto, S.V. Chekanov, Hsin-Chia Cheng, Thomas Y. Chen, Yuze Chen, R. Sekhar Chivukula, Matthew Citron, James Cline, Tim Cohen, Jack H. Collins, Eric Corrigan, Nathaniel Craig, Daniel Craik, Andreas Crivellin, David Curtin, Smita Darmora, Arindam Das, Sridhara Dasu, Aldo Deandrea, Antonio Delgado, Zeynep Demiragli, David d'Enterria, Frank F. Deppisch, Radovan Dermisek, Nishita Desai, Abhay Deshpande, Jordy de Vries, Jennet Dickinson, Keith R. Dienes, K.F. Di Petrillo, Matthew J. Dolan, Peter Dong, Patrick Draper, M. Drewes, Etienne Dreyer, Peizhi Du, Majid Ekhterachian, Motoi Endo, Rouven Essig, J.N. Farr, Farida Fassi, Jonathan L. Feng, Gabriele Ferretti, Daniele Filipetto, Thomas Flacke, Karri Folan Di Petrillo, Roberto Franceschini, Diogo Buarque Franzosi, Keisuke Fujii, Benjamin Fuks, Sri Aditya Gadam, Boyu Gao, Aran Garcia-Bellido, Isabel Garcia Garcia, Maria Vittoria Garzelli, Stephen Gedney, Marie-Hélène Genest, Tathagata Ghosh, Mark Golkowski, Giovanni Grilli di Cortona, Emine Gurpinar Guler, Yalcin Guler, C. Guo, Ulrich Haisch, Jan Hajer, Koichi Hamaguchi, Tao Han, Philip Harris, Sven Heinemeyer, Christopher S. Hill, Joshua Hiltbrand, T.R. Holmes, Samuel Homiller, Sungwoo Hong, Walter Hopkins, Shih-Chieh Hsu, Phil Ilten, Wasikul Islam, Sho Iwamoto, Daniel Jeans, Laura Jeanty, Haoyi Jia, Sergo Jindariani, Daniel Johnson, Felix Kahlhoefer, Yonatan Kahn, Paul Karchin, Thomas Katsouleas, Shin-ichi Kawada, Junichiro Kawamura, Chris Kelso, Valery Khoze, Doojin Kim, Teppei Kitahara, J. Klaric, Michael Klasen, Kyoungchul Kong, Wojciech Kotlarski, A.V. Kotwal, Jonathan Kozaczuk, Richard Kriske, S. Kulkarni, Jason Kumar, Manuel Kunkel, Greg Landsberg, Kenneth Lane, Clemens Lange, Lawrence Lee, Jiajun Liao, Benjamin Lillard, Shuailong Li, Shu Li, J. List, Tong Li, Hongkai Liu, Jia Liu, Jonathan D Long, Enrico Lunghi, Kun-Feng Lyu, Danny Marfatia, Dakotah Martinez, Stephen P. Martin, Navin McGinnis, Krzysztof Mękała, Federico Meloni, O. Mikulenko, Rashmish K. Mishra, Manimala Mitra, Vasiliki A. Mitsou, Chang-Seong Moon, Alexander Moreno, Takeo Moroi, Gerard Mourou, Malte Mrowietz, Patric Muggli, Jurina Nakajima, Pran Nath, J. Nelson, M. Neubert, Laura Nosler, M.T. Núñez Pardo de Vera, Nobuchika Okada, Satomi Okada, Vitalii A. Okorokov, Yasar Onel, Tong Ou, M. Ovchynnikov, Rojalin Padhan, Priscilla Pani, Luca Panizzi, Andreas Papaefstathiou, Kevin Pedro, Cristián Peña, Federica Piazza, James Pinfold, Deborah Pinna, Werner Porod, Chris Potter, Markus Tobias Prim, Stefano Profumo, J. Proudfoot, Mudit Rai, Filip Rajec, Michael J. Ramsey-Musolf, Javier Resta-Lopez, Jürgen Reuter, Andreas Ringwald, C. Rizzi, Thomas G. Rizzo, Giancarlo Rossi, Richard Ruiz, L. Rygaard, Aakash A. Sahai, Shadman Salam, Pearl Sandick, Deepak Sathyan, Christiane Scherb, Pedro Schwaller, Leonard Schwarze, Pat Scott, Sezen Sekmen, Dibyashree Sengupta, S. Sen, A. Sfyrla, T. Sharma, Varun Sharma, Jessie Shelton, William Shepherd, Seodong Shin, Elizabeth H. Simmons, Torbjörn Sjöstrand, Scott Snyder, Giordon Stark, Patrick Stengel, Joachim Stohr, Daniel Stolarski, Matt Strassler, Nadja Strobbe, R. Gonzalez Suarez, Taikan Suehara, Shufang Su, Wei Su, Raza M. Syed, Tim M.P. Tait, Toshiki Tajima, Xerxes Tata, A. Thamm, Brooks Thomas, Natalia Toro, N.V. Tran, Loan Truong, Yu-Dai Tsai, Nikhilesh Venkatasubramanian, C.B. Verhaaren, Carl Vuosalo, Xiao-Ping Wang, Xing Wang, Yikun Wang, Zhen Wang, Christian Weber, Glen White, Martin White, Anthony G. Williams, Mike Williams, Stephane Willocq, Alex Woodcock, Yongcheng Wu, Ke-Pan Xie, Keping Xie, Si Xie, C.-H. Yeh, Ryo Yonamine, David Yu, S.-S. Yu, Mohamed Zaazoua, Aleksander Filip Żarnecki, Kamil Zembaczynski, Danyi Zhang, Jinlong Zhang, Frank Zimmermann, Jose Zurita
Date: Tue, 27 Sep 2022 00:00:00 GMT

CausalSim: A Causal Inference Framework for Unbiased Trace-Driven Simulation
https://scholar.archive.org/work/bnwdcontfjcodhonn4vlzftr34
We present CausalSim, a causal inference framework for unbiased trace-driven simulation. Current trace-driven simulators assume that the interventions being simulated (e.g., a new algorithm) would not affect the validity of the traces. However, real-world traces are often biased by the choices of algorithms made during trace collection, and hence replaying traces under an intervention may lead to incorrect results. CausalSim addresses this challenge by learning a causal model of the system dynamics and latent factors capturing the underlying system conditions during trace collection. It learns these models using an initial randomized control trial (RCT) under a fixed set of algorithms, and then applies them to remove biases from trace data when simulating new algorithms. Key to CausalSim is mapping unbiased trace-driven simulation to a tensor completion problem with extremely sparse observations. By exploiting a basic distributional invariance property present in RCT data, CausalSim enables a novel tensor completion method despite the sparsity of observations. Our extensive evaluation of CausalSim on both real and synthetic datasets, including more than ten months of real data from the Puffer video streaming system, shows that it improves simulation accuracy, reducing errors by 53% and 61% on average compared to expert-designed and supervised learning baselines. Moreover, CausalSim provides markedly different insights about ABR algorithms compared to the biased baseline simulator, which we validate with a real deployment.
Authors: Abdullah Alomar, Pouya Hamadanian, Arash Nasr-Esfahany, Anish Agarwal, Mohammad Alizadeh, Devavrat Shah
Date: Tue, 27 Sep 2022 00:00:00 GMT

High-Dimensional Geometric Streaming in Polynomial Space
https://scholar.archive.org/work/btmazlynqvc77kwthohpbovcia
Many existing algorithms for streaming geometric data analysis have been plagued by exponential dependencies in the space complexity, which are undesirable for processing high-dimensional data sets. In particular, once d ≥ log n, there are no known non-trivial streaming algorithms for problems such as maintaining convex hulls and Löwner-John ellipsoids of n points, despite a long line of work in streaming computational geometry since [AHV04]. We simultaneously improve these results to poly(d, log n) bits of space by trading off with a poly(d, log n) factor distortion. We achieve these results in a unified manner, by designing the first streaming algorithm for maintaining a coreset for ℓ_∞ subspace embeddings with poly(d, log n) space and poly(d, log n) distortion. Our algorithm also gives similar guarantees in the online coreset model. Along the way, we sharpen results for online numerical linear algebra by replacing a log condition number dependence with a log n dependence, answering a question of [BDM+20]. Our techniques provide a novel connection between leverage scores, a fundamental object in numerical linear algebra, and computational geometry. For ℓ_p subspace embeddings, we give nearly optimal trade-offs between space and distortion for one-pass streaming algorithms. For instance, we give a deterministic coreset using O(d^2 log n) space and O((d log n)^(1/2-1/p)) distortion for p>2, whereas previous deterministic algorithms incurred a poly(n) factor in the space or the distortion [CDW18]. Our techniques have implications in the offline setting, where we give optimal trade-offs between the space complexity and distortion of subspace sketch data structures. To do this, we give an elementary proof of a "change of density" theorem of [LT80] and make it algorithmic.
Authors: David P. Woodruff, Taisuke Yasuda
Date: Tue, 27 Sep 2022 00:00:00 GMT

Testing Explanations of Short Baseline Neutrino Anomalies
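The connection the geometric-streaming abstract above draws between leverage scores and geometry is concrete enough to show in a few lines: the leverage score of a row a_i of a matrix A is a_i^T (A^T A)^{-1} a_i, the scores sum to rank(A), and rows that stick out geometrically get scores near 1. A minimal Python sketch on made-up data (illustrative only, not the paper's algorithm):

```python
# rows of a tall matrix A: four points in R^2 (made-up illustrative data)
A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0],
     [10.0, 0.0]]          # one row sticking far out along a single direction

# Gram matrix G = A^T A (2x2), inverted explicitly
g00 = sum(r[0] * r[0] for r in A)
g01 = sum(r[0] * r[1] for r in A)
g11 = sum(r[1] * r[1] for r in A)
det = g00 * g11 - g01 * g01
inv = [[g11 / det, -g01 / det],
       [-g01 / det, g00 / det]]

def leverage(row):
    # leverage score of row a_i: a_i^T (A^T A)^{-1} a_i
    x, y = row
    return x * (inv[0][0] * x + inv[0][1] * y) + y * (inv[1][0] * x + inv[1][1] * y)

scores = [leverage(r) for r in A]
# scores sum to rank(A) = 2, and the extreme fourth row gets leverage near 1
```

This is why leverage scores act as a geometric importance measure: keeping high-leverage rows preserves the shape of the point set, which is the intuition behind coreset constructions.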
https://scholar.archive.org/work/ojolauja4fe3doblxx4sl4cr4e
The experimental observation of neutrino oscillations profoundly impacted the physics of neutrinos, from being well understood theoretically to requiring new physics beyond the standard model of particle physics. Indeed, the mystery of neutrino masses implies the presence of new particles never observed before, often called sterile neutrinos, as they would not undergo standard weak interactions. And while neutrino oscillation measurements entered the precision era, reaching percent-level precision, many experimental results show significant discrepancies with the standard model at baselines much shorter than typical oscillation baselines, like LSND, MiniBooNE, gallium experiments, and reactor antineutrino measurements. These short baseline anomalies could be explained by the addition of a light sterile neutrino with mass in the 1-10 eV range; this is, however, in strong tension with many null experimental observations. Other explanations, relying on sterile neutrinos with masses in the 1-500 MeV range, could resolve the tension. Here we test both classes of models. On the one hand, we look for datasets collected at short baselines which can constrain heavy sterile neutrino models. We find that the minimal model is fully constrained, but several extensions of this model could weaken the current constraint and be tested with current and future datasets. On the other hand, we test the presence of neutrino oscillations at short baselines, induced by a light sterile state, with the data collected by the MicroBooNE experiment, a liquid argon time projection chamber specifically designed to resolve the details of each neutrino interaction. We report null results from both analyses, further constraining the space of possible explanations for the short baseline anomalies. If new physics lies behind the short baseline anomaly puzzle, it is definitely not described by a simple model.
Authors: Nicolò Foppiani
Date: Tue, 27 Sep 2022 00:00:00 GMT

Proceedings of ULAB 2022
https://scholar.archive.org/work/eug3pz3eorcn3amsth65q6mcui
Presentations given at the 12th annual conference of the Undergraduate Linguistics Association of Britain, hosted 9-11 April 2022 by the University of Edinburgh.
Authors: Caitlin Wilson, Beatrix Livesey-Stephens, Andrew Tobin, Hui Zhu, Ariane Branigan, Aaliyah Bullen, Evelyn R. Burrows, Patrick Das, Diana R. Davidson, Grace Ephraums, Wing Yin Ho, Cliodhna Hughes, Jaidan McLean, Leah Palmer, Suzy Park, Emily Shepherdson, Eleonor Streatfield, Chloé Vanrapenbusch
Date: Tue, 27 Sep 2022 00:00:00 GMT

Descriptive vs. inferential community detection in networks: pitfalls, myths, and half-truths
https://scholar.archive.org/work/7kx6vshwkjabhnaukqodb5j6ka
Community detection is one of the most important methodological fields of network science, and one which has attracted a significant amount of attention over the past decades. This area deals with the automated division of a network into fundamental building blocks, with the objective of providing a summary of its large-scale structure. Despite its importance and widespread adoption, there is a noticeable gap between what is arguably the state-of-the-art and the methods that are actually used in practice in a variety of fields. Here we attempt to address this discrepancy by dividing existing methods according to whether they have a "descriptive" or an "inferential" goal. While descriptive methods find patterns in networks based on context-dependent notions of community structure, inferential methods articulate generative models, and attempt to fit them to data. In this way, they are able to provide insights into the mechanisms of network formation, and separate structure from randomness in a manner supported by statistical evidence. We review how employing descriptive methods with inferential aims is riddled with pitfalls and misleading answers, and thus should be in general avoided. We argue that inferential methods are more typically aligned with clearer scientific questions, yield more robust results, and should be in many cases preferred. We attempt to dispel some myths and half-truths often believed when community detection is employed in practice, in an effort to improve both the use of such methods as well as the interpretation of their results.
Authors: Tiago P. Peixoto
Date: Mon, 26 Sep 2022 00:00:00 GMT
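The descriptive-versus-inferential distinction drawn in this abstract can be made concrete on a toy graph: a descriptive method scores a partition with a quality function such as modularity, while an inferential method fits a generative model, here a two-block Bernoulli stochastic block model with fitted connection probabilities, and compares likelihoods. The following Python sketch is illustrative only and is not the paper's methodology:

```python
import math
from itertools import combinations

# toy graph: two 4-cliques (nodes 0-3 and 4-7) joined by one bridge edge
edges = {frozenset(p) for p in combinations(range(4), 2)}
edges |= {frozenset(p) for p in combinations(range(4, 8), 2)}
edges.add(frozenset((3, 4)))
n, m = 8, len(edges)                     # 8 nodes, 13 edges
deg = [0] * n
for e in edges:
    for v in e:
        deg[v] += 1

def modularity(part):
    # descriptive score: in-community edge fraction minus a null-model expectation
    q = 0.0
    for c in set(part):
        nodes = [v for v in range(n) if part[v] == c]
        e_c = sum(frozenset(p) in edges for p in combinations(nodes, 2))
        k_c = sum(deg[v] for v in nodes)
        q += e_c / m - (k_c / (2 * m)) ** 2
    return q

def sbm_loglik(part):
    # inferential score: log-likelihood of a Bernoulli SBM with fitted
    # within-group and between-group connection probabilities
    stats = {True: [0, 0], False: [0, 0]}    # same group? -> [edges, pairs]
    for u, v in combinations(range(n), 2):
        s = stats[part[u] == part[v]]
        s[0] += frozenset((u, v)) in edges
        s[1] += 1
    ll = 0.0
    for e, pairs in stats.values():
        p = min(max(e / pairs, 1e-9), 1 - 1e-9)  # clamp away from 0 and 1
        ll += e * math.log(p) + (pairs - e) * math.log(1 - p)
    return ll

planted = [0, 0, 0, 0, 1, 1, 1, 1]       # the two cliques
mixed   = [0, 0, 1, 1, 0, 0, 1, 1]       # a partition cutting both cliques
```

Both scores prefer the planted partition over the mixed one here, but only the SBM likelihood corresponds to an explicit generative model of the data, which is what lets inferential methods separate structure from randomness.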