IA Scholar Query: Stepwise Construction of Algebraic Specifications.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en)
info@archive.org | fatcat-scholar | Mon, 03 Oct 2022 00:00:00 GMT
https://scholar.archive.org/help | 1440
Requirements Engineering for Machine Learning: A Review and Reflection
https://scholar.archive.org/work/hm6nob3tszemjcenfbnbjp3v6a
Today, many industrial processes are undergoing digital transformation, which often requires the integration of well-understood domain models and state-of-the-art machine learning technology in business processes. However, requirements elicitation and design decision making about when, where, and how to embed various domain models and end-to-end machine learning techniques properly into a given business workflow require further exploration. This paper aims to provide an overview of the requirements engineering process for machine learning applications in terms of cross-domain collaboration. We first review the literature on requirements engineering for machine learning, and then go through the collaborative requirements analysis process step by step. An example case of industrial data-driven intelligence applications is also discussed in relation to the aforementioned steps.
Zhongyi Pei, Lin Liu, Chen Wang, Jianmin Wang | work_hm6nob3tszemjcenfbnbjp3v6a | Mon, 03 Oct 2022 00:00:00 GMT
Proper Cartan Groupoids: Reduction to the Regular Case
https://scholar.archive.org/work/ldp6a3whfbeyfoqlny7sn6xxau
We discuss a method for constructing multiplicative connections on proper Lie groupoids or, more exactly, for reducing the task of constructing such connections to a number of in-principle simpler tasks involving only Lie groupoids that are both proper and regular.
Giorgio Trentinaglia | work_ldp6a3whfbeyfoqlny7sn6xxau | Sat, 01 Oct 2022 00:00:00 GMT
Tyranny-of-the-minority regression adjustment in randomized experiments
https://scholar.archive.org/work/szndnkke4vdaxdpnzmaz24i2p4
Regression adjustment is widely used in the analysis of randomized experiments to improve the estimation efficiency of the treatment effect. This paper reexamines a weighted regression adjustment method termed tyranny-of-the-minority (ToM), wherein units in the minority group are given greater weights. We demonstrate that the ToM regression adjustment is more robust than Lin (2013)'s regression adjustment with treatment-covariate interactions, even though these two regression adjustment methods are asymptotically equivalent in completely randomized experiments. Moreover, we extend ToM regression adjustment to stratified randomized experiments, completely randomized survey experiments, and cluster randomized experiments. We obtain design-based properties of the ToM regression-adjusted average treatment effect estimator under such designs. In particular, we show that the ToM regression-adjusted estimator improves the asymptotic estimation efficiency over the unadjusted estimator even when the regression model is misspecified, and is optimal in the class of linearly adjusted estimators. We also study the asymptotic properties of various heteroscedasticity-robust standard error estimators and provide recommendations for practitioners. Simulation studies and real data analysis demonstrate ToM regression adjustment's superiority over existing methods.
Xin Lu, Hanzhong Liu | work_szndnkke4vdaxdpnzmaz24i2p4 | Sat, 01 Oct 2022 00:00:00 GMT
Actionable Neural Representations: Grid Cells from Minimal Constraints
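For readers unfamiliar with the baseline that the Lu and Liu abstract compares against, Lin (2013)'s interacted regression adjustment can be sketched in a few lines. This is a minimal illustration on made-up data, not the authors' ToM estimator (which additionally upweights the minority group); the function name and simulation are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def lin_adjusted_ate(y, t, X):
    """Lin (2013) regression adjustment: OLS of y on the treatment
    indicator, centered covariates, and their interactions; the
    coefficient on the treatment indicator estimates the ATE."""
    Xc = X - X.mean(axis=0)                       # center covariates
    design = np.column_stack([np.ones(len(y)), t, Xc, t[:, None] * Xc])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

# Toy completely randomized experiment with true effect 2.0 and a
# minority treated group (one quarter of the units).
n = 2000
X = rng.normal(size=(n, 2))
t = rng.permutation(np.r_[np.ones(n // 4), np.zeros(3 * n // 4)])
y = 2.0 * t + X @ np.array([1.0, -0.5]) + rng.normal(size=n)
print(round(lin_adjusted_ate(y, t, X), 2))  # close to the true effect 2.0
```

ToM replaces the implicit equal weighting of this OLS fit with weights that favor the minority group, which is what the abstract argues buys extra robustness.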
https://scholar.archive.org/work/rhdzuisouza6vaxw3nv37ccrsy
To afford flexible behaviour, the brain must build internal representations that mirror the structure of variables in the external world. For example, 2D space obeys rules: the same set of actions combine in the same way everywhere (step north, then south, and you won't have moved, wherever you start). We suggest the brain must represent this consistent meaning of actions across space, as it allows you to find new short-cuts and navigate in unfamiliar settings. We term this representation an 'actionable representation'. We formulate actionable representations using group and representation theory, and show that, when combined with biological and functional constraints - non-negative firing, bounded neural activity, and precise coding - multiple modules of hexagonal grid cells are the optimal representation of 2D space. We support this claim with intuition, analytic justification, and simulations. Our analytic results normatively explain a set of surprising grid cell phenomena, and make testable predictions for future experiments. Lastly, we highlight the generality of our approach beyond just understanding 2D space. Our work characterises a new principle for understanding and designing flexible internal representations: they should be actionable, allowing animals and machines to predict the consequences of their actions, rather than just encode them.
William Dorrell, Peter E. Latham, Timothy E.J. Behrens, James C.R. Whittington | work_rhdzuisouza6vaxw3nv37ccrsy | Fri, 30 Sep 2022 00:00:00 GMT
The Coupled Bootstrap Framework for Risk and Error Estimation
https://scholar.archive.org/work/nod6rlauobcklcljtnbduxqyh4
Test error estimation is a fundamental problem in statistical learning. Its goal is to correctly evaluate how an algorithm that learned from training data will perform on new and unseen data. With the development of complex machine learning models, hyperparameter tuning and model selection are critical to obtaining good performance with learning algorithms, and these tasks strongly rely on a good test error estimator. Existing estimators in the literature require smoothness assumptions on the data-generating distribution and/or the fitting algorithm, resampling schemes that rely on symmetry of the data, or can be very computationally expensive. We propose a new test error and risk estimator named coupled bootstrap, or CB, which is easily computable, model-agnostic, and does not rely on sample splitting. By exploiting the distributional properties of some noise classes, we create a pair of perturbed datasets with certain independence properties such that one of these perturbed datasets acts as a training set and the other as a test set. The CB estimator is shown to be unbiased for a slightly perturbed version of the original problem, and converges to the original test error as the magnitude of the added perturbation decreases. We focus on two very important noise classes for the response variable: Gaussian and Poisson. For both cases, we study the bias behavior as a function of the perturbation magnitude, control the error estimator's variability as a function of the perturbation size and the number of bootstrap samples, and derive limiting results. We compare CB to existing estimators in the literature, both in simulated and real settings. In the Gaussian scenario, we also provide new findings for existing methods in the literature. In the Poisson case, we propose a new estimator based on the computationally costly gold-standard method in the literature and compare it against the CB approach. In general, CB performs favorably when compared to other estimators, in particular when the algorithm is highly variable, [...]
Natalia Lombardi de Oliveira | work_nod6rlauobcklcljtnbduxqyh4 | Fri, 30 Sep 2022 00:00:00 GMT
Building Specifications in the Event-B Institution
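The coupled pair of perturbed datasets described in the coupled-bootstrap abstract above admits a compact sketch in the Gaussian case: adding sqrt(alpha)*omega to one copy of the response and subtracting omega/sqrt(alpha) from the other, with omega drawn from the same Gaussian noise class, yields two independent datasets. The names below and the toy sample-mean predictor are ours, not the thesis's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_pair(y, sigma, alpha, rng):
    """Create a coupled (train, test) pair from a Gaussian response y.

    With omega ~ N(0, sigma^2), y + sqrt(alpha)*omega and
    y - omega/sqrt(alpha) have zero covariance, hence are independent
    Gaussians: one copy can act as training data, the other as test data.
    """
    omega = rng.normal(0.0, sigma, size=y.shape)
    return y + np.sqrt(alpha) * omega, y - omega / np.sqrt(alpha)

# Toy use: estimate the test error of the sample-mean predictor.
n, sigma, alpha, B = 200, 1.0, 0.1, 100
y = rng.normal(2.0, sigma, size=n)
errs = []
for _ in range(B):
    y_tr, y_te = coupled_pair(y, sigma, alpha, rng)
    mu_hat = y_tr.mean()            # "fit" on the perturbed training copy
    errs.append(np.mean((y_te - mu_hat) ** 2))
cb_estimate = np.mean(errs)        # CB test-error estimate
```

As the abstract notes, the estimate is unbiased for a perturbed version of the problem and approaches the original test error as alpha shrinks.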
https://scholar.archive.org/work/defbxururnbjdp4u3ktl5jjwmq
This paper describes a formal semantics for the Event-B specification language using the theory of institutions. We define an institution for Event-B, EVT, and prove that it meets the validity requirements for satisfaction preservation and model amalgamation. We also present a series of functions that show how the constructs of the Event-B specification language can be mapped into our institution. Our semantics sheds new light on the structure of the Event-B language, allowing us to clearly delineate three constituent sub-languages: the superstructure, infrastructure and mathematical languages. One of the principal goals of our semantics is to provide access to the generic modularisation constructs available in institutions, including specification-building operators for parameterisation and refinement. We demonstrate how these features subsume and enhance the corresponding features already present in Event-B through a detailed study of their use in a worked example. We have implemented our approach via a parser and translator for Event-B specifications, EBtoEVT, which also provides a gateway to the Hets toolkit for heterogeneous specification.
Marie Farrell, Rosemary Monahan, James F. Power | work_defbxururnbjdp4u3ktl5jjwmq | Thu, 29 Sep 2022 00:00:00 GMT
Turbulence as Clebsch Confinement
https://scholar.archive.org/work/qrlmjshh65cfddfvw4x3lbhb44
We argue that in the strong turbulence phase, as opposed to the weak one, the Clebsch variables compactify to the sphere S_2 and are not observable as wave excitations, unlike in weak turbulence. Various topologically nontrivial configurations of this confined Clebsch field are responsible for vortex sheets. Stability equations (CVS) for closed vortex surfaces (bubbles of Clebsch field) are derived and investigated. The exact non-compact solution for the stable vortex sheet family is presented. Compact solutions are proven not to exist by De Lellis and Brué. Asymptotic conservation of anomalous dissipation on stable vortex surfaces in the turbulent limit is discovered. We derive an exact formula for this anomalous dissipation as a surface integral of the square of the velocity gap times the square root of minus the local normal strain. Topologically stable time-dependent solutions, which we call Kelvinons, are introduced. They have a conserved velocity circulation around a static loop; this makes them responsible for the asymptotic PDF tails of velocity circulation, perfectly matching numerical simulations. The loop equation for fluid dynamics is derived and studied. This equation is exactly equivalent to the Schrödinger equation in loop space, with viscosity ν playing the role of Planck's constant. The area law and the asymptotic scaling law for mean circulation at large area are derived. The exact representation of the solution of the loop equation in terms of a singular stochastic equation for the momentum loop trajectory is presented. Kelvinons are fixed points of the loop equation in the turbulent limit ν→ 0. The loop equation's linearity makes the general solution for the PDF a superposition of Kelvinon solutions with different winding numbers.
Alexander Migdal | work_qrlmjshh65cfddfvw4x3lbhb44 | Wed, 28 Sep 2022 00:00:00 GMT
Quantum LDPC Codes for Modular Architectures
https://scholar.archive.org/work/axymjwriprav3icqy7qeu7cfka
In efforts to scale the size of quantum computers, modularity plays a central role across most quantum computing technologies. In light of fault tolerance, this necessitates designing quantum error-correcting codes that are compatible with the connectivity arising from the architectural layouts. In this paper, we aim to bridge this gap by giving a novel way to view and construct quantum LDPC codes tailored for modular architectures. We demonstrate that if the intra- and inter-modular qubit connectivity can be viewed as corresponding to some classical or quantum LDPC codes, then their hypergraph product code fully respects the architectural connectivity constraints. Finally, we show that relaxed connectivity constraints that allow twists of connections between modules pave the way to constructing codes with better parameters.
Armands Strikis, Lucas Berent | work_axymjwriprav3icqy7qeu7cfka | Wed, 28 Sep 2022 00:00:00 GMT
Sparse Bayesian Learning for Complex-Valued Rational Approximations
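The hypergraph product invoked in the modular-architecture abstract above is a standard construction: from two classical parity-check matrices it builds the X- and Z-type check matrices of a CSS quantum code. A minimal sketch (our own naming) that also verifies the CSS commutation condition:

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph product of two classical parity-check matrices.
    Returns the X- and Z-type check matrices of the resulting CSS code."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)])
    Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))])
    return Hx % 2, Hz % 2

# Toy check with the [3,1] repetition code on both factors
# (this reproduces a small surface-code-like code on 13 qubits).
H = np.array([[1, 1, 0], [0, 1, 1]])
Hx, Hz = hypergraph_product(H, H)
# CSS condition: X- and Z-checks must commute, i.e. Hx @ Hz.T = 0 (mod 2).
assert not ((Hx @ Hz.T) % 2).any()
```

In the paper's reading, the two factor matrices encode intra- and inter-modular connectivity, so the product code's checks only touch qubit pairs the architecture actually connects.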
https://scholar.archive.org/work/lw47eslxibf3lka6r3n3d7vwim
Surrogate models are used to alleviate the computational burden in engineering tasks that require the repeated evaluation of computationally demanding models of physical systems, such as the efficient propagation of uncertainties. For models that show a strongly non-linear dependence on their input parameters, standard surrogate techniques, such as polynomial chaos expansion, are not sufficient to obtain an accurate representation of the original model response. By applying a rational approximation instead, the approximation error can be efficiently reduced for models whose non-linearity is accurately described through a rational function. Specifically, our aim is to approximate complex-valued models. A common approach to obtaining the coefficients in the surrogate is to minimize the sample-based error between model and surrogate in the least-squares sense. In order to obtain an accurate representation of the original model and to avoid overfitting, the sample set has to be two to three times the number of polynomial terms in the expansion. For models that require a high polynomial degree or are high-dimensional in terms of their input parameters, this number often exceeds the affordable computational cost. To overcome this issue, we apply a sparse Bayesian learning approach to the rational approximation. Through a specific prior distribution structure, sparsity is induced in the coefficients of the surrogate model. The denominator polynomial coefficients as well as the hyperparameters of the problem are determined through a type-II maximum likelihood approach. We apply a quasi-Newton gradient-descent algorithm in order to find the optimal denominator coefficients and derive the required gradients through application of ℂℝ-calculus.
Felix Schneider, Iason Papaioannou, Gerhard Müller | work_lw47eslxibf3lka6r3n3d7vwim | Tue, 27 Sep 2022 00:00:00 GMT
Fermionic Wigner functional theory
https://scholar.archive.org/work/ytsqyz6lrbbqjbnklxo3yuqysu
A Grassmann functional phase space is formulated for the definition of fermionic Wigner functionals. The formulation follows a stepwise process, starting with the identification of suitable fermionic operators that are analogous to bosonic quadrature operators. The Majorana operators do not suffice for this purpose. Instead, a set of fermionic Bogoliubov operators is used. The eigenstates of these operators are shown to provide a complete orthogonal basis, provided that the dual space is defined by augmenting the Hermitian conjugation with a spin transformation. These bases serve as quadrature bases in terms of which the Wigner functionals can be defined in a way that is analogous to the bosonic case.
Filippus S. Roux | work_ytsqyz6lrbbqjbnklxo3yuqysu | Tue, 27 Sep 2022 00:00:00 GMT
Integrating Feature Engineering with Deep Learning to Conduct Diagnostic and Predictive Analytics for Turbofan Engines
https://scholar.archive.org/work/rclvsy6ggbgj7nqgyc5skwh5me
The prediction of remaining useful life (RUL) is a critical issue in many areas, such as aircraft, ships, automobiles, and facility equipment. Although numerous methods have been presented to address this issue, most of them do not consider the impacts of feature engineering. Typical techniques include the wrapper approach (using metaheuristics), the embedded approach (using machine learning), and the extraction approach (using component analysis). For simplicity, this research considers feature selection and feature extraction. In particular, principal component analysis (PCA) and sliced inverse regression (SIR) are adopted in feature extraction, while stepwise regression (SR), multivariate adaptive regression splines (MARS), random forest (RF), and extreme gradient boosting (XGB) are used in feature selection. In feature selection, the original 15 sensors can be reduced to only four sensors that accumulate more than 80% of the degrees of importance without seriously decreasing the predictive performance. In feature extraction, the top three principal components alone can account for more than 80% of the variance of the original 15 sensors. Further, PCA combined with RF is recommended over PCA combined with CNN (convolutional neural network) because it can achieve satisfactory performance without incurring tedious computation.
Chih-Hsuan Wang, Ji-Yu Liu, Zijian Qiao | work_rclvsy6ggbgj7nqgyc5skwh5me | Mon, 26 Sep 2022 00:00:00 GMT
Variance estimation for the average treatment effects on the treated and on the controls
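The "80% of variance" criterion used for feature extraction in the turbofan abstract above is easy to illustrate. This sketch (synthetic data and our own naming, not the paper's code) counts the principal components needed to reach a variance threshold via an SVD of the centered data:

```python
import numpy as np

rng = np.random.default_rng(2)

def n_components_for(X, target=0.80):
    """Number of principal components needed to explain `target`
    of the total variance, computed from the singular values of
    the centered data matrix."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    explained = (s ** 2) / (s ** 2).sum()
    return int(np.searchsorted(np.cumsum(explained), target) + 1)

# Toy: 15 "sensors" driven mostly by 3 latent factors, echoing the
# paper's finding that 3 components cover >80% of 15 sensors.
latent = rng.normal(size=(500, 3))
loadings = rng.normal(size=(3, 15))
X = latent @ loadings + 0.1 * rng.normal(size=(500, 15))
print(n_components_for(X))  # at most 3 here: the 3 factors dominate
```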
https://scholar.archive.org/work/aswogj3mgfepvesokyeqcyedpa
Common causal estimands include the average treatment effect (ATE), the average treatment effect on the treated (ATT), and the average treatment effect on the controls (ATC). Using augmented inverse probability weighting methods, parametric models are judiciously leveraged to yield doubly robust estimators, i.e., estimators that are consistent when at least one of the parametric models is correctly specified. Three sources of uncertainty arise when we evaluate these estimators and their variances: we estimate the treatment model, the outcome regression model, and the desired treatment effect. In this paper, we propose methods to calculate the variance of the normalized, doubly robust ATT and ATC estimators and investigate their finite-sample properties. We consider asymptotic sandwich variance estimation, the standard bootstrap, and two wild bootstrap methods. For the asymptotic approximations, we incorporate the aforementioned uncertainties via estimating equations. Moreover, unlike the standard bootstrap procedures, the proposed wild bootstrap methods use perturbations of the influence functions of the estimators through independently distributed random variables. We conduct an extensive simulation study in which we vary the heterogeneity of the treatment effect as well as the proportion of participants assigned to the active treatment group. We illustrate the methods using an observational study of critically ill patients on the use of right heart catheterization.
Roland A. Matsouaka, Yi Liu, Yunji Zhou | work_aswogj3mgfepvesokyeqcyedpa | Thu, 22 Sep 2022 00:00:00 GMT
The Axiomatic Approach to Non-Classical Model Theory
https://scholar.archive.org/work/eis66kn4lvfj7ewxu3xnqlxzwy
Institution theory represents the fully axiomatic approach to model theory in which all components of logical systems are treated fully abstractly by reliance on category theory. Here, we survey some developments over the last decade or so concerning the institution theoretic approach to non-classical aspects of model theory. Our focus will be on many-valued truth and on models with states, which are addressed by the two extensions of ordinary institution theory known as L-institutions and stratified institutions, respectively. The discussion will include relevant concepts, techniques, and results from these two areas.
Răzvan Diaconescu | work_eis66kn4lvfj7ewxu3xnqlxzwy | Wed, 21 Sep 2022 00:00:00 GMT
Aitchison's Compositional Data Analysis 40 Years On: A Reappraisal
https://scholar.archive.org/work/hyokx6lmp5hizin7qxrzfu44va
The development of John Aitchison's approach to compositional data analysis is followed since his paper read to the Royal Statistical Society in 1982. Aitchison's logratio approach, which was proposed to solve the problematic aspects of working with data with a fixed sum constraint, is summarized and reappraised. It is maintained that the principles on which this approach was originally built, the main one being subcompositional coherence, are not required to be satisfied exactly -- quasi-coherence is sufficient, that is, near enough to being coherent for all practical purposes. This opens up the field to using simpler data transformations, such as power transformations, that permit zero values in the data. The additional principle of exact isometry, which was subsequently introduced and was not in Aitchison's original conception, imposed the use of isometric logratio transformations, but these are complicated and problematic to interpret, involving ratios of geometric means. If this principle is regarded as important in certain analytical contexts, for example unsupervised learning, it can be relaxed by showing that regular pairwise logratios, as well as the alternative quasi-coherent transformations, can also be quasi-isometric, meaning they are close enough to exact isometry for all practical purposes. It is concluded that the isometric and related logratio transformations such as pivot logratios are not a prerequisite for good practice, although many authors insist on their obligatory use. This conclusion is fully supported here by case studies in geochemistry and in genomics, which demonstrate the good performance of pairwise logratios, as originally proposed by Aitchison, or of Box-Cox power transforms of the original compositions where no zero replacements are necessary.
Michael Greenacre, Eric Grunsky, John Bacon-Shone, Ionas Erb, Thomas Quinn | work_hyokx6lmp5hizin7qxrzfu44va | Tue, 20 Sep 2022 00:00:00 GMT
Application of a Spectral Method to Simulate Quasi-Three-Dimensional Underwater Acoustic Fields
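The pairwise logratios favored in the Aitchison reappraisal above are simple to compute. A minimal sketch (our own naming) for a matrix of strictly positive compositions; the key property checked at the end is scale invariance, which is what makes logratios insensitive to the fixed-sum constraint:

```python
import numpy as np
from itertools import combinations

def pairwise_logratios(X):
    """All pairwise logratios log(x_j / x_k), j < k, computed
    column-wise for a matrix whose rows are positive compositions."""
    cols = [np.log(X[:, j] / X[:, k])
            for j, k in combinations(range(X.shape[1]), 2)]
    return np.column_stack(cols)

# Toy 3-part compositions (rows sum to 1).
X = np.array([[0.2, 0.3, 0.5],
              [0.1, 0.6, 0.3]])
LR = pairwise_logratios(X)
# Scale invariance: multiplying a composition by any constant
# (e.g. before or after closure) leaves every logratio unchanged.
assert np.allclose(pairwise_logratios(5 * X), LR)
```

Note that, as the abstract stresses, this transformation requires strictly positive parts; the power transforms it discusses are one way to accommodate zeros.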
https://scholar.archive.org/work/772brll6wza4zknplcvyjmodpu
The calculation of a three-dimensional underwater acoustic field has always been a key problem in computational ocean acoustics. Traditionally, this solution is usually obtained by directly solving the acoustic Helmholtz equation using a finite difference or finite element algorithm. Solving the three-dimensional Helmholtz equation directly is computationally expensive. For quasi-three-dimensional problems, the Helmholtz equation can be processed by the integral transformation approach, which can greatly reduce the computational cost. In this paper, a numerical algorithm for a quasi-three-dimensional sound field that combines an integral transformation technique, stepwise coupled modes and a spectral method is designed. The quasi-three-dimensional problem is transformed into a two-dimensional problem using an integral transformation strategy. A stepwise approximation is then used to discretize the range dependence of the two-dimensional problem; this approximation is essentially a physical discretization that further reduces the range-dependent two-dimensional problem to a one-dimensional problem. Finally, the Chebyshev--Tau spectral method is employed to accurately solve the one-dimensional problem. We provide the corresponding numerical program SPEC3D for the proposed algorithm and describe some representative numerical examples. In the numerical experiments, the consistency between SPEC3D and the analytical solution/high-precision finite difference program COACH verifies the reliability and capability of the proposed algorithm. A comparison of running times illustrates that the algorithm proposed in this paper is significantly faster than the full three-dimensional algorithm in terms of computational speed.
Houwang Tu, Yongxian Wang, Wei Liu, Chunmei Yang, Jixing Qin, Shuqing Ma, Xiaodong Wang | work_772brll6wza4zknplcvyjmodpu | Fri, 16 Sep 2022 00:00:00 GMT
Behavioral Theory for Stochastic Systems? A Data-driven Journey from Willems to Wiener and Back Again
https://scholar.archive.org/work/x3mzutns4bcppamrewatyxbbzm
The fundamental lemma by Jan C. Willems and co-workers has become one of the supporting pillars of the recent progress on data-driven control and system analysis. The lemma is deeply rooted in behavioral systems theory, which so far has been focused on finite-dimensional deterministic systems. This tutorial combines recent insights into stochastic and descriptor-system formulations of the lemma to establish the formal basis for a behavioral theory of stochastic descriptor systems. We show that Polynomial Chaos Expansions (PCE) of L^2-random variables, which date back to Norbert Wiener's seminal work, enable equivalent behavioral characterizations of linear stochastic descriptor systems. Specifically, we prove that under mild assumptions the behavior of L^2-random variables is equivalent to the behavior of the PCE coefficients and that it entails the behavior of realization trajectories. We also illustrate the shortcomings of behaviors in terms of statistical moments. The paper culminates in the formulation of the stochastic fundamental lemma for linear descriptor systems, which in turn enables numerically tractable formulations of data-driven stochastic optimal control combining Hankel matrices of realization data with PCE concepts.
Timm Faulwasser, Ruchuan Ou, Guanru Pan, Philipp Schmitz, Karl Worthmann | work_x3mzutns4bcppamrewatyxbbzm | Wed, 14 Sep 2022 00:00:00 GMT
Socially Enhanced Situation Awareness from Microblogs using Artificial Intelligence: A Survey
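The Polynomial Chaos Expansion at the heart of the Faulwasser et al. abstract above can be illustrated for a single standard-normal variable: a random variable f(ξ) is expanded in probabilists' Hermite polynomials, with coefficients obtained by projection. A sketch (our own naming) using Gauss-Hermite quadrature for the expectations:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, e

def pce_coeffs(f, order, quad=40):
    """PCE coefficients of f(xi), xi ~ N(0,1), in probabilists'
    Hermite polynomials He_k: c_k = E[f(xi) He_k(xi)] / k!,
    using Gauss-Hermite quadrature for the expectation."""
    x, w = He.hermegauss(quad)
    w = w / w.sum()                  # normalize to standard-normal weights
    return [np.sum(w * f(x) * He.hermeval(x, [0] * k + [1])) / factorial(k)
            for k in range(order + 1)]

c = pce_coeffs(np.exp, 6)
# For f = exp, every projection E[exp(xi) He_k(xi)] equals e^{1/2},
# so c_k = e^{1/2} / k!; in particular c_0 is the mean of exp(xi).
```

The tutorial's point is that working with such coefficient sequences (rather than moments) preserves enough structure to carry the fundamental lemma over to the stochastic setting.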
https://scholar.archive.org/work/7eh6lplobzgk3oasm23x6wgqce
The rise of social media platforms provides an unbounded, infinitely rich source of aggregate knowledge of the world around us, both historic and real-time, from a human perspective. The greatest challenge we face is how to process and understand this raw and unstructured data, go beyond individual observations and see the "big picture"--the domain of Situation Awareness. We provide an extensive survey of Artificial Intelligence research, focusing on microblog social media data with applications to Situation Awareness, that gives the seminal work and state-of-the-art approaches across six thematic areas: Crime, Disasters, Finance, Physical Environment, Politics, and Health and Population. We provide a novel, unified methodological perspective, identify key results and challenges, and present ongoing research directions.
Rabindra Lamsal, Aaron Harwood, Maria Rodriguez Read | work_7eh6lplobzgk3oasm23x6wgqce | Tue, 13 Sep 2022 00:00:00 GMT
On the nature of Mersenne fluctuations
https://scholar.archive.org/work/k4tyj2xn5bekfmfldnjxxrgloi
In Part I, crotons are introduced, multifaceted pre-geometric objects that occur both as labels encoded on the boundary of a "volume" and as complementary aspects of geometric fluctuations within that volume. If you think of crotons as linear combinations, then the scalars used are croton base numbers. Croton base numbers can be combined to form the amplitudes and phases of Mersenne fluctuations which, in turn, form qphyla. Volume normally requires space or space-time as a prerequisite; in a pregeometric setting, however, "volume" is represented by a qphyletic assembly. Various stages of pre-geometric refinement, expressed through the aspects crotonic amplitude or phase, combine to eventually form and/or dissolve sphere-packed chunks of Euclidean space. A time-like crotonic refinement is a rough analog of temporal resolution in tenacious time, whereas space-like crotonic refinement is analogous to spatial resolution in sustained space. The analogy suggests a conceptual link between the ever-expanding scope of Mersenne fluctuations and the creation and lifetime patterns of massive elementary particles. A three-stage process of ideation, organization and intraworldly action is introduced to back this up. In Part II, the intraworldly aspect is analyzed first, including our preon model of subnuclear structure, and the organizer aspect thereafter, based on three types of Mersenne numbers, M_reg, M_5/8, M_9/8, and two formal principles: juxtaposition x vs. x±1(2) and the interordinal application of functional 1⋇(f^(a)∘(f^(b)⋇ f^(c))).
U. Merkel | work_k4tyj2xn5bekfmfldnjxxrgloi | Tue, 13 Sep 2022 00:00:00 GMT
Efficient query evaluation techniques over large amount of distributed linked data
https://scholar.archive.org/work/mdwcp7smqzax5gzl4sfqar5w4a
As RDF becomes more widely established and the amount of linked data is rapidly increasing, the efficient querying of large amounts of data becomes a significant challenge. In this paper, we propose a family of algorithms for querying large amounts of linked data in a distributed manner. These query evaluation algorithms are independent of the way the data is stored, as well as of the particular implementation of the query evaluation. We then use the MapReduce paradigm to present a distributed implementation of these algorithms and experimentally evaluate them, although the algorithms could be straightforwardly translated into other distributed processing frameworks. We also investigate and propose multiple query decomposition approaches for Basic Graph Patterns (a subclass of SPARQL queries) that are used to improve the overall performance of the distributed query answering. A deep analysis of the effectiveness of these decomposition algorithms is also provided.
Eleftherios Kalogeros, Manolis Gergatsoulis, Matthew Damigos, Christos Nomikos | work_mdwcp7smqzax5gzl4sfqar5w4a | Mon, 12 Sep 2022 00:00:00 GMT
Satoshi Nakamoto and the Origins of Bitcoin – The Profile of a 1-in-a-Billion Genius
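To make the Basic Graph Pattern evaluation discussed in the linked-data abstract above concrete, here is a toy sketch (our own naming, not the paper's algorithms) of a two-pattern BGP evaluated as a hash join on the shared variable: this is the same shape as a MapReduce join keyed on that variable.

```python
from collections import defaultdict

# Toy RDF graph as (subject, predicate, object) triples.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "age", "30"),
    ("bob", "age", "29"),
]

def match(pattern, triple):
    """Match one triple pattern; '?'-prefixed terms are variables.
    Returns a variable-binding dict, or None on mismatch."""
    binding = {}
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            binding[p] = t
        elif p != t:
            return None
    return binding

def bgp_join(p1, p2, data):
    """Evaluate a two-pattern BGP by hash-joining the bindings of the
    two patterns on their shared variables."""
    shared = sorted(set(v for v in p1 if v.startswith("?")) &
                    set(v for v in p2 if v.startswith("?")))
    buckets = defaultdict(list)
    for t in data:                      # bucket p1 bindings by join key
        b = match(p1, t)
        if b is not None:
            buckets[tuple(b[v] for v in shared)].append(b)
    out = []
    for t in data:                      # probe with p2 bindings
        b2 = match(p2, t)
        if b2 is None:
            continue
        for b1 in buckets[tuple(b2[v] for v in shared)]:
            out.append({**b1, **b2})
    return out

# BGP: ?x knows ?y . ?y age ?a
res = bgp_join(("?x", "knows", "?y"), ("?y", "age", "?a"), triples)
```

In a MapReduce setting, the bucketing step is the map phase (emit bindings keyed on the shared variables) and the pairing step is the reduce phase; the decomposition approaches in the paper concern how to split larger BGPs into such joinable pieces.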
https://scholar.archive.org/work/j4qedozq7jcwhkwcoi3szt7qba
The mystery about the ingenious creator of Bitcoin concealed behind the pseudonym Satoshi Nakamoto has been fascinating the global public for more than a decade. Suddenly jumping out of the dark in 2008, this persona hurled the decentralized electronic cash system "Bitcoin", which has reached a peak market capitalization in the region of 1 trillion USD. In a purposely agnostic and meticulous "leaving no stone unturned" approach, this study presents new hard facts, which evidently slipped through Satoshi Nakamoto's elaborate privacy shield, and derives meaningful pointers that are primarily inferred from Bitcoin's whitepaper, its blockchain parameters, and data that were widely up to his discretion. This ample stack of established and novel evidence is systematically categorized, analyzed, and then connected to its related real-world ambient, like relevant locations and happenings in the past and at the time. The evidence compounds towards a substantial role of the Benelux cryptography ecosystem, with strong transatlantic links, in the creation of Bitcoin. A consistent biography, a psychogram, and the gripping story of an ingenious, multi-talented, autodidactic, reticent, and capricious polymath transpire, which are absolutely unique from a history of science and technology perspective. A cohort of previously fielded candidates and the best matches emerging from the investigations is probed against an unprecedentedly restrictive, multi-stage exclusion filter, which can, with maximum certainty, rule out most "Satoshi Nakamoto" candidates, while some of them remain to be confirmed. With this article, you will be able to decide who is not, or is highly unlikely to be, Satoshi Nakamoto, be equipped with an ample stack of systematically categorized evidence and efficient methodologies to find suitable candidates, and can possibly unveil the real identity of the creator of Bitcoin - if you want.
Jens Ducrée | work_j4qedozq7jcwhkwcoi3szt7qba | Fri, 09 Sep 2022 00:00:00 GMT