IA Scholar Query: On lines avoiding unit balls in three dimensions.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en). info@archive.org. Sat, 31 Dec 2022 00:00:00 GMT. Generator: fatcat-scholar. Docs: https://scholar.archive.org/help. TTL: 1440.

Introduction
https://scholar.archive.org/work/eky4gyujtzbhtm3nvlzahz6ck4
(work_eky4gyujtzbhtm3nvlzahz6ck4, Sat, 31 Dec 2022 00:00:00 GMT)

Access Journal Volume 02, Number 03
https://scholar.archive.org/work/keh5bvanyrh5ths3uzlqnh4xaa
ACCESS (Adoption of Contemporary research Concentrating Education Science & Social Studies) Journal is an initiative of GEIST International Foundation, which invites researchers, enthusiasts, experts, teachers, educators, faculty members, policy makers, journalists, regulators, and students of the education sector from different parts of the world to share their research experiences and results on education, educational development, teaching methodologies, innovation, trends, and the future. GEIST International Foundation welcomes contributors from the global education sector to submit their research and study content so that this journal can become a platform for the exchange of knowledge and experience.

GEIST International Foundation (work_keh5bvanyrh5ths3uzlqnh4xaa, Tue, 01 Nov 2022 00:00:00 GMT)

Spherical frame projections for visualising joint range of motion, and a complementary method to capture mobility data
https://scholar.archive.org/work/nxlirj7lc5d55odayymwyiw5zi
Quantifying joint range of motion (RoM), the set of reachable poses at a joint, has many applications in research and clinical care. Joint RoM measurements can be used to investigate the link between form and function in extant and extinct animals, to diagnose musculoskeletal disorders and injuries, or to monitor rehabilitation progress. However, it is difficult to visually demonstrate how the rotations of the joint axes interact to produce joint positions. Here we introduce the spherical frame projection (SFP), a novel 3D visualisation technique, paired with a complementary data collection approach. SFP visualisations are intuitive to interpret in relation to the joint anatomy because they 'trace' the motion of the coordinate system of the distal bone at a joint relative to the proximal bone. Furthermore, SFP visualisations incorporate the interactions of degrees of freedom, which is imperative for capturing the full joint RoM. For the collection of such joint RoM data, we designed a rig using conventional motion capture systems, including live audio-visual feedback on torques and sampled poses. Thus, we propose that our visualisation and data collection approach can be adapted for wide use in the study of joint function.

Eva C Herbst, Enrico A Eberhard, John R Hutchinson, Christopher T Richards (work_nxlirj7lc5d55odayymwyiw5zi, Sat, 01 Oct 2022 00:00:00 GMT)

Blow-up of solutions of critical elliptic equations in three dimensions
https://scholar.archive.org/work/4e7ybao3wjbx7ja2lbkjippiga
We describe the asymptotic behavior of positive solutions u_ϵ of the equation -Δu + au = 3u^(5-ϵ) in Ω ⊂ ℝ^3 with a homogeneous Dirichlet boundary condition. The function a is assumed to be critical in the sense of Hebey and Vaugon, and the functions u_ϵ are assumed to form an optimizing sequence for the Sobolev inequality. Under a natural nondegeneracy assumption we derive the exact rate of the blow-up and the location of the concentration point, thereby proving a conjecture of Brézis and Peletier (1989). Similar results are also obtained for solutions of the equation -Δu + (a + ϵV)u = 3u^5 in Ω.

Rupert L. Frank, Tobias König, Hynek Kovařík (work_4e7ybao3wjbx7ja2lbkjippiga, Tue, 27 Sep 2022 00:00:00 GMT)

Rigid comparison geometry for Riemannian bands and open incomplete manifolds
https://scholar.archive.org/work/37crgel3nbfinjzemh6o2sxgdi
Comparison theorems are foundational to our understanding of the geometric features implied by various curvature constraints. This paper considers manifolds with a positive lower bound on either scalar, 2-Ricci, or Ricci curvature, and contains a variety of theorems which provide sharp relationships between this bound and notions of width. Some inequalities leverage geometric quantities such as boundary mean curvature, while others involve topological conditions in the form of linking requirements or homological constraints. In several of these results open and incomplete manifolds are studied, one of which partially addresses a conjecture of Gromov in this setting. The majority of results are accompanied by rigidity statements which isolate various model geometries, both complete and incomplete, including a new characterization of round lens spaces and other models that have not appeared elsewhere. As a byproduct, we additionally give new and quantitative proofs of several classical comparison statements, such as the Bonnet-Myers and Frankel theorems, as well as a version of Llarull's theorem and a notable fact concerning asymptotically flat manifolds. The results that we present vary significantly in character; however, a common theme is present in that the lead role in each proof is played by spacetime harmonic functions, which are solutions of a certain elliptic equation originally designed to study mass in mathematical general relativity.

Sven Hirsch, Demetre Kazaras, Marcus Khuri, Yiyue Zhang (work_37crgel3nbfinjzemh6o2sxgdi, Mon, 26 Sep 2022 00:00:00 GMT)

Sampling Constrained Continuous Probability Distributions: A Review
https://scholar.archive.org/work/33oxrxzkpfetbkevd7matuojam
The problem of sampling constrained continuous distributions appears frequently in many machine/statistical learning models. Many Markov chain Monte Carlo (MCMC) sampling methods have been adapted to handle different types of constraints on the random variables. Among these methods, Hamiltonian Monte Carlo (HMC) and related approaches have shown significant advantages in computational efficiency compared to other counterparts. In this article, we first review HMC and some extended sampling methods, and then concretely explain three constrained HMC-based sampling methods: reflection, reformulation, and spherical HMC. For illustration, we apply these methods to three well-known constrained sampling problems: truncated multivariate normal distributions, Bayesian regularized regression, and nonparametric density estimation. In this review, we also connect constrained sampling with the related problem, in the statistical design of experiments, of a constrained design space.

Shiwei Lan, Lulu Kang (work_33oxrxzkpfetbkevd7matuojam, Mon, 26 Sep 2022 00:00:00 GMT)

A Straight Forward Path to a Path Integration of Einstein's Gravity
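The "reflection" strategy named in the Lan-Kang constrained-sampling review can be sketched in a few lines. The following is a hypothetical one-dimensional toy (names and settings are mine, not code from the paper): HMC targeting a standard normal truncated to [0, ∞), where a leapfrog position that exits the domain is mirrored back across the boundary and the momentum is flipped.

```python
import numpy as np

def reflective_hmc(logp, grad_logp, x0, n_samples, lo=0.0, hi=np.inf,
                   eps=0.1, n_leap=20, seed=0):
    """Toy 1D HMC with boundary reflection: whenever a leapfrog position
    leaves [lo, hi], mirror it back across the violated bound and flip
    the momentum, instead of rejecting the proposal."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p0 = rng.standard_normal()
        xn = x
        pn = p0 + 0.5 * eps * grad_logp(xn)      # first half momentum step
        for step in range(n_leap):
            xn = xn + eps * pn
            while xn < lo or xn > hi:            # reflect off the constraint
                bound = lo if xn < lo else hi
                xn = 2 * bound - xn
                pn = -pn
            if step < n_leap - 1:
                pn = pn + eps * grad_logp(xn)
        pn = pn + 0.5 * eps * grad_logp(xn)      # final half momentum step
        # Standard Metropolis correction on the Hamiltonian.
        log_acc = (logp(xn) - 0.5 * pn**2) - (logp(x) - 0.5 * p0**2)
        if np.log(rng.uniform()) < log_acc:
            x = xn
        samples.append(x)
    return np.array(samples)

# Standard normal truncated to x >= 0; the mean should be near sqrt(2/pi).
xs = reflective_hmc(lambda z: -0.5 * z * z, lambda z: -z,
                    x0=0.5, n_samples=3000)
```

Reflection keeps every proposal feasible rather than wasting rejections at the boundary, which is the computational advantage such constrained-HMC variants aim for.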
https://scholar.archive.org/work/yjgyzvpyjjevje7uopmrgi55ci
Path integration is a respected form of quantization that all theoretical quantum physicists should welcome. This elaboration begins with simple examples of three different versions of path integration. After an important clarification of how gravity can be properly quantized, a proper path integral for gravity, one that also incorporates the necessary constraint issues, can effectively be obtained. How to evaluate such path integrals is another matter, most likely best handled by computational efforts, including Monte Carlo-like procedures.

John R. Klauder (work_yjgyzvpyjjevje7uopmrgi55ci, Mon, 26 Sep 2022 00:00:00 GMT)

Roadmap on Electronic Structure Codes in the Exascale Era
https://scholar.archive.org/work/72kg6tpwnncrxlfs77h32a3zde
Electronic structure calculations have been instrumental in providing many important insights into a range of physical and chemical properties of various molecular and solid-state systems. Their importance to various fields, including materials science, chemical sciences, computational chemistry, and device physics, is underscored by the large fraction of available public supercomputing resources devoted to these calculations. As we enter the exascale era, exciting new opportunities to increase simulation numbers, sizes, and accuracies present themselves. In order to realize these promises, the community of electronic structure software developers will, however, first have to tackle a number of challenges pertaining to the efficient use of new architectures that will rely heavily on massive parallelism and hardware accelerators. This roadmap provides a broad overview of the state of the art in electronic structure calculations and of the various new directions being pursued by the community. It covers 14 electronic structure codes, presenting their current status, their development priorities over the next five years, and their plans for tackling the challenges and leveraging the opportunities presented by the advent of exascale computing.

Vikram Gavini, Stefano Baroni, Volker Blum, David R. Bowler, Alexander Buccheri, James R. Chelikowsky, Sambit Das, William Dawson, Pietro Delugas, Mehmet Dogan, Claudia Draxl, Giulia Galli, Luigi Genovese, Paolo Giannozzi, Matteo Giantomassi, Xavier Gonze, Marco Govoni, Andris Gulans, François Gygi, John M. Herbert, Sebastian Kokott, Thomas D. Kühne, Kai-Hsin Liou, Tsuyoshi Miyazaki, Phani Motamarri, Ayako Nakata, John E. Pask, Christian Plessl, Laura E. Ratcliff, Ryan M. Richard, Mariana Rossi, Robert Schade, Matthias Scheffler, Ole Schütt, Phanish Suryanarayana, Marc Torrent, Lionel Truflandier, Theresa L. Windus, Qimen Xu, Victor W.-Z. Yu, Danny Perez (work_72kg6tpwnncrxlfs77h32a3zde, Mon, 26 Sep 2022 00:00:00 GMT)

On Variance Estimation of Random Forests
https://scholar.archive.org/work/rbhst34b7ve7noy6kqamxvirca
Ensemble methods such as random forests are popular in applications due to their high predictive accuracy. Existing literature views a random forest prediction as an infinite-order incomplete U-statistic in order to quantify its uncertainty. However, these methods focus on a small subsampling size for each tree, which is theoretically valid but practically limited. This paper develops an unbiased variance estimator based on incomplete U-statistics which allows the tree size to be comparable with the overall sample size, making statistical inference possible in a broader range of real applications. Simulation results demonstrate that our estimators enjoy lower bias and more accurate coverage rates without additional computational costs. We also propose a local smoothing procedure to reduce the variation of our estimator, which shows improved numerical performance when the number of trees is relatively small. Further, we investigate the ratio consistency of our proposed variance estimator under specific scenarios. In particular, we develop a new "double U-statistic" formulation to analyze the Hoeffding decomposition of the estimator's variance.

Tianning Xu, Ruoqing Zhu, Xiaofeng Shao (work_rbhst34b7ve7noy6kqamxvirca, Mon, 26 Sep 2022 00:00:00 GMT)

Optimal Binary Classification Beyond Accuracy
https://scholar.archive.org/work/zfygadxqnbbr7g4uz77hnzahcq
The vast majority of statistical theory on binary classification characterizes performance in terms of accuracy. However, accuracy is known in many cases to poorly reflect the practical consequences of classification error, most famously in imbalanced binary classification, where data are dominated by samples from one of two classes. The first part of this paper derives a novel generalization of the Bayes-optimal classifier from accuracy to any performance metric computed from the confusion matrix. Specifically, this result (a) demonstrates that stochastic classifiers sometimes outperform the best possible deterministic classifier and (b) removes an empirically unverifiable absolute continuity assumption that is poorly understood but pervades existing results. We then demonstrate how to use this generalized Bayes classifier to obtain regret bounds in terms of the error of estimating regression functions under uniform loss. Finally, we use these results to develop some of the first finite-sample statistical guarantees specific to imbalanced binary classification. Specifically, we demonstrate that optimal classification performance depends on properties of class imbalance, such as a novel notion called Uniform Class Imbalance, that have not previously been formalized. We further illustrate these contributions numerically in the case of k-nearest neighbor classification.

Shashank Singh, Justin Khim (work_zfygadxqnbbr7g4uz77hnzahcq, Mon, 26 Sep 2022 00:00:00 GMT)

Convergence guarantees for coefficient reconstruction in PDEs from boundary measurements by variational and Newton type methods via range invariance
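To see why "beyond accuracy" changes the optimal rule in the Singh-Khim abstract: for a confusion-matrix metric such as F1, a plug-in classifier thresholds the regression function η(x) = P(Y = 1 | x) at a metric-dependent level rather than at 1/2. The following is a hypothetical toy sketch (a grid search on synthetic imbalanced data, not the paper's estimator):

```python
import numpy as np

def f1_score(pred, y):
    """F1 computed from the confusion matrix of 0/1 predictions."""
    tp = np.sum((pred == 1) & (y == 1))
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

def best_threshold(eta_vals, y, n_grid=101):
    """Grid-search the threshold on eta(x) that maximizes F1."""
    ts = np.linspace(0.0, 1.0, n_grid)
    scores = [f1_score((eta_vals >= t).astype(int), y) for t in ts]
    i = int(np.argmax(scores))
    return ts[i], scores[i]

# Imbalanced toy data: eta is known here, and labels are drawn from it.
rng = np.random.default_rng(0)
x = rng.uniform(size=5000)
eta_vals = 0.3 * x        # P(Y=1|x) <= 0.3: heavy class imbalance
y = (rng.uniform(size=5000) < eta_vals).astype(int)
t_star, f1_star = best_threshold(eta_vals, y)
```

Under this heavy imbalance, thresholding at the accuracy-optimal 1/2 predicts no positives at all (F1 = 0), while the F1-optimal threshold falls well below 1/2.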
https://scholar.archive.org/work/n5th2degavgk5c3vkq3reeb5k4
A key observation underlying this paper is the fact that the range invariance condition for convergence of regularization methods for nonlinear ill-posed operator equations, such as coefficient identification in partial differential equations (PDEs) from boundary observations, can often be achieved by extending the sought parameter in the sense of allowing it to depend on additional variables. This clearly counteracts unique identifiability of the parameter, though. The second key idea of this paper is therefore to restore the original restricted dependency of the parameter by penalization. This is shown to lead to convergence of variational (Tikhonov-type) and iterative (Newton-type) regularization methods. We concretize the abstract convergence analysis in a framework typical of parameter identification in PDEs, in both a reduced and an all-at-once setting. This is further illustrated by three examples of coefficient identification from boundary observations in elliptic and parabolic PDEs.

Barbara Kaltenbacher (work_n5th2degavgk5c3vkq3reeb5k4, Mon, 26 Sep 2022 00:00:00 GMT)

Conceptual basis of probability and quantum information theory
https://scholar.archive.org/work/i6rfek2r7vablenevi4xyztebm
These notes present a probabilistic framework that enables a formulation of classical probability theory, thermodynamics, and quantum probability with a common set of four principles or axioms. It explains everything that usual quantum mechanics and classical probability theory do. We emphasize that this framework is not an interpretation of quantum mechanics, such as "many worlds", the "Copenhagen interpretation", or others. It is a probability algorithm that computes probabilities of future events and additionally enables a reconstruction of quantum theory, thermodynamics, diffusion, and Wiener processes. We distinguish strictly between possibilities and outcomes. Moreover, we use a time concept based on the classification of future, present, and past. Well-known paradoxes are resolved. The superposition principle obtains a new meaning. The inclusion-exclusion principle, well known in probability theory and number theory, is generalized to complex numbers. Our probabilistic framework is not based on the Hilbert space formalism. It requires only simple set theory and complex numbers; thus, this theory can be taught in schools. Our framework may be viewed as an axiomatic approach to probability in the sense of Hilbert, who asked for an axiomatic probability theory in the sixth of the twenty-three open problems he presented to the International Congress of Mathematicians in Paris in 1900. We have applied our probabilistic algorithm to several problems, including classical problems, statistical mechanics and thermodynamics, diffraction at multiple slits, light reflection, interferometers, delayed-choice experiments, and Hardy's paradox.

Christian Jansson, TUHH Universitätsbibliothek (work_i6rfek2r7vablenevi4xyztebm, Mon, 26 Sep 2022 00:00:00 GMT)

Optimization problems in graphs with locational uncertainty
https://scholar.archive.org/work/gswwrkoycrbexdtpahixxxpwrm
Many discrete optimization problems amount to selecting a feasible set of edges of least weight. We consider in this paper the context of spatial graphs, where the positions of the vertices are uncertain and belong to known uncertainty sets. The objective is to minimize the sum of the distances of the chosen set of edges for the worst positions of the vertices in their uncertainty sets. We first prove that these problems are NP-hard even when the feasible sets consist either of all spanning trees or of all s-t paths. Given this hardness, we propose an exact solution algorithm combining integer programming formulations with a cutting plane algorithm, identifying the cases where the separation problem can be solved efficiently. We also propose a conservative approximation and show its equivalence to the affine decision rule approximation in the context of Euclidean distances. We compare our algorithms to three deterministic reformulations on instances inspired by the scientific literature for the Steiner tree problem and a facility location problem.

Marin Bougeret, Jérémy Omer, Michael Poss (work_gswwrkoycrbexdtpahixxxpwrm, Mon, 26 Sep 2022 00:00:00 GMT)

Entropic Descent Archetypal Analysis for Blind Hyperspectral Unmixing
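To illustrate the flavor of the locational-uncertainty problem above: if each vertex is only known to lie in a Euclidean disk, the worst-case length of a single edge (i, j) is dist(c_i, c_j) + r_i + r_j. Optimizing against these edge-wise worst cases is a conservative (upper-bound) version of the true min-max problem, since the adversary there must place each vertex once for the whole edge set. A hypothetical toy, not the authors' algorithm:

```python
import numpy as np
from itertools import combinations

def worst_case_mst(centers, radii):
    """Minimum spanning tree under per-edge worst-case disk uncertainty.

    With vertex i known only to lie in a disk of radius radii[i] around
    centers[i], the worst-case Euclidean length of edge (i, j) is
    |c_i - c_j| + r_i + r_j.  Minimizing these per-edge worst cases is a
    conservative approximation of the min-max objective.
    """
    n = len(centers)
    w = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        w[i, j] = w[j, i] = (np.linalg.norm(centers[i] - centers[j])
                             + radii[i] + radii[j])
    # Prim's algorithm on the complete graph with worst-case weights.
    in_tree, total, edges = {0}, 0.0, []
    while len(in_tree) < n:
        wt, i, j = min((w[i, j], i, j)
                       for i in in_tree for j in range(n) if j not in in_tree)
        total += wt
        edges.append((i, j))
        in_tree.add(j)
    return total, edges

# Toy instance: a 3-4-5 right triangle with radius-0.1 disks,
# so the worst-case MST weight is (3 + 0.2) + (4 + 0.2) = 7.4.
total, edges = worst_case_mst(np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]]),
                              [0.1, 0.1, 0.1])
```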
https://scholar.archive.org/work/dtejf7twe5gidkrsn26s3iqguu
In this paper, we introduce a new algorithm based on archetypal analysis for blind hyperspectral unmixing, assuming linear mixing of endmembers. Archetypal analysis is a natural formulation for this task. This method does not require the presence of pure pixels (i.e., pixels containing a single material) but instead represents endmembers as convex combinations of a few pixels present in the original hyperspectral image. Our approach leverages an entropic gradient descent strategy, which (i) provides better solutions for hyperspectral unmixing than traditional archetypal analysis algorithms and (ii) leads to efficient GPU implementations. Since running a single instance of our algorithm is fast, we also propose an ensembling mechanism along with an appropriate model selection procedure that make our method robust to hyper-parameter choices while keeping the computational complexity reasonable. Using six standard real datasets, we show that our approach outperforms state-of-the-art matrix factorization and recent deep learning methods. We also provide an open-source PyTorch implementation: https://github.com/inria-thoth/EDAA.

Alexandre Zouaoui (work_dtejf7twe5gidkrsn26s3iqguu, Mon, 26 Sep 2022 00:00:00 GMT)

Approximate Nash equilibria in large nonconvex aggregative games
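The "entropic gradient descent" ingredient of the unmixing abstract amounts to exponentiated-gradient updates (mirror descent with an entropy regularizer), which keep the abundance/archetype weights on the probability simplex by construction. A minimal hypothetical sketch on a tiny least-squares problem (not the EDAA code):

```python
import numpy as np

def entropic_descent(D, x, n_iter=1000, step=0.5):
    """Minimize ||x - D a||^2 over the simplex {a >= 0, sum(a) = 1}
    using exponentiated-gradient (entropic mirror descent) updates:
    a_i <- a_i * exp(-step * grad_i), then renormalize.  The iterate
    never leaves the simplex, so no projection step is needed."""
    k = D.shape[1]
    a = np.full(k, 1.0 / k)              # start at the simplex center
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - x)   # gradient of the squared error
        a = a * np.exp(-step * grad)     # multiplicative update
        a = a / a.sum()                  # stays on the simplex exactly
    return a

# Toy: columns of D play the role of 'endmembers'; x is a convex mix.
D = np.eye(3)
x = np.array([0.2, 0.3, 0.5])
a = entropic_descent(D, x)
```

The multiplicative form is also what makes GPU implementations attractive: each update is elementwise apart from one matrix product.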
https://scholar.archive.org/work/revhjjyfdbgqlakdjcgte2fr64
This paper shows the existence of 𝒪(1/n^γ)-Nash equilibria in n-player noncooperative sum-aggregative games in which the players' cost functions, depending only on their own action and the average of all players' actions, are lower semicontinuous in the former and γ-Hölder continuous in the latter. Neither the action sets nor the cost functions need to be convex. For an important class of sum-aggregative games, which includes congestion games with γ equal to 1, a gradient-proximal algorithm is used to construct 𝒪(1/n)-Nash equilibria with at most 𝒪(n^3) iterations. These results are applied to a numerical example concerning the demand-side management of an electricity system. The asymptotic performance of the algorithm as n tends to infinity is illustrated.

Kang Liu, Nadia Oudjane, Cheng Wan (work_revhjjyfdbgqlakdjcgte2fr64, Mon, 26 Sep 2022 00:00:00 GMT)

Variations of Renormalized Volume for Minimal Submanifolds of Poincare-Einstein Manifolds
https://scholar.archive.org/work/dtrfi6eh6vb6rltkxkm45ifrx4
We investigate the asymptotic expansion and the renormalized volume of minimal submanifolds Y^m of arbitrary codimension in Poincaré-Einstein manifolds M^(n+1). In particular, we derive formulae for the first and second variations of the renormalized volume for Y^m ⊆ M^(n+1) when m < n + 1. We apply our formulae to the codimension-1 case and to the case M = ℍ^(n+1). Furthermore, we prove the existence of an asymptotic description of our minimal submanifold Y over the boundary cylinder ∂Y × ℝ^+, and we derive an L^2 inner-product relationship between u_2 and u_(m+1) when M = ℍ^(n+1). Our results apply to a slightly more general class of manifolds, which are conformally compact with a metric that has an even expansion up to high order near the boundary.

Jared Marx-Kuo (work_dtrfi6eh6vb6rltkxkm45ifrx4, Sun, 25 Sep 2022 00:00:00 GMT)

Defects and Frustration in the Packing of Soft Balls
https://scholar.archive.org/work/67elapy57jfdtg63bzhtulxmla
This work introduces the Hookean-Voronoi energy, a minimal model for the packing of soft, deformable balls. This is motivated by recent studies of quasi-periodic equilibria arising from dense packings of diblock and star polymers. Restricting to the planar case, we investigate the equilibrium packings of identical, deformable objects whose shapes are determined by an N-site Voronoi tessellation of a periodic rectangle. We derive a reduced formulation of the system, showing that at equilibrium each site must reside at the "max-center" of its associated Voronoi region, and construct a family of ordered "single-string" minimizers whose cardinality is O(N^2). We identify sharp conditions under which the system admits a regular hexagonal tessellation and establish that in all cases the average energy per site is bounded below by that of a regular hexagon of unit size. However, numerical investigation of the gradient flow of random initial data reveals that for modest values of N the system preponderantly equilibrates to quasi-ordered states with low energy and large basins of attraction. For larger N the distribution of equilibria energies appears to approach a δ-function limit whose energy is significantly higher than that of the ground-state hexagon. This limit is possibly shaped by two mechanisms: a proliferation of moderate-energy disordered equilibria that block access of the gradient flow to lower-energy quasi-ordered states, and a rigid threshold on the maximum energy of stable states.

Kenneth Jao, Keith Promislow, Samuel Sottile (work_67elapy57jfdtg63bzhtulxmla, Sun, 25 Sep 2022 00:00:00 GMT)

Turbulence as Clebsch Confinement
https://scholar.archive.org/work/qrlmjshh65cfddfvw4x3lbhb44
We argue that in the strong turbulence phase, as opposed to the weak one, the Clebsch variables compactify to the sphere S_2 and are not observable as wave excitations, unlike in weak turbulence. Various topologically nontrivial configurations of this confined Clebsch field are responsible for vortex sheets. Stability equations (CVS) for closed vortex surfaces (bubbles of the Clebsch field) are derived and investigated. The exact non-compact solution for the stable vortex sheet family is presented. Compact solutions are proven not to exist by De Lellis and Brué. Asymptotic conservation of anomalous dissipation on stable vortex surfaces in the turbulent limit is discovered. We derive an exact formula for this anomalous dissipation as a surface integral of the square of the velocity gap times the square root of minus the local normal strain. Topologically stable time-dependent solutions, which we call Kelvinons, are introduced. They have a conserved velocity circulation around a static loop; this makes them responsible for the asymptotic PDF tails of velocity circulation, perfectly matching numerical simulations. The loop equation for fluid dynamics is derived and studied. This equation is exactly equivalent to the Schrödinger equation in loop space, with viscosity ν playing the role of Planck's constant. The area law and the asymptotic scaling law for the mean circulation at large area are derived. An exact representation of the solution of the loop equation in terms of a singular stochastic equation for the momentum loop trajectory is presented. Kelvinons are fixed points of the loop equation in the turbulent limit ν → 0. The loop equation's linearity makes the general solution for the PDF a superposition of Kelvinon solutions with different winding numbers.

Alexander Migdal (work_qrlmjshh65cfddfvw4x3lbhb44, Sun, 25 Sep 2022 00:00:00 GMT)

Tuning Frequency Bias in Neural Network Training with Nonuniform Data
https://scholar.archive.org/work/gghp2srbkrbv5hljlx5mixjoe4
Small generalization errors of over-parameterized neural networks (NNs) can be partially explained by the frequency-biasing phenomenon, in which gradient-based algorithms minimize the low-frequency misfit before reducing the high-frequency residuals. Using the Neural Tangent Kernel (NTK), one can provide a theoretically rigorous analysis of training where the data are drawn from constant or piecewise-constant probability densities. Since most training data sets are not drawn from such distributions, we use the NTK model and a data-dependent quadrature rule to theoretically quantify the frequency biasing of NN training given fully nonuniform data. By replacing the loss function with a carefully selected Sobolev norm, we can further amplify, dampen, counterbalance, or reverse the intrinsic frequency biasing in NN training.

Annan Yu, Yunan Yang, Alex Townsend (work_gghp2srbkrbv5hljlx5mixjoe4, Sun, 25 Sep 2022 00:00:00 GMT)

Invariance of immersed Floer cohomology under Lagrangian surgery
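The Sobolev-norm idea in the frequency-bias abstract can be sketched in one function: weight the Fourier modes of the residual by (1 + k^2)^s, so that s > 0 amplifies high-frequency misfit (counteracting the low-frequency bias), s < 0 dampens it, and s = 0 recovers a plain least-squares loss. A hypothetical toy on a uniform periodic grid, not the authors' implementation:

```python
import numpy as np

def sobolev_loss(residual, s):
    """Discrete Sobolev-type H^s loss of a residual sampled on a uniform
    periodic grid: weight each Fourier mode k by (1 + k^2)^s.  With
    s > 0 high frequencies are penalized more heavily; with s < 0 they
    are dampened; s = 0 gives a frequency-flat least-squares loss."""
    n = len(residual)
    r_hat = np.fft.rfft(residual) / n      # one-sided spectrum
    k = np.arange(len(r_hat))
    return float(np.sum((1.0 + k**2) ** s * np.abs(r_hat) ** 2))
```

For two residuals of equal amplitude, one at wavenumber 1 and one at wavenumber 10, the s = 0 losses coincide, while s = 1 makes the high-frequency residual far more expensive, which is exactly the lever the abstract describes.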
https://scholar.archive.org/work/mmtjxyj6vnck3g4dh5x3g73q2e
We show that the cellular Floer cohomology of an immersed Lagrangian brane is invariant under smoothing of a self-intersection point if the quantum valuation of the weakly bounding cochain vanishes and the Lagrangian has dimension at least two. The chain-level map replaces the two orderings of the self-intersection point with meridianal and longitudinal cells on the handle created by the surgery, and uses a bijection between holomorphic disks developed by Fukaya-Oh-Ohta-Ono. Our result generalizes the invariance of potentials for certain Lagrangian surfaces due to Dimitroglou Rizell, Ekholm, and Tonkonog, and implies the invariance of Floer cohomology under mean curvature flow with this type of surgery, as conjectured by Joyce.

Joseph Palmer, Chris Woodward (work_mmtjxyj6vnck3g4dh5x3g73q2e, Sun, 25 Sep 2022 00:00:00 GMT)