IA Scholar Query: Accelerated Newton Iteration for Roots of Black Box Polynomials.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en) | info@archive.org | Tue, 27 Sep 2022 00:00:00 GMT | fatcat-scholar | https://scholar.archive.org/help | TTL 1440

Quantifying and visualizing model similarities for multi-model methods
https://scholar.archive.org/work/hakolbblw5azlk7osjx5afunim
Modeling environmental systems is typically limited by an incomplete system understanding due to scarce and imprecise measurements. This leads to different types of uncertainties, among which conceptual uncertainty plays a key role, but is difficult to address. Conceptual uncertainty refers to the problem of finding the most appropriate model representation of the physical system. This includes the problem of choosing from several plausible model hypotheses, but also the problem that the true system description might not even be among this set of hypotheses. In this thesis, I address the first of these issues, the uncertainty of choosing a model from a finite set. To account for this uncertainty of model choice, modelers typically use multi-model methods. This means that they consider not only one but several models and apply statistical methods to either combine them or select the most appropriate one. For any of these methods, it is crucial to know how similar the individual models are. But even though multi-model methods have become increasingly popular, no methods were available that quantify the similarities between models and visualize them intuitively. This dissertation aims at closing these gaps. In particular, it tackles the challenges of judging whether simplified models are a suitable replacement for a more detailed model, and of visualizing model similarities in a way that helps modelers to gain an intuitive understanding of the model set. I defined three research questions that address these challenges and form the basis of this thesis.
1. How can we systematically assess how similar conceptually simplified model versions are compared to an original, more detailed model?
2. How can we extend the similarity analysis so it is suitable for computationally expensive models?
3. How can we visualize the similarities between probabilistic model predictions?
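One concrete way to quantify similarity between competing model hypotheses is the model confusion matrix referred to in this abstract: generate synthetic data from each candidate model in turn and record which model a selection criterion picks. The sketch below is a minimal illustration only; the three toy models, the fixed noise level, and the maximum-likelihood selection rule are assumptions for the example, not the setup used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)

# Three toy model hypotheses (forms are illustrative, not from the thesis)
models = {
    "linear":    lambda x, a: a * x,
    "quadratic": lambda x, a: a * x**2,
    "sqrt":      lambda x, a: a * np.sqrt(x),
}
names = list(models)
sigma = 0.1  # assumed known measurement noise

def log_likelihood(y, f):
    # Gaussian log-likelihood up to an additive constant
    return -0.5 * np.sum((y - f) ** 2) / sigma**2

n_trials = 200
confusion = np.zeros((3, 3))
for i, gen in enumerate(names):
    for _ in range(n_trials):
        # Synthetic data from the "true" model (fixed parameter a = 1)
        y = models[gen](x, 1.0) + rng.normal(0, sigma, x.size)
        # Select the hypothesis with the highest likelihood
        scores = [log_likelihood(y, models[m](x, 1.0)) for m in names]
        confusion[i, np.argmax(scores)] += 1
confusion /= n_trials  # rows: data-generating model, columns: selected model
```

A strongly diagonal matrix means the hypotheses are easy to tell apart; large off-diagonal entries flag pairs of models that the data cannot distinguish, i.e. similar models.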
With the first contribution, I show that the so-called model confusion matrix can be used to quantify model similarities and thus identify the best conc [...]
Aline Schäfer Rodrigues Silva, Universität Stuttgart | work_hakolbblw5azlk7osjx5afunim | Tue, 27 Sep 2022 00:00:00 GMT

On Completeness of Cost Metrics and Meta-Search Algorithms in $-Calculus
https://scholar.archive.org/work/h4lp7jrk2fgldkq2bo5dj47qd4
In the paper we define three new complexity classes for Turing machine undecidable problems, inspired by Cook and Levin's famous class of NP-complete intractable problems. These are the U-complete (Universal complete), D-complete (Diagonalization complete) and H-complete (Hypercomputation complete) classes. In the paper, in the spirit of Cook, Levin and Karp, we begin populating these new classes by assigning several undecidable problems to them. We justify that some super-Turing models of computation, i.e., models going beyond Turing machines, are tremendously expressive and can accept arbitrary languages over a given alphabet, including undecidable ones. We also prove that one such super-Turing model of computation, the $-Calculus, designed as a tool for automatic problem solving and automatic programming, has this tremendous expressiveness. We also investigate the completeness of cost metrics and meta-search algorithms in the $-calculus.
Eugene Eberbach | work_h4lp7jrk2fgldkq2bo5dj47qd4 | Mon, 26 Sep 2022 00:00:00 GMT

Information geometry for multiparameter models: New perspectives on the origin of simplicity
https://scholar.archive.org/work/tevlnnteirf7lfjlf5hshomcge
Complex models in physics, biology, economics, and engineering are often sloppy, meaning that the model parameters are not well determined by the model predictions for collective behavior. Many parameter combinations can vary over decades without significant changes in the predictions. This review uses information geometry to explore sloppiness and its deep relation to emergent theories. We introduce the model manifold of predictions, whose coordinates are the model parameters. Its hyperribbon structure explains why only a few parameter combinations matter for the behavior. We review recent rigorous results that connect the hierarchy of hyperribbon widths to approximation theory, and to the smoothness of model predictions under changes of the control variables. We discuss recent geodesic methods to find simpler models on nearby boundaries of the model manifold -- emergent theories with fewer parameters that explain the behavior equally well. We discuss a Bayesian prior which optimizes the mutual information between model parameters and experimental data, naturally favoring points on the emergent boundary theories and thus simpler models. We introduce a 'projected maximum likelihood' prior that efficiently approximates this optimal prior, and contrast both to the poor behavior of the traditional Jeffreys prior. We discuss the way the renormalization group coarse-graining in statistical mechanics introduces a flow of the model manifold, and connect stiff and sloppy directions along the model manifold with relevant and irrelevant eigendirections of the renormalization group. Finally, we discuss recently developed 'intensive' embedding methods, allowing one to visualize the predictions of arbitrary probabilistic models as low-dimensional projections of an isometric embedding, and illustrate our method by generating the model manifold of the Ising model.
Katherine N. Quinn, Michael C. Abbott, Mark K. Transtrum, Benjamin B. Machta, James P. Sethna | work_tevlnnteirf7lfjlf5hshomcge | Thu, 22 Sep 2022 00:00:00 GMT

Gaussian Process regression for astronomical time-series
https://scholar.archive.org/work/5l2vgnngvzh4hawxmr2kc6b6fa
The last two decades have seen a major expansion in the availability, size, and precision of time-domain datasets in astronomy. Owing to their unique combination of flexibility, mathematical simplicity and comparative robustness, Gaussian Processes (GPs) have emerged recently as the solution of choice to model stochastic signals in such datasets. In this review we provide a brief introduction to the emergence of GPs in astronomy, present the underlying mathematical theory, and give practical advice considering the key modelling choices involved in GP regression. We then review applications of GPs to time-domain datasets in the astrophysical literature so far, from exoplanets to active galactic nuclei, showcasing the power and flexibility of the method. We provide worked examples using simulated data, with links to the source code, discuss the problem of computational cost and scalability, and give a snapshot of the current ecosystem of open source GP software packages. Driven by further algorithmic and conceptual advances, we expect that GPs will continue to be an important tool for robust and interpretable time domain astronomy for many years to come.
Suzanne Aigrain, Daniel Foreman-Mackey | work_5l2vgnngvzh4hawxmr2kc6b6fa | Mon, 19 Sep 2022 00:00:00 GMT

Simple Approximative Algorithms for Free-Support Wasserstein Barycenters
https://scholar.archive.org/work/y34rvbozfjhsrirwjz7ryvqiay
Computing Wasserstein barycenters of discrete measures has recently attracted considerable attention due to its wide variety of applications in data science. In general, this problem is NP-hard, calling for practical approximative algorithms. In this paper, we analyze a well-known simple framework for approximating Wasserstein-p barycenters, where we mainly consider the most common cases p=2 and p=1, which is not as well discussed. The framework produces sparse support solutions and shows good numerical results in the free-support setting. Depending on the desired level of accuracy, this requires only N-1 or N(N-1)/2 standard two-marginal optimal transport (OT) computations between the N input measures, respectively, which is fast, memory-efficient and easy to implement using any OT solver as a black box. What is more, these methods yield a relative error of at most N and 2, respectively, for both p=1, 2. We show that these bounds are practically sharp. In light of the hardness of the problem, it is not surprising that such guarantees cannot be close to optimality in general. Nevertheless, these error bounds usually turn out to be drastically lower for a given particular problem in practice and can be evaluated with almost no computational overhead, in particular without knowledge of the optimal solution. In our numerical experiments, these guaranteed errors were at most a few percent.
Johannes von Lindheim | work_y34rvbozfjhsrirwjz7ryvqiay | Tue, 13 Sep 2022 00:00:00 GMT

Autonomous Passage Planning for a Polar Vessel
https://scholar.archive.org/work/y6bl332gpndqldzg6xc33exxei
We introduce a method for long-distance maritime route planning in polar regions, taking into account complex changing environmental conditions. The method allows the construction of optimised routes, describing the three main stages of the process: discrete modelling of the environmental conditions using a non-uniform mesh, the construction of mesh-optimal paths, and path smoothing. In order to account for different vehicle properties we construct a series of data driven functions that can be applied to the environmental mesh to determine the speed limitations and fuel requirements for a given vessel and mesh cell, representing these quantities graphically and geospatially. In describing our results, we demonstrate an example use case for route planning for the polar research ship the RRS Sir David Attenborough (SDA), accounting for ice-performance characteristics and validating the spatial-temporal route construction in the region of the Weddell Sea, Antarctica. We demonstrate the versatility of this route construction method by showing that routes change depending on the seasonal sea ice variability, differences in the route-planning objective functions used, and the presence of other environmental conditions such as currents. To demonstrate the generality of our approach, we present examples in the Arctic Ocean and the Baltic Sea. The techniques outlined in this manuscript are generic and can therefore be applied to vessels with different characteristics. Our approach can have considerable utility beyond just a single vessel planning procedure, and we outline how this workflow is applicable to a wider community, e.g. commercial and passenger shipping.
Jonathan D. Smith, Samuel Hall, George Coombs, James Byrne, Michael A.S. Thorne, J. Alexander Brearley, Derek Long, Michael Meredith, Maria Fox | work_y6bl332gpndqldzg6xc33exxei | Tue, 13 Sep 2022 00:00:00 GMT

Fast quantum subroutines for the simplex method
https://scholar.archive.org/work/n5imgfb6nrfwtmiur2rii75vre
We propose quantum subroutines for the simplex method that avoid classical computation of the basis inverse. We show how to quantize all steps of the simplex algorithm, including checking optimality, unboundedness, and identifying a pivot (i.e., pricing the columns and performing the ratio test) according to Dantzig's rule or the steepest edge rule. The quantized subroutines obtain a polynomial speedup in the dimension of the problem, but have worse dependence on other numerical parameters. For example, for a problem with m constraints, n variables, at most d_c nonzero elements per column of the constraint matrix, at most d nonzero elements per column or row of the basis, basis condition number κ, and optimality tolerance ϵ, pricing can be performed in Õ(1/ϵ κ d √(n)(d_c n + d m)) time, where the Õ notation hides polylogarithmic factors; classically, pricing requires O(d_c^0.7 m^1.9 + m^{2+o(1)} + d_c n) time in the worst case using the fastest known algorithm for sparse matrix multiplication. For well-conditioned sparse problems the quantum subroutines scale better in m and n, and may therefore have an advantage for very large problems. The running time of the quantum subroutines can be improved if the constraint matrix admits an efficient algorithmic description, or if quantum RAM is available.
Giacomo Nannicini | work_n5imgfb6nrfwtmiur2rii75vre | Mon, 12 Sep 2022 00:00:00 GMT

I'm stuck! How to efficiently debug computational solid mechanics models so you can enjoy the beauty of simulations
https://scholar.archive.org/work/bu7oksyhpzfjrh6gfjv3bha34a
A substantial fraction of the time that computational modellers dedicate to developing their models is actually spent troubleshooting and debugging their code. However, how this process unfolds is seldom spoken about, maybe because it is hard to articulate as it relies mostly on the mental catalogues we have built with the experience of past failures. To help newcomers to the field of material modelling, here we attempt to fill this gap and provide a perspective on how to identify and fix mistakes in computational solid mechanics models. To this aim, we describe the components that make up such a model and then identify possible sources of errors. In practice, finding mistakes is often better done by considering the symptoms of what is going wrong. As a consequence, we provide strategies to narrow down where in the model the problem may be, based on observation and a catalogue of frequent causes of observed errors. In a final section, we also discuss how one-time bug-free models can be kept bug-free in view of the fact that computational models are typically under continual development. We hope that this collection of approaches and suggestions serves as a "road map" to find and fix mistakes in computational models, and more importantly, keep the problems solved so that modellers can enjoy the beauty of material modelling and simulation.
Ester Comellas, Jean-Paul Pelteret, Wolfgang Bangerth | work_bu7oksyhpzfjrh6gfjv3bha34a | Fri, 09 Sep 2022 00:00:00 GMT

The Cut-Cell Method for the Prediction of 2D/3D Flows in Complex Geometries and the Adjoint-Based Shape Optimization
https://scholar.archive.org/work/vlcycl6zxzbsxmj7jm3aljzsfa
This dissertation thesis develops integrated, robust, and reliable Computational Fluid Dynamics (CFD) methods and software for the analysis and shape optimization in real-world applications in fluid mechanics and aerodynamics. To this end, the cut-cell method, which removes mesh generation barriers from the flow analysis and design process, is adopted. The computational domain is firstly covered with a Cartesian mesh and then parts occupied by the solid bodies are discarded, giving rise to the cut-cell mesh. The benefits of this method are profound in fluid problems with moving solid bodies, which are allowed to move upon the stationary background mesh, avoiding the use of mesh deformation tools. Moreover, contrary to body-conforming approaches, the changes in shape during an optimization loop do not affect the surrounding mesh, preventing mesh generation failure and the premature breakdown of the optimization loop. Therefore, this dissertation thesis exploits these beneficial features and develops a cut-cell-based flow solver and shape optimization tool for compressible and incompressible flow problems.
Konstantinos Samouchos, National Technological University Of Athens | work_vlcycl6zxzbsxmj7jm3aljzsfa | Thu, 08 Sep 2022 00:00:00 GMT

Variational methods and its applications to computer vision
https://scholar.archive.org/work/dtthbdie4vf7nc4nxvyanwq7rq
Many computer vision applications such as image segmentation can be formulated in a "variational" way as energy minimization problems. Unfortunately, the computational task of minimizing these energies is usually difficult as it generally involves non-convex functions in a space with thousands of dimensions and often the associated combinatorial problems are NP-hard to solve. Furthermore, they are ill-posed inverse problems and therefore are extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate into the mathematical model appropriate regularizations that require complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted because they lead to the loss of most of the low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects some parts that may have been disconnected by noise. Moreover, it can be easily extended to graphs and successfully applied to different types of data such as medical imagery (e.g. vessels, heart coronaries, etc.), material samples (e.g. concrete) and satellite signals (e.g. streets, rivers, etc.). In particular, we will show results and performances of an implementation targeting a new generation of High Performance Computing (HPC) architectures where different types of coprocessors cooperate. The involved dataset consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
Erika Pellegrino, Panagiota Stathaki | work_dtthbdie4vf7nc4nxvyanwq7rq | Wed, 07 Sep 2022 00:00:00 GMT

Avian Wing Joints Provide Longitudinal Flight Stability and Control
https://scholar.archive.org/work/3h225wlzgzdvrp57fffzbfjwui
Uncrewed aerial vehicle (UAV) design has advanced substantially over the past century; however, there are still scenarios where birds outperform UAVs. Birds regularly maneuver through cluttered environments or adapt to sudden changes in flight conditions, tasks that challenge even the most advanced UAVs. Thus, there remains a gap in our general knowledge of flight maneuverability and adaptability that can be filled by improving our understanding of how birds achieve these desirable flight characteristics. Although maneuverability is difficult to quantify, one approach is to leverage an expected trade-off between stability and maneuverability, wherein a stable flyer must generate larger moments to maneuver than an unstable flyer. Birds' stability and adaptability have previously been associated with their ability to morph their wing shape in flight. Birds morph their wings by actuating their musculoskeletal system, including the shoulder, elbow and wrist joints. Thus, to take an important step towards deciphering avian flight stability and adaptability, I investigated how manipulating the avian wing joints affects longitudinal stability and control characteristics. First, I used an open-source low-fidelity model to calculate the lift and pitching moment of a gull wing and body across the full range of flexion and extension of the elbow and wrist. To validate the model, I measured the forces and moments on nine 3D printed equivalent wing-body models mounted in a wind tunnel. With the validated numerical results, I identified that extending the wing using different combinations of elbow and wrist angles would provide a method for adaptive control of loads and static stability. However, I also found that gulls were unable to trim for the tested shoulder angle. Next, I developed an open-source, mechanics-based method (AvInertia) to calculate the inertial characteristics of 22 bird species across the full range of flexion and extension of the elbow and wrist.
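The longitudinal criteria behind such an analysis can be stated compactly: with a linear pitching-moment model Cm(alpha) = Cm0 + Cm_alpha * alpha, a trim point exists where Cm = 0, and static pitch stability requires dCm/dalpha < 0. A small sketch, with purely illustrative coefficient values rather than the gull data from the thesis:

```python
import numpy as np

def trim_and_stability(cm0, cm_alpha):
    """Longitudinal trim and static stability for a linear pitching-moment model
    Cm(alpha) = cm0 + cm_alpha * alpha (alpha in radians).
    Returns (alpha_trim, is_statically_stable); alpha_trim is None if no trim exists."""
    if cm_alpha == 0.0:
        return (None, False)
    alpha_trim = -cm0 / cm_alpha
    # Static pitch stability requires a restoring moment: dCm/dalpha < 0
    return (alpha_trim, cm_alpha < 0.0)

# Hypothetical wing configurations for two elbow/wrist extensions (values made up)
configs = {"extended": (0.05, -0.8), "flexed": (0.02, 0.4)}
for name, (cm0, cma) in configs.items():
    alpha, stable = trim_and_stability(cm0, cma)
    print(f"{name}: trim alpha = {np.degrees(alpha):.1f} deg, stable = {stable}")
```

In this toy picture, changing the elbow and wrist angles shifts (cm0, cm_alpha), which is how joint manipulation can move the trim point and flip the sign of stability.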
This method allowed a detailed investigation of how manipula [...]
Christina Harvey, University, My | work_3h225wlzgzdvrp57fffzbfjwui | Tue, 06 Sep 2022 00:00:00 GMT

Bayesian Analysis of Neuroimage Data Using Gaussian Process Priors
https://scholar.archive.org/work/4jdmaxcb3jhhlh7hqkuy3pgqdu
Magnetic Resonance Imaging (MRI) is a foundational tool for medical and academic research. Functional MRI (fMRI) and human brain research, for example, have become nearly synonymous phrases. MRI results in a dense, high-dimensional, highly correlated 3D or 4D datatype only digestible with concerted statistical effort. This dissertation focuses on developing new semiparametric Bayesian models and computational techniques to cope with some of the challenges that arise with fMRI data. The first project (Chapter 2) presents a model designed to integrate presurgical fMRI data collected at two different spatial resolutions. Modern neuroradiologists use fMRI to map patient-specific functional neuroanatomy to assist in presurgical planning. This application requires a high degree of spatial precision, but in practice the fMRI signal-to-noise ratio decreases with increasing spatial resolution. To mitigate this issue, our collaborator collected functional scans of preoperative patients at high and low spatial resolutions. The data inherently exhibit different levels of noise and lack a common spatial support, rendering them difficult to combine in a straightforward manner. We solve this problem by modeling the mean image intensity function of both data sources using a Gaussian process and develop a scalable posterior computation algorithm based on Riemann manifold Hamiltonian Monte Carlo methods. We show in simulation that our method enables more accurate inference on image mean intensity than single-resolution alternatives, and further illustrate our approach in analyses of preoperative patient images. The second project (Chapter 3) is motivated by studies where heterogeneous latent imaging subgroup effects may be present in the study population. We propose a Bayesian semiparametric hierarchical model for image-on-scalar regression with subgroup detection.
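The core idea of the multi-resolution model can be illustrated with a one-dimensional analogue: two observation sets with different densities and noise levels are explained by a single latent intensity function with a Gaussian process prior, and the GP posterior mean fuses both sources. The numpy sketch below assumes an RBF kernel and made-up noise levels; the dissertation's actual model (and its Riemann manifold HMC sampler) is far richer.

```python
import numpy as np

def rbf(x1, x2, amp=1.0, ls=0.2):
    # Squared-exponential (RBF) covariance between two point sets
    return amp * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls**2)

rng = np.random.default_rng(1)
f = lambda x: np.sin(2 * np.pi * x)  # latent "intensity" function (toy)

# Two data sources, stand-ins for low/high resolution scans (values illustrative)
x_lo, s_lo = np.linspace(0, 1, 8), 0.05   # coarse sampling, low noise
x_hi, s_hi = np.linspace(0, 1, 40), 0.3   # fine sampling, high noise
x = np.concatenate([x_lo, x_hi])
noise_var = np.r_[np.full(8, s_lo**2), np.full(40, s_hi**2)]
y = f(x) + rng.normal(0, np.sqrt(noise_var))

# GP posterior mean on a prediction grid, sharing one latent function:
# mean = K(xs, x) @ (K(x, x) + diag(noise))^{-1} y
xs = np.linspace(0, 1, 100)
K = rbf(x, x) + np.diag(noise_var)
mean = rbf(xs, x) @ np.linalg.solve(K, y)
rmse = np.sqrt(np.mean((mean - f(xs)) ** 2))
```

Because the noise variance enters per-observation, the posterior automatically trusts the low-noise source more while still borrowing spatial detail from the dense one.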
We model the mean intensity of imaging outcomes with a mixture of spatially varying coefficient (SVC) regression models, and take into account spatial dependence in the SVCs [...]
Andrew Whiteman, University, My | work_4jdmaxcb3jhhlh7hqkuy3pgqdu | Tue, 06 Sep 2022 00:00:00 GMT

Human Prediction and Robotic Lower-Limb Prosthesis Planning for Safe Perturbation Recovery During Motion
https://scholar.archive.org/work/3lpjqqe6b5gjvdtyeeoyzqps2y
Falls are prevalent among older adults and people with lower-limb amputation of all ages. One recommendation to reduce fall risk is to identify those who are most likely to fall and provide targeted physical therapy, but existing methods for identifying fall risk have been unable to reliably predict who will fall. Another way to reduce falls is to assist the individual using a wearable robotic device, such as a prosthesis, when stumbles occur. However, because there is uncertainty in how the human will respond, wearable robots are unable to safely assist with trip recovery. To accurately identify who may become unstable during motion, this dissertation presents Stability Basins, which characterize individual dynamic stability during the Sit-to-Stand motion. Stability Basins, formed using a dynamic model with a model for the individual's control strategy, encompass all stable model states at each point during the motion. In this document, Stability Basins are validated using data from an 11-subject experiment where subjects were pulled by motor-driven cables as they stood up from a chair. The Stability Basins' accurate characterization of stability during motion shows promise for identifying fall risk. Another way to predict human response to perturbation (e.g., a push or trip) is to find an individual's underlying objective during motion and use it to inform predictions. Given the assumption that humans are optimizing some cost such as metabolic energy during motion, inverse optimal control finds the underlying objective function corresponding to observed data. This dissertation presents results from applying an inverse optimal control formulation to the 11-subject Sit-to-Stand dataset. Results suggest that subjects place priority on the position and velocity of their center of mass rather than input torques during both perturbed and unperturbed Sit-to-Stand, and that the underlying cost function can be used to effectively simulate perturbation response.
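The basin-membership idea can be sketched with a much simpler system: forward-simulate a controlled model from a perturbed initial state and check whether it returns to the nominal target. Here a torque-limited PD-controlled inverted pendulum stands in for the modeled Sit-to-Stand dynamics, and all gains, limits, and thresholds are illustrative assumptions, not values from the dissertation.

```python
import math

def recovers(theta0, omega0, kp=30.0, kd=8.0, u_max=15.0, dt=0.005, T=5.0):
    """Toy basin-of-stability membership test: forward-simulate a torque-limited
    PD-controlled inverted pendulum (unit mass and length, g = 9.81) from a
    perturbed state and check that it returns near upright (theta = 0) without
    ever falling past horizontal."""
    th, om = theta0, omega0
    for _ in range(int(T / dt)):
        u = max(-u_max, min(u_max, -kp * th - kd * om))  # bounded control torque
        om += (9.81 * math.sin(th) + u) * dt             # semi-implicit Euler step
        th += om * dt
        if abs(th) > math.pi / 2:                        # fell past horizontal
            return False
    return abs(th) < 0.05 and abs(om) < 0.05

# Sample perturbed states; the set of states that recover approximates the basin
grid = [(-0.4, 0.0), (0.2, 0.5), (1.4, 2.0)]
basin = [recovers(*s) for s in grid]
```

Sweeping a grid of (angle, angular velocity) perturbations and recording which ones recover traces out a basin for this controller, which is the spirit, if not the substance, of the Stability Basins above.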
Accurate and quick predictions of trip recovery during walking [...]
Shannon Danforth, University, My | work_3lpjqqe6b5gjvdtyeeoyzqps2y | Tue, 06 Sep 2022 00:00:00 GMT

Adjoint and Acceleration Methods for Projection-based Reduced Order Modeling
https://scholar.archive.org/work/latoylgwyzanbmhfolplzwaf5y
Computational modeling is a pillar of modern aerospace research and is increasingly becoming more important as computer technology and numerical methods grow more powerful and sophisticated. However, computational modeling remains expensive for many aerospace engineering problems, including high-fidelity solutions to three-dimensional unsteady simulations, large-scale aeroservoelastic control problems, and multidisciplinary design optimization. Reduced-order models (ROMs) have therefore garnered interest as an alternative means of preserving high fidelity at a lower computational cost. Among these methods is the class of projection-based ROMs, which utilizes the original physics and equations of the high-fidelity system but resolves the state projected onto a lower-order trial manifold with the system projected onto a low-order test manifold. The lower-order trial manifold is typically chosen to be a linear basis, and this thesis focuses on the proper orthogonal decomposition (POD) method. Hyper-reduction is also necessary to reduce the complexity of nonlinear problems, and this thesis focuses on the discrete empirical interpolation method (DEIM), an interpolation method that uses a sparse sampling of the nonlinear function values. The benefit of the method of snapshots is that given a representative set of solution samples, a linear basis that projects the solution space with very low error can be constructed. However, accuracy in state projection is only part of what is required for a ROM to be useful for engineering applications. Low errors in state projection do not necessarily mean that outputs are accurately predicted, as the proportion of the domain used to calculate the output may be very small. Quantification of the output error is thus important to assessing the quality of a ROM. Methods for estimating output error for POD-DEIM models exist; however, the application of these methods to fine-grain adaptation is limited.
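The method of snapshots mentioned above has a compact numerical core: stack solution samples as columns of a snapshot matrix, take an SVD, and truncate by singular-value "energy" to obtain the linear trial basis. The self-contained sketch below uses a toy parameterized field; the field, parameter range, and truncation threshold are illustrative assumptions, not the thesis's problems.

```python
import numpy as np

x = np.linspace(0, 1, 200)

# Snapshot matrix: columns are solution samples of a toy parameterized field
params = np.linspace(0.5, 2.0, 30)
S = np.stack([np.sin(np.pi * p * x) * np.exp(-p * x) for p in params], axis=1)

# POD via the SVD: left singular vectors give the linear trial basis
U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # truncate at 99.99% "energy"
Phi = U[:, :r]                                  # reduced basis, 200 x r

# Project a new snapshot onto the reduced basis; small error means the basis
# captures the solution manifold well for parameters inside the sampled range
s_new = np.sin(np.pi * 1.23 * x) * np.exp(-1.23 * x)
err = np.linalg.norm(s_new - Phi @ (Phi.T @ s_new)) / np.linalg.norm(s_new)
```

As the abstract cautions, a small state-projection error like `err` does not by itself guarantee accurate outputs; output-error estimation is the separate problem the thesis targets.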
Furthermore, the commonly used Galerkin formulation of ROMs, whe [...]
Gary Collins, University, My | work_latoylgwyzanbmhfolplzwaf5y | Tue, 06 Sep 2022 00:00:00 GMT

Data-driven pedestrian simulation / an alternative to theory-based pedestrian simulation?
https://scholar.archive.org/work/4xpljq5sffh4dc2ihps7wkhozu
The aim of the research is to examine whether data-driven pedestrian simulation models can outperform the theoretical ones and provide a robust model framework for pedestrian simulation. Initially, an extended literature review was performed to identify the existing pedestrian simulation models and the main parameters utilized in pedestrian simulation. To achieve the aim of the study, a comparative analysis of a well-known and widely applied theoretical pedestrian simulation model (i.e. the social force model) and four data-driven techniques: the Artificial Neural Networks, the Support Vector Regression, the Gaussian Processes and the Locally Weighted Regression was conducted. A suitable methodological framework for the comparative analysis was designed. Initially, appropriate data (i.e. pedestrian trajectories) were collected from two different area types: a metro station during peak hours and a shopping mall during afternoon hours, via video recordings. Then, with the aid of an appropriate software, pedestrian trajectories were extracted. Due to the fact that the collected data include white noise, an algorithm for noise elimination was developed as a combination of existing smoothing filters. Subsequently, an appropriate pedestrian simulation model setup for the data-driven techniques was developed, as they do not cater specifically for a pedestrian simulation framework. In order to conduct a fair comparison, the variables of the theoretical model were employed in the data-driven models. Cross-validation was applied as the appropriate method for examining each model's performance and to cater for data overfitting, while a combination of goodness-of-fit measures of the models' accuracy was estimated to assess the models in a holistic manner. The results indicate that data-driven methods have a higher capability of simulating pedestrian movement, as they perform better according to all of the goodness-of-fit measures.
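For reference, the theoretical baseline in this comparison, the social force model, reduces to a simple per-step update: a relaxation force pulling the walker toward its desired velocity, plus exponential repulsive forces from other pedestrians. The sketch below is a minimal Helbing-style variant with illustrative, uncalibrated parameters, not the calibrated model used in the thesis.

```python
import numpy as np

def social_force_step(pos, vel, goal, others, v0=1.3, tau=0.5, A=2.0, B=0.3, dt=0.1):
    """One explicit-Euler step of a minimal social force model:
    driving term (v0 * e - vel) / tau relaxes toward the desired velocity,
    and each other pedestrian contributes repulsion A * exp(-d / B) along d.
    Parameter values here are illustrative defaults."""
    e = (goal - pos) / np.linalg.norm(goal - pos)   # desired walking direction
    force = (v0 * e - vel) / tau                    # driving (relaxation) term
    for q in others:                                # pairwise repulsion
        d = pos - q
        dist = np.linalg.norm(d)
        force += A * np.exp(-dist / B) * d / dist
    vel = vel + force * dt
    return pos + vel * dt, vel

# Walk past a single standing pedestrian at (2.0, 0.3)
pos, vel = np.array([0.0, 0.0]), np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
others = [np.array([2.0, 0.3])]
for _ in range(50):
    pos, vel = social_force_step(pos, vel, goal, others)
```

The data-driven alternatives in the thesis effectively learn this acceleration map from trajectories instead of postulating its functional form.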
Following the first level of comparison (comparing models with the same parameters), additional parameters (agent characteristics and a time parameter) were included in the data-driven models in order to examine whether, and by how much, the performance of these models can be improved. Results of this analysis indicate that the employment of the selected variables can improve the performance of data-driven pedestrian simulation models (they performed better for almost every goodness-of-fit measure).
George Kouskoulis, National Technological University Of Athens | work_4xpljq5sffh4dc2ihps7wkhozu | Mon, 05 Sep 2022 00:00:00 GMT

Characterising and modeling the co-evolution of transportation networks and territories
https://scholar.archive.org/work/6f4roc6xvrcdtb443uuvx7mpk4
The identification of structuring effects of transportation infrastructure on territorial dynamics remains an open research problem. This issue is one aspect of approaches to the complexity of territorial dynamics, within which territories and networks are understood to co-evolve. The aim of this thesis is to challenge this view of the interactions between networks and territories, both at the conceptual and empirical level, by integrating them into simulation models of territorial systems.
Juste Raimbault | work_6f4roc6xvrcdtb443uuvx7mpk4 | Fri, 02 Sep 2022 00:00:00 GMT

Trajectory planning for automated driving in dynamic environments
https://scholar.archive.org/work/3mb53wr34bhtnaxb4lp4ftafcy
Over the last decades, the trend in the automotive industry to continuously increase the level of vehicle automation has been evident. A lot of research and development effort has been invested to improve driving safety and comfort in traffic. Nowadays, advanced driver assistance systems, and the development of automated driving functions in particular, represent one of the main areas of innovation in automotive engineering. In order to cope with challenges arising from complex dynamic environments, the automated vehicle needs to perform comprehensive cognitive tasks that come along with the presence of other traffic participants and the necessity to adhere to prevailing traffic regulations. As a consequence, the automated driving task is decomposed into several sub-problems. In the functional architecture of automated vehicles, motion planning, which addresses the generation of a comfortable and safe trajectory, is a key component that directly affects the overall driving performance. This thesis is about the development of a trajectory planning approach suitable for dealing with dynamic environments. A two-level hierarchical trajectory planning framework is proposed that combines the capabilities of optimality and spline interpolation and explicitly considers the aspect of contradicting planning objectives. The framework is designed to work in receding-horizon fashion by performing cyclic replanning, and hence accounts for the dynamic character of the environment. The hierarchization into two separate levels of optimization leads to an approach that covers basic driving functionality on the low level, while required high-level behavior is still prioritized. The presented framework relies on a spline-based trajectory representation with an underlying optimal interpolation strategy. The optimal trajectory with respect to a certain situation is found by joint optimization on the high and low level.
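The spline interpolation at the heart of such a framework is commonly a quintic boundary-value problem: match position, velocity, and acceleration at both ends of a planning segment, then replan from the updated state in the next receding-horizon cycle. A minimal sketch of one such segment follows; the boundary values and horizon length are made up for illustration.

```python
import numpy as np

def quintic_coeffs(s0, v0, a0, s1, v1, a1, T):
    """Coefficients c of s(t) = sum(c[i] * t**i, i=0..5) matching position,
    velocity, and acceleration at t=0 and t=T, the classic quintic
    boundary-value interpolation used in trajectory planning."""
    A = np.array([
        [1, 0, 0,    0,       0,        0],        # s(0)  = s0
        [0, 1, 0,    0,       0,        0],        # s'(0) = v0
        [0, 0, 2,    0,       0,        0],        # s''(0)= a0
        [1, T, T**2, T**3,    T**4,     T**5],     # s(T)  = s1
        [0, 1, 2*T,  3*T**2,  4*T**3,   5*T**4],   # s'(T) = v1
        [0, 0, 2,    6*T,     12*T**2,  20*T**3],  # s''(T)= a1
    ])
    return np.linalg.solve(A, np.array([s0, v0, a0, s1, v1, a1], dtype=float))

# One replanning cycle: connect the current state to a target state 3 s ahead
c = quintic_coeffs(0.0, 10.0, 0.0, 35.0, 12.0, 0.0, 3.0)
s = lambda t: sum(ci * t**i for i, ci in enumerate(c))
```

In a receding-horizon loop, only the first part of this segment is executed before the boundary conditions are refreshed from the newly measured state and the spline is recomputed.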
A continuous and a discrete trajectory optimization variant to generate an optimal trajectory with respect to high level objecti [...]
Christian Lienke, Technische Universität Dortmund (work_3mb53wr34bhtnaxb4lp4ftafcy, Fri, 02 Sep 2022 00:00:00 GMT)

Learning with Differentiable Algorithms
https://scholar.archive.org/work/htdxwewkkba3plkjt2ojgjnqay
Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for precise execution of exactly defined tasks such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts, leading to more robust, better performing, more interpretable, more computationally efficient, and more data efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable such that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
Felix Petersen (work_htdxwewkkba3plkjt2ojgjnqay, Thu, 01 Sep 2022 00:00:00 GMT)

SPEX X-ray spectral fitting package
https://scholar.archive.org/work/ogxjqf5w2jdgtejwdiyx2dtwbu
SPEX is a software package for fitting astrophysical X-ray spectra. It has been developed since the 1970s at SRON Netherlands Institute for Space Research. SPEX is an interactive command-line program for the computation of emergent spectra of optically thin plasmas such as stellar coronal loop structures, supernova remnants (also including transient ionization effects), photo-ionized plasmas, and optically thick plasmas. These model spectra can be fitted to measured X-ray spectra from various X-ray observatories, like XMM-Newton and Chandra. SPEX has been optimized for high-resolution X-ray spectroscopy, which makes it especially suitable for analyzing grating and micro-calorimeter spectra.
J. S. Kaastra, A. J. J. Raassen, J. de Plaa, Liyi Gu, Frans Alkemade, Gert-Jan Bartelds, Ehud Behar, Alex Blustin, Elisa Costantini, Max Duysens, Jacobo Ebrero, Irma Eggenkamp, Andrzej Fludra, Yan Grange, Ed Gronenschild, Theo Gunsing, Jorgen Hansen, Wouter Hartmann, Kurt van der Heyden, Fred Jansen, Lucien Kuiper, Jim Lemen, Duane Liedahl, Junjie Mao, Pasquale Mazzotta, Missagh Mehdipour, Rolf Mewe, Hans Nieuwenhuijzen, Bert van den Oord, Ken Phillips, Ciro Pinto, Remco van der Rijst, Daniele Rogantini, Makoto Sawada, Hans Schrijver, Karel Schrijver, Tiemen Schut, Katrien Steenbrugge, Janusz Sylwester, Rob Smeets, Jeroen Stil, Igone Urdampilleta, Dima Verner, Jacco Vink, Frank van der Wolf, Sascha Zeegers, Piotr Zycki (work_ogxjqf5w2jdgtejwdiyx2dtwbu, Wed, 31 Aug 2022 00:00:00 GMT)

Learning "best" kernels from data in Gaussian process regression. With application to aerodynamics
https://scholar.archive.org/work/cn6ymfd7hrcadokrz5dr7kttqi
This paper introduces algorithms to select/design kernels in Gaussian process regression/kriging surrogate modeling techniques. We adopt the setting of kernel method solutions in ad hoc functional spaces, namely Reproducing Kernel Hilbert Spaces (RKHS), to solve the problem of approximating a regular target function given observations of it, i.e. supervised learning. A first class of algorithms is kernel flow, which was introduced in the context of classification in machine learning. It can be seen as a cross-validation procedure whereby a "best" kernel is selected such that the loss of accuracy incurred by removing some part of the dataset (typically half of it) is minimized. A second class of algorithms is called spectral kernel ridge regression, and aims at selecting a "best" kernel such that the norm of the function to be approximated is minimal in the associated RKHS. Within Mercer's theorem framework, we obtain an explicit construction of that "best" kernel in terms of the main features of the target function. Both approaches of learning kernels from data are illustrated by numerical examples on synthetic test functions, and on a classical test case in turbulence modeling validation for transonic flows about a two-dimensional airfoil.
Jean-Luc Akian, Luc Bonnet, Houman Owhadi, Éric Savin (work_cn6ymfd7hrcadokrz5dr7kttqi, Wed, 31 Aug 2022 00:00:00 GMT)
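The kernel-flow idea in the abstract above — pick the kernel whose predictions degrade least when half the dataset is removed — can be sketched in a much-simplified form. The RBF kernel, the length-scale grid, the single random split, and the synthetic sine target below are illustrative assumptions and not the paper's actual setup, which uses repeated subsampling and gradient-based updates:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale):
    """Squared-exponential kernel matrix (a common default choice)."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def krige(X_train, y_train, X_test, lengthscale, jitter=1e-6):
    """Kriging / kernel interpolation mean prediction."""
    K = rbf_kernel(X_train, X_train, lengthscale) + jitter * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf_kernel(X_test, X_train, lengthscale) @ alpha

def kernel_flow_loss(X, y, lengthscale, rng):
    """Loss of accuracy from removing half the data: predict the held-out
    half from the retained half and compare with the true values."""
    idx = rng.permutation(len(X))
    keep, drop = idx[: len(X) // 2], idx[len(X) // 2:]
    pred = krige(X[keep], y[keep], X[drop], lengthscale)
    return np.mean((pred - y[drop]) ** 2) / np.mean(y[drop] ** 2)

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * X)  # synthetic target in the spirit of the paper's tests

# Grid search over candidate length-scales; the paper instead follows
# a gradient flow on the kernel parameters.
grid = [0.01, 0.05, 0.1, 0.3, 1.0]
losses = [kernel_flow_loss(X, y, ls, rng) for ls in grid]
best = grid[int(np.argmin(losses))]
```

In practice the split is redrawn at every iteration and the relative RKHS-norm discrepancy is minimized by stochastic gradient descent rather than a grid, but the selection criterion — low loss under data removal — is the same.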