IA Scholar Query: Automata-Theoretic Techniques for Image Generation and Compression.
https://scholar.archive.org/
Internet Archive Scholar query results feed | en | info@archive.org | Sat, 31 Dec 2022 00:00:00 GMT | fatcat-scholar | https://scholar.archive.org/help | 1440

Vegas, Disney, and the Metaverse
https://scholar.archive.org/work/ycueigxa2jarrehldbuvl6hqk4
In his technophilosophical investigation Reality+: Virtual Worlds and the Problems of Philosophy, David J. Chalmers discusses the historical skepticism about the human capacity to recognize material reality. Already in the centuries before our era, Chinese, Indian, and Greek thinkers independently raised the question of whether humans could distinguish between what was real and what was an illusion. Chalmers outlines Zhuangzi's dream about being a butterfly, Narada's dream in which he lived an entire life as a woman, and Plato's allegory of the cave, whose imprisoned inhabitants mistake distorted and selective reflections for reality. He interprets these doubts as examples of philosophical domains: "Knowledge: How can Zhuangzi know whether or not he's dreaming? Reality: Is Narada's transformation real or illusory? Value: Can one lead a good life in Plato's cave?" [1] Classical philosophy's concerns about the perceptibility of material reality continued into the modern era, culminating in René Descartes's Meditations on First Philosophy. [2] With them, Chalmers writes, Descartes "set the agenda for centuries of Western [...]" [1] Chalmers, David John: Reality+: Virtual Worlds and the Problems of Philosophy.
Gundolf S. Freyermuth | work_ycueigxa2jarrehldbuvl6hqk4 | Sat, 31 Dec 2022 00:00:00 GMT

Image Modification Using Deep Neural Cellular Automata
https://scholar.archive.org/work/6qohti62d5acnldm2v65imuh7e
Abstract: Art style transfer is part of the rapidly growing AI art community of recent times. Pioneered by Gatys et al., this class of methods makes it possible to transfer styles, textures, patterns, and more to a target image. The feature extraction layers of pretrained vision models such as VGG19 are used to define the loss function. The transformation can be performed in many ways, from the original method, which directly optimizes the image pixels, to more recent approaches that train a CNN as a generic transfer network. The method presented in this article is similar to the more recent approaches, but takes advantage of a new class of deep learning models: Neural Cellular Automata (NCA). This new method can convert any image into a target style by, like the CNN methods mentioned previously, applying the same local update rule over and over again. This paper describes how to use NCAs to transform images, covering both a Gatys et al.-style transfer and an OpenAI CLIP-based version, in which a text prompt can be given to train an NCA to perform that transformation.
Kaushal K. Gore, Sanskar S. Kothari, Saurav S. Kamtalwar, Koustubh P. Soman, Chinamay U. Kokate, Prof. Vayadande Kuldeep | work_6qohti62d5acnldm2v65imuh7e | Wed, 30 Nov 2022 00:00:00 GMT

2019
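A minimal sketch of the neural-cellular-automata update rule discussed in the "Image Modification Using Deep Neural Cellular Automata" entry above. This is not the authors' code; the Sobel perception filters, layer sizes, fire rate, and zero ("do nothing") initialization of the output layer follow common NCA practice and are assumptions here.

```python
import numpy as np

def perceive(state):
    """Concatenate identity + Sobel-filtered channels (the NCA 'perception')."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32) / 8.0
    sobel_y = sobel_x.T
    def filt(img, k):
        # 'same' 2-D filtering with zero padding, applied per channel
        p = np.pad(img, ((1, 1), (1, 1), (0, 0)))
        out = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                out += k[dy, dx] * p[dy:dy + img.shape[0], dx:dx + img.shape[1], :]
        return out
    return np.concatenate([state, filt(state, sobel_x), filt(state, sobel_y)], axis=-1)

def nca_step(state, w1, w2, fire_rate=0.5, rng=np.random.default_rng(0)):
    """One NCA update: perceive -> tiny per-cell MLP -> stochastic residual add."""
    z = perceive(state)                      # (H, W, 3C)
    hidden = np.maximum(z @ w1, 0.0)         # per-cell ReLU layer
    delta = hidden @ w2                      # per-cell state update
    mask = rng.random(state.shape[:2] + (1,)) < fire_rate
    return state + delta * mask              # only some cells fire each step

H, W, C = 16, 16, 8
state = np.zeros((H, W, C), dtype=np.float32)
state[H // 2, W // 2, :] = 1.0               # single seed cell
w1 = np.random.default_rng(1).normal(0, 0.1, (3 * C, 32)).astype(np.float32)
w2 = np.zeros((32, C), dtype=np.float32)     # zero init: network starts by doing nothing
for _ in range(10):
    state = nca_step(state, w1, w2)
print(state.shape)  # (16, 16, 8)
```

In training, the weights would be optimized so that repeated application of this same rule drives any image toward the target style.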
https://scholar.archive.org/work/wcy47hfvvvdwvfgnwx2cuak4ze
On completion of this course, students will have knowledge of: CO1. Basics of electrochemistry; classical and modern batteries and fuel cells. CO2. Causes and effects of corrosion of metals and control of corrosion; modification of the surface properties of metals to develop resistance to corrosion, wear, tear, impact, etc. by electroplating and electroless plating. CO3. Production and consumption of energy for the industrialization of the country and the living standards of people; utilization of solar energy for different useful forms of energy. CO4. Understanding the phase rule and instrumental techniques and their applications. CO5. Overview of the synthesis, properties, and applications of nanomaterials.
BTECH.CS | work_wcy47hfvvvdwvfgnwx2cuak4ze | Mon, 28 Nov 2022 00:00:00 GMT

2021
https://scholar.archive.org/work/n7rhmaerpvfrhha4draeqwscs4
Course Objectives: 1. Learn and understand basic concepts and principles of Physics. 2. Make students familiar with the latest trends in materials science research and learn about novel materials and their applications. 3. Make students confident in analyzing engineering problems and applying their solutions effectively and meaningfully. 4. Gain knowledge of interference and diffraction of light and their applications in new technology.

Course Outcomes: CO1: Learn and understand more about basic principles, develop problem-solving skills, and implement them in technology. CO2: Study material properties and their use in engineering applications and studies. CO3: Understand crystal structures and their applications to boost technical skills. CO4: Apply light phenomena in new technology.

Module 1: Classical free electron theory - free-electron concept (drift velocity, thermal velocity, mean collision time, mean free path, relaxation time) - expression for electrical conductivity - failure of classical free electron theory. Quantum free electron theory: assumptions, Fermi factor, Fermi-Dirac statistics; expression for electrical conductivity based on quantum free electron theory; merits of quantum free electron theory. Temperature dependence of electrical resistivity - specific heat - thermionic emission. Hall effect (qualitative) - Wiedemann-Franz law. Teaching methodology: chalk and talk (classical free electron theory); PowerPoint presentation (quantum free electron theory, resistivity, specific heat, thermionic emission, Wiedemann-Franz law); self-study material (Hall effect, qualitative). 9 Hours

Module 2: Interaction of radiation with matter - absorption, spontaneous emission, stimulated emission - Einstein's coefficients (expression for energy density). Requisites of a laser system; condition for laser action; principle, construction, and working of the He-Ne laser. Propagation mechanism in optical fibers; angle of acceptance; numerical aperture. Types of optical fibers - step index and graded index; modes of propagation - single-mode and multimode fibers. Attenuation - attenuation mechanisms. Teaching methodology: chalk and talk (radiation-matter interaction, laser requisites and action, fiber propagation, acceptance angle, numerical aperture); PowerPoint presentation (fiber types and modes of propagation); video (construction and working of the He-Ne laser); self-study material (attenuation mechanisms). 9 Hours

Module 3: Temperature dependence of resistivity in metals and superconducting materials. Effect of magnetic field (Meissner effect). Isotope effect - Type I and Type II superconductors - temperature dependence of the critical field. BCS theory (qualitative). High-temperature superconductors - Josephson effect - SQUID - applications of superconductors - maglev vehicles (qualitative). Magnetic dipole, dipole moment, flux density, magnetic field intensity, intensity of magnetization, magnetic permeability, susceptibility, and the relation between permeability and susceptibility. Classification of magnetic materials: dia-, para-, and ferromagnetism. Hysteresis - soft and hard magnetic materials. Teaching methodology: chalk and talk (resistivity, Meissner effect, isotope effect, superconductor types, critical field, BCS theory, high-temperature superconductors); PowerPoint presentation (Josephson effect, SQUID, applications of superconductors, magnetic quantities, hysteresis); video (maglev vehicles, qualitative); self-study material (classification of magnetic materials). 9 Hours

Module 4: Amorphous and crystalline materials - space lattice, Bravais lattice - unit cell, primitive cell. Lattice parameters. Crystal systems. Directions and planes in a crystal. Miller indices - determination of the Miller indices of a plane. Expression for interplanar spacing. Atoms per unit cell - coordination number. Relation between atomic radius and lattice constant - atomic packing factors (SC, FCC, BCC). Bragg's law. Determination of crystal structure using Bragg's X-ray diffractometer - X-ray spectrum. Teaching methodology: chalk and talk (directions and planes, Miller indices); PowerPoint presentation (atoms per unit cell, coordination number, packing factors, Bragg's law, X-ray diffractometer and spectrum); self-study material (amorphous and crystalline materials, lattices, unit cells, lattice parameters, crystal systems). 9 Hours

Module 5: Interference of light - superposition of two coherent waves - constructive and destructive interference. Interference in thin films - wedge-shaped thin film - air wedge - application to find the diameter of a thin wire. Newton's rings - application to find the refractive index of a liquid.
Diffraction of light - classes of diffraction - Fresnel and Fraunhofer diffraction. Fresnel theory of the half-period zone - zone plate.
BTECH.CS | work_n7rhmaerpvfrhha4draeqwscs4 | Mon, 28 Nov 2022 00:00:00 GMT

Approximations of Algorithmic and Structural Complexity Validate Cognitive-behavioural Experimental Results
https://scholar.archive.org/work/zoqkglnhdzettcfdgxnkrnaoay
Being able to objectively characterise the intrinsic complexity of behavioural patterns resulting from human or animal decisions is fundamental for deconvolving cognition and designing autonomous artificial intelligence systems. Yet complexity is difficult to measure in practice, particularly when strings are short. By numerically approximating algorithmic (Kolmogorov) complexity (K), we establish an objective tool to characterise behavioural complexity. Next, we approximate structural (Bennett's logical depth) complexity (LD) to assess the amount of computation required for generating a behavioural string. We apply our toolbox to three landmark studies of animal behaviour of increasing sophistication and degree of environmental influence, including studies of foraging communication by ants, flight patterns of fruit flies, and tactical deception and competition (e.g., predator-prey) strategies. We find that ants harness the environmental condition in their internal decision process, modulating their behavioural complexity accordingly. Our analysis of flight (fruit flies) invalidated the common hypothesis that animals navigating in an environment devoid of stimuli adopt a random strategy. Fruit flies exposed to a featureless environment deviated the most from Lévy flight, suggesting an algorithmic bias in their attempt to devise a useful (navigation) strategy. Similarly, a logical depth analysis of rats revealed that the structural complexity of a rat's behaviour always ends up matching the structural complexity of its competitor's, with the rats' behaviour simulating algorithmic randomness. Finally, we discuss how experiments on how humans perceive randomness suggest the existence of an algorithmic bias in our reasoning and decision processes, in line with our analysis of the animal experiments.
Hector Zenil, James A.R. Marshall, Jesper Tegnér | work_zoqkglnhdzettcfdgxnkrnaoay | Sat, 19 Nov 2022 00:00:00 GMT

Exploring the Latent Space of Autoencoders with Interventional Assays
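The "Approximations of Algorithmic and Structural Complexity" entry approximates K with the Coding Theorem and Block Decomposition methods. As a rough illustration of the same idea (shorter description implies lower complexity), lossless compression gives a crude upper bound; the zlib proxy below is an assumption for illustration, not the authors' estimator.

```python
import random
import zlib

def compressed_length(s: bytes) -> int:
    """Crude upper bound on algorithmic complexity: length of a losslessly
    compressed description of the string (a proxy; the paper uses CTM/BDM)."""
    return len(zlib.compress(s, 9))

# A highly regular behavioural string compresses far better than a noisy one.
random.seed(0)
regular = b"LR" * 500                                   # simple alternating pattern
noisy = bytes(random.getrandbits(8) for _ in range(1000))
print(compressed_length(regular) < compressed_length(noisy))  # True
```

For the short strings typical of behavioural experiments, compression-based proxies break down, which is exactly why the paper turns to CTM/BDM-style approximations instead.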
https://scholar.archive.org/work/ulikrb4ohrbpxaws7yb3d2ndeq
Autoencoders exhibit impressive abilities to embed the data manifold into a low-dimensional latent space, making them a staple of representation learning methods. However, without explicit supervision, which is often unavailable, the representation is usually uninterpretable, making analysis and principled progress challenging. We propose a framework, called latent responses, which exploits the locally contractive behavior exhibited by variational autoencoders to explore the learned manifold. More specifically, we develop tools to probe the representation using interventions in the latent space to quantify the relationships between latent variables. We extend the notion of disentanglement to take the learned generative process into account and consequently avoid the limitations of existing metrics that may rely on spurious correlations. Our analyses underscore the importance of studying the causal structure of the representation to improve performance on downstream tasks such as generation, interpolation, and inference of the factors of variation.
Felix Leeb, Stefan Bauer, Michel Besserve, Bernhard Schölkopf | work_ulikrb4ohrbpxaws7yb3d2ndeq | Thu, 17 Nov 2022 00:00:00 GMT

Fast Coalgebraic Bisimilarity Minimization
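The interventional probing in the "Exploring the Latent Space of Autoencoders" entry can be sketched at its simplest: intervene on one latent coordinate, push the result through decoder and encoder, and read off how the coordinates respond. The toy linear maps below are untrained placeholders, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear "autoencoder": encoder E, decoder D (untrained; illustration only)
E = rng.normal(size=(2, 5))   # data dim 5 -> latent dim 2
D = rng.normal(size=(5, 2))

def encode(x): return E @ x
def decode(z): return D @ z

def latent_response(x, dim, delta=1.0):
    """Intervene on one latent coordinate and re-encode the decoded result.
    The response of the coordinates probes dependencies between latents."""
    z = encode(x)
    z_int = z.copy()
    z_int[dim] += delta                   # do(z_dim := z_dim + delta)
    z_resp = encode(decode(z_int))        # map back through decoder + encoder
    return z_resp - encode(decode(z))     # response relative to the baseline pass

x = rng.normal(size=5)
resp = latent_response(x, dim=0)
print(resp.shape)  # (2,)
```

In a trained VAE, a response concentrated on the intervened coordinate suggests it acts independently in the learned generative process; spillover into other coordinates reveals dependencies.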
https://scholar.archive.org/work/3rykej7sfbc45hweawghi4nbny
Coalgebraic bisimilarity minimization generalizes classical automaton minimization to a large class of automata whose transition structure is specified by a functor, subsuming strong, weighted, and probabilistic bisimilarity. This offers the enticing possibility of turning bisimilarity minimization into an off-the-shelf technology, without having to develop a new algorithm for each new type of automaton. Unfortunately, there is no existing algorithm that is fully general, efficient, and able to handle large systems. We present a generic algorithm that minimizes coalgebras over an arbitrary functor in the category of sets, as long as the action on morphisms is sufficiently computable. The algorithm makes at most 𝒪(m log n) calls to the functor-specific action, where n is the number of states and m is the number of transitions in the coalgebra. While more specialized algorithms can be asymptotically faster than our algorithm (usually by a factor of 𝒪(m/n)), our algorithm is especially well suited to efficient implementation, and our tool Boa often uses much less time and memory on existing benchmarks, and can handle larger automata, despite being more generic.
Jules Jacobs, Thorsten Wißmann | work_3rykej7sfbc45hweawghi4nbny | Thu, 17 Nov 2022 00:00:00 GMT

Holistic Evaluation of Language Models
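Classical partition refinement, which "Fast Coalgebraic Bisimilarity Minimization" generalizes to arbitrary set functors, can be sketched as follows. This is the naive Moore-style refinement loop for a plain DFA, not the paper's 𝒪(m log n) algorithm; all names are illustrative.

```python
def minimize_states(states, alphabet, delta, accepting):
    """Naive partition refinement (Moore's algorithm) for DFA state minimization,
    the classical special case that coalgebraic minimization generalizes."""
    # Initial partition: accepting vs. non-accepting states.
    block_of = {q: (q in accepting) for q in states}
    while True:
        # Signature of a state: its block plus the blocks its transitions reach.
        sig = {q: (block_of[q], tuple(block_of[delta[q, a]] for a in alphabet))
               for q in states}
        new_ids = {s: i for i, s in enumerate(sorted(set(sig.values()), key=repr))}
        new_block_of = {q: new_ids[sig[q]] for q in states}
        if len(set(new_block_of.values())) == len(set(block_of.values())):
            return new_block_of  # partition stable: blocks = bisimilarity classes
        block_of = new_block_of

# Example: states 1 and 2 are bisimilar (both go to accepting state 3 on 'a').
states = [0, 1, 2, 3]
delta = {(0, 'a'): 1, (1, 'a'): 3, (2, 'a'): 3, (3, 'a'): 3}
blocks = minimize_states(states, ['a'], delta, accepting={3})
print(blocks[1] == blocks[2])  # True
```

In the coalgebraic setting, the per-state signature above is replaced by the functor's action on the quotient map, which is exactly the "functor-specific action" the abstract counts calls to.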
https://scholar.archive.org/work/xl5k5dwfrffx5c6m2ivnwuczju
Language models (LMs) are becoming the foundation for almost all major language technologies, but their capabilities, limitations, and risks are not well understood. We present Holistic Evaluation of Language Models (HELM) to improve the transparency of language models. First, we taxonomize the vast space of potential scenarios (i.e. use cases) and metrics (i.e. desiderata) that are of interest for LMs. Then we select a broad subset based on coverage and feasibility, noting what's missing or underrepresented (e.g. question answering for neglected English dialects, metrics for trustworthiness). Second, we adopt a multi-metric approach: We measure 7 metrics (accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency) for each of 16 core scenarios when possible (87.5% of the time). This ensures metrics beyond accuracy don't fall by the wayside, and that trade-offs are clearly exposed. We also perform 7 targeted evaluations, based on 26 targeted scenarios, to analyze specific aspects (e.g. reasoning, disinformation). Third, we conduct a large-scale evaluation of 30 prominent language models (spanning open, limited-access, and closed models) on all 42 scenarios, 21 of which were not previously used in mainstream LM evaluation. Prior to HELM, models on average were evaluated on just 17.9% of the core HELM scenarios, with some prominent models not sharing a single scenario in common. We improve this to 96.0%: now all 30 models have been densely benchmarked on the same core scenarios and metrics under standardized conditions. Our evaluation surfaces 25 top-level findings. For full transparency, we release all raw model prompts and completions publicly for further analysis, as well as a general modular toolkit. 
We intend for HELM to be a living benchmark for the community, continuously updated with new scenarios, metrics, and models.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, Yuta Koreeda | work_xl5k5dwfrffx5c6m2ivnwuczju | Wed, 16 Nov 2022 00:00:00 GMT

Design and training of deep reinforcement learning agents
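The coverage statistic in the HELM entry (17.9% before, 96.0% after) is simply the density of the model × scenario evaluation matrix. A hypothetical miniature (3 models, 4 core scenarios; all names invented):

```python
# Coverage of (model, scenario) pairs, the quantity HELM standardizes.
evaluated = {
    "model_a": {"qa", "summarization"},
    "model_b": {"qa"},
    "model_c": {"qa", "summarization", "toxicity", "sentiment"},
}
core = {"qa", "summarization", "toxicity", "sentiment"}
coverage = sum(len(s & core) for s in evaluated.values()) / (len(evaluated) * len(core))
print(f"{coverage:.1%}")  # 58.3%
```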
https://scholar.archive.org/work/v4bnexxtdbgkvdgyu4jub7gulm
Deep reinforcement learning is a field of research at the intersection of reinforcement learning and deep learning. On one side, the problem that researchers address is the one of reinforcement learning: to act efficiently. A large number of algorithms were developed decades ago in this field to update value functions and policies, explore, and plan. On the other side, deep learning methods provide powerful function approximators to address the problem of representing functions such as policies, value functions, and models. The combination of ideas from these two fields offers exciting new perspectives. However, building successful deep reinforcement learning experiments is particularly difficult due to the large number of elements that must be combined and adjusted appropriately. This thesis proposes a broad overview of the organization of these elements around three main axes: agent design, environment design, and infrastructure design. Arguably, the success of deep reinforcement learning research is due to the tremendous amount of effort that went into each of them, both from a scientific and engineering perspective, and their diffusion via open source repositories. For each of these three axes, a dedicated part of the thesis describes a number of related works that were carried out during the doctoral research. The first part, devoted to the design of agents, presents two works. The first one addresses the problem of applying discrete action methods to large multidimensional action spaces. A general method called action branching is proposed, and its effectiveness is demonstrated with a novel agent, named BDQ, applied to discretized continuous action spaces. The second work deals with the problem of maximizing the utility of a single transition when learning to achieve a large number of goals. In particular, it focuses on learning to reach spatial locations in games and proposes a new method called Q-map to do so efficiently. 
An exploration mechanism based on this method is then used to demonstrate the effect [...]
Fabio Pardo, Petar Kormushev, Andrew Davison, Dyson Technology Limited (Firm) | work_v4bnexxtdbgkvdgyu4jub7gulm | Tue, 15 Nov 2022 00:00:00 GMT

Dagstuhl Reports, Volume 12, Issue 4, April 2022, Complete Issue
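The action-branching idea behind BDQ in the "Design and training of deep reinforcement learning agents" entry: one Q-head per action dimension, so a continuous action space discretized into, say, 5 bins per dimension needs 3 × 5 Q-values per state instead of 5³ joint Q-values. Shapes and values below are hypothetical.

```python
import numpy as np

def branched_action(q_branches):
    """Action branching (as in BDQ): each action dimension has its own Q-head
    and selects its sub-action independently, avoiding enumeration of the
    exponentially large joint action space."""
    return [int(np.argmax(q)) for q in q_branches]

# Hypothetical: 3 action dimensions discretized into 5 bins each.
rng = np.random.default_rng(0)
q_branches = [rng.normal(size=5) for _ in range(3)]
print(branched_action(q_branches))  # one sub-action index per dimension
```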
https://scholar.archive.org/work/oiijemxg5zhmzehjc3gy2mvkhm
Dagstuhl Reports, Volume 12, Issue 4, April 2022, Complete Issue | work_oiijemxg5zhmzehjc3gy2mvkhm | Mon, 14 Nov 2022 00:00:00 GMT

CFLOBDDs: Context-Free-Language Ordered Binary Decision Diagrams
https://scholar.archive.org/work/lmgksqlwyndfvfv4mgwpxhokcu
This paper presents a new compressed representation of Boolean functions, called CFLOBDDs (for Context-Free-Language Ordered Binary Decision Diagrams). They are essentially a plug-compatible alternative to BDDs (Binary Decision Diagrams), and hence useful for representing certain classes of functions, matrices, graphs, relations, etc. in a highly compressed fashion. CFLOBDDs share many of the good properties of BDDs, but--in the best case--the CFLOBDD for a Boolean function can be exponentially smaller than any BDD for that function. Compared with the size of the decision tree for a function, a CFLOBDD--again, in the best case--can give a double-exponential reduction in size. They have the potential to permit applications to (i) execute much faster, and (ii) handle much larger problem instances than has been possible heretofore. CFLOBDDs are a new kind of decision diagram that go beyond BDDs (and their many relatives). The key insight is a new way to reuse sub-decision-diagrams: components of CFLOBDDs are structured hierarchically, so that sub-decision-diagrams can be treated as standalone "procedures" and reused. We applied CFLOBDDs to the problem of simulating quantum circuits, and found that for several standard problems the improvement in scalability--compared to simulation using BDDs--is quite dramatic. In particular, the number of qubits that could be handled using CFLOBDDs was larger, compared to BDDs, by a factor of 128x for GHZ; 1,024x for BV; 8,192x for DJ; and 128x for Grover's algorithm. (With a 15-minute timeout, the number of qubits that CFLOBDDs can handle are 65,536 for GHZ; 524,288 for BV; 4,194,304 for DJ; and 4,096 for Grover's algorithm.)
Meghana Sistla, Swarat Chaudhuri, Thomas Reps | work_lmgksqlwyndfvfv4mgwpxhokcu | Sun, 13 Nov 2022 00:00:00 GMT

On the Billaud Conjecture and related problems
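The CFLOBDD entry's key idea is aggressive reuse of sub-decision-diagrams. The basic sharing mechanism it builds on, inherited from BDDs, is hash-consing; the sketch below shows it for plain reduced OBDDs (ordinary BDD machinery, not the hierarchical CFLOBDD construction itself):

```python
# Hash-consed BDD nodes: sharing of sub-diagrams is the ingredient that
# CFLOBDDs push further by reusing sub-diagrams as standalone "procedures".
_table = {}

def node(var, lo, hi):
    """Return the unique reduced node testing `var` with children lo/hi."""
    if lo is hi:                 # reduction rule: redundant test is eliminated
        return lo
    key = (var, id(lo), id(hi))
    if key not in _table:        # hash-consing: one object per distinct node
        _table[key] = (var, lo, hi)
    return _table[key]

x0_and_x1 = node(0, False, node(1, False, True))   # x0 AND x1
again = node(0, False, node(1, False, True))
print(x0_and_x1 is again)  # True: the whole diagram is shared, not duplicated
```

The same lookup makes equality of represented functions a pointer comparison, which is what makes decision-diagram operations fast in practice.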
https://scholar.archive.org/work/zz6o4rrkcvajzkhgkx6p2pdoo4
The Billaud Conjecture, which has been open since 1993, is a fundamental problem on finite words $w$ and their heirs, i.e., the words obtained by deleting every occurrence of a given letter from some word $w$. The Conjecture posits that every morphically primitive word, i.e., a word which is a fixed point only of the identity on the set of symbols of the word, has at least one morphically primitive heir. In this thesis we introduce the Conjecture, and give a comprehensive overview of the current state of knowledge about it. In particular, we recall the known special cases in which the Conjecture holds. Based on the previous special case results we develop a 'blueprint' for solving the Conjecture for an arbitrary alphabet size, i.e., we identify an enumeration of the cases which need to be solved in order to prove the Billaud Conjecture for a fixed alphabet size. We apply the blueprint to the proof of the next major case of the Conjecture, i.e., the case for quaternary alphabets, and discuss the potential for generalising our reasoning to larger alphabets. Subsequently, we introduce and investigate the related class of so-called Billaud words, i.e., words all of whose heirs are morphically imprimitive. We provide a characterisation of morphically imprimitive Billaud words, using a new concept. We show that there are two phenomena through which words can have morphically imprimitive heirs, and we highlight that only one of those occurs in morphically primitive words. We examine our concept further, and we use it to rephrase and study the Billaud Conjecture in more detail. Finally, we relate the notions associated with the Billaud Conjecture to other concepts available in the literature. In particular, we show that the Conjecture can be expressed as a problem of characterising the outcome of the application of a synchronised shuffle operation to certain classes of languages. 
We assert that these classes of languages are expressible using the so-called pattern expressions, which we show are not closed under the synchron [...]
Szymon Lopaciuk | work_zz6o4rrkcvajzkhgkx6p2pdoo4 | Mon, 07 Nov 2022 00:00:00 GMT

PyCSP3: Modeling Combinatorial Constrained Problems in Python
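The heirs at the centre of the Billaud Conjecture entry are easy to compute; a quick helper (illustrative, not from the thesis):

```python
def heirs(w: str) -> dict:
    """The heirs of w: for each letter of w, the word obtained by deleting
    every occurrence of that letter (the objects of the Billaud Conjecture)."""
    return {a: w.replace(a, "") for a in sorted(set(w))}

print(heirs("abcab"))  # {'a': 'bcb', 'b': 'aca', 'c': 'abab'}
```

The Conjecture then asserts that a morphically primitive word always has at least one morphically primitive entry in this dictionary.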
https://scholar.archive.org/work/vuf7d7sjzjdrtic4nn2v2xafpe
In this document, we introduce PyCSP3, a Python library that allows us to write models of combinatorial constrained problems in a declarative manner. Currently, with PyCSP3, you can write models of constraint satisfaction and optimization problems. More specifically, you can build CSP (Constraint Satisfaction Problem) and COP (Constraint Optimization Problem) models. Importantly, there is a complete separation between the modeling and solving phases: you write a model, you compile it (while providing some data) in order to generate an XCSP3 instance (file), and you solve that problem instance by means of a constraint solver. You can also directly pilot the solving procedure in PyCSP3, possibly conducting an incremental solving strategy. In this document, you will find all that you need to know about PyCSP3, with more than 50 illustrative models.
Christophe Lecoutre, Nicolas Szczepanski | work_vuf7d7sjzjdrtic4nn2v2xafpe | Mon, 07 Nov 2022 00:00:00 GMT

Surface area and size distribution of cement particles in hydrating paste as indicators for the conceptualization of a cement paste representative volume element
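The kind of declarative model the PyCSP3 entry describes can be illustrated by its semantics: a brute-force check of a toy model (three variables in 0..9, all different, summing to 12). In PyCSP3 itself this would be written with constructs such as VarArray, satisfy, and AllDifferent and handed to a solver via XCSP3; the enumeration below only shows what such a model means, and the toy constraints are invented for illustration.

```python
from itertools import product

# Toy CSP: x, y, z in 0..9, AllDifferent(x, y, z), x + y + z == 12.
# A solver would search intelligently; here we simply enumerate the
# domain product to exhibit the declarative model's solution set.
solutions = [v for v in product(range(10), repeat=3)
             if len(set(v)) == 3 and sum(v) == 12]
print(len(solutions) > 0)  # True: the model is satisfiable
```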
https://scholar.archive.org/work/3okewo64svervd6skexb4i4lmu
The conceptualization of a representative volume element (RVE) of hardened cement paste for numerical homogenization of mechanical problems rests on identifying the largest discernible microstructural feature, i.e. unreacted cement grains. While the particle size distribution (PSD) of anhydrous cement is a well-controlled production parameter, the size evolution of a representative cement grain throughout hydration remained unresolved. This study analyzes digitized 3D cement paste microstructures obtained from X-ray micro-computed tomography, coupled with the CEMHYD3D hydration model, and segmented by image-processing tools, to obtain the full PSD and specific surface area evolutions of unreacted grains throughout hydration. Results indicate a representative grain size in the range of 30-40 μm regardless of the hydration time elapsed, implying a cement paste RVE should amount to 150-200 μm to realistically represent cement grains. The PSD shape remained self-similar, and two distinctive hydration regimes were identified, differing in dissolution rate and specific surface area decrease, correlating with the calcium sulfate reactivity peak. Both measures provide easily accessible microstructural features that may be used for constructing artificial RVEs of hardened cement paste in micromechanical models and related simulations, resting on experimental data.
Michal Hlobil, Ivana Kumpová, Adéla Hlobilová | work_3okewo64svervd6skexb4i4lmu | Tue, 01 Nov 2022 00:00:00 GMT

Assessment of Transfer Learning Capabilities for Fatigue Damage Classification and Detection in Aluminum Specimens with Different Notch Geometries
https://scholar.archive.org/work/gvj4dhrxzfaytph4q6wepbyhia
Fatigue damage detection and its classification in metallic materials are persistently challenging the structural health monitoring community. The mechanics of fatigue damage is difficult to analyze and is further complicated because of the presence of notches of different geometries. These notches act as possible crack-nucleation sites resulting in failure mechanisms that are drastically different from one another. Often, sensor-based tools are used to monitor and detect fatigue damage in critical metallic materials such as aluminum alloys. Through deep neural networks (DNNs), such a sensor-based approach can be ubiquitously extended for a variety of geometries as appropriate for different applications. To that end, this paper presents a DNN-based transfer learning framework that can be used to classify and detect fatigue damage across candidate notch geometries. The DNNs are built upon ultrasonic time-series data obtained during fatigue testing of Al7075-T6 specimens with two types of notch geometries, namely, a U-notch and a V-notch. The baseline U-notch DNN is shown to achieve an accuracy of 96.1% while the baseline V-notch DNN has an accuracy of 95.8%. Both baseline DNNs are, thereafter, subjected to a transfer learning process by keeping a certain number of layers frozen and retraining only the remaining layers with a small volume of data obtained from the other notch geometry. When a layer of the baseline U-notch DNN is retrained with just 10% of the total V-notch data, an accuracy above 90% is observed for fatigue damage detection of V-notch specimens. Similar results are also obtained when the baseline V-notch DNN is retrained and interrogated to detect damage for U-notch specimens. 
These results, in summary, demonstrate the data-thrifty quality of combining transfer learning and DNNs for fatigue damage detection in specimens of different geometries made of high-performance aluminum alloys.
Susheel Dharmadhikari, Riddhiman Raut, Chandrachur Bhattacharya, Asok Ray, Amrita Basak | work_gvj4dhrxzfaytph4q6wepbyhia | Sat, 29 Oct 2022 00:00:00 GMT

Mass Transfer Enhancement in Carbon Dioxide Gas Hydrate Formation for Effective Carbon Separation and Storage
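The retraining scheme in the "Assessment of Transfer Learning Capabilities" entry, freezing pretrained layers and updating only the remainder on a small volume of data from the other notch geometry, can be sketched as follows. The network, shapes, and training step are hypothetical stand-ins, not the paper's DNN.

```python
import numpy as np

# Transfer-learning sketch: keep the early layer frozen, retrain the head
# on a small volume of data from the new notch geometry (shapes hypothetical).
rng = np.random.default_rng(0)
frozen_w = rng.normal(size=(64, 32))        # "pretrained" on U-notch data; not updated
head_w = rng.normal(size=(32, 2)) * 0.01    # retrained on the small V-notch set

def forward(x):
    h = np.maximum(x @ frozen_w, 0.0)       # frozen feature extractor
    return h @ head_w                       # trainable classification head

def sgd_step(x, y_onehot, lr=1e-3):
    """One gradient step on the head only (softmax cross-entropy)."""
    global head_w
    h = np.maximum(x @ frozen_w, 0.0)
    logits = h @ head_w
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    head_w -= lr * h.T @ (p - y_onehot) / len(x)   # gradient of cross-entropy w.r.t. head

x = rng.normal(size=(8, 64))                # small batch of (hypothetical) features
y = np.eye(2)[rng.integers(0, 2, size=8)]   # damage / no-damage labels
frozen_before = frozen_w.copy()
head_before = head_w.copy()
sgd_step(x, y)
print(np.array_equal(frozen_before, frozen_w),   # True: frozen layer untouched
      np.array_equal(head_before, head_w))       # False: head was updated
```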
https://scholar.archive.org/work/34w4qixatnhedjjqywvh63ht2m
Carbon dioxide (CO2) is widely acknowledged as a significant contributor to global warming. Hydrate-based carbon capture (HBCC) technology holds high potential for delivering cost-effective and environmentally friendly carbon capture solutions. However, the relatively severe formation conditions and low formation rate of gas hydrates limit its practical applications. This thesis focuses on mass transfer enhancement methods for effective CO2 hydrate formation through experimental and numerical studies. The thermodynamic and kinetic promotion experiments on CO2 hydrate formation using chemical promoters are performed in tetra-n-butyl ammonium bromide (TBAB) solution with surfactants. TBAB, as a thermodynamic promoter, can moderate the hydrate phase equilibrium by forming CO2-TBAB semiclathrate hydrates. However, it decreases CO2 gas uptake yields. Three kinds of surfactants, namely the anionic surfactant sodium dodecyl sulfate (SDS), the cationic surfactant dodecyl-trimethylammonium chloride (DTAC), and the non-ionic surfactant Tween 80 (T-80), are added to the system to increase the formation rate and offset the low gas uptake yields. Induction time, normalized gas uptake, split fraction, and separation factor are the performance metrics. The results in TBAB systems show that hydrate formation is most accelerated with the addition of SDS, but DTAC shows better CO2 separation performance. A similarly rapid formation rate is also found with the addition of the non-ionic surfactant T-80. Analysis of variance is used to analyze the differences among experimental results, and a decision box is proposed to evaluate the performance of the systems studied. Compared with SDS and DTAC, 2000-ppm T-80 shows the best CO2 separation performance in semiclathrate hydrates. The mass transfer can also be enhanced by adding microparticles due to their considerable surface areas. 
The kinetic promotion experiments of CO2 hydrate formation are thus further studied in "dry water" and silica gel (SG) microparticles of different sizes. The exp [...]
Fengyuan Zhang, The Australian National University | work_34w4qixatnhedjjqywvh63ht2m | Fri, 21 Oct 2022 00:00:00 GMT

A Survey of Data Optimization for Problems in Computer Vision Datasets
https://scholar.archive.org/work/jgrshnax2fcfjdnferxhjilgwu
Recent years have witnessed remarkable progress in artificial intelligence (AI) thanks to refined deep network structures, powerful computing devices, and large-scale labeled datasets. However, researchers have mainly invested in the optimization of models and computational devices, with the result that good models and powerful computing devices are now readily available, while datasets remain stuck at an initial stage: large-scale but low-quality. Data has become a major obstacle to AI development. Taking note of this, we dig deeper and find that there has been some, albeit unstructured, work on data optimization. These works focus on various problems in datasets and attempt to improve dataset quality by optimizing its structure to facilitate AI development. In this paper, we present the first review of recent advances in this area. First, we summarize and analyze various problems that exist in large-scale computer vision datasets. We then define data optimization and classify data optimization algorithms into three directions according to the optimization form: data sampling, data subset selection, and active learning. Next, we organize these data optimization works according to the data problems addressed, and provide a systematic and comparative description. Finally, we summarize the existing literature and propose some potential future research topics.
Zhijing Wan, Zhixiang Wang, CheukTing Chung, Zheng Wang | work_jgrshnax2fcfjdnferxhjilgwu | Fri, 21 Oct 2022 00:00:00 GMT

Cellular Automata: Temporal Stochasticity and Computability
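Of the three data optimization directions listed in the survey entry above, active learning is the easiest to sketch; uncertainty sampling is one standard strategy (the example matrix is invented for illustration):

```python
import numpy as np

def uncertainty_sampling(probs, k):
    """Active learning by uncertainty sampling: pick the k unlabeled examples
    whose predicted class probabilities are least confident, so labeling
    effort goes where the model is most unsure."""
    confidence = probs.max(axis=1)          # confidence = top predicted probability
    return np.argsort(confidence)[:k]       # indices of the k least confident

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3], [0.51, 0.49]])
print(uncertainty_sampling(probs, 2))  # [3 1]
```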
https://scholar.archive.org/work/3iy5cks2jvfczojqwqmbijur2y
In this dissertation, we study temporal stochasticity in cellular automata and the behavior of such cellular automata. The work also explores the computational ability of such cellular automata, illustrating the computability of solving the affinity classification problem. In addition, a cellular automaton defined over a Cayley tree is shown to solve the classical searching problem. The proposed temporally stochastic cellular automaton deals with two elementary cellular automata rules, say f and g. Here f is the default rule, while g is temporally applied to the overall system with some probability τ, which acts as noise in the system. After exploring the dynamics of temporally stochastic cellular automata (TSCAs), we study their dynamical behavior to identify the TSCAs that converge to a fixed point from any seed. We apply each of the convergent TSCAs to some standard datasets and observe the effectiveness of each TSCA as a pattern classifier. The proposed TSCA-based classifier shows competitive performance in comparison with existing classifier algorithms. We use temporally stochastic cellular automata to solve a new problem in the field of cellular automata, named the affinity classification problem, which is a generalization of the density classification problem. We show that this model can be used in several applications, such as modeling self-healing systems. Finally, we introduce a new model of computing unit developed around cellular automata to reduce the computational workload of the Central Processing Unit (CPU) of a machine. Each cell of the computing unit acts as a tiny processing element with attached memory.
Such a CA is implemented on the Cayley tree to realize efficient solutions for diverse computational problems.
Subrata Paul
work_3iy5cks2jvfczojqwqmbijur2y | Thu, 20 Oct 2022 00:00:00 GMT

Investigating Quantum Many-Body Systems with Tensor Networks, Machine Learning and Quantum Computers
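The temporally stochastic update described in the abstract above — a default elementary CA rule f, with a second rule g applied to the whole lattice with probability τ at each step — can be sketched as follows. This is a minimal illustration with assumed rule numbers and function names, not the author's code:

```python
import numpy as np

def eca_step(state, rule):
    """One synchronous step of an elementary CA (periodic boundary)."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right      # neighborhood code 0..7
    table = (rule >> np.arange(8)) & 1      # Wolfram rule lookup table
    return table[idx]

def tsca_run(state, f=50, g=254, tau=0.1, steps=100, rng=None):
    """Temporally stochastic CA: at each step, apply rule g to the
    whole lattice with probability tau (the noise), else the default f."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(steps):
        rule = g if rng.random() < tau else f
        state = eca_step(state, rule)
    return state
```

With τ = 0 this reduces to the deterministic rule f; with τ = 1 it is pure g; intermediate values interleave the two rules over time, which is the regime the dissertation studies.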
https://scholar.archive.org/work/wszjs55s4bepdm4lzqjixf7wku
We perform quantum simulation on classical and quantum computers and set up a machine learning framework in which we can map out phase diagrams of known and unknown quantum many-body systems in an unsupervised fashion. The classical simulations are done with state-of-the-art tensor network methods in one and two spatial dimensions. For one-dimensional systems, we utilize matrix product states (MPS), which have many practical advantages and can be optimized using the efficient density matrix renormalization group (DMRG) algorithm. The data for two-dimensional systems is obtained from projected entangled pair states (PEPS) optimized via imaginary time evolution. Data in the form of observables, entanglement spectra, or parts of the state vectors from these simulations is then fed into a deep learning (DL) pipeline where we perform anomaly detection to map out the phase diagram. We extend this notion to quantum computers and introduce quantum variational anomaly detection. Here, we first simulate the ground state and then process it in a quantum machine learning (QML) manner. Both simulation and QML routines are performed on the same device, which we demonstrate both in classical simulation and on a physical quantum computer hosted by IBM.
Korbinian Kottmann
work_wszjs55s4bepdm4lzqjixf7wku | Thu, 20 Oct 2022 00:00:00 GMT

The low-rank hypothesis of complex systems
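The thesis's actual DMRG and PEPS machinery is far more involved; as a minimal illustration of the MPS representation mentioned above, the sketch below (function name ours, not from the thesis) decomposes an arbitrary state vector into an MPS by sweeping left to right with successive SVDs — the standard textbook construction:

```python
import numpy as np

def to_mps(psi, n, d=2):
    """Decompose an n-site state vector (length d**n) into a list of
    MPS site tensors of shape (bond_left, d, bond_right) via SVDs."""
    tensors, rest = [], psi.reshape(1, -1)       # (bond, remaining)
    for _ in range(n - 1):
        bond = rest.shape[0]
        m = rest.reshape(bond * d, -1)           # split off one site
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        tensors.append(u.reshape(bond, d, -1))   # left-canonical tensor
        rest = np.diag(s) @ vh                   # carry the remainder
    tensors.append(rest.reshape(-1, d, 1))
    return tensors
```

Contracting the tensors back together reproduces the original vector exactly; in practice (and in DMRG) the singular values are truncated at each step, which is where the compression and the "practical advantages" of MPS come from.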
https://scholar.archive.org/work/acal7tsbinfx3m3gl4fujcc3nq
Complex systems are high-dimensional nonlinear dynamical systems with intricate interactions among their constituents. To make interpretable predictions about their large-scale behavior, it is typically assumed, without a clear statement, that these dynamics can be reduced to a small number of equations involving a low-rank matrix describing the network of interactions -- what we call the low-rank hypothesis. By leveraging fundamental theorems on singular value decomposition, we verify the hypothesis for various random networks, either by making their low-rank formulation explicit or by demonstrating the exponential decrease of their singular values. Notably, we validate the hypothesis experimentally for real networks by showing that their effective rank is considerably lower than their number of vertices. We then evaluate the impact of the low-rank hypothesis for general dynamical systems on networks through an optimal dimension reduction. This allows us to prove that recurrent neural networks can be exactly reduced, and to connect the rapidly decreasing singular values of real networks to the dimension reduction error of the nonlinear dynamics they support, be it microbial, neuronal, or epidemiological. Finally, we prove that higher-order interactions naturally emerge from the dimension reduction, thus providing theoretical insights into the origin of higher-order interactions in complex systems.
Vincent Thibeault, Antoine Allard, Patrick Desrosiers
work_acal7tsbinfx3m3gl4fujcc3nq | Tue, 18 Oct 2022 00:00:00 GMT
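The "effective rank" mentioned in this abstract admits several definitions; as an illustration only, the sketch below uses the entropy-based effective rank of Roy and Vetterli (the exponential of the Shannon entropy of the normalized singular-value distribution), which is one common choice and not necessarily the exact measure used by the authors:

```python
import numpy as np

def effective_rank(A, tol=1e-12):
    """Entropy-based effective rank: exp of the Shannon entropy of the
    normalized singular values. Equals n for an orthogonal matrix and
    approaches 1 as the spectrum concentrates on one singular value."""
    s = np.linalg.svd(A, compute_uv=False)
    s = s[s > tol]                  # drop numerically zero values
    p = s / s.sum()                 # singular values as a distribution
    return float(np.exp(-(p * np.log(p)).sum()))
```

For a real network's adjacency matrix, comparing this value to the number of vertices is the kind of check the abstract describes: a rapidly decaying singular spectrum yields an effective rank far below the matrix dimension.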