IA Scholar Query: Algorithmic proofs of two theorems of Stafford.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en)
Contact: info@archive.org
Last build date: Mon, 19 Sep 2022 00:00:00 GMT
Generator: fatcat-scholar (docs: https://scholar.archive.org/help; TTL: 1440)

Shalika germs for tamely ramified elements in GL_n
https://scholar.archive.org/work/ddi6pac4q5cvdejdph7gvpqeyi
Degenerating the action of the elliptic Hall algebra on the Fock space, we give a combinatorial formula for the Shalika germs of tamely ramified regular semisimple elements γ of GL_n over a nonarchimedean local field. As a byproduct, we compute the weight polynomials of affine Springer fibers in type A and orbital integrals of tamely ramified regular semisimple elements. We conjecture that the Shalika germs of γ correspond to residues of torus localization weights of a certain quasi-coherent sheaf ℱ_γ on the Hilbert scheme of points on 𝔸^2, thereby finding a geometric interpretation for them. As corollaries, we obtain the polynomiality in q of point-counts of compactified Jacobians of planar curves, as well as a virtual version of the Cherednik-Danilenko conjecture on their Betti numbers. Our results also provide further evidence for the ORS conjecture relating compactified Jacobians and HOMFLY-PT invariants of algebraic knots.
Oscar Kivinen, Cheng-Chiang Tsai (Mon, 19 Sep 2022)

In Silico Tools for Investigating the Performance of Breast Cancer Imaging Technologies
https://scholar.archive.org/work/x3dcdd7cznbatb7a53nmmwwot4
Breast cancer screening programs using two-dimensional (2D) digital mammography (DM) have proven effective in the early detection of cancer, subsequently reducing breast-cancer-related deaths. A major drawback of DM arises from large amounts of overlapping breast tissue, which may mimic or conceal abnormalities in a 2D image. Advanced breast imaging technologies like digital breast tomosynthesis (DBT), which generate 3D information, are now being considered as a replacement for DM in screening programs. However, the benefits of DBT-based screening for earlier detection of cancer, across the various commercially available detector technologies, are yet to be established. The aim of this thesis is to investigate the influence of x-ray imager technologies and imaging modalities on the early detection of breast cancer using in silico trials. The first part of this thesis focuses on developing computational models that replicate the growth of cancerous lesions and the detector physics of commercially available DM/DBT systems. I propose a growth model for breast lesions based on biological and physiological phenomena, accounting for the stiffness of surrounding anatomical structures. Depending on the breast's local anatomical structures, a range of unique lesion morphologies was realized. Imaging physics models were developed to simulate direct and indirect x-ray detector technology. Image quality metrics were compared against measured data from three commercially available DM/DBT systems. Finally, these tools, combined with the VICTRE 1.0 in silico framework, were used to design in silico trials to study whether DBT can facilitate the detection of breast cancer at earlier disease stages and for a range of detector technologies.
The in silico studies suggest that while DBT shows clear advantages for detecting masses at earlier stages, its benefits over DM for detecting micro-calcifications depend on the detector technology.
Aunnasha Sengupta (Tue, 06 Sep 2022)

Statistical Inference with Stochastic Gradient Algorithms
https://scholar.archive.org/work/din6grsnezhczbyspqxzmintze
Stochastic gradient algorithms are widely used for both optimization and sampling in large-scale learning and inference problems. However, in practice, tuning these algorithms is typically done using heuristics and trial-and-error rather than rigorous, generalizable theory. To address this gap between theory and practice, we provide novel insights into the effect of tuning parameters by characterizing the large-sample behavior of iterates of a very general class of preconditioned stochastic gradient algorithms with fixed step size. In the optimization setting, our results show that iterate averaging with a large fixed step size can result in statistically efficient approximation of the (local) M-estimator. In the sampling context, our results show that with appropriate choices of tuning parameters, the limiting stationary covariance can match either the Bernstein-von Mises limit of the posterior, adjustments to the posterior for model misspecification, or the asymptotic distribution of the MLE; and that with a naive tuning the limit corresponds to none of these. Moreover, we argue that an essentially independent sample from the stationary distribution can be obtained after a fixed number of passes over the dataset. We validate our asymptotic results in realistic finite-sample regimes via several experiments using simulated and real data. Overall, we demonstrate that properly tuned stochastic gradient algorithms with constant step size offer a computationally efficient and statistically robust approach to obtaining point estimates or posterior-like samples.
Jeffrey Negrea, Jun Yang, Haoyue Feng, Daniel M. Roy, Jonathan H. Huggins (Mon, 25 Jul 2022)

Unsolved Problems in Group Theory. The Kourovka Notebook
https://scholar.archive.org/work/fhii5oyzvrb7vpeun6rsfapu2i
This is a collection of open problems in group theory proposed by hundreds of mathematicians from all over the world. It has been published every 2-4 years in Novosibirsk since 1965. This is the 20th edition, which contains 126 new problems and a number of comments on problems from the previous editions.
E. I. Khukhro, V. D. Mazurov (Mon, 27 Jun 2022)

The Prospect of Income-Contingent Loans for Malaysia Higher Education Student Financing
https://scholar.archive.org/work/dtetrpgptzhmhi2dtay3hmk43a
In Malaysia, there has been public and political pressure for reform due to low loan recovery and high default in the Malaysian student loan scheme, known as PTPTN. The current student financing arrangement requires fixed monthly repayments over a fixed repayment period, which is burdensome for low-income borrowers and increases the risk of default. In contrast to time-based repayment loans, or TBRL (of which PTPTN is an example), income-contingent loans (ICL) can be designed to ensure affordable repayments and provide protection against financial hardship by basing repayments on income. Lessons learned from existing ICL schemes around the world suggest that ICLs can be flexibly designed with a range of features including repayment rate, repayment threshold, interest rates, loan surcharge and write-off period. We examine the implications for borrower affordability, repayments and taxpayer costs of illustrative ICL designs for Malaysia. The choice of income model is a critical determinant of repayment prospects in an ICL, hence both static and dynamic income models are developed and compared for Malaysian graduates. The former employs conditional quantile regression with a rigid assumption of no stochastic variability in income, while the latter allows for mobility in income over time and is modelled using copulas. Due to an absence of Malaysian graduate panel data, we use Australian panel data to develop a model of income dynamics using copulas, and we apply this model to Malaysian cross-sectional graduate income. As far as we are aware, this is the first research to apply the income dynamics of one country to model ICL costs for another country where panel data are not available. We show how underlying income and labour force assumptions affect repayment burden calculations and Malaysian ICL cost estimates.
Using an ICL design where parameters are chosen based on best practice, we estimate an average subsidy (due to loan non-repayment and interest subsidies) below 5% under a dynamic model and 13% under a static income [...]
Syaza Nawwarah Zein Isma, The Australian National University (Thu, 26 May 2022)

Entwerfen im Modus der diagrammatischen Modellierung
https://scholar.archive.org/work/3per5uiimngp3kxypcnhjgvhke
The dissertation undertakes an epistemological investigation of building data modelling, i.e. building information modelling (BIM), as a design medium in architecture. To this end, the history of BIM is first traced as a transdisciplinary, non-linear history, focusing on the transfer of concepts and models between architecture and computer science. Architectural design is then described as a process of knowledge production. The theoretical framework is provided by research on knowledge, semiotic pragmatism, and the judgement theory of the model. Within this setting, a heuristic of six characteristic features of BIM-based design is worked out. Following this stocktaking, BIM-based design is characterized as diagrammatic modelling. First, the transformation of the architectural design model from the analogue scale model to the digital information model is described, and a pragmatic concept of the model is introduced, with which the conditions for judging a BIM model as a design model are analysed. Second, the interface of the BIM model is interpreted as a diagram. Here, the diagram concept of semiotic pragmatism is positioned against that of post-structuralism, which was the leading concept in architectural-theoretical debates before the advent of BIM and continues to have an effect today.
Jan Bovelet, Technische Universität Berlin, Jörg Gleiter (Wed, 11 May 2022)

Problems with Abstract Observers and Advantages of a Model-Centric Cybernetics Paradigm
https://scholar.archive.org/work/yuwl3akr6rdrndavozs6maskaq
Since 1974, when Heinz von Foerster made the distinction between "the cybernetics of observed systems" as first-order cybernetics (1oC) and "the cybernetics of observing systems" as second-order cybernetics (2oC), cybernetics has been dominated by this observer-centric paradigm that he claimed cannot be extended meaningfully to a third order. Rather than attempting to extend his paradigm, we derive an alternative, model-centric cybernetics paradigm from the first principles of regulation, which naturally extends to three orders, where the third order is ethical regulation. We thus consider a type of regulator that requires a third model and a third observer: if the third model is a model of acceptable (ethical) situations, then a third observer is a necessary element of the system's "conscience" that prevents any violations of the model of ethical situations. In this paradigm, the cybernetics of systems that are designed to exhibit ethical behaviour can be characterized as third-order cybernetics (3oC). By being able to extend the paradigm to include ethical systems, the model-centric paradigm brings clarity and utility that is not possible using the observer-centric paradigm and its under-specified (abstract) observers. Finally, new definitions for cybernetics are proposed that clearly differentiate between the science of cybernetics and the philosophy of cybernetics.
Mick Ashby (Tue, 19 Apr 2022)

Gambits: Theory and Evidence
https://scholar.archive.org/work/emopp63mwvgfzf6rpfqcmhsniy
Gambits are central to human decision-making. Our goal is to provide a theory of Gambits. A Gambit is a combination of psychological and technical factors designed to disrupt predictable play. Chess provides an environment to study gambits and behavioral game theory. Our theory is based on the Bellman optimality path for sequential decision-making. This allows us to calculate the Q-values of a Gambit where material (usually a pawn) is sacrificed for dynamic play. On the empirical side, we study the effectiveness of a number of popular chess Gambits. This is a natural setting as chess Gambits require a sequential assessment of a set of moves (a.k.a. policy) after the Gambit has been accepted. Our analysis uses Stockfish 14.1 to calculate the optimal Bellman Q-values, which fundamentally measure whether a position is winning or losing. To test whether Bellman's equation holds in play, we estimate the transition probabilities to the next board state via a database of expert human play. This then allows us to test whether the Gambiteer is following the optimal path in his decision-making. Our methodology is applied to the popular Stafford and reverse Stafford (a.k.a. Boden-Kieretsky-Morphy) Gambit and other common ones including the Smith-Morra, Goring, Danish and Halloween Gambits. We build on research in human decision-making by proving an irrational skewness preference within agents in chess. We conclude with directions for future research.
Shiva Maharaj, Nicholas Polson, Christian Turk (Tue, 12 Apr 2022)

The affine Springer fiber - sheaf correspondence
https://scholar.archive.org/work/x4gw6wvdxfbtxghxoqhwvwvqnm
Given a semisimple element in the loop Lie algebra of a reductive group, we construct a quasi-coherent sheaf on a partial resolution of the trigonometric commuting variety of the Langlands dual group. The construction uses affine Springer theory and can be thought of as an incarnation of 3d mirror symmetry. For the group GL_n, the corresponding partial resolution is Hilb^n(ℂ^× × ℂ). We also consider a quantization of this construction for homogeneous elements.
Eugene Gorsky, Oscar Kivinen, Alexei Oblomkov (Fri, 01 Apr 2022)

Cognitive and evolutionary foundations of culture and belief
https://scholar.archive.org/work/vlmkuhrisvdflak5vd52axtvhu
This thesis explores topical issues in human culture and belief using tools afforded by cognitive psychology and evolutionary theory. Chapter 1 outlines the specific topics examined in this thesis. Chapter 2 presents a meta-analysis that examines the association between delusional ideation and data gathering in the "beads task" paradigm. Chapter 3 presents a behavioural study that examines the extent to which analytic cognitive style and delusional ideation independently predict data gathering in the "beads task" paradigm. Chapter 4 presents a behavioural study of belief formation using the "allergist" associative learning paradigm. Chapter 5 presents an analysis of the evolution of European folktales using methods from population genetics to examine cultural evolution in large, modern societies. Chapter 6 presents a discussion of the importance of taking a geographically explicit approach to the analysis of cross-cultural data. Chapter 7 presents an analysis of the evolution of Arctic folktales using methods from population genetics to examine cultural evolution in small, traditional societies. Chapter 8 presents a general conclusion that summarises the contribution that this thesis makes to our understanding of culture and belief.
Robert Malcolm Ross (Mon, 28 Mar 2022)

Wolstenholme and Vandiver primes
https://scholar.archive.org/work/fgr4sriogjab7bynfk5pwtwlaq
A prime p is a Wolstenholme prime if (2p choose p) ≡ 2 (mod p^4), or, equivalently, if p divides the numerator of the Bernoulli number B_{p-3}; a Vandiver prime p is one that divides the Euler number E_{p-3}. Only two Wolstenholme primes and eight Vandiver primes are known. We increase the search range in the first case by a factor of 10, and show that no additional Wolstenholme primes exist up to 10^11, and in the second case by a factor of 20, proving that no additional Vandiver primes occur up to this same bound. To facilitate this, we develop a number of new congruences for Bernoulli and Euler numbers mod p that are favorable for computation, and we implement some highly parallel searches using GPUs.
Andrew R. Booker, Shehzad Hathi, Michael J. Mossinghoff, Timothy S. Trudgian (Sun, 27 Mar 2022)

Competition-based control of the false discovery proportion
https://scholar.archive.org/work/7btuvpgndzdapmk7onrclx4rii
Recently, Barber and Candès laid the theoretical foundation for a general framework for false discovery rate (FDR) control based on the notion of "knockoffs." A closely related FDR control methodology has long been employed in the analysis of mass spectrometry data, referred to there as "target-decoy competition" (TDC). However, any approach that aims to control the FDR, which is defined as the expected value of the false discovery proportion (FDP), suffers from a problem. Specifically, even when successfully controlling the FDR at level α, the FDP in the list of discoveries can significantly exceed α. We offer FDP-SD, a new procedure that rigorously controls the FDP in the competition (knockoff / TDC) setup by guaranteeing that the FDP is bounded by α at any desired confidence level. Compared with the just-published general framework of Katsevich and Ramdas, FDP-SD generally delivers more power and often substantially so in simulated as well as real data.
Dong Luo, Arya Ebadi, Yilun He, Kristen Emery, William Stafford Noble, Uri Keich (Mon, 14 Mar 2022)

Complex Networks: Structure and Inference
https://scholar.archive.org/work/3z2u6qwehjcpdl5kqz6frpc4cu
From the spread of disease across a population to the dispersion of vehicular traffic in cities, many real-world processes are driven by lots of small components that interact in simple ways at small scales to produce nontrivial large-scale effects. Probing the fundamental mechanisms that govern such systems, broadly called "complex systems", is crucial for control, design, and intervention relevant to these processes. Networks, mathematical objects composed of nodes attached in pairs by edges, provide a very useful representation of such systems, and thus modeling networks is of critical importance for understanding real-world complex systems. In this thesis, I examine two different aspects of network modeling: (1) characterizing structure in networks with metadata, and (2) developing scalable, accurate, and interpretable inference techniques for real-world network data. I approach the problem of characterizing structure in networks with metadata from two different perspectives. First, I discuss new measures for characterizing the structure of signed networks with positive and negative edge signs representing amity and enmity, respectively. Signed networks are hypothesized to display structural regularity (balance) as a result of certain configurations of edge signs being more common than others; for instance, the friend of my enemy should be my enemy. I show that we can develop intuitive measures of balance in signed networks that capture long-range correlations, demonstrating that real networks are indeed significantly balanced using these measures, and that these measures can be used to impute missing data. Second, I move on to explore how we can measure diversity at multiple scales in networks with node metadata that take the form of distributions. I detail a general information theoretic framework for this task, illustrating new insights it can give us through example applications involving demographic data across spatially contiguous regions.
With regards to inference, I first describe a new message passi [...]
Alec Kirkley (Wed, 19 Jan 2022)

Financial Intermediaries and the Macroeconomy: Evidence from a High-Frequency Identification
https://scholar.archive.org/work/mpwhqp35u5ewhnak4kbdhhlstu
We provide empirical evidence of the causal effects of changes in financial intermediaries' net worth on the aggregate economy. Our strategy identifies financial shocks as high-frequency changes in the market value of intermediaries' net worth in a narrow window around their earnings announcements, based on US tick-by-tick data. Using these shocks, we estimate that news of a 1% decline in intermediaries' net worth leads to a 0.2% to 0.4% decrease in the market value of nonfinancial firms. These effects are more pronounced for firms with high default risk and low liquidity and when the aggregate net worth of intermediaries is low.
Pablo Ottonello, Wenting Song

Exploring the accuracy of analytic methods in predicting the evolution of large-scale structure
https://scholar.archive.org/work/p6cxbhdy2fgrrgw42xvwbwrcf4
Cosmology is at a crossroads. Experiments are providing an unprecedented amount of data that, in theory, should lead to clear solutions to the many open questions in cosmology. However, with new data come new questions, and recently uncovered tensions between the predictions of the standard model of cosmology and observations are leading some to question the very foundations on which the standard model is built. To explore the vast cosmological landscape, numerical simulations are often employed, but given the broad parameter space that needs to be explored, other, faster (but more approximate) methods need to be adopted to maximise the coverage and the possible extensions surveyed. In this panorama one of the options is the halo model, a simple and elegant way to study the clustering of matter in the Universe. However, this method is not free from assumptions and associated uncertainties. In this thesis I explore the uncertainties associated with the halo model making use of cosmological numerical simulations. I use the BAHAMAS simulations to obtain data products such as the mass density profiles of the haloes and the number density of haloes over a wide range of masses, and I use these quantities in the halo model formalism in order to make self-consistent comparisons against the simulation results. Aside from this application, I calibrate a fitting function based on the Einasto profile, which has been shown to be a good representation of the matter distribution inside haloes, and I use a standard form for the halo mass function. Comparing against the simulation matter power spectrum at different redshifts, I show that the accuracy of the halo model predictions is strongly dependent on the mass definitions used, with differences of over 50%, in particular in the transition region between the 1-halo and 2-halo terms and at the smallest scales sampled (k ≈ 10 h/Mpc).
This picture applies to both collisionless and hydrodynamical simulations, where galaxy formation processes are taken into account. In contrast to the poor abilit [...]
A Acuto

Essays in asset pricing
https://scholar.archive.org/work/yvzatislebdrpmqbznit5bnilq
This thesis contains three chapters studying asset prices from different financial markets to understand the economic forces driving their movements and recover economic variables of interest. In Chapter 1, I develop and estimate a model to quantify the effects of financial constraints, arbitrage capital, and hedging demands on asset prices and their deviations from frictionless benchmarks. Using foreign exchange derivatives data, I find that financial constraints and hedging demands contribute 46 and 35 percent of the variation in deviations from covered interest parity at the one-year maturity. While arbitrage capital fluctuation explains the remaining 19 percent of the variation on average, it periodically stabilizes prices when the other two forces exert disproportionately large impacts. The model features general financial constraints and produces a nonparametric arbitrage profit function. I unveil the shapes and dynamics of financial constraints from estimates of this function. In Chapter 2 (co-authored with Ian Martin), we propose a framework to compute sharp bounds of the crash probability of an individual stock using option prices. Empirical tests suggest that these bounds are close to the exact forward-looking crash probabilities. Out of sample, either the lower or upper bound outperforms combinations of stock characteristics in terms of forecasting stock-specific crash events. Applying the framework to study the equity of global systemically important banks (G-SIBs) gives rise to forward-looking fragility and stability measures of the global financial system. In Chapter 3 (co-authored with Jiantao Huang), we develop a transparent Bayesian approach to quantify uncertainty in linear stochastic discount factor (SDF) models. We show that, for a Bayesian decision maker, posterior model probabilities increase with maximum in-sample Sharpe ratios and decrease with model dimensions. Entropy of posterior probabilities represents model uncertainty.
We apply our approach to quantify the time series of model uncertainty [...]
Ran Shi

Making the most of your day: online learning for optimal allocation of time
https://scholar.archive.org/work/bw2szyvznbghjac57e5v5lx34i
We study online learning for optimal allocation when the resource to be allocated is time. Examples of possible applications include job scheduling for a computing server, a driver filling a day with rides, a landlord renting an estate, etc. An agent receives task proposals sequentially according to a Poisson process and can either accept or reject a proposed task. If she accepts the proposal, she is busy for the duration of the task and obtains a reward that depends on the task duration. If she rejects it, she remains on hold until a new task proposal arrives. We study the regret incurred by the agent, first when she knows her reward function but does not know the distribution of the task duration, and then when she does not know her reward function, either. This natural setting bears similarities with contextual (one-armed) bandits, but with the crucial difference that the normalized reward associated with a context depends on the whole distribution of contexts.
Etienne Boursier, Tristan Garrec, Vianney Perchet, Marco Scarsini (Thu, 04 Nov 2021)

LXM: better splittable pseudorandom number generators (and almost as fast)
https://scholar.archive.org/work/xwepsxjrsvfjllcm7msipvg4vy
In 2014, Steele, Lea, and Flood presented SplitMix, an object-oriented pseudorandom number generator (prng) that is quite fast (9 64-bit arithmetic/logical operations per 64 bits generated) and also splittable. A conventional prng object provides a generate method that returns one pseudorandom value and updates the state of the prng; a splittable prng object also has a second operation, split, that replaces the original prng object with two (seemingly) independent prng objects, by creating and returning a new such object and updating the state of the original object. Splittable prng objects make it easy to organize the use of pseudorandom numbers in multithreaded programs structured using fork-join parallelism. This overall strategy still appears to be sound, but the specific arithmetic calculation used for generate in the SplitMix algorithm has some detectable weaknesses, and the period of any one generator is limited to 2^64. Here we present the LXM family of prng algorithms. The idea is an old one: combine the outputs of two independent prng algorithms, then (optionally) feed the result to a mixing function. An LXM algorithm uses a linear congruential subgenerator and an F_2-linear subgenerator; the examples studied in this paper use a linear congruential generator (LCG) of period 2^16, 2^32, 2^64, or 2^128 with one of the multipliers recommended by L'Ecuyer or by Steele and Vigna, and an F_2-linear xor-based generator (XBG) of the xoshiro family or xoroshiro family as described by Blackman and Vigna. For mixing functions we study the MurmurHash3 finalizer function; variants by David Stafford, Doug Lea, and degski; and the null (identity) mixing function. Like SplitMix, LXM provides both a generate operation and a split operation. Also like SplitMix, LXM requires no locking or other synchronization (other than the usual memory fence after instance initialization), and is suitable for use with SIMD instruction sets because it has no branches or loops.
We analyze the period and equidistribution properties of LXM generators, and present the results of thorough testing of specific members of this family, using the TestU01 and PractRand test suites, not only on single instances of the algorithm but also for collections of instances, used in parallel, ranging in size from 2 to 2^24. Single instances of LXM that include a strong mixing function appear to have no major weaknesses, and LXM is significantly more robust than SplitMix against accidental correlation in a multithreaded setting. We believe that LXM, like SplitMix, is suitable for "everyday" scientific and machine-learning applications (but not cryptographic applications), especially when concurrent threads or distributed processes are involved.
Guy L. Steele Jr., Sebastiano Vigna (Wed, 20 Oct 2021)
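The combining strategy the LXM abstract describes (an LCG subgenerator plus an F_2-linear subgenerator, with the sum of their outputs fed through a mixing function) can be sketched in a few lines. The following is a rough illustration, not the exact algorithm from the paper: the LCG multiplier, the xoroshiro128-style state update, and the seeding scheme are assumptions chosen for the sketch, and the mixer is David Stafford's "Mix13" variant of the MurmurHash3 64-bit finalizer (the same mixer used by SplitMix64).

```python
MASK64 = (1 << 64) - 1

def rotl64(x, k):
    """Rotate a 64-bit value left by k bits."""
    return ((x << k) | (x >> (64 - k))) & MASK64

def mix13(z):
    """Stafford's Mix13 variant of the MurmurHash3 64-bit finalizer."""
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return z ^ (z >> 31)

class LXMSketch:
    """Toy LXM-style generator: a 64-bit LCG combined with a
    xoroshiro128-style F_2-linear subgenerator, outputs mixed.
    Constants are illustrative, not the paper's exact parameters."""

    LCG_MUL = 0xD1342543DE82EF95  # a 64-bit LCG multiplier from the literature

    def __init__(self, seed):
        self.s = seed & MASK64            # LCG state
        self.a = (2 * seed + 1) & MASK64  # LCG additive parameter (must be odd)
        # Expand the seed into the XBG state; it must not be all zero.
        self.x0 = mix13((seed + 0x9E3779B97F4A7C15) & MASK64) or 1
        self.x1 = mix13((seed + 0x3C6EF372FE94F82A) & MASK64)

    def next64(self):
        # Combine the two subgenerator outputs, then apply the mixer.
        out = mix13((self.s + self.x0) & MASK64)
        # Advance the linear congruential subgenerator.
        self.s = (self.s * self.LCG_MUL + self.a) & MASK64
        # Advance the xoroshiro128-style F_2-linear subgenerator.
        q0, q1 = self.x0, self.x1
        q1 ^= q0
        self.x0 = rotl64(q0, 24) ^ q1 ^ ((q1 << 16) & MASK64)
        self.x1 = rotl64(q1, 37)
        return out
```

As in the abstract's description, each generate call advances both subgenerators and mixes the sum of their outputs; a real split operation would additionally derive a fresh (odd) additive parameter for the child generator, which this sketch omits.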