IA Scholar Query: Deriving Incremental Implementations from Algebraic.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en), info@archive.org, Fri, 30 Sep 2022 00:00:00 GMT, generated by fatcat-scholar (https://scholar.archive.org/help)

An FPGA Overlay for CNN Inference with Fine-grained Flexible Parallelism
https://scholar.archive.org/work/jadjivca25g3nclz6naalqzbxu
Increasingly, pre-trained convolutional neural networks (CNNs) are being deployed for inference in various computer vision applications, both on the server side in data centers and at the edge. CNN inference is a very compute-intensive task, and it is a challenge to meet performance metrics such as latency and throughput while optimizing power. Special-purpose ASICs and FPGAs are suitable candidates to meet these power and performance budgets simultaneously. Rapidly evolving CNN architectures involve novel convolution operations such as point convolutions, depthwise separable convolutions, and so on. This leads to substantial variation in the computational structure across CNNs and across layers within a CNN. Because of this, FPGA reconfigurability provides an attractive tradeoff compared to ASICs. FPGA-based hardware designers address this structural variability by generating a network-specific accelerator for a single network or a class of networks. However, homogeneous accelerators are network-agnostic and often sacrifice throughput and FPGA LUTs for flexibility. In this article, we propose an FPGA overlay for efficient processing of CNNs that can be scaled based on the available compute and memory resources of the FPGA. The overlay is configured on the fly through control words sent by the host on a per-layer basis. Unlike current overlays, our architecture exploits all forms of parallelism inside a convolution operation. A constraint system is employed at the host end to determine the per-layer configuration of the overlay that uses all forms of parallelism in processing the layer, resulting in the highest throughput for that layer. We studied the effectiveness of our overlay by using it to process the AlexNet, VGG16, YOLO, MobileNet, and ResNet-50 CNNs, targeting a Virtex7 and a larger UltraScale+ VU9P FPGA. The chosen CNNs have a mix of different types of convolution layers and filter sizes, presenting a good variation in model size and structure.
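As a rough illustration of the kind of per-layer configuration search a host-side constraint system might perform, here is a toy sketch with assumed parameters (a fixed PE budget and three hypothetical unroll dimensions); it is not the paper's actual constraint system or overlay interface:

```python
# Toy sketch (not the paper's method): choose per-layer parallelism factors
# for a hypothetical overlay with a fixed budget of processing elements (PEs),
# maximizing the parallelism actually utilized by that layer.
from itertools import product

PE_BUDGET = 64  # assumed number of multiply-accumulate units in the overlay

def best_config(out_channels, in_channels, kernel_size):
    """Enumerate (filter, channel, kernel) unroll factors that fit the
    PE budget and return the first one with the highest utilized parallelism."""
    best = None
    for pf, pc, pk in product(range(1, out_channels + 1),
                              range(1, in_channels + 1),
                              range(1, kernel_size * kernel_size + 1)):
        pes = pf * pc * pk
        if pes <= PE_BUDGET and (best is None or pes > best[0]):
            best = (pes, pf, pc, pk)
    return best

# A hypothetical 3x3 layer with 16 output and 8 input channels:
print(best_config(16, 8, 3))  # → (64, 1, 8, 8): all 64 PEs utilized
```

A real constraint system would also account for memory bandwidth and buffer sizes, but the same search structure applies.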
Our accelerator reported a maximum throughput of 1,200 GOps/second on the Virtex7, an improvement of 1.2× to 5× over recent designs. The reported performance density, measured in giga operations per second per KLUT, is a 1.3× to 4× improvement over existing works. Similar speed-ups and performance densities are also observed for the UltraScale+ VU9P FPGA.
Ziaul Choudhury, Shashwat Shrivastava, Lavanya Ramapantulu, Suresh Purini (Fri, 30 Sep 2022 00:00:00 GMT)

Nested Session Types
https://scholar.archive.org/work/cdzjx4x355eyjn7slpugmdj6di
Session types statically describe communication protocols between concurrent message-passing processes. Unfortunately, parametric polymorphism even in its restricted prenex form is not fully understood in the context of session types. In this article, we present the metatheory of session types extended with prenex polymorphism and, as a result, nested recursive datatypes. Remarkably, we prove that type equality is decidable by exhibiting a reduction to trace equivalence of deterministic first-order grammars. Recognizing the high theoretical complexity of the latter, we also propose a novel type equality algorithm and prove its soundness. We observe that the algorithm is surprisingly efficient and, despite its incompleteness, sufficient for all our examples. We have implemented our ideas by extending the Rast programming language with nested session types. We conclude with several examples illustrating the expressivity of our enhanced type system.
Ankush Das, Henry Deyoung, Andreia Mordido, Frank Pfenning (Fri, 30 Sep 2022 00:00:00 GMT)

A∗WRBAS: Space Mobile Robotics Control Conceptual Model Using IoRT Reinforcement Learning and Tracking with Noise Estimation Using EKF
https://scholar.archive.org/work/g3grdcaapbgwhhqtlltru4o5ju
With more than one billion connected devices, the notion of the Internet of Things (IoT) is now gaining momentum. A mobile robot must be able to locate itself in space, a necessary ability for autonomous navigation. Every high-level navigation operation starts with the fundamental assumption that the robot is aware of both its position and the locations of other points of interest in the world. A robot without a sense of position can only function in a localized, reactive manner and cannot plan actions that take place outside of the immediate area of its sensory capabilities. The ubiquity of sensors and objects is combined with robotic and autonomous systems in a novel idea known as the "Internet of Robotic Things" (IoRT). Computer science and mechanical engineering come together in robotics: designing and manufacturing mechanical parts and components for robot control systems benefits from mechanical engineering. Space robots and robotics are recognized as tools that can improve astronauts' manipulation, functions, and control; as a result, they can be regarded as artificial assistants for in situ evaluations of conditions in space. Human-robot contact is made possible by the fact that gestures and actions are so common in robot control systems. In contrast to AI and reinforcement learning, which have been used to regulate the operation of robots in a variety of sectors, IoRT, a novel subset of IoT, has the potential to track a range of robot action plans. In this research, we provide a conceptual framework to help future researchers design and simulate such a prototype. It is based on an IoRT control system that has been enhanced using reinforcement learning and AI algorithms. We also use adaptive Kalman filtering (AKF), combined with the A∗ algorithm, to track robots and reduce sensor noise. This conceptual framework then needs to be developed and simulated.
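The Kalman-filtering idea used above for noise reduction can be illustrated with a minimal sketch. This is a plain linear 1D filter with assumed noise variances and made-up sensor readings; the adaptive (AKF) and extended (EKF) variants referenced in the abstract additionally adapt the noise covariances online and linearize nonlinear models:

```python
# Minimal 1D Kalman filter sketch for smoothing a noisy position sensor.
# q and r are assumed process/measurement noise variances, not values
# from the paper.
def kalman_1d(measurements, q=1e-3, r=0.25):
    x, p = measurements[0], 1.0   # state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: constant-position motion model
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical noisy readings of a robot holding position near 1.0:
noisy = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]
smooth = kalman_1d(noisy)
# The filtered track stays closer to the true value than the raw readings.
```

In a tracking pipeline, the filtered state would feed the A∗ planner instead of the raw sensor values.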
Deep reinforcement learning (RL) is a promising approach for autonomously learning complex behaviors from limited sensor data. We also discuss the fundamental theoretical foundations and the issues with current algorithms that limit the use of reinforcement learning methods in practical robotics applications, and we go through some possible directions that reinforcement learning research may take in the future.
Anurag Sinha, Ashish Bagwari, Pooja Joshi, Ramish, Sudhani Verma, Jyotshana Kanti, Robin Singh Bhadoria (Thu, 29 Sep 2022 00:00:00 GMT)

Application of Operations Research in Oil Industry: A Systematic Literature Review
https://scholar.archive.org/work/d6muihpmrram7mkrqa2gs3ls7m
In the following bibliometric research paper, we focus solely on applications of various operations research methodologies in the oil industry, covering their additions, contributions, methodological focuses, and findings. Starting from the 1940s up to the most recent publications available, the most relevant authors and papers have been chosen, and their contributions are specified to provide complete information about their methodological work and the reasons behind it. In brief, our major findings concern the MOEA, DOSC, and Simplex methods and how they currently affect the oil industry, along with the further questions our paper raises, such as how Covid-19 would affect the functioning of these methods. A summary of our findings is followed by the four most important methodologies and their applications to optimization in certain sectors of the oil industry. This is then backed by our list of reviewed articles and what is stated within them to support the findings of the paper. Lastly, the introduction covers the issues and errors that created the need for these methods to be applied in the industry. This paper does not prove a hypothesis or fill gaps within the industry; instead, it gives a comprehensive collection of data over the years that has been slowly solving these problems, and shows how it could help people identify other problems to solve after gaining detailed knowledge of most existing methods used within the oil industry.
Vedant Parekh, Utkarsh Narayan Singh, Vaaniya Lodhi, Tanishka Sharma, Vanshika Jain, Yugansh Pawah (Wed, 28 Sep 2022 00:00:00 GMT)

A Tutorial Introduction to Lattice-based Cryptography and Homomorphic Encryption
https://scholar.archive.org/work/vlqa6rnsa5d3vnpa3qeaizot6a
Why study Lattice-based Cryptography? There are a few ways to answer this question.
1. It is useful to have cryptosystems based on a variety of hard computational problems, so that different cryptosystems are not all vulnerable in the same way.
2. The computational aspects of lattice-based cryptosystems are usually simple to understand and fairly easy to implement in practice.
3. Lattice-based cryptosystems have lower encryption/decryption computational complexities compared to popular cryptosystems that are based on the integer factorisation or the discrete logarithm problems.
4. Lattice-based cryptosystems enjoy strong worst-case hardness security proofs based on approximate versions of known NP-hard lattice problems.
5. Lattice-based cryptosystems are believed to be good candidates for post-quantum cryptography, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best-known classical (non-quantum) algorithms, unlike for integer factorisation and (elliptic curve) discrete logarithm problems.
6. Last but not least, interesting structures in lattice problems have led to significant advances in Homomorphic Encryption, a new research area with wide-ranging applications.
Yang Li, Kee Siong Ng, Michael Purcell (Wed, 28 Sep 2022 00:00:00 GMT)

Studies of quantum chromodynamics with jets at the CMS experiment at the LHC
https://scholar.archive.org/work/tl6cqxvdijhwbnd4wxd3l6sbii
Several people played a decisive role in accomplishing this thesis and helped me in different aspects. In Hamburg, I would like to extend my deepest gratitude to Patrick L.S. Connor for his invaluable contribution to this work and for training me to consider scientific research as a "share, help, learn, cross-check, enjoy" cycle. Besides developing the overall analysis framework, he was always reachable for help and support, making the work with him a continuous upskilling process. I am also extremely grateful to Paolo Gunnellini for his contributions to the analysis, but mainly for his crucial guidance during my first steps in high energy physics and his availability to help whenever I needed it. At DESY, I am deeply indebted to Hannes Jung for all his hospitality and support. Apart from that, he also gave me the opportunity to work with his wonderful team, to whom I am also grateful. In particular, many thanks to
Paraskevas Gianneios, University Of Ioannina (Wed, 28 Sep 2022 00:00:00 GMT)

On the speed of convergence of Piterbarg constants
https://scholar.archive.org/work/frwdf27upfg6fpdcsavkozybbu
In this paper we derive an upper bound for the difference between the continuous and discrete Piterbarg constants. Our result allows us to approximate the classical Piterbarg constants by their discrete counterparts using Monte Carlo simulations with an explicit error rate.
Krzysztof Bisewski, Grigori Jasnovidov (Wed, 28 Sep 2022 00:00:00 GMT)

Influence of Stratification and Bottom Boundary Layer on the Classical Ekman Model
https://scholar.archive.org/work/gbsbtq45ybhixedvpaytk5hsoa
An in-depth understanding of the different processes of water movement produced by wind surface stress yields a better description and improvement of the marine food chain and ecosystem. The classical Ekman model proposes a hypothetical ocean, excluding the influence of continents and the Coriolis force. It also assumes infinite depth and a constant vertical eddy viscosity. The current study aims to understand how the vertical velocity profile is affected by variation of the eddy viscosity coefficient (kz) and the consideration of a finite depth. The study uses an idealized analytical model with the classical Ekman model as a starting point. It has been demonstrated that, for a strongly stratified profile, when the depth is not considered infinite, the Ekman transport tends to a direction smaller than 80°. This differs from the classical Ekman model, which proposes an approximate angle of 90°. Considering the modified model, it was also found that the surface current deviation is smaller than 40°, which differs from the 45° proposed by the classical model. In addition, it was determined that for ocean depths smaller than 180 m, the maximum velocity does not occur at the water surface, as in the classical model, but at deeper levels.
Viviana Santander-Rodríguez, Manuel Díez-Minguito, Mayken Espinoza-Andaluz (Wed, 28 Sep 2022 00:00:00 GMT)

Atomistic modelling of the channeling process with radiation reaction force included
https://scholar.archive.org/work/hkp6lven3vb3ljcruuje5hvzaq
A methodology is developed that incorporates the radiation reaction force into the relativistic molecular dynamics framework implemented in the MBN Explorer software package. The force leads to a gradual decrease in the projectile's energy E due to radiation emission. This effect is especially strong for ultra-relativistic projectiles passing through oriented crystals, where they experience the action of strong electrostatic fields, as has been shown in recent experiments. A case study has been carried out for the initial approbation of the developed methodology. Simulations of the processes of planar channeling and photon emission have been performed for 150 GeV positrons in a 200 micron thick oriented Si(110) single crystal. Several regimes for the decrease in E have been established and characterized. Further steps in developing the code to include the necessary quantum corrections are identified, and possible algorithmic modifications are proposed.
Gennady B. Sushko, Andrei V. Korol, Andrey V. Solov'yov (Wed, 28 Sep 2022 00:00:00 GMT)

Mathematical Components
https://scholar.archive.org/work/ahuebtxoqbcrbebz5rb2ulla4q
Mathematical Components is the name of a library of formalized mathematics for the Coq system. It covers a variety of topics, from the theory of basic data structures (e.g., numbers, lists, finite sets) to advanced results in various flavors of algebra. This library constitutes the infrastructure for the machine-checked proofs of the Four Color Theorem and of the Odd Order Theorem. This book's reason for existence is to break down the barriers to entry. While there are several books covering the usage of the Coq system and the theory it is based on, the Mathematical Components library is built in an unconventional way. As a consequence, this book provides a non-standard presentation of Coq, putting upfront the formalization choices and the proof style that are the pillars of the library. This book targets two classes of readers. On the one hand, newcomers, even the more mathematically inclined ones, find a soft introduction to the programming language of Coq, Gallina, and the SSReflect proof language. On the other hand, accustomed Coq users find a substantial account of the formalization style that made the Mathematical Components library possible.
Assia Mahboubi, Enrico Tassi (Wed, 28 Sep 2022 00:00:00 GMT)

Efficient and Near-Optimal Online Portfolio Selection
https://scholar.archive.org/work/xucynznjczbvppsboc3eksflly
In the problem of online portfolio selection as formulated by Cover (1991), the trader repeatedly distributes her capital over d assets in each of T > 1 rounds, with the goal of maximizing the total return. Cover proposed an algorithm, termed Universal Portfolios, that performs nearly as well as the best (in hindsight) static assignment of a portfolio, with an O(d log(T)) regret in terms of the logarithmic return. Without imposing any restrictions on the market, this guarantee is known to be worst-case optimal, and no other algorithm attaining it has been discovered so far. Unfortunately, Cover's algorithm crucially relies on computing a certain d-dimensional integral which must be approximated in any implementation; this results in a prohibitive Õ(d^4(T+d)^14) per-round runtime for the fastest known implementation, due to Kalai and Vempala (2002). We propose an algorithm for online portfolio selection that admits essentially the same regret guarantee as Universal Portfolios – up to a constant factor and replacement of log(T) with log(T+d) – yet has a drastically reduced runtime of Õ(d^2(T+d)) per round. The selected portfolio minimizes the current logarithmic loss regularized by the log-determinant of its Hessian – equivalently, the hybrid logarithmic-volumetric barrier of the polytope specified by the asset return vectors. As such, our work reveals surprising connections of online portfolio selection with two classical topics in optimization theory: cutting-plane and interior-point algorithms.
Rémi Jézéquel, Dmitrii M. Ostrovskii, Pierre Gaillard (Wed, 28 Sep 2022 00:00:00 GMT)

Bounded-error constrained state estimation in presence of sporadic measurements
https://scholar.archive.org/work/zhm7lrbpanfzbcjacnst7euo2y
This contribution proposes a recursive set-membership method for the ellipsoidal state characterization of discrete-time linear time-varying models with additive unknown disturbance vectors. These disturbances are bounded by possibly degenerate zonotopes and polytopes, impacting, respectively, the state evolution equation and the sporadic measurement vectors, which are expressed as linear inequality and equality constraints on the state vector. New algorithms are designed considering the unprecedented fact that, due to the equality constraints, the shape matrix of the ellipsoid characterizing all possible values of the state vector is non-invertible. The two main size-minimizing criteria (volume and sum of squared axis lengths) are examined in the time update step and also in the observation update, in addition to a third criterion, minimizing some error norm and ensuring the input-to-state stability of the estimation error.
Yasmina Becis-Aubry (Wed, 28 Sep 2022 00:00:00 GMT)

Schrödinger's Cat
https://scholar.archive.org/work/dli45c5uhneaze3m2c2bdtfn2i
The basic idea here is that observation (or one's experience) is fundamental and the 'atomic world' is postulated as the source of such observation. Once this source has been inferred to exist one may attempt to explicitly derive its structure in such a way that the observation itself can be reproduced. And so here is a purely quantum mechanical model of observation coupled to its supposed source, and the observation itself is realised as a projection of this quantum system.
Matthew F. Brown (Tue, 27 Sep 2022 00:00:00 GMT)

Persistent homology based goodness-of-fit tests for spatial tessellations
https://scholar.archive.org/work/3ogyb6yebng5nhf2fgixp275ki
Motivated by the rapidly increasing relevance of virtual material design in the domain of materials science, it has become essential to assess whether topological properties of stochastic models for a spatial tessellation are in accordance with a given dataset. Recently, tools from topological data analysis, such as the persistence diagram, have made it possible to reach profound insights in a variety of application contexts. In this work, we establish the asymptotic normality of a variety of test statistics derived from a tessellation-adapted refinement of the persistence diagram. Since in applications it is common to work with tessellation data subject to interactions, we establish our main results for Voronoi and Laguerre tessellations whose generators form a Gibbs point process. We elucidate how these conceptual results can be used to derive goodness-of-fit tests, and then investigate their power in a simulation study. Finally, we apply our testing methodology to a tessellation describing real foam data.
Christian Hirsch, Johannes Krebs, Claudia Redenbach (Tue, 27 Sep 2022 00:00:00 GMT)

Scaling limit of the disordered generalized Poland–Scheraga model for DNA denaturation
https://scholar.archive.org/work/rr2zvqn67jf6xixbevjlhvcjhe
The Poland–Scheraga model, introduced in the 1970s, is a reference model to describe the denaturation transition of DNA. More recently, it has been generalized in order to allow for asymmetry in the strand lengths and in the formation of loops: the mathematical representation is based on a bivariate renewal process, that describes the pairs of bases that bond together. In this paper, we consider a disordered version of the model, in which the two strands interact via a potential β V(ω̂_i,ω̅_j)+h when the i-th monomer of the first strand and the j-th monomer of the second strand meet. Here, h∈ℝ is a homogeneous pinning parameter, (ω̂_i)_i≥ 1 and (ω̅_j)_j≥ 1 are two sequences of i.i.d. random variables attached to each DNA strand, V(·,·) is an interaction function and β>0 is the disorder intensity. Our main result finds some condition on the underlying bivariate renewal so that, if one takes β,h↓0 at some appropriate (explicit) rate as the lengths of the strands go to infinity, the partition function of the model admits a non-trivial, i.e. disordered, scaling limit. This is known as an intermediate disorder regime and is linked to the question of disorder relevance for the denaturation transition. Interestingly and surprisingly, the rate at which one has to take β↓0 depends on the interaction function V(·,·) and on the distribution of (ω̂_i)_i≥ 1, (ω̅_j)_j≥ 1. On the other hand, the intermediate disorder limit of the partition function, when it exists, is universal: it is expressed as a chaos expansion of iterated integrals against a Gaussian process ℳ, which arises as the scaling limit of the field (e^β V(ω̂_i,ω̅_j))_i,j≥ 0 and exhibits strong correlations on lines and columns.
Quentin Berger, Alexandre Legrand (Tue, 27 Sep 2022 00:00:00 GMT)

High order approximations of the Cox-Ingersoll-Ross process semigroup using random grids
https://scholar.archive.org/work/flqoi4lyg5g77np5az3d3d3tjy
We present new high-order approximation schemes for the Cox-Ingersoll-Ross (CIR) process, obtained by using a recent technique developed by Alfonsi and Bally (2021) for the approximation of semigroups. The idea consists in using a suitable combination of discretization schemes calculated on different random grids to increase the order of convergence. This technique, coupled with the second-order scheme proposed by Alfonsi (2010) for the CIR, leads to weak approximations of order 2k, for all k∈ℕ^*. Despite the singularity of the square-root volatility coefficient, we rigorously show this order of convergence under some restrictions on the volatility parameters. We illustrate numerically the convergence of these approximations for the CIR process and for the Heston stochastic volatility model, and show the computational time gains they provide.
Aurélien Alfonsi, Edoardo Lombardo (Tue, 27 Sep 2022 00:00:00 GMT)

Survey Descent: A Multipoint Generalization of Gradient Descent for Nonsmooth Optimization
https://scholar.archive.org/work/73uii2nbh5gypmqocm3omqz7re
For strongly convex objectives that are smooth, the classical theory of gradient descent ensures linear convergence relative to the number of gradient evaluations. An analogous nonsmooth theory is challenging. Even when the objective is smooth at every iterate, the corresponding local models are unstable, and the number of cutting planes invoked by traditional remedies is difficult to bound, leading to convergence guarantees that are sublinear relative to the cumulative number of gradient evaluations. We instead propose a multipoint generalization of the gradient descent iteration for local optimization. While designed with general objectives in mind, we are motivated by a "max-of-smooth" model that captures the subdifferential dimension at optimality. We prove linear convergence when the objective is itself max-of-smooth, and experiments suggest a more general phenomenon.
X.Y. Han, Adrian S. Lewis (Tue, 27 Sep 2022 00:00:00 GMT)

Towards Quantum Advantage on Noisy Quantum Computers
https://scholar.archive.org/work/vy7q3xiryfgdregta54odzx6si
Topological data analysis (TDA) is a powerful technique for extracting complex and valuable shape-related summaries of high-dimensional data. However, the computational demands of classical TDA algorithms are exorbitant, and quickly become impractical for high-order characteristics. Quantum computing promises exponential speedup for certain problems. Yet, many existing quantum algorithms with notable asymptotic speedups require a degree of fault tolerance that is currently unavailable. In this paper, we present NISQ-TDA, the first fully implemented end-to-end quantum machine learning algorithm needing only linear circuit depth, that is applicable to non-handcrafted high-dimensional classical data, with potential speedup under stringent conditions. The algorithm neither suffers from the data-loading problem nor does it need to store the input data on the quantum computer explicitly. Our approach includes three key innovations: (a) an efficient realization of the full boundary operator as a sum of Pauli operators; (b) a quantum rejection sampling and projection approach to restrict a uniform superposition to the simplices of the desired order in the complex; and (c) a stochastic rank estimation method to estimate the topological features in the form of approximate Betti numbers. We present theoretical results that establish additive error guarantees for NISQ-TDA, and the circuit and computational time and depth complexities for exponentially scaled output estimates, up to the error tolerance. The algorithm was successfully executed on quantum computing devices, as well as on noisy quantum simulators, applied to small datasets. Preliminary empirical results suggest that the algorithm is robust to noise.
Ismail Yunus Akhalwaya, Shashanka Ubaru, Kenneth L. Clarkson, Mark S. Squillante, Vishnu Jejjala, Yang-Hui He, Kugendran Naidoo, Vasileios Kalantzis, Lior Horesh (Tue, 27 Sep 2022 00:00:00 GMT)

The Consensus Problem in Polities of Agents with Dissimilar Cognitive Architectures
https://scholar.archive.org/work/nr2nshqksjba7cqsdmxjrkq4ru
Agents interacting with their environments, machine or otherwise, arrive at decisions based on their incomplete access to data and their particular cognitive architecture, including data sampling frequency and memory storage limitations. In particular, the same data streams, sampled and stored differently, may cause agents to arrive at different conclusions and to take different actions. This phenomenon has a drastic impact on polities—populations of agents predicated on the sharing of information. We show that, even under ideal conditions, polities consisting of epistemic agents with heterogeneous cognitive architectures might not achieve consensus concerning what conclusions to draw from datastreams. Transfer entropy applied to a toy model of a polity is analyzed to showcase this effect when the dynamics of the environment is known. As an illustration where the dynamics is not known, we examine empirical data streams relevant to climate and show that the consensus problem manifests.
Damian Radosław Sowinski, Jonathan Carroll-Nellenback, Jeremy DeSilva, Adam Frank, Gourab Ghoshal, Marcelo Gleiser (Tue, 27 Sep 2022 00:00:00 GMT)

An Efficient Computational Technique for the Analysis of Telegraph Equation
https://scholar.archive.org/work/cvht2ch4ybgg3lj76rxdpddmqe
The Telegraph equation has drawn much attention due to its recent variety of applications in different areas of the communication system. Various methods have been developed to solve the Telegraph equation so far. In this research paper, we have mathematically formulated a derivation of the Telegraph equation for a section of a transmission line in terms of the associated voltage and current. The obtained mathematical equation has then been solved numerically with COMSOL Multiphysics. We have then numerically analyzed the parametric behavior of the Telegraph equation. The analysis starts by allowing both damping coefficients to vary, keeping the transmission velocity fixed, and observing the pulse shape at different time slots. We have then investigated the deformation of the pulse caused by the gradual increase of transmission velocity for varying damping coefficients at the intended discrete time slots. Finally, we analyzed the behavior of the associated voltage pattern for those variations with respect to the corresponding distance along the Telegraph wire. We have observed that changes in the damping coefficients have a gradual impact on the associated voltage of the Telegraph equation, which is more conspicuous at higher time slots. Transmission velocity is found to be the most influential parameter of the Telegraph equation, controlling the deformation of the pulse height, which is the cardinal part of the inquiry.
Selim Hussen, Mahtab Uddin, Md. Rezaul Karim (Tue, 27 Sep 2022 00:00:00 GMT)
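The pulse damping and deformation described in this last abstract can be reproduced qualitatively with a small explicit finite-difference sketch of a generic telegraph equation u_tt + 2k·u_t + m·u = v²·u_xx. The parameter values, initial pulse, and boundary conditions below are assumptions for illustration; this is not the paper's COMSOL model:

```python
import numpy as np

# Explicit finite-difference sketch for a generic telegraph equation
#   u_tt + 2k u_t + m u = v^2 u_xx
# on [0, 1] with clamped (zero) ends. k, m, v are assumed damping,
# reaction, and transmission-velocity parameters.
def telegraph_fd(k=0.5, m=1.0, v=1.0, nx=101, dt=5e-3, steps=200):
    dx = 1.0 / (nx - 1)                       # CFL: v*dt/dx = 0.5 here, stable
    x = np.linspace(0.0, 1.0, nx)
    u_prev = np.exp(-200 * (x - 0.5) ** 2)    # initial Gaussian voltage pulse
    u = u_prev.copy()                         # zero initial velocity
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
        # Leapfrog update solved for u^{n+1} with the damping term
        # (u^{n+1} - u^{n-1})/(2 dt) folded into both sides:
        rhs = 2 * u - (1 - k * dt) * u_prev + dt**2 * (v**2 * lap - m * u)
        u_new = rhs / (1 + k * dt)
        u_new[0] = u_new[-1] = 0.0            # clamped line ends
        u_prev, u = u, u_new
    return x, u

x, u = telegraph_fd()
# The damped pulse height drops below its initial value of 1.0, consistent
# with the damping behaviour the abstract describes.
```

Increasing v (within the CFL limit) or the damping coefficient k changes the pulse shape and height, mirroring the parametric study in the abstract.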