1,316 Hits in 8.1 sec

Relevant Applications of Generative Adversarial Networks in Drug Design and Discovery: Molecular De Novo Design, Dimensionality Reduction, and De Novo Peptide and Protein Design

Eugene Lin, Chieh-Hsin Lin, Hsien-Yuan Lane
2020 Molecules  
A growing body of evidence now suggests that artificial intelligence and machine learning techniques can serve as an indispensable foundation for the process of drug design and discovery.  ...  Firstly, we review drug design and discovery studies that leverage various GAN techniques to assess one main application such as molecular de novo design in drug design and discovery.  ...  shown to outperform the long short-term memory unit [65] .  ... 
doi:10.3390/molecules25143250 pmid:32708785 fatcat:rrik322g6vbetaubwjb3rtvajm

Unsupervised Anomaly Video Detection via a Double-Flow ConvLSTM Variational Autoencoder

Lin Wang, Haishu Tan, Fuqiang Zhou, Wangxia Zuo, Pengfei Sun
2022 IEEE Access  
To solve these problems, in this paper, we present a double-flow convolutional long short-term memory variational autoencoder (DF-ConvLSTM-VAE) to model the probabilistic distribution of the normal video  ...  in an unsupervised learning scheme, and to reconstruct videos without anomaly objects for anomaly video detection.  ...  ACKNOWLEDGMENT The authors would like to thank the editors and anonymous reviewers for their constructive and valuable comments and suggestions.  ... 
doi:10.1109/access.2022.3165977 fatcat:ni57gcuccvhb3mp5gtd57ojze4
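
The entry above describes a reconstruction-based video anomaly detector. As a rough illustration of that general idea only (not the paper's double-flow ConvLSTM-VAE), the sketch below scores single frames with a small convolutional VAE; the frame size, channel widths, and latent dimension are arbitrary assumptions.

```python
# Minimal sketch, assuming 1x64x64 frames: a per-frame convolutional VAE whose
# reconstruction error plus KL term serves as an anomaly score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 16 * 16)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        x_hat = self.dec(self.fc_dec(z).view(-1, 64, 16, 16))
        return x_hat, mu, logvar

def anomaly_score(model, frame):
    """Per-frame score: reconstruction error plus KL divergence from the prior."""
    x_hat, mu, logvar = model(frame)
    rec = F.mse_loss(x_hat, frame, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (rec + kl).item()

model = ConvVAE()
print(anomaly_score(model, torch.rand(1, 1, 64, 64)))  # dummy frame
```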

Attention Autoencoder for Generative Latent Representational Learning in Anomaly Detection

Ariyo Oluwasanmi, Muhammad Umar Aftab, Edward Baagyere, Zhiguang Qin, Muhammad Ahmad, Manuel Mazzara
2021 Sensors  
Additionally, a variational autoencoder (VAE) and a long short-term memory (LSTM) network are designed to learn the Gaussian distribution of the generative reconstruction and time-series sequential data  ...  The three proposed models include an attention autoencoder that maps input data to a lower-dimensional latent representation with maximum feature retention, and a reconstruction decoder with minimum remodeling  ...  The final model designs the recurrent neural network's (RNN) long short-term memory (LSTM) architecture to process the input data as a time-series sequence.  ... 
doi:10.3390/s22010123 pmid:35009666 pmcid:PMC8747546 fatcat:glnwwinczjd23hbmyy2gtvpkwm
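
As a hedged, minimal illustration of only the recurrent reconstruction part of this entry (the attention mechanism and the variational component are omitted, and all dimensions are assumptions), the sketch below shows an LSTM autoencoder whose per-window reconstruction error acts as an anomaly score.

```python
# Illustrative sketch only: an LSTM autoencoder that reconstructs a time-series
# window; windows with large reconstruction error are flagged as anomalous.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)            # h[-1]: summary of the whole window
        seq_len = x.size(1)
        # Repeat the latent summary at every step and decode it back to a sequence.
        z = h[-1].unsqueeze(1).repeat(1, seq_len, 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

model = LSTMAutoencoder()
window = torch.randn(4, 30, 8)                     # 4 windows of 30 steps, 8 sensors
recon = model(window)
score = ((recon - window) ** 2).mean(dim=(1, 2))   # per-window anomaly score
print(score)
```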

An Overview of Deep Learning Based Methods for Unsupervised and Semi-Supervised Anomaly Detection in Videos

B. Kiran, Dilip Thomas, Ranjith Parakkal
2018 Journal of Imaging  
This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection.  ...  We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection.  ...  Acknowledgments: The authors would like to thank Benjamin Crouzier for his help in proof reading the manuscript, and Y. Senthil Kumar (Valeo) for helpful suggestions.  ... 
doi:10.3390/jimaging4020036 fatcat:za52zspzjbewbakdordavpatvq

An overview of deep learning based methods for unsupervised and semi-supervised anomaly detection in videos [article]

B Ravi Kiran, Dilip Mathew Thomas, Ranjith Parakkal
2018 arXiv   pre-print
This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection.  ...  We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection.  ...  (GANs), Long Short Term memory networks (LSTMs), and others.  ... 
arXiv:1801.03149v2 fatcat:u6qz7upzfbdgfaxvihpl55kdhi

Scalable Recollections for Continual Lifelong Learning

Matthew Riemer, Tim Klinger, Djallel Bouneffouf, Michele Franceschini
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
and memory.  ...  In particular we consider the case where typical experiences are O(n) bits and memories are limited to O(k) bits for k  ...  In this theory, updated in (Kumaran, Hassabis, and McClelland 2016), the hippocampus is responsible for fast learning, providing a very plastic representation for retaining short term memories.  ... 
doi:10.1609/aaai.v33i01.33011352 fatcat:wnxdvhxyojf4xepljtua3565j4

DEEP LEARNING BASED HYBRID APPROACH OF DETECTING FRAUDULENT TRANSACTIONS

MIN JONG CHEON, DONG HEE LEE, HAN SEON JOO, OOK LEE
2021 Zenodo  
Even though our model has a similar accuracy score compared to other models and does not implement the Variational Autoencoder for feature selection, this model could potentially be utilized as an effective  ...  Through constructing a new model with a hybrid approach of deep learning and machine learning, which is composed of a Bi-LSTM-Autoencoder and Isolation Forest, we successfully detected fraudulent transactions  ...  We combined them with the original input dataset, then compared the result of our model to different machine learning models such as Isolation Forest, Local Outlier and Long Short-Term Memory Autoencoder  ... 
doi:10.5281/zenodo.5393028 fatcat:c4vqx7tk6rfbzk7qx5km5d2uf4
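
The hybrid described above feeds learned sequence representations to an Isolation Forest. The sketch below illustrates that combination in a simplified form: a Bi-LSTM encoder (standing in for the paper's Bi-LSTM-Autoencoder) produces latent codes on which scikit-learn's Isolation Forest is fit; the feature count, window length, and contamination rate are assumptions.

```python
# Rough sketch of the hybrid idea with dummy data: Bi-LSTM latent codes -> Isolation Forest.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import IsolationForest

class BiLSTMEncoder(nn.Module):
    def __init__(self, n_features=10, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        _, (h, _) = self.lstm(x)                # h: (2, batch, hidden), one per direction
        return torch.cat([h[0], h[1]], dim=1)   # (batch, 2*hidden) latent code

encoder = BiLSTMEncoder()
sequences = torch.randn(256, 20, 10)            # 256 transaction windows (dummy data)
with torch.no_grad():
    codes = encoder(sequences).numpy()

# Isolation Forest isolates outliers in the latent space; label -1 marks suspected fraud.
clf = IsolationForest(contamination=0.01, random_state=0).fit(codes)
labels = clf.predict(codes)
print(np.sum(labels == -1), "flagged as anomalous")
```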

Multiphase flow applications of nonintrusive reduced-order models with Gaussian process emulation

Themistoklis Botsas, Indranil Pan, Lachlan R. Mason, Omar K. Matar
2022 Data-Centric Engineering  
long short-term memory networks, for the interpolation.  ...  In previous work, we presented a ROM analysis framework that coupled compression techniques, such as autoencoders, with Gaussian process regression in the latent space.  ...  Replication data and code can be found in the Github repository: https://github.com/themisbo/ ROM_applications.git.  ... 
doi:10.1017/dce.2022.19 fatcat:gfwnu6j3rfbwhbzfc5zmqk5ioy
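
The snippet above summarizes the ROM recipe: compress snapshots, emulate the latent coordinates with a Gaussian process, and decode. The sketch below reproduces that pipeline in miniature with PCA standing in for the autoencoder (the authors' actual code lives in the linked repository); the synthetic snapshot data, parameter, and kernel settings are assumptions.

```python
# Miniature nonintrusive ROM sketch: compression + Gaussian-process emulation in latent space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Dummy snapshot matrix: 50 simulations (rows) of a 1000-cell field, parameterized by mu.
mu = np.linspace(0.0, 1.0, 50)[:, None]
snapshots = np.sin(10 * mu * np.linspace(0, 1, 1000)[None, :])

# 1) Compression to a low-dimensional latent space (autoencoder replaced by PCA here).
pca = PCA(n_components=5).fit(snapshots)
latent = pca.transform(snapshots)

# 2) Gaussian-process emulation of the parameter -> latent-coordinate map.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(mu, latent)

# 3) Prediction at an unseen parameter, decoded back to the full field.
latent_new, latent_std = gp.predict(np.array([[0.37]]), return_std=True)
field_new = pca.inverse_transform(latent_new)
print(field_new.shape, latent_std)    # the GP provides uncertainty estimates for free
```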

Memory-Augmented Insider Threat Detection with Temporal-Spatial Fusion

Dongyang Li, Lin Yang, Hongguang Zhang, Xiaolei Wang, Linru Ma, Robertas Damaševičius
2022 Security and Communication Networks  
Moreover, it introduces the memory-augmented network into autoencoder to enlarge the reconstruction error of abnormal samples, thereby reducing the false negative rate.  ...  yet long-lasting insider threats, and reduce the possibility of false positives.  ...  Acknowledgments: This research was supported by a research grant from the National Science Foundation of China under Grant nos. 61772271 and 62106282.  ... 
doi:10.1155/2022/6418420 fatcat:qwp4j6ms6fhz5p7kp6a3sthtfq
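
The reconstruction-error-enlarging idea can be illustrated with a simplified MemAE-style memory module (a generic sketch, not the paper's temporal-spatial fusion model): the latent code is rebuilt as an attention-weighted mixture of learned "normal" prototypes, so abnormal samples reconstruct poorly. Layer sizes and the memory size are assumptions.

```python
# Sketch of a memory-augmented autoencoder bottleneck with assumed dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    def __init__(self, n_items=50, dim=64):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_items, dim))  # learned "normal" prototypes

    def forward(self, z):                             # z: (batch, dim) latent codes
        # Cosine-similarity addressing followed by softmax attention over memory items.
        attn = F.softmax(F.linear(F.normalize(z, dim=1),
                                  F.normalize(self.memory, dim=1)), dim=1)
        return attn @ self.memory                     # codes rebuilt from memory items

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
memory = MemoryModule()
decoder = nn.Sequential(nn.Linear(64, 128))

x = torch.randn(16, 128)                              # dummy behaviour features
x_hat = decoder(memory(encoder(x)))
anomaly_score = ((x_hat - x) ** 2).mean(dim=1)        # larger for abnormal samples
print(anomaly_score.shape)
```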

Learning Latent Representation of Freeway Traffic Situations from Occupancy Grid Pictures Using Variational Autoencoder

Olivér Rákos, Tamás Bécsi, Szilárd Aradi, Péter Gáspár
2021 Energies  
The planning layer deals with the short- and long-term situation prediction, which are crucial for intelligent vehicles.  ...  The method uses the structured data of surrounding vehicles and transforms it to an occupancy grid which a Convolutional Variational Autoencoder (CVAE) processes.  ...  Deep CNN networks utilize long short-term memories to seize the data's static and dynamic features and focus on the dynamic part for prediction.  ... 
doi:10.3390/en14175232 fatcat:aqzdcrg2qbex5e7bzcyugztj2i

A deep learning framework for financial time series using stacked autoencoders and long-short term memory

Wei Bao, Jun Yue, Yulei Rao, Boris Podobnik
2017 PLoS ONE  
A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLoS ONE 12(7): e0180944. https://doi.  ...  This study presents a novel deep learning framework where wavelet transforms (WT), stacked autoencoders (SAEs) and long-short term memory (LSTM) are combined for stock price forecasting.  ...  Long-short term memory Long short-term memory is one of the many variations of recurrent neural network (RNN) architecture [20] .  ... 
doi:10.1371/journal.pone.0180944 pmid:28708865 pmcid:PMC5510866 fatcat:unjizwkzlfaq5g6qxijhaatfky
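
As a compressed, hedged illustration of the WT-plus-LSTM portion of that pipeline (the stacked-autoencoder stage is omitted, and the synthetic price series, wavelet settings, and window length are assumptions), the sketch below denoises a series with a discrete wavelet transform and trains a one-layer LSTM to predict the next value.

```python
# Sketch: wavelet denoising with PyWavelets, then a small LSTM forecaster on sliding windows.
import numpy as np
import pywt
import torch
import torch.nn as nn

prices = np.cumsum(np.random.randn(512)).astype(np.float32)   # dummy price series

# 1) Wavelet transform: soft-threshold the detail coefficients to denoise.
coeffs = pywt.wavedec(prices, "db4", level=3)
coeffs[1:] = [pywt.threshold(c, value=0.5, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(prices)].astype(np.float32)

# 2) Sliding windows of 30 past values -> next value.
win = 30
X = np.stack([denoised[i : i + win] for i in range(len(denoised) - win)])
y = denoised[win:]
X = torch.from_numpy(X).unsqueeze(-1)                          # (N, 30, 1)
y = torch.from_numpy(y).unsqueeze(-1)                          # (N, 1)

# 3) One-layer LSTM forecaster trained with MSE.
class Forecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                           # predict from the last step

model = Forecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                             # a few demo epochs
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))
```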

Raw Music from Free Movements

Daniel Bisig, Kıvanç Tatar
2021 Zenodo  
The architecture combines a sequence-to-sequence model generating audio encodings and an adversarial autoencoder that generates raw audio from audio encodings.  ...  Experiments have been conducted with two datasets: a dancer improvising freely to a given music, and music created through simple movement sonification. The paper presents preliminary results.  ...  The sequence encoder consists of three recurrent layers with 512 Long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) units each.  ... 
doi:10.5281/zenodo.5137952 fatcat:tcvycitkcbcarjb3nt3fxv6fzm
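
The snippet states the sequence encoder's shape explicitly: three recurrent layers of 512 LSTM units. The sketch below mirrors just that encoder in PyTorch; the movement-feature dimensionality and the rest of the sequence-to-sequence / adversarial-autoencoder pipeline are assumptions.

```python
# Stacked-LSTM sequence encoder matching the stated shape (3 layers x 512 units).
import torch
import torch.nn as nn

class SequenceEncoder(nn.Module):
    def __init__(self, n_movement_features=63, hidden=512, n_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(n_movement_features, hidden,
                            num_layers=n_layers, batch_first=True)

    def forward(self, motion):                 # motion: (batch, time, features)
        out, _ = self.lstm(motion)
        return out                             # per-step encodings fed to a decoder

encoder = SequenceEncoder()
motion_clip = torch.randn(2, 120, 63)          # 2 clips, 120 frames of pose features
print(encoder(motion_clip).shape)              # -> torch.Size([2, 120, 512])
```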

Deep Learning for Computer Vision: A Brief Review

Athanasios Voulodimos, Nikolaos Doulamis, Anastasios Doulamis, Eftychios Protopapadakis
2018 Computational Intelligence and Neuroscience  
Belief Networks, and Stacked Denoising Autoencoders.  ...  A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition  ...  Acknowledgments This research is implemented through IKY scholarships programme and cofinanced by the European Union (European Social Fund-ESF) and Greek national funds through the action titled "Reinforcement  ... 
doi:10.1155/2018/7068349 pmid:29487619 pmcid:PMC5816885 fatcat:yeawpj32onfutegmkqpx4p6tsa

Expressing uncertainty in neural networks for production systems

Samim Ahmad Multaheb, Bernd Zimmering, Oliver Niggemann
2021 at - Automatisierungstechnik  
This is a prerequisite for the use of such networks in closed control loops and in automation systems.  ...  Acknowledgment: Special thanks go to Carlo Voss and Maurice Thomas for their support in preparing the models and data pipelines.  ...  For our approach, we use a Long Short-Term Memory network (LSTM), which is a common type of RNN [16] .  ... 
doi:10.1515/auto-2020-0122 dblp:journals/at/MultahebZN21 fatcat:euv5rs3gdje6bm7o4dy5qvv4ee
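
One standard way to let an LSTM express predictive uncertainty, shown below as a generic sketch (not necessarily the construction used in this paper), is Monte Carlo dropout: dropout stays active at inference time, and the spread over repeated stochastic forward passes serves as the uncertainty estimate. Input sizes are assumptions.

```python
# Monte Carlo dropout on an LSTM regressor: repeated stochastic passes give mean and spread.
import torch
import torch.nn as nn

class MCDropoutLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=64, p=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(self.drop(out[:, -1]))   # dropout applied before the output head

model = MCDropoutLSTM()
model.train()                                      # keep dropout stochastic on purpose
x = torch.randn(8, 50, 4)                          # 8 sensor windows of 50 steps
samples = torch.stack([model(x) for _ in range(100)])  # 100 stochastic passes
mean, std = samples.mean(dim=0), samples.std(dim=0)    # prediction and its uncertainty
print(mean.shape, std.shape)
```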

Traffic Request Generation through a Variational Auto Encoder Approach

Stefano Chiesa, Sergio Taraglio
2022 Computers  
Here, a variational autoencoder architecture has been trained on a floating car dataset in order to grasp the statistical features of the traffic demand in the city of Rome.  ...  The generated trajectories are compared with those in the dataset. The resulting reconstructed synthetic data are employed to compute the traffic fluxes and geographic distribution of parked cars.  ...  Acknowledgments: The authors sincerely thank the editor and the anonymous reviewers for constructive comments and suggestions to clarify the content of the paper.  ... 
doi:10.3390/computers11050071 fatcat:ow6lfsq6bbhsxnnmrcs4lsmki4
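
The generation step described above can be sketched as sampling the VAE prior and decoding, as below. Only the decoder is shown; its architecture, the trajectory representation (20 latitude/longitude points), and the latent size are assumptions, and in practice the decoder would come from a trained VAE.

```python
# Generating synthetic trajectories by decoding samples from the standard-normal prior.
import torch
import torch.nn as nn

class TrajectoryDecoder(nn.Module):
    def __init__(self, latent_dim=16, points=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, points * 2),            # (lat, lon) per trajectory point
        )
        self.points = points

    def forward(self, z):
        return self.net(z).view(-1, self.points, 2)

decoder = TrajectoryDecoder()                       # in practice: a trained VAE decoder
z = torch.randn(1000, 16)                           # samples from the N(0, I) prior
synthetic_trips = decoder(z)                        # (1000, 20, 2) synthetic trajectories
print(synthetic_trips.shape)
```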
Showing results 1 — 15 out of 1,316 results