6,879 Hits in 5.4 sec

Mitigating Divergence of Latent Factors via Dual Ascent for Low Latency Event Prediction Models [article]

Alex Shtoff, Yair Koren
2021 arXiv   pre-print
To address these behaviors, fresh models are highly important, and to achieve this (and for several other reasons) incremental training on small chunks of past events is often employed.  ...  These behaviors and algorithmic optimizations occasionally cause model parameters to grow uncontrollably large, or diverge.  ...  The OFFSET algorithm includes an adaptive online hyperparameter tuning mechanism [3].  ...
arXiv:2111.07866v1 fatcat:knzsocdlivggzoislvxfdd2fce
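For intuition, here is a minimal, hypothetical sketch of dual ascent used to keep incrementally trained parameters from diverging, via a Lagrangian penalty on the norm constraint ||w||^2 <= C. It is a generic illustration on synthetic click data, not the paper's OFFSET algorithm; all names and constants are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    d, C, eta, rho = 10, 1.0, 0.05, 0.01
    w_true = rng.normal(size=d)
    w = np.zeros(d)   # model parameters, trained incrementally
    lam = 0.0         # dual variable for the constraint ||w||^2 <= C

    for step in range(2000):
        x = rng.normal(size=d)                     # features of one event
        y = 1.0 if x @ w_true > 0 else 0.0         # synthetic click label
        p = 1.0 / (1.0 + np.exp(-x @ w))           # predicted probability
        grad = (p - y) * x + 2.0 * lam * w         # loss gradient + constraint term
        w -= eta * grad                            # primal (gradient) step
        lam = max(0.0, lam + rho * (w @ w - C))    # dual ascent step

    print("||w||^2 =", w @ w, "lambda =", lam)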

Event-Driven Source Traffic Prediction in Machine-Type Communications Using LSTM Networks

Thulitha Senevirathna, Bathiya Thennakoon, Tharindu Sankalpa, Chatura Seneviratne, Samad Ali, Nandana Rajatheva
2020 GLOBECOM 2020 - 2020 IEEE Global Communications Conference  
Knowledge of such a causal relationship can enable event-driven traffic prediction.  ...  In this paper, a long short-term memory (LSTM) based deep learning approach is proposed for event-driven source traffic prediction.  ...  [Table I: Hyper-parameter ranges used and best model parameters.]  ...
doi:10.1109/globecom42002.2020.9322417 fatcat:npyrpxsohjhp7kb43ccgtqoqay
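As a rough illustration of the approach the snippet describes, the following is a self-contained PyTorch sketch of an LSTM next-step predictor trained on a synthetic "traffic" series; the layer sizes, window length, and data are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn

    class TrafficLSTM(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                 # x: (batch, seq_len, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1, :])   # predict the next value

    series = torch.sin(torch.linspace(0, 20, 500)).unsqueeze(-1)  # fake traffic counts
    X = torch.stack([series[i:i + 20] for i in range(400)])       # sliding windows
    y = torch.stack([series[i + 20] for i in range(400)])         # next-step targets

    model = TrafficLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for epoch in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()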

Scalable hands-free transfer learning for online advertising

Brian Dalessandro, Daizhuo Chen, Troy Raeder, Claudia Perlich, Melinda Han Williams, Foster Provost
2014 Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '14  
This paper presents a combination of strategies, deployed by the online advertising firm Dstillery, for learning many models from extremely high-dimensional data efficiently and without human intervention  ...  (ii) A new update rule for automatic learning rate adaptation, to support learning from sparse, high-dimensional data, as well as the integration with adaptive regularization.  ...  Recent developments in adaptive learning-rate schedules [13] and adaptive regularization [12] allow for incremental training of linear models in millions of dimensions without exhaustive hyper-parameter  ...
doi:10.1145/2623330.2623349 dblp:conf/kdd/DalessandroCRPWP14 fatcat:k356kiceefakthmxkz4wxbdvn4
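The adaptive learning-rate idea referenced here (in the spirit of AdaGrad-style per-feature schedules) can be sketched as follows for sparse logistic regression; this is a generic illustration, not Dstillery's deployed update rule.

    import numpy as np

    base_lr = 0.1
    w = {}    # sparse weights: feature index -> value; dimensionality can be huge
    g2 = {}   # accumulated squared gradients per feature

    def update(active, y):
        """active: feature indices present in one example; y: label in {0, 1}."""
        z = sum(w.get(j, 0.0) for j in active)
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - y                            # gradient wrt each active weight
        for j in active:
            g2[j] = g2.get(j, 0.0) + g * g   # per-feature gradient history
            w[j] = w.get(j, 0.0) - base_lr * g / (np.sqrt(g2[j]) + 1e-8)

    update([3, 17, 42_000], 1)   # one sparse training event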

Novel asynchronous activation of the bio-inspired adaptive tuning in the speed controller: Study case in DC motors

Miguel Gabriel Villarreal-Cervantes, Alejandro Rodriguez-Molina, Omar Serrano-Perez
2021 IEEE Access  
The above is attributed to the adaptive tuning based on identification and prediction, and the incorporation of ODE, which reduces the dependence on an exact motor model.  ...  If the event condition is satisfied, the adaptive controller tuning process must be executed to find the controller's most suitable parameters.  ... 
doi:10.1109/access.2021.3118658 fatcat:ptgbxm5ufbhv5fb2fpczvp6bbe
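A toy version of the event-triggered retuning loop the snippet describes might look like this: a PI speed controller is re-tuned only when the tracking error crosses a threshold. The first-order motor model, gains, and event condition are all illustrative assumptions, not the paper's scheme.

    import numpy as np

    dt, ref, threshold = 0.01, 1.0, 0.15
    a, b = 2.0, 1.0            # nominal first-order motor: dw/dt = -a*w + b*u
    kp, ki = 1.0, 0.5          # initial PI gains

    def simulate(kp, ki, a, b, steps=300):
        """Cost of a PI candidate on the (assumed identified) motor model."""
        w, integ, cost = 0.0, 0.0, 0.0
        for _ in range(steps):
            e = ref - w
            integ += e * dt
            u = kp * e + ki * integ
            w += (-a * w + b * u) * dt
            cost += e * e
        return cost

    w, integ = 0.0, 0.0
    for t in range(2000):
        if t == 800:
            a = 6.0                      # plant changes (e.g., load disturbance)
        e = ref - w
        integ += e * dt
        u = kp * e + ki * integ
        w += (-a * w + b * u) * dt
        if abs(e) > threshold and t > 100:      # event condition satisfied
            # re-tune by coarse grid search on the identified model
            grid = [(p, i) for p in (0.5, 1, 2, 4, 8) for i in (0.1, 0.5, 1, 2)]
            kp, ki = min(grid, key=lambda g: simulate(g[0], g[1], a, b))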

Tuning Random Forest Parameters using Simulated Annealing for Intrusion Detection

2020 International Journal of Innovative Technology and Exploring Engineering (IJITEE), Volume 8, Issue 10
Therefore, Simulated Annealing (SA) is utilized for tuning these hyper-parameters of RF, which leads to improved detection accuracy and efficiency of the IDS.  ...  Among these parameters, the hyper-parameters are selected based on three decision factors: randomness, split rule, and tree complexity.  ...  That RF also performs feature selection (FS) is an added benefit. However, a cloud-based IDS adds more and more events for detection, which leads to heavily weighted trees for RF.  ...
doi:10.35940/ijitee.h6799.079920 fatcat:pir5aaa3bvgorpjenoe4rqmdwu
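A compact sketch of an SA search over the three decision factors named above (randomness, split rule, tree complexity) using scikit-learn; the search space, cooling schedule, and acceptance rule are illustrative choices, not those of the paper.

    import math, random
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, random_state=0)
    space = {
        "max_features": ["sqrt", "log2", None],   # randomness
        "criterion": ["gini", "entropy"],         # split rule
        "max_depth": [3, 5, 10, 20, None],        # tree complexity
    }

    def score(params):
        clf = RandomForestClassifier(n_estimators=50, random_state=0, **params)
        return cross_val_score(clf, X, y, cv=3).mean()

    current = {k: random.choice(v) for k, v in space.items()}
    cur_s = score(current)
    best, best_s, temp = dict(current), cur_s, 1.0
    for _ in range(30):
        cand = dict(current)
        k = random.choice(list(space))
        cand[k] = random.choice(space[k])              # perturb one hyper-parameter
        s = score(cand)
        if s > cur_s or random.random() < math.exp((s - cur_s) / temp):
            current, cur_s = cand, s                   # accept, sometimes downhill
        if s > best_s:
            best, best_s = dict(cand), s
        temp *= 0.9                                    # cooling schedule
    print(best, best_s)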

Deep Learning for Unsupervised Insider Threat Detection in Structured Cybersecurity Data Streams [article]

Aaron Tuor, Samuel Kaplan, Brian Hutchinson, Nicole Nichols, Sean Robinson
2017 arXiv   pre-print
For our best model, the events labeled as insider threat activity in our dataset had an average anomaly score at the 95.53rd percentile, demonstrating our approach's potential to greatly reduce analyst workloads  ...  As a prospective filter for the human analyst, we present an online unsupervised deep learning approach to detect anomalous network activity from system logs in real time.  ...  If desired, a second system could be trained to model normal weekend behavior. We tune our models and baselines on the development set using random hyper-parameter search.  ...
arXiv:1710.00811v2 fatcat:u7nwwxy7bvdnvnclga2ajjx7jm
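The percentile-style evaluation mentioned in the snippet can be reproduced in miniature: score every event, then report where the labeled-threat events fall in the score distribution. The data below is synthetic and the scoring model is a stand-in.

    import numpy as np

    rng = np.random.default_rng(1)
    scores = rng.normal(size=10_000)                      # anomaly scores for all events
    threat_scores = scores[rng.choice(10_000, 5)] + 3.0   # pretend labeled threats

    def percentile_of(s, population):
        return 100.0 * (population < s).mean()

    avg_pct = np.mean([percentile_of(s, scores) for s in threat_scores])
    print(f"threat events sit at the {avg_pct:.2f} percentile on average")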

Rafiki: Machine Learning as an Analytics Service System

Wei Wang, Jinyang Gao, Meihui Zhang, Sheng Wang, Gang Chen, Teck Khim Ng, Beng Chin Ooi, Jie Shao, Moaz Reyad
2018 Proceedings of the VLDB Endowment  
Rafiki provides distributed hyper-parameter tuning for the training service, and online ensemble modeling for the inference service which trades off between latency and accuracy.  ...  Rafiki supports effective distributed hyper-parameter tuning for the training service, and online ensemble modeling for the inference service that is amenable to the trade-off between latency and accuracy  ...
doi:10.14778/3282495.3282499 fatcat:673flklsivgdtkzycg7ddm7hbe
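In the spirit of the distributed tuning service described here, a minimal random-search sketch parallelized with a process pool; train_and_eval is a mock stand-in for a real training job, and none of this reflects Rafiki's actual implementation.

    import random
    from concurrent.futures import ProcessPoolExecutor

    def train_and_eval(params):
        lr, depth = params
        # stand-in for a real training job; returns a mock validation score
        return -(lr - 0.01) ** 2 - 0.001 * depth, params

    if __name__ == "__main__":
        random.seed(0)
        trials = [(random.uniform(1e-4, 0.1), random.randint(2, 12))
                  for _ in range(16)]
        with ProcessPoolExecutor(max_workers=4) as pool:   # distribute trials
            best_score, best_params = max(pool.map(train_and_eval, trials))
        print(best_params, best_score)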

Conformalized Online Learning: Online Calibration Without a Holdout Set [article]

Shai Feldman, Stephen Bates, Yaniv Romano
2022 arXiv   pre-print
This allows us to fit the predictive model in a fully online manner, utilizing the most recent observation for constructing calibrated uncertainty sets.  ...  Consequently, and in contrast with existing techniques, (i) the sets we build can quickly adapt to new changes in the distribution; and (ii) our procedure does not require refitting the model at each time  ...  Compatibility with online learning models. Our method works together with any black-box online learning algorithm to adaptively control any parameter that encodes the size of the prediction set.  ... 
arXiv:2205.09095v3 fatcat:hej6pzlvczcdbkimm4omphchwe
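The core online-calibration idea can be sketched with an additive width update: widen the interval after a miss, shrink it otherwise, so long-run coverage tracks 1 - alpha. This conveys the flavor of such methods, not the paper's exact procedure; the model forecast here is a trivial stand-in.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, gamma, q = 0.1, 0.05, 1.0   # target miscoverage, step size, half-width
    misses = 0
    for t in range(5000):
        y = rng.normal()                  # the newly observed outcome
        pred = 0.0                        # stand-in for any online model's forecast
        miss = abs(y - pred) > q          # was y outside the current interval?
        q += gamma * (miss - alpha)       # widen on a miss, shrink otherwise
        misses += miss
    print("empirical miscoverage:", misses / 5000, "final half-width:", q)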

PipeTune: Pipeline Parallelism of Hyper and System Parameters Tuning for Deep Learning Clusters [article]

Isabelly Rocha, Nathaniel Morris, Lydia Y. Chen, Pascal Felber, Robert Birke, Valerio Schiavoni
2020 arXiv   pre-print
The most critical phase of these jobs for model performance and learning cost is the tuning of hyperparameters.  ...  PipeTune takes advantage of the high parallelism and recurring characteristics of such jobs to minimize the learning cost via a pipelined simultaneous tuning of both hyper and system parameters.  ...
arXiv:2010.00501v2 fatcat:tfbqfbkulzfpzniexf4ng5kf3y
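A toy rendering of tuning a system parameter (CPU thread count) alongside hyper-parameter trials, profiled on a stand-in workload; PipeTune's actual pipelined tuner is far more sophisticated, and the workload and parameter choices below are assumptions.

    import time
    import torch

    def epoch_time(threads, size=512):
        """Time a stand-in 'epoch' under a given CPU thread count."""
        torch.set_num_threads(threads)
        x = torch.randn(size, size)
        t0 = time.perf_counter()
        for _ in range(20):
            x = x @ x.T / size            # stand-in for one training epoch
        return time.perf_counter() - t0

    for lr in (1e-3, 1e-2):               # hyper-parameter trials
        best_threads = min((1, 2, 4, 8), key=epoch_time)   # system-parameter probe
        print(f"lr={lr}: fastest with {best_threads} threads")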

Anomaly Detection in Audio with Concept Drift using Adaptive Huffman Coding [article]

Pratibha Kumari, Mukesh Saini
2021 arXiv   pre-print
We propose to use adaptive Huffman coding for anomaly detection in audio with concept drift.  ...  Compared with the existing method of adaptive Gaussian mixture modeling (AGMM), adaptive Huffman coding does not require a priori information about the clusters and can adjust the number of clusters dynamically  ...  The hyper-parameters for other scenarios are also tuned in a similar way.  ...
arXiv:2102.10515v2 fatcat:fb24632jmzeaxfuveitp6f3h6i
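The underlying idea can be sketched with ideal code lengths instead of an explicit Huffman tree: under an adaptively updated symbol model, rare events get long codes, so per-symbol code length serves as a drift-aware anomaly score. A true adaptive Huffman coder maintains a code tree; this simplification is an assumption.

    from collections import Counter
    from math import log2

    counts = Counter()   # adaptive symbol frequencies
    total = 0

    def anomaly_score(symbol):
        """Ideal code length -log2 p(symbol) under the current adaptive model."""
        global total
        p = (counts[symbol] + 1) / (total + len(counts) + 1)  # smoothed estimate
        counts[symbol] += 1    # update the model after scoring
        total += 1
        return -log2(p)

    stream = ["hum", "hum", "hum", "scream", "hum", "hum"]
    for s in stream:
        print(s, round(anomaly_score(s), 2))   # the rare symbol scores highest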

Domain Adaptation for Real-Time Student Performance Prediction [article]

Byung-Hak Kim, Ethan Vizitei, Varun Ganapathi
2019 arXiv   pre-print
In particular, we first introduce the recently-developed GritNet architecture, which is the current state of the art for the student performance prediction problem, and develop a new unsupervised domain adaptation  ...  Increasingly fast development and update cycles of online course contents, and diverse demographics of students in each online classroom, make student performance prediction in real-time (before the course  ...  In that regard, further hyper-parameter optimization could be done for optimal accuracy at each task and week.  ...
arXiv:1809.06686v3 fatcat:rl2rqqltjnbm5gulxpviinz6qu

Labeled Memory Networks for Online Model Adaptation [article]

Shiv Shankar, Sunita Sarawagi
2017 arXiv   pre-print
We also found them to be more accurate and faster than state-of-the-art methods of retuning model parameters for adapting to domain-specific labeled data.  ...  We propose a design of memory augmented neural networks (MANNs) called Labeled Memory Networks (LMNs) suited for tasks requiring online adaptation in classification models.  ...
arXiv:1707.01461v3 fatcat:dpewcme5ybajznrhfrkzpidziy
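A bare-bones sketch of the memory-based flavor of this approach: keep a labeled memory of recent (embedding, label) pairs and classify by nearest-neighbor vote, so adaptation happens by writing to memory rather than retuning parameters. Real LMNs organize memory by label and learn the encoder; everything below is illustrative.

    import numpy as np

    memory_keys, memory_labels = [], []

    def predict(z, k=3):
        """Classify by majority vote among the k nearest memory entries."""
        if not memory_keys:
            return None
        d = np.linalg.norm(np.array(memory_keys) - z, axis=1)
        votes = [memory_labels[i] for i in np.argsort(d)[:k]]
        return max(set(votes), key=votes.count)

    def update(z, label, max_size=1000):
        """Online adaptation: just write the new example into memory."""
        memory_keys.append(z)
        memory_labels.append(label)
        if len(memory_keys) > max_size:        # evict the oldest entry
            memory_keys.pop(0)
            memory_labels.pop(0)

    rng = np.random.default_rng(0)
    for _ in range(200):                       # online stream of labeled points
        label = int(rng.integers(2))
        z = rng.normal(size=8) + 2.0 * label   # stand-in for an encoder output
        guess = predict(z)
        update(z, label)
    print("memory size:", len(memory_keys))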

Deep Learning to Predict Student Outcomes [article]

Byung-Hak Kim
2019 arXiv   pre-print
The increasingly fast development cycle for online course contents, along with the diverse student demographics in each online classroom, makes real-time student outcomes prediction an interesting topic  ...  In this paper, we tackle the problem of real-time student performance prediction in an on-going course using a domain adaptation framework.  ...  Further hyper-parameter optimization could be done for optimal accuracy at each week.  ...
arXiv:1905.02530v1 fatcat:exehpcdp5zgolhk4vxd3r3j4ry

Investigating response time and accuracy in online classifier learning for multimedia publish-subscribe systems

Asra Aslam, Edward Curry
2021 Multimedia tools and applications  
Our experiments demonstrate that deep neural network-based object detection models, with hyperparameter tuning, can improve performance with less training time when answering previously unknown  ...  GPU for the processing of multimedia events.  ...
doi:10.1007/s11042-020-10277-x pmid:34720665 pmcid:PMC8550296 fatcat:xxkpkly22rhwfotmyssrb4yd2e

Predicting the client's purchasing intention using Machine Learning models

S. Ahsain, M. Ait Kbir, K. Slimani, O. Gerasymov, M. Ait Kbir, S. Bennani Dosse, S. Bourekkadi, A. Amrani
2022 E3S Web of Conferences  
In this paper, we introduce a prediction algorithm that determines the likelihood that a client will purchase from a website.  ...  The tuned Random Forest model scored the best results, with a 91% accuracy score before tuning the hyper-parameters, when used with the sessions dataset.  ...  It can also be tuned to get better results; it can automatically tune the hyper-parameters of a model using the Random Grid Search:
rf = create_model('rf')
tuned_rf = tune_model(rf)
We notice satisfying  ...
doi:10.1051/e3sconf/202235101070 fatcat:qbwsndnn2nh55guvwgwnxxfmmq
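A slightly fuller, hedged version of the PyCaret calls quoted in the snippet, assuming a sessions dataframe with a binary 'Revenue' target (as in the UCI online-shoppers dataset the paper appears to use); the file path is hypothetical.

    import pandas as pd
    from pycaret.classification import setup, create_model, tune_model

    sessions = pd.read_csv("online_shoppers_intention.csv")  # hypothetical path
    setup(data=sessions, target="Revenue", session_id=42)    # prepare the experiment

    rf = create_model("rf")          # baseline random forest
    tuned_rf = tune_model(rf)        # random grid search over hyper-parameters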
Showing results 1 — 15 out of 6,879 results