SINGA

Wei Wang, Gang Chen, Anh Tien Tuan Dinh, Jinyang Gao, Beng Chin Ooi, Kian-Lee Tan, Sheng Wang
2015 Proceedings of the 23rd ACM international conference on Multimedia - MM '15  
To address these two challenges, in this paper, we design a distributed deep learning platform (called DLP) with an easy-to-use programming model and good scalability.  ...  Our experiments with training deep learning models for real-life multimedia applications show that DLP is a practical system.  ...  We present a distributed deep learning platform called DLP which has been designed to support deep learning in multimedia and other applications.  ...
doi:10.1145/2733373.2806232 dblp:conf/mm/WangCDGOTW15 fatcat:k6vwvuoxangmtimkmjxfhfphfy

SINGA

Beng Chin Ooi, Yuan Wang, Zhongle Xie, Meihui Zhang, Kaiping Zheng, Kian-Lee Tan, Sheng Wang, Wei Wang, Qingchao Cai, Gang Chen, Jinyang Gao, Zhaojing Luo (+1 others)
2015 Proceedings of the 23rd ACM international conference on Multimedia - MM '15  
In this paper, we present a distributed deep learning system, called SINGA, for training big models over large datasets.  ...  An intuitive programming model based on the layer abstraction is provided, which supports a variety of popular deep learning models.  ...  We would like to thank the SINGA team members and NetEase for their contributions to the implementation of the Apache SINGA system, and the anonymous reviewers for their insightful and constructive comments  ... 
doi:10.1145/2733373.2807410 dblp:conf/mm/OoiTWWCCGLTWXZZ15 fatcat:tuhmnj5rarhbtbzmjrnpkxp2vu
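
The "programming model based on the layer abstraction" mentioned in this snippet can be made concrete with a small sketch. The Python below is an illustrative reconstruction only, not the actual Apache SINGA API; the `Layer`, `Dense`, `ReLU`, and `Sequential` names are assumptions introduced for the example.

```python
# Minimal sketch of a layer-abstraction programming model, in the spirit of
# the description above. Hypothetical classes for illustration only; this is
# NOT the actual Apache SINGA API.
import numpy as np

class Layer:
    """Base class: each layer transforms its input in forward()."""
    def forward(self, x):
        raise NotImplementedError

class Dense(Layer):
    """Fully connected layer: y = x @ W + b."""
    def __init__(self, in_dim, out_dim):
        self.W = np.random.randn(in_dim, out_dim) * 0.01
        self.b = np.zeros(out_dim)
    def forward(self, x):
        return x @ self.W + self.b

class ReLU(Layer):
    def forward(self, x):
        return np.maximum(x, 0.0)

class Sequential:
    """A model is expressed as an ordered stack of layers."""
    def __init__(self, *layers):
        self.layers = layers
    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

# Usage: a two-layer feed-forward model over a batch of 4 examples.
model = Sequential(Dense(8, 16), ReLU(), Dense(16, 2))
out = model.forward(np.random.randn(4, 8))
print(out.shape)  # (4, 2)
```

The appeal of such an abstraction is that the model is declared as an ordered stack of layers, which a distributed platform can then partition and schedule across workers without the user changing the model description.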

Deep Learning At Scale and At Ease [article]

Wei Wang, Gang Chen, Haibo Chen, Tien Tuan Anh Dinh, Jinyang Gao, Beng Chin Ooi, Kian-Lee Tan, Sheng Wang
2016 arXiv   pre-print
To address these two challenges, in this paper, we design a distributed deep learning platform called SINGA which has an intuitive programming model based on the common layer abstraction of deep learning  ...  Our experience with developing and training deep learning models for real-life multimedia applications in SINGA shows that the platform is both usable and scalable.  ...  (1) We present a distributed platform called SINGA which is designed to train deep learning models for multimedia and other applications.  ... 
arXiv:1603.07846v1 fatcat:vi3vdeyemvd3xd7slqzxx7azzm

Towards a Scalable and Distributed Infrastructure for Deep Learning Applications [article]

Bita Hasheminezhad, Shahrzad Shirzad, Nanmiao Wu, Patrick Diehl, Hannes Schulz, Hartmut Kaiser
2020 arXiv   pre-print
Parallelization approaches and distribution requirements are not considered in the primary designs of most available distributed deep learning frameworks, and most of them are still not able to perform  ...  require deep learning frameworks to utilize scaling-out techniques.  ...  Current Distributed Deep Learning Frameworks: In this section, we scrutinize existing distributed DL frameworks. We focus on frameworks that are more recent and popular among users.  ...
arXiv:2010.03012v1 fatcat:2hy7evtvdra2dotv35dvbhv7mu

A Comprehensive Classification of Deep Learning Libraries [chapter]

Hari Mohan Pandey, David Windridge
2018 Advances in Intelligent Systems and Computing  
ND4J: a distributed deep learning library that runs on CPUs and GPUs and provides Java and Scala APIs.  ...  SINGA: provides a flexible architecture for distributed training and is used in health-care applications.  ...
doi:10.1007/978-981-13-1165-9_40 fatcat:6vjjl7zutbgtxiaehskleo4pdi

ACM Multimedia(ACMMM) 2015 Report

Matsui Yusuke, Choi Saemi, Yamasaki Toshihiko
2016 The Journal of The Institute of Image Information and Television Engineers  
Ooi et al.: "SINGA: A Distributed Deep Learning Platform", ACM Multimedia (2015). 8) C. Sweeney et al.: "Theia: A Fast and Scalable Structure-from-Motion Library", ACM Multimedia (2015). 9) J.  ...  VM Hub: Building Cloud Service and Mobile Application for Image/Video/Multimedia Services; (2) Interactive Video Search; (3) Learning Knowledge Bases for Multimedia in 2015.  ...
doi:10.3169/itej.70.293 fatcat:kp64sdxv4vh5jbcftwxcyazyga

Implementation of a Practical Distributed Calculation System with Browsers and JavaScript, and Application to Distributed Deep Learning [article]

Ken Miura, Tatsuya Harada
2015 arXiv   pre-print
The combination of Sashimi and Sukiyaki, as well as new distribution algorithms, demonstrates distributed deep learning of deep CNNs using only web browsers on various devices.  ...  Sukiyaki performs 30 times faster than a conventional JavaScript library for deep convolutional neural network (deep CNN) learning.  ...  It is our hope that many programmers will further develop Sukiyaki and Sashimi into a high-performance distributed computing platform that anyone can use easily.  ...
arXiv:1503.05743v1 fatcat:ad7qthdwi5a57f4hszdhnexygm

PANDA: Facilitating Usable AI Development [article]

Jinyang Gao, Wei Wang, Meihui Zhang, Gang Chen, H.V. Jagadish, Guoliang Li, Teck Khim Ng, Beng Chin Ooi, Sheng Wang, Jingren Zhou
2018 arXiv   pre-print
Recent advances in artificial intelligence (AI) and machine learning have created a general perception that AI could be used to solve complex problems, and in some situations is over-hyped as a tool that  ...  In this paper, we take a new perspective on developing AI solutions, and present a solution for making AI usable. We hope that this resolution will enable all subject matter experts (e.g.  ...  Apache SINGA [47] is a deep learning platform initiated by us, which focuses on memory and speed efficiency optimization.  ...
arXiv:1804.09997v1 fatcat:glimvitbc5d63isrtpruo43dgy

Towards Detecting Dementia via Deep Learning

Deepika Bansal, Kavita Khanna, Rita Chhikara, Rakesh Kumar Dua, Rajeev Malini
2021 International Journal of Healthcare Information Systems and Informatics  
Deep learning provides path-breaking applications in medical imaging. This study provides a detailed summary of different implementation approaches of deep learning for detecting the disease.  ...  An SGDM classifier with a learning rate of 10^-4 and a mini-batch size of 10 has shown the best performance in a reasonable time.  ...  It is a distributed deep learning framework for Apache Spark.  ...
doi:10.4018/ijhisi.20211001.oa31 fatcat:yjxrrsej2jgahjxrm37msopbkm
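
The hyper-parameters quoted in this snippet (SGD with momentum, learning rate 10^-4, mini-batch size 10) translate into a short training-loop sketch. The PyTorch code below only illustrates that configuration with placeholder data and a toy model; it is not the study's actual implementation.

```python
# Illustrative sketch of the quoted configuration: SGD with momentum (SGDM),
# learning rate 1e-4, mini-batch size 10. Placeholder model and data; not the
# paper's actual code.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: 100 grayscale 64x64 "scans" with binary labels.
X = torch.randn(100, 1, 64, 64)
y = torch.randint(0, 2, (100,))
loader = DataLoader(TensorDataset(X, y), batch_size=10, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

for epoch in range(2):              # short run, for illustration only
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```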

Omnivore: An Optimizer for Multi-device Deep Learning on CPUs and GPUs [article]

Stefan Hadjis, Ce Zhang, Ioannis Mitliagkas, Dan Iter, Christopher Ré
2016 arXiv   pre-print
We demonstrate that the most popular distributed deep learning systems fall within our tradeoff space, but do not optimize within the space.  ...  We study the factors affecting training time in multi-device deep learning systems.  ...  Table I shows a number of factors to consider when designing distributed deep learning systems.  ... 
arXiv:1606.04487v4 fatcat:qwr4xcuz45d3hoj3kxjius3q6y

Deep Learning: The Impact on Future eLearning

Anandhavalli Muniasamy, Areej Alasiry
2020 International Journal of Emerging Technologies in Learning (iJET)  
trends of deep learning in eLearning, the relevant deep learning-based artificial intelligence tools and a platform enabling the developer and learners to quickly reuse resources are clearly summarized  ...  In addition, deep learning models for developing the contents of the eLearning platform, deep learning frameworks that enable deep learning systems into eLearning and its development, benefits & future  ...  convolutional neural network toolbox; Deeplearning4j [15]: an open-source, Apache 2.0-licensed distributed neural net library in Java and Scala; Apache Singa [6]: open-source library for deep learning  ...
doi:10.3991/ijet.v15i01.11435 fatcat:uj63cq5l7fchtbvipi7xyr4mfm

Ako

Pijika Watcharapichat, Victoria Lopez Morales, Raul Castro Fernandez, Peter Pietzuch
2016 Proceedings of the Seventh ACM Symposium on Cloud Computing - SoCC '16  
Distributed systems for the training of deep neural networks (DNNs) with large amounts of data have vastly improved the accuracy of machine learning models for image and speech recognition.  ...  We describe Ako, a decentralised dataflow-based DNN system without parameter servers that is designed to saturate cluster resources.  ...  Singa [27, 42, 43] is a deep learning platform that supports multiple partitioning and synchronisation schemes, thus enabling users to easily use different training regimes at scale.  ... 
doi:10.1145/2987550.2987586 dblp:conf/cloud/WatcharapichatM16 fatcat:wucrsub5pbaszmqntlexi3qqay
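
The contrast drawn in this entry, parameter-server synchronisation versus a decentralised scheme without parameter servers, can be sketched with simulated workers. The Python below is a schematic illustration under simplified assumptions (single process, one full gradient exchange); it does not reproduce Ako's partial gradient exchange or SINGA's actual synchronisation schemes.

```python
# Schematic contrast of two synchronisation styles:
# (a) a central parameter server averaging worker gradients, and
# (b) a decentralised scheme where workers exchange gradients directly.
# Simulated single-process workers; illustrative only.
import numpy as np

N_WORKERS, DIM = 4, 8
rng = np.random.default_rng(0)
params = np.zeros(DIM)
local_grads = [rng.normal(size=DIM) for _ in range(N_WORKERS)]
lr = 0.1

# (a) Parameter-server style: workers push gradients to one node,
#     which applies the averaged update and serves the new parameters.
server_grad = np.mean(local_grads, axis=0)
params_ps = params - lr * server_grad

# (b) Decentralised style: each worker averages its peers' gradients itself,
#     so no single node holds the authoritative copy of the parameters.
params_dec = []
for w in range(N_WORKERS):
    peer_avg = np.mean(local_grads, axis=0)   # full all-to-all exchange
    params_dec.append(params - lr * peer_avg)

# With a full exchange, both schemes reach the same model state; decentralised
# systems trade this exactness for better use of worker network bandwidth.
assert all(np.allclose(p, params_ps) for p in params_dec)
```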

CNNLab: a Novel Parallel Framework for Neural Networks using GPU and FPGA-a Practical Study with Trade-off Analysis [article]

Maohua Zhu, Liu Liu, Chao Wang, Yuan Xie
2016 arXiv   pre-print
However, the diversity and large-scale data size have posed a significant challenge to constructing a flexible and high-performance implementation of deep learning neural networks.  ...  To improve the performance and maintain the scalability, we present CNNLab, a novel deep learning framework using GPU and FPGA-based accelerators.  ...  SINGA [8] is a distributed deep learning system for training big models over large datasets.  ...
arXiv:1606.06234v1 fatcat:en7acoahonb7beqrnxv553g46e

Medical 4.0: Medical Data Ready for Deep and Machine Learning

Natalia Labuda, Tomasz Lepa, Marek Labuda, Karol Kozak
2017 Bioanalysis & Biomedicine  
This manuscript presents Medical 4.0, a doodle alternative for scientific data, presenting how radically the platform contributes to the four digital evolutions in medicine.  ...  Digitization is not an end in itself, but rather a means to an end.  ...  • Deliver medical data ready for deep and machine learning; • Deliver integrated deep learning frameworks into medical platforms.  ...
doi:10.4172/1948-593x.1000194 fatcat:yhofs4lxnbalfj4opascj32aou

Hardware Resource Analysis in Distributed Training with Edge Devices

Sihyeong Park, Jemin Lee, Hyungshin Kim
2019 Electronics  
When training a deep learning model with distributed training, the hardware resource utilization of each device depends on the model structure and the number of devices used for training.  ...  Distributed training has recently been applied to edge computing.  ...  Frameworks such as SINGA [22], Poseidon [23], and MXNet have been proposed for the distributed training of deep learning models.  ...
doi:10.3390/electronics9010028 fatcat:xjbkw6wq6bb5pggdpxxz3cmvya
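
The kind of hardware-resource measurement this entry describes can be approximated with a small sampling loop. The sketch below uses `psutil` to record CPU and memory utilisation on one node while a placeholder training step runs; it is an assumed setup for illustration, not the paper's measurement methodology.

```python
# Minimal sketch of sampling hardware utilisation on one node while a
# training step runs. Illustrative only; the paper's actual measurement
# setup is not reproduced here.
import threading
import time

import psutil  # pip install psutil

samples = []

def sample_utilisation(stop_event, interval=0.5):
    """Record CPU and memory utilisation of this node at a fixed interval."""
    while not stop_event.is_set():
        samples.append({
            "cpu_percent": psutil.cpu_percent(interval=None),
            "mem_percent": psutil.virtual_memory().percent,
        })
        time.sleep(interval)

stop = threading.Event()
monitor = threading.Thread(target=sample_utilisation, args=(stop,))
monitor.start()

# Placeholder for the distributed training work running on this edge device.
time.sleep(3)

stop.set()
monitor.join()
print(f"collected {len(samples)} samples; peak CPU "
      f"{max(s['cpu_percent'] for s in samples):.1f}%")
```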