
Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training [article]

Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy
2018 arXiv   pre-print
Distributed deep neural network (DDNN) training constitutes an increasingly important workload that frequently runs in the cloud.  ...  We therefore propose PHub, a high performance multi-tenant, rack-scale PS design.  ...  To eliminate these bottlenecks, we proposed PHub, a high performance multi-tenant, rack-scale PS design, with co-designed software and hardware to accelerate rack-level and hierarchical cross-rack parameter  ... 
arXiv:1805.07891v1 fatcat:jrur6u3vjfgrxpfi6lialuhoru
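The PHub snippet above mentions rack-level and hierarchical cross-rack parameter aggregation. A minimal sketch of that two-level reduction, workers first reduce within a rack, then rack sums are combined across racks, might look like the following (function names and structure are illustrative assumptions, not PHub's actual API):

```python
# Hypothetical two-level (hierarchical) gradient aggregation sketch:
# workers in a rack reduce to a rack-local sum first, then rack sums
# are combined across racks and averaged over all workers.

def reduce_within_rack(worker_grads):
    """Element-wise sum of gradient vectors from all workers in one rack."""
    return [sum(vals) for vals in zip(*worker_grads)]

def hierarchical_aggregate(racks):
    """racks: list of racks; each rack is a list of per-worker gradient vectors."""
    rack_sums = [reduce_within_rack(r) for r in racks]   # rack-level step
    total = [sum(vals) for vals in zip(*rack_sums)]      # cross-rack step
    n_workers = sum(len(r) for r in racks)
    return [g / n_workers for g in total]                # average over all workers

# Two racks with two workers each, 3-element gradients:
grads = hierarchical_aggregate([
    [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]],
    [[0.0, 4.0, 2.0], [4.0, 0.0, 2.0]],
])
```

The point of the hierarchy is that only one aggregated vector per rack crosses the oversubscribed inter-rack links, rather than one per worker.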

Parameter Box: High Performance Parameter Servers for Efficient Distributed Deep Neural Network Training [article]

Liang Luo, Jacob Nelson, Luis Ceze, Amar Phanishayee, Arvind Krishnamurthy
2020 arXiv   pre-print
As DNNs get bigger, training requires going distributed. Distributed deep neural network (DDNN) training constitutes an important workload on the cloud.  ...  Our experiments show existing DNN training frameworks do not scale in a typical cloud environment due to insufficient bandwidth and inefficient parameter server software stacks. We propose PBox, a balanced  ...  OPTIMIZED PARAMETER SERVERS Model updates are usually performed in a parameter server (PS), a key-value store for the current model [10, 11, 15, 16] .  ... 
arXiv:1801.09805v3 fatcat:4yhulkalzzd7vatisryee567dm
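The PBox snippet describes a parameter server as a key-value store for the current model. A minimal sketch of that abstraction, a server holding named shards that workers `push` gradients to and `pull` parameters from, could look like this (class and method names are illustrative, not the paper's implementation):

```python
# Minimal parameter-server sketch: a key-value store mapping shard names
# to parameter vectors, updated by SGD on pushed gradients.

class ParameterServer:
    def __init__(self):
        self.store = {}  # shard key -> list of parameter values

    def push(self, key, grad, lr=0.1):
        """Apply a (pre-aggregated) gradient to the stored shard via SGD."""
        params = self.store.setdefault(key, [0.0] * len(grad))
        self.store[key] = [p - lr * g for p, g in zip(params, grad)]

    def pull(self, key):
        """Return a copy of the current value of a model shard."""
        return list(self.store[key])

ps = ParameterServer()
ps.push("layer1/weights", [1.0, -2.0])
```

Real PS designs shard keys across multiple server processes; the papers above argue that the software stack of this layer, not just bandwidth, is what limits scaling.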

Transforming Data Centers in Active Thermal Energy Players in Nearby Neighborhoods

Marcel Antal, Tudor Cioara, Ionut Anghel, Claudia Pop, Ioan Salomie
2018 Sustainability  
To reduce the computational time complexity, we have used neural networks, which are trained using the simulation results.  ...  Experiments have been conducted considering a small operational DC featuring a server room of 24 square meters and 60 servers organized in four racks.  ...  distribution, Section 4 describes the CFD and neural networks techniques used to assess the DC thermal flexibility, Section 5 presents results obtained for a small-scale test bed DC, while Section 6 concludes  ... 
doi:10.3390/su10040939 fatcat:7wojd3vjwzgmra2c43g6zzdvlu

AI Technical Considerations: Data Storage, Cloud usage and AI Pipeline [article]

P.M.A van Ooijen, Erfan Darzidehkalani, Andre Dekker
2022 arXiv   pre-print
Artificial intelligence (AI), especially deep learning, requires vast amounts of data for training, testing, and validation.  ...  However, the realization of proper imaging data collections is not sufficient to train, validate and deploy AI as resource demands are high and require a careful hybrid implementation of AI pipelines both  ...  DISTRIBUTED / FEDERATED LEARNING Training deep learning models requires finding optimal values for millions of parameters, and this is time-consuming.  ... 
arXiv:2201.08356v1 fatcat:zuqnpwbxn5fblpwfnvl2tmliqy

Prototype Cross Platform oriented on Cybersecurity, Virtual Connectivity, Big Data and Artificial Intelligence Control

Alessandro Massaro, Michele Gargaro, Giovanni Dipierrro, Angelo Galiano, Simone Buonopane
2020 IEEE Access  
Artificial Intelligence Control: "Digital Cross Platform: Virtual AI Distribution Network Processes".  ...  The authors gratefully thank the researcher Mario Fumai for the configuration of the Cassandra system.  ...  : this component allows the connection of the rack to a computer network through a VPN, improving the security of the system; • NAS Server: the Network Attached Storage (NAS) Server is a file-level computer  ... 
doi:10.1109/access.2020.3034399 fatcat:3c2bvrihq5clvhkdmqtr3aeoui

Edge Intelligence: Paving the Last Mile of Artificial Intelligence With Edge Computing

Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, Junshan Zhang
2019 Proceedings of the IEEE  
We then provide an overview of the overarching architectures, frameworks, and emerging key technologies for deep learning model training and inference at the network edge.  ...  To this end, we conduct a comprehensive survey of the recent research efforts on EI. Specifically, we first review the background and motivation for AI running at the network edge.  ...  Since the ANN adopted by deep learning model typically consists of a series of layers, the model is called a deep neural network (DNN).  ... 
doi:10.1109/jproc.2019.2918951 fatcat:d53vxmklgfazbmzjhsq3tuoama

Management of Resource at the Network Edge for Federated Learning [article]

Silvana Trindade, Luiz F. Bittencourt, Nelson L. S. da Fonseca
2022 arXiv   pre-print
Federated learning has been explored as a promising solution for training at the edge, where end devices collaborate to train models without sharing data with other entities.  ...  To select nodes for aggregation, they employed Deep Neural Networks (DNN) at the edge, creating contracts between a central node and vehicular clients when they are suitable for aggregation.  ...  An edge server can be a Personal Computer (PC) or a half-rack built for processing Information Technology (IT) workloads (micro data center).  ... 
arXiv:2107.03428v2 fatcat:hez3rqjonzd45plyvdzujdjt6u
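The federated learning snippet above describes end devices collaborating to train models without sharing raw data. The core aggregation step of FedAvg-style training, a weighted average of client model updates by sample count, can be sketched as follows (names are illustrative, not from the surveyed paper):

```python
# FedAvg-style aggregation sketch: the server averages client model
# weights, weighting each client by the number of local samples it
# trained on. Raw data never leaves the clients.

def federated_average(client_updates):
    """client_updates: list of (num_samples, weight_vector) pairs."""
    total = sum(n for n, _ in client_updates)
    dim = len(client_updates[0][1])
    avg = [0.0] * dim
    for n, w in client_updates:
        for i, wi in enumerate(w):
            avg[i] += (n / total) * wi
    return avg

global_model = federated_average([
    (100, [1.0, 0.0]),   # client A trained on 100 samples
    (300, [0.0, 1.0]),   # client B trained on 300 samples
])
```

The resource-management problem the survey addresses is precisely around this loop: which edge nodes to select for aggregation, and how often, given heterogeneous compute and links.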

A Survey on Edge Computing Systems and Tools

Fang Liu, Guoming Tang, Youhuizi Li, Zhiping Cai, Xingzhou Zhang, Tongqing Zhou
2019 Proceedings of the IEEE  
A comparison of open source tools is presented according to their applicability. Finally, we highlight energy efficiency and deep learning optimization of edge computing systems.  ...  To explore new research opportunities and assist users in selecting suitable edge computing systems for specific applications, this survey paper provides a comprehensive overview of the existing edge computing  ...  DDNN [78] is a distributed deep neural network architecture across cloud, edge, and end devices.  ... 
doi:10.1109/jproc.2019.2920341 fatcat:rocspx5ziffblfzaye2xhebe3e

Optimization of power consumption in data centers using machine learning based approaches: a review

Rajendra Kumar, Sunil Kumar Khatri, Mario José Diván
2022 International Journal of Power Electronics and Drive Systems (IJPEDS)  
A potential future scope is proposed based on the findings of this review by combining bioinspired optimization and neural networks.  ...  As a result, various machine learning-based optimization approaches for enhancing overall power effectiveness have been outlined.  ...  In Q-learning, the deep Q-network (DQN) proposed uses a neural network approximation.  ... 
doi:10.11591/ijece.v12i3.pp3192-3203 fatcat:35n4qt7v4fhqhatcqryoeysvgu
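The snippet above notes that a DQN replaces the Q-table with a neural network approximator. The underlying Bellman update that both share can be sketched in tabular form (states, actions, and values here are illustrative):

```python
# Tabular Q-learning update sketch. A DQN approximates the table q
# with a neural network trained toward the same Bellman target.

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One step of Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = reward + gamma * max(q[next_state].values())
    q[state][action] += alpha * (target - q[state][action])
    return q

q = {"s0": {"a": 0.0, "b": 0.0},
     "s1": {"a": 1.0, "b": 0.0}}
q = q_update(q, "s0", "a", reward=1.0, next_state="s1")
```

In the data-center setting the review covers, states would encode thermal/load readings and actions cooling or scheduling decisions, with power consumption shaping the reward.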

A novel revenue optimization model to address the operation and maintenance cost of a data center

Snehanshu Saha, Jyotirmoy Sarkar, Avantika Dwivedi, Nandita Dwivedi, Anand M. Narasimhamurthy, Ranjan Roy
2016 Journal of Cloud Computing: Advances, Systems and Applications  
The economic sustainability of such a model is accomplished via Cobb-Douglas production function.  ...  A typical investment is of the order of millions of dollars, infrastructure and recurring cost included.  ...  Saibal Kar, Centre for Studies in Social Sciences, Calcutta and IZA, Bonn for reading the paper and giving useful feedback.  ... 
doi:10.1186/s13677-015-0050-8 fatcat:wqa3k3xsabadddpj5zojvcpp64
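The entry above models economic sustainability via the Cobb-Douglas production function. Its standard two-input form (textbook notation, not necessarily the paper's exact parameterization) is:

```latex
% Output Y as a function of capital K and labor L, with
% total-factor productivity A and output elasticities \alpha, \beta:
Y = A \, K^{\alpha} L^{\beta}, \qquad \alpha, \beta > 0
% Returns to scale are constant when \alpha + \beta = 1,
% increasing when \alpha + \beta > 1, decreasing when < 1.
```

In a data-center revenue model, the inputs are typically infrastructure and recurring operational spend, and the elasticities determine how revenue responds to scaling each.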

Toward Native Artificial Intelligence in 6G Networks: System Design, Architectures, and Paradigms [article]

Jianjun Wu, Rongpeng Li, Xueli An, Chenghui Peng, Zhe Liu, Jon Crowcroft, Honggang Zhang
2021 arXiv   pre-print
Apparently, the intelligent inclusion vision produces far-reaching influence on the corresponding network architecture design in 6G and deserves a clean-slate rethink.  ...  In this article, we propose an end-to-end system architecture design scope for 6G, and talk about the necessity to incorporate an independent data plane and a novel intelligent plane with particular emphasis  ...  Intelligence inclusive RAN is more than installing racks of servers at the edge location and local break out of the traffic for edge processing.  ... 
arXiv:2103.02823v1 fatcat:6r7v223p7bb3rkocj5vwapkopa

Long-Term Prediction of Bikes Availability on Bike-Sharing Stations

Paolo Nesi, Paolo Nesi
2021 Journal of Visual Language and Computing  
On the other hand, bike-sharing presents some problems, such as the irregular distribution of bikes on the stations/racks/areas (still very used for e-bikes) and, for the final users, the difficulty of knowing  ...  in advance their status with a certain degree of confidence, whether there will be available bikes at a specific bike-station at a certain time of the day, or a free slot for leaving the rented bike.  ...  Acknowledgment The authors would like to thank the MIUR, the University of Florence and the companies involved for co-funding the Sii-Mobility national project on smart city mobility and transport.  ... 
doi:10.18293/jvlc2021-n1-001 fatcat:7qg463yd4zc5lhzj54dhtffwpm

AI on the Edge: Rethinking AI-based IoT Applications Using Specialized Edge Architectures [article]

Qianlin Liang, Prashant Shenoy, David Irwin
2020 arXiv   pre-print
edge and cloud servers.  ...  Edge computing has emerged as a popular paradigm for supporting mobile and IoT applications with low latency or high bandwidth needs.  ...  An efficient method to train distributed deep neural networks (DDNNs) over a distributed computing hierarchy consisting of cloud, edge, and end-devices was proposed in [36] .  ... 
arXiv:2003.12488v1 fatcat:rice6s77jjevlk3ir4em6doc2e

A Survey of Big Data Machine Learning Applications Optimization in Cloud Data Centers and Networks [article]

Sanaa Hamid Mohamed, Taisir E.H. El-Gorashi, Jaafar M.H. Elmirghani
2019 arXiv   pre-print
networks.  ...  However, the increasing traffic between and within the data centers that migrate, store, and process big data, is becoming a bottleneck that calls for enhanced infrastructures capable of reducing the congestion  ...  ACKNOWLEDGEMENTS Sanaa Hamid Mohamed would like to acknowledge Doctoral Training Award (DTA) funding from the UK Engineering and Physical Sciences Research Council (EPSRC).  ... 
arXiv:1910.00731v1 fatcat:kvi3br4iwzg3bi7fifpgyly7m4

D1.1 - State of the Art Analysis

Danilo Ardagna
2021 Zenodo  
Then, the deliverable provides a background on AI applications design, also considering some advanced design trends (e.g., Network Architecture Search, Federated Learning, Deep Neural Networks partitioning  ...  We also extensively discuss existing solutions for applications deployment, monitoring, runtime management, and scheduling considering the emerging Function as a Service paradigm.  ...  Accordingly, there are many works on DNN partitioning, such as [Teerapittayanon2017], which proposes a distributed deep neural network (DDNN) over distributed computing hierarchies.  ... 
doi:10.5281/zenodo.6372377 fatcat:f6ldfuwivbcltew4smiiwphfty
Showing results 1 — 15 out of 101 results