15 Hits in 1.6 sec

Distributing Deep Learning Hyperparameter Tuning for 3D Medical Image Segmentation [article]

Josep Lluis Berral, Oriol Aranda, Juan Luis Dominguez, Jordi Torres
2021 arXiv   pre-print
Most research on novel techniques for 3D Medical Image Segmentation (MIS) is currently done using Deep Learning with GPU accelerators. The principal challenge of such techniques is that a single input can easily saturate computing resources and require prohibitive amounts of time to be processed. Distributing deep learning and scaling it over computing devices is a real need for progress in this research field. The conventional way to distribute neural networks is data parallelism, where data is scattered over resources (e.g., GPUs) to parallelize the training of the model. However, experiment parallelism is also an option, where different training processes are parallelized across resources. While the first option is much more common in 3D image segmentation, the second provides a pipeline design with less dependence among the parallelized processes, allowing overhead reduction and more potential scalability. In this work we present a design for distributed deep learning training pipelines, focusing on multi-node and multi-GPU environments, where the two distribution approaches are deployed and benchmarked. As a proof of concept we take the 3D U-Net architecture, using the MSD Brain Tumor Segmentation dataset, a state-of-the-art problem in medical image segmentation with high computing and space requirements. Using the BSC MareNostrum supercomputer as the benchmarking environment, we use TensorFlow and Ray as the neural network training and experiment distribution platforms. We evaluate the experiment speed-up, showing the potential for scaling out on GPUs and nodes, and we compare the two parallelism techniques, showing how experiment distribution makes better use of such resources through scaling. Finally, we provide the implementation of the design open to the community, along with the non-trivial steps and methodology for adapting and deploying a MIS case such as the one presented here.
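
As a concrete illustration of the experiment-parallelism approach the paper benchmarks, below is a minimal sketch using Ray remote tasks to run hyperparameter trials concurrently, one GPU per trial. The `train_unet` stub, the hyperparameter grid, and the scoring are hypothetical stand-ins, not the authors' pipeline.

```python
# Minimal sketch of experiment parallelism with Ray: each hyperparameter
# trial runs as an independent remote task pinned to one GPU (assumes GPUs
# are available; drop num_gpus to try it on a CPU-only machine).
import ray

def train_unet(learning_rate, batch_size):
    # Hypothetical stand-in for a TensorFlow 3D U-Net training loop;
    # returns a dummy validation score for illustration.
    return 1.0 / (1.0 + learning_rate * batch_size)

@ray.remote(num_gpus=1)
def run_trial(learning_rate, batch_size):
    score = train_unet(learning_rate, batch_size)
    return {"lr": learning_rate, "bs": batch_size, "score": score}

ray.init()  # connects to an existing cluster, or starts a local one
grid = [(lr, bs) for lr in (1e-3, 1e-4) for bs in (1, 2)]
futures = [run_trial.remote(lr, bs) for lr, bs in grid]  # all trials in flight
results = ray.get(futures)  # trials execute concurrently, one per device
print(max(results, key=lambda r: r["score"]))
```

Because each trial is an independent process, the only synchronization point is collecting the results, which is what gives experiment parallelism its low inter-process dependence.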
arXiv:2110.15884v1 fatcat:ozfpb7w2ejcajawaq5ysmowvta

Self-adaptive utility-based web session management

Nicolas Poggi, Toni Moreno, Josep Lluis Berral, Ricard Gavaldà, Jordi Torres
2009 Computer Networks  
In the Internet, where millions of users are a click away from your site, being able to dynamically classify the workload in real time, and to predict its short-term behavior, is crucial for proper self-management and business efficiency. As workloads vary significantly according to the time of day, season, promotions and linking, it becomes impractical for some e-commerce sites to keep over-dimensioned infrastructures to accommodate the whole load. When server resources are exceeded, session-based admission control systems allow maintaining a high throughput in terms of properly finished sessions and QoS for a limited number of sessions; however, by denying access to excess users, the website loses potential customers. In the present study we describe the architecture of AUGURES, a system that learns to predict Web users' intentions for visiting the site as well as their resource usage. Predictions are made from information known at the time of their first request, and later from navigational clicks. For this purpose we use machine learning techniques and Markov-chain models. The system uses these predictions to automatically shape QoS for the most profitable sessions, predict short-term resource needs, and dynamically provision servers according to the expected revenue and the cost to serve it. We test the AUGURES prototype on access logs from a high-traffic online travel agency, obtaining promising results.
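
The Markov-chain component can be illustrated with a small sketch: estimate page-to-page transition probabilities from past sessions, then score a live session by its probability of eventually reaching a purchase. The page types and toy log below are hypothetical, not AUGURES internals.

```python
# Minimal sketch of purchase-intent scoring with a first-order Markov chain.
from collections import defaultdict

sessions = [  # toy training log: sequences of page types (hypothetical)
    ["home", "search", "product", "buy"],
    ["home", "search", "exit"],
    ["home", "product", "buy"],
    ["home", "exit"],
]

counts = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[a][b] += 1  # count observed page-to-page transitions

def p_next(state):
    total = sum(counts[state].values())
    return {nxt: c / total for nxt, c in counts[state].items()}

def p_buy(state, depth=5):
    # Probability of reaching "buy" within `depth` further clicks.
    if state == "buy":
        return 1.0
    if state == "exit" or depth == 0:
        return 0.0
    return sum(p * p_buy(nxt, depth - 1) for nxt, p in p_next(state).items())

print(p_buy("search"))  # score used to prioritize the session
```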
doi:10.1016/j.comnet.2008.08.022 fatcat:4pqsg5mmgvdrdesxwndplhoxjq

Adaptive sliding windows for improved estimation of data center resource utilization

Shuja-ur-Rehman Baig, Waheed Iqbal, Josep Lluis Berral, David Carrera
2019 Future generations computer systems  
Josep Lluis Berral received the degree in informatics in 2007, the M.Sc. degree in computer architecture in 2008, and the Ph.D. degree in computer science from BarcelonaTech-UPC in 2013.  ... 
doi:10.1016/j.future.2019.10.026 fatcat:bcpejkx5njesdi6djvfkgad3ba

Automatic Generation of Workload Profiles Using Unsupervised Learning Pipelines

David Buchaca Prats, Josep Lluis Berral, David Carrera
2018 IEEE Transactions on Network and Service Management  
The complexity of resource usage and power consumption of cloud-based applications makes understanding application behavior through expert examination difficult. The difficulty increases when applications are seen as "black boxes", where only external monitoring can be retrieved. Furthermore, given the number of different scenarios and applications, automation is required. Here we examine and model application behavior by finding behavior phases. We use Conditional Restricted Boltzmann Machines (CRBM) to model time series containing resource trace measurements like CPU, memory and IO. CRBMs can be used to map a given historic window of trace behavior into a single vector. This low-dimensional and time-aware vector can be passed through clustering methods, from simplistic ones like k-means to more complex ones like those based on Hidden Markov Models (HMM). We use these methods to find phases of similar behavior in the workloads. Our experimental evaluation shows that the proposed method is able to identify different phases of resource consumption across different workloads, and that the distinct phases contain specific resource patterns that distinguish them.
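
A minimal sketch of the phase-discovery pipeline follows, substituting the CRBM encoder with naive window flattening (the paper maps each window to a vector with a CRBM); k-means then groups the window vectors into phases. The trace and window length are synthetic.

```python
# Minimal sketch: per-window vectors (here, flattened raw windows instead of
# CRBM encodings) clustered with k-means to discover behavior phases.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy trace: 300 timesteps x 3 metrics (CPU, memory, IO), two synthetic phases
trace = np.concatenate([rng.normal(0.2, 0.05, (150, 3)),
                        rng.normal(0.8, 0.05, (150, 3))])

win = 10  # history window length
windows = np.stack([trace[i:i + win].ravel()          # one vector per window
                    for i in range(len(trace) - win)])

phases = KMeans(n_clusters=2, n_init=10).fit_predict(windows)
print(phases[:5], phases[-5:])  # phase label per time window
```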
doi:10.1109/tnsm.2017.2786047 fatcat:idlfldnoxnbufjnu2si4dljj4q

Proactive Container Auto-scaling for Cloud Native Machine Learning Services

David Buchaca, Josep LLuis Berral, Chen Wang, Alaa Youssef
2020 2020 IEEE 13th International Conference on Cloud Computing (CLOUD)  
Understanding the resource usage behavior of the ever-increasing machine learning workloads is critical to cloud providers offering Machine Learning (ML) services. Being able to auto-scale resources for customer workloads can significantly improve resource utilization, thus greatly reducing cost. Here we leverage the AI4DL framework [1] to characterize workloads and discover resource consumption phases. We advance the existing technology to an incremental phase discovery method that applies to more general types of ML workload, for both training and inference. We use a time-window MultiLayer Perceptron (MLP) to predict phases in containers with different types of workload. Then, we propose a predictive vertical auto-scaling policy to resize the container dynamically according to phase predictions. We evaluate our predictive auto-scaling policies on 561 long-running containers with multiple types of ML workloads. The predictive policy can reduce up to 38% of the allocated CPU compared to the default resource provisioning policies set by developers. Comparing our predictive policies with commonly used reactive auto-scaling policies, we find that they accurately predict sudden phase transitions (with an F1-score of 0.92) and significantly reduce the number of out-of-memory errors (350 vs. 20). Besides, we show that the predictive auto-scaling policy keeps the number of resizing operations close to that of the best reactive policies.
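
The vertical auto-scaling policy can be sketched as follows: given the phase predicted for the next window, resize the container's CPU allocation ahead of the transition. The phase-to-cores table, headroom factor, and resize threshold are illustrative assumptions, not the paper's values.

```python
# Minimal sketch of a predictive vertical auto-scaling policy: resize the
# container's CPU allocation based on the predicted next phase.
PHASE_CPU = {"idle": 0.5, "train": 4.0, "checkpoint": 1.0}  # cores per phase
HEADROOM = 1.2  # safety margin over the phase's typical demand

def next_allocation(predicted_phase, current_cores):
    target = PHASE_CPU[predicted_phase] * HEADROOM
    # Only resize on significant changes, to limit resizing operations.
    if abs(target - current_cores) / current_cores > 0.25:
        return target
    return current_cores

print(next_allocation("train", current_cores=1.0))  # scale up before the burst
```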
doi:10.1109/cloud49709.2020.00070 fatcat:nylyu5iz3nhsvpk6rv7y25wype

Real-Time Data Center's Telemetry Reduction and Reconstruction Using Markov Chain Models

Shuja-ur-Rehman Baig, Waheed Iqbal, Josep Lluis Berral, Abdelkarim Erradi, David Carrera
2019 IEEE Systems Journal  
Berral, and D. Carrera are with Barcelona Supercomputing Center (BSC) and Universitat Politècnica de Catalunya (UPC), Barcelona, Spain.  ... 
doi:10.1109/jsyst.2019.2918430 fatcat:xwa2w5jlevatlkk4n3fvbownbi

You Only Run Once: Spark Auto-tuning from a Single Run

David Buchaca, Felipe Portella, Carlos Costa, Josep LLuis Berral
2020 IEEE Transactions on Network and Service Management  
Tuning configurations of Spark jobs is not a trivial task. State-of-the-art auto-tuning systems are based on iteratively running workloads with different configurations. During the optimization process, the relevant features are explored to find good solutions. Many optimizers enhance the time-to-solution using black-box optimization algorithms that do not take into account any information from the Spark workloads. In this paper, we present a new method for tuning configurations that uses information from one run of a Spark workload. To achieve good performance, we mine the SparkEventLog that is generated by the Spark engine. This log file contains a large amount of information from the executed application. We use this information to enhance a performance model with low-level features from the workload to be optimized. These features include Spark Actions, Transformations, and Task metrics. This process allows us to obtain application-specific workload information. With this information our system can predict sensible Spark configurations for unseen jobs, given that it has been trained with reasonable coverage of Spark applications. Experiments show that the presented system produces good configurations, achieving up to 80% speedup with respect to the default Spark configuration, and up to 12x speedup of the time-to-solution with respect to a standard Bayesian Optimization procedure.
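
A minimal sketch of the predict-from-one-run idea: featurize a workload from its event log, train a runtime model offline on (workload, configuration) pairs, and rank candidate configurations for the new workload. The feature names, synthetic runtime function, and candidate grid are illustrative assumptions, not the paper's feature set.

```python
# Minimal sketch: rank candidate configurations for an unseen workload with
# a performance model trained on (workload features, config) -> runtime.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Offline training set: [shuffle_bytes, n_tasks, executor_cores, memory_gb]
X = rng.uniform(0, 1, (200, 4))
y = X[:, 0] / (X[:, 2] + 0.1) + X[:, 1] / (X[:, 3] + 0.1)  # synthetic runtime
model = GradientBoostingRegressor().fit(X, y)

workload = [0.7, 0.9]           # features mined from the single run's log
candidates = [(c, m) for c in (0.25, 0.5, 1.0) for m in (0.25, 0.5, 1.0)]
scores = [model.predict([workload + list(c)])[0] for c in candidates]
print(candidates[int(np.argmin(scores))])  # predicted-best configuration
```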
doi:10.1109/tnsm.2020.3034824 fatcat:tw3x6tmoebcldlg42q532steg4

Adaptive Prediction Models for Data Center Resources Utilization Estimation

Shuja-ur-Rehman Baig, Waheed Iqbal, Josep Lluis Berral, Abdelkarim Erradi, David Carrera
2019 IEEE Transactions on Network and Service Management  
Accurate estimation of data center resource utilization is a challenging task due to multi-tenant co-hosted applications having dynamic and time-varying workloads. Accurate estimation of future resource utilization helps in better job scheduling, workload placement, capacity planning, proactive auto-scaling, and load balancing. Inaccurate estimation leads to either under- or over-provisioning of data center resources. Most existing estimation methods are based on a single model that often does not appropriately estimate different workload scenarios. To address these problems, we propose a novel method to adaptively and automatically identify the most appropriate model to accurately estimate data center resource utilization. The proposed approach trains a classifier based on statistical features of historical resource usage to decide the appropriate prediction model to use for the resource utilization observations collected during a specific time interval. We evaluated our approach on real datasets and compared the results with multiple baseline methods. The experimental evaluation shows that the proposed approach outperforms the state-of-the-art approaches and delivers 6% to 27% improved resource utilization estimation accuracy compared to baseline methods.
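
The model-selection idea can be sketched as follows: compute statistical features over a recent utilization window and train a classifier to pick which forecaster to apply. The two candidate forecasters (last value and moving average) are simple stand-ins for the paper's prediction models, and the training data is synthetic.

```python
# Minimal sketch: a classifier over window statistics chooses which of
# several forecasters to apply to the current utilization window.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def features(window):
    return [np.mean(window), np.std(window), window[-1] - window[0]]

def last_value(window):
    return window[-1]

def moving_avg(window):
    return float(np.mean(window[-5:]))

FORECASTERS = [last_value, moving_avg]

rng = np.random.default_rng(2)
# Offline: label each historical window with the forecaster that did best.
X_train, y_train = [], []
for _ in range(500):
    w = rng.normal(0.5, 0.2, 20)
    actual = w[-1] + rng.normal(0, 0.05)
    errs = [abs(f(w[:-1]) - actual) for f in FORECASTERS]
    X_train.append(features(w[:-1]))
    y_train.append(int(np.argmin(errs)))

selector = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

w_new = rng.normal(0.5, 0.2, 19)          # live observations
chosen = FORECASTERS[selector.predict([features(w_new)])[0]]
print(chosen.__name__, chosen(w_new))     # model picked for this window
```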
doi:10.1109/tnsm.2019.2932840 fatcat:3bbu3n6pqrbt5kuxbcqfazrufq

Constant-Time Sliding Window Framework with Reduced Memory Footprint and Efficient Bulk Evictions

Alvaro Villalba Navarro, Josep Lluis Berral, David Carrera
2018 IEEE Transactions on Parallel and Distributed Systems  
The fast evolution of data analytics platforms has resulted in an increasing demand for real-time data stream processing. From Internet of Things applications to the monitoring of telemetry generated in large data centers, a common demand for currently emerging scenarios is the need to process vast amounts of data with low latencies, generally performing the analysis process as close to the data source as possible. Stream processing platforms are required to be malleable and to absorb spikes generated by fluctuations of data generation rates. Data is usually produced as time series that have to be aggregated using multiple operators, with sliding windows being one of the most common abstractions used to process data in real time. To satisfy the above-mentioned demands, efficient stream processing techniques that aggregate data with minimal computational cost need to be developed. In this paper we present the Monoid Tree Aggregator general sliding window aggregation framework, which seamlessly combines the following features: amortized O(1) time complexity with a worst case of O(log n) between insertions; a window aggregation mechanism and a window slide policy that are both user-programmable; enforcement of the window sliding policy with amortized O(1) computational cost for single evictions and support for bulk evictions with cost O(log n); and a local memory space requirement of O(log n). The framework can compute aggregations over multiple data dimensions, and has been designed to support decoupling computation and data storage through the use of distributed Key-Value Stores to keep window elements and partial aggregations.
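
The monoid idea behind such frameworks can be illustrated with the classic two-stack sliding-window aggregator, which achieves amortized O(1) insert and evict for any associative operator with an identity; the paper's Monoid Tree Aggregator generalizes this with a tree structure that additionally supports O(log n) bulk evictions and programmable slide policies. This sketch illustrates the principle, not the MTA implementation.

```python
# Minimal sketch of monoid-based sliding-window aggregation (two-stack
# trick): amortized O(1) insert/evict for any associative op with identity.
class SlidingAggregator:
    def __init__(self, op, identity):
        self.op, self.identity = op, identity
        self.front, self.back = [], []   # (value, running_aggregate) pairs

    def _agg(self, stack):
        return stack[-1][1] if stack else self.identity

    def insert(self, v):
        self.back.append((v, self.op(self._agg(self.back), v)))

    def evict(self):
        # Assumes the window is non-empty. Flip back into front when needed,
        # recomputing suffix aggregates so order is preserved.
        if not self.front:
            while self.back:
                v, _ = self.back.pop()
                self.front.append((v, self.op(v, self._agg(self.front))))
        self.front.pop()

    def query(self):
        return self.op(self._agg(self.front), self._agg(self.back))

w = SlidingAggregator(max, float("-inf"))
for x in [3, 1, 4, 1, 5]:
    w.insert(x)
w.evict()                                # drop the oldest element (3)
print(w.query())                         # -> 5, the max of [1, 4, 1, 5]
```

Because only associativity is required, the same structure computes sums, maxima, or any user-defined monoid over the window.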
doi:10.1109/tpds.2018.2868960 fatcat:7jmbuohitzeo5mltesht5zhwta

A highly parameterizable framework for Conditional Restricted Boltzmann Machine based workloads accelerated with FPGAs and OpenCL

Zoran Jakšić, Nicola Cadenelli, David Buchaca Prats, Jordà Polo, Josep Lluís Berral Garcia, David Carrera Perez
2019 Future generations computer systems  
The Conditional Restricted Boltzmann Machine (CRBM) is a promising candidate for multidimensional system modeling that can learn a probability distribution over a set of data. It is a specific type of artificial neural network with one input (visible) and one output (hidden) layer. Recently published works demonstrate that the CRBM is a suitable mechanism for modeling multidimensional time series such as human motion, workload characterization, and city traffic analysis. The process of learning and inference in these systems relies on linear algebra functions like matrix-matrix multiplication, and for larger data sets they are very compute-intensive. In this paper, we present a configurable framework for CRBM-based workloads with arbitrarily large models. We show how to accelerate the learning process of the CRBM with FPGAs and OpenCL, and we conduct an extensive scalability study for different model sizes and system configurations. We show significant improvement in performance/Watt for large models and batch sizes (from 1.51x up to 5.71x depending on the host configuration) when we use FPGA and OpenCL for the acceleration, and limited benefits for small models compared to the state-of-the-art CPU solution.
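
The core computation such kernels accelerate can be sketched in a few lines: the CRBM's hidden-layer activation is a sigmoid over matrix products of the current visible vector and the concatenated history window. Dimensions and weights below are illustrative.

```python
# Minimal sketch of the CRBM hidden-layer computation: matrix products over
# the current visible vector and the history window, passed through a sigmoid.
import numpy as np

rng = np.random.default_rng(3)
n_vis, n_hid, n_hist = 8, 16, 4

W = rng.normal(0, 0.1, (n_vis, n_hid))            # visible -> hidden weights
B = rng.normal(0, 0.1, (n_vis * n_hist, n_hid))   # history -> hidden weights
b = np.zeros(n_hid)                               # hidden biases

v = rng.normal(0, 1, n_vis)                       # current time step
history = rng.normal(0, 1, n_vis * n_hist)        # concatenated past steps

hidden = 1.0 / (1.0 + np.exp(-(v @ W + history @ B + b)))
print(hidden.shape)  # (16,) activation probabilities of the hidden units
```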
doi:10.1016/j.future.2019.10.025 fatcat:sibqhfis6nhohcvtoixeflre24

ALOJA-ML

Josep Lluís Berral, Nicolas Poggi, David Carrera, Aaron Call, Rob Reinauer, Daron Green
2015 Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '15  
This article presents ALOJA-Machine Learning (ALOJA-ML), an extension to the ALOJA project that uses machine learning techniques to interpret Hadoop benchmark performance data and guide performance tuning; here we detail the approach, the efficacy of the model, and initial results. Hadoop presents a complex execution environment, where costs and performance depend on a large number of software (SW) configurations and on multiple hardware (HW) deployment choices. These results are accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of the different hardware configurations, parameter tunings, and Cloud services. Despite early success within ALOJA from expert-guided benchmarking, it became clear that a genuinely comprehensive study requires automation of modeling procedures to allow a systematic analysis of large and resource-constrained search spaces. ALOJA-ML provides such an automated system, allowing knowledge discovery by modeling Hadoop executions from observed benchmarks across a broad set of configuration parameters. The resulting performance models can be used to forecast the execution behavior of various workloads; they allow 'a priori' prediction of the execution times for new configurations and HW choices, and they offer a route to model-based anomaly detection. In addition, these models can guide benchmarking exploration efficiently by automatically prioritizing candidate future benchmark tests. Insights from ALOJA-ML's models can be used to reduce operational time on clusters, speed up the data acquisition and knowledge discovery process and, importantly, reduce running costs. Beyond the methodology presented in this work, the community can benefit from the ALOJA data sets, framework, and derived insights to improve the design and deployment of Big Data applications.
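
A minimal sketch of the modeling step: learn execution time as a function of configuration parameters from observed benchmark runs, then predict times 'a priori' for unseen configurations. The features and synthetic runtime below are illustrative stand-ins for the real ALOJA parameters.

```python
# Minimal sketch: regression over observed benchmark runs predicts execution
# times for configurations that have never been executed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
# Observed runs: [n_mappers, block_size_mb, compression_on, uses_ssd]
X = np.column_stack([rng.integers(2, 32, 300),
                     rng.choice([64, 128, 256], 300),
                     rng.integers(0, 2, 300),
                     rng.integers(0, 2, 300)])
y = 200 + 1000 / X[:, 0] + X[:, 1] * 0.5 - 100 * X[:, 3] \
    + rng.normal(0, 10, 300)                      # synthetic runtimes (s)

model = RandomForestRegressor(n_estimators=100).fit(X, y)
new_configs = [[16, 128, 1, 1], [4, 256, 0, 0]]
print(model.predict(new_configs))  # predicted execution times, 'a priori'
```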
doi:10.1145/2783258.2788600 dblp:conf/kdd/BerralPCCRG15 fatcat:3y7pnkbwxvbzjodjfwhm4ckjla

Sequence-to-sequence models for workload interference prediction on batch processing datacenters

David Buchaca, Joan Marcual, Josep LLuis Berral, David Carrera
2020 Future generations computer systems  
Berral and D.  ... 
doi:10.1016/j.future.2020.03.058 fatcat:vw33tgjwdjfahfxfq7crf5fpqe

ALOJA: A Framework for Benchmarking and Predictive Analytics in Hadoop Deployments

Josep Lluis Berral, Nicolas Poggi, David Carrera, Aaron Call, Rob Reinauer, Daron Green
2017 IEEE Transactions on Emerging Topics in Computing  
This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance data and tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository, featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow an analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system allowing knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. This also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data sets and framework to improve the design and deployment of Big Data applications.
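
The model-based anomaly detection the abstract mentions reduces to comparing observed behavior against the model's prediction; a minimal sketch, with an illustrative 30% tolerance:

```python
# Minimal sketch: flag a run as anomalous when its observed execution time
# deviates from the model's prediction by more than a tolerance.
def is_anomalous(predicted_s, observed_s, tolerance=0.30):
    """Flag runs whose observed time deviates >30% from the prediction."""
    return abs(observed_s - predicted_s) / predicted_s > tolerance

print(is_anomalous(predicted_s=420.0, observed_s=610.0))  # True: investigate
print(is_anomalous(predicted_s=420.0, observed_s=450.0))  # False: as expected
```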
doi:10.1109/tetc.2015.2496504 fatcat:7kpa5wvwfzfs3jtd6aqjfbq5du

Web Customer Modeling for Automated Session Prioritization on High Traffic Sites [chapter]

Nicolas Poggi, Toni Moreno, Josep Lluis Berral, Ricard Gavaldà, Jordi Torres
Lecture Notes in Computer Science  
In the Web environment, user identification is becoming a major challenge for admission control systems on high-traffic sites. When a web server is overloaded there is a significant loss of throughput when we compare finished sessions against the number of responses per second; longer sessions are usually the ones ending in sales, but also the most sensitive to load failures. Session-based admission control systems maintain a high QoS for a limited number of sessions, but do not maximize revenue, as they treat all non-logged sessions the same. We present a novel method for learning to assign priorities to sessions according to the revenue they will generate. For this, we use traditional machine learning techniques and Markov-chain models. We are able to train a system to estimate the probability of a user's purchasing intentions according to their early navigation clicks and other static information. The predictions can be used by admission control systems to prioritize sessions, or to deny them if no resources are available, thus improving sales throughput per unit of time for a given infrastructure. We test our approach on access logs obtained from a high-traffic online travel agency, with promising results.
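
The prioritization step can be sketched as follows: given each session's predicted purchase probability (e.g., from a Markov-chain model as in the AUGURES entry above) and an expected basket value, admit the highest-expected-revenue sessions up to capacity. The session list and capacity are illustrative.

```python
# Minimal sketch: rank sessions by expected revenue and admit up to capacity.
sessions = [  # (session_id, predicted_purchase_probability, expected_basket)
    ("s1", 0.02, 120.0),
    ("s2", 0.30, 80.0),
    ("s3", 0.10, 200.0),
]
CAPACITY = 2  # concurrent sessions the overloaded server can sustain

ranked = sorted(sessions, key=lambda s: s[1] * s[2], reverse=True)
admitted = [s[0] for s in ranked[:CAPACITY]]
print(admitted)  # ['s2', 's3'] -- highest expected revenue first
```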
doi:10.1007/978-3-540-73078-1_63 fatcat:vwq67tk7frboxdv5xiwc3at5su

Comparison of Phase-less Direction of Arrival Estimation Methods for Switched Beam Antennas

A. Cidronali, G. Collodi, M. Lucarelli, S. Maddio, M. Passafiume, G. Pelosi, S. Selleri
2019 2019 PhotonIcs & Electromagnetics Research Symposium - Spring (PIERS-Spring)  
Publica de Navarra); Miguel Navarro-Cia (Imperial College London); 12:10 Invited Circuit Models for Classical Electromagnetic Analogs of Electromagnetically Induced Transparency Raul Rodriguez-Berral  ...  Nantakan Wongkasem (University of Texas Rio Grande Valley); 11:00 Coffee Break Session 2A12b Trends in Metasurfaces: New Materials and Applications Tuesday AM, June 18, 2019 Room 21 -2nd Floor Organized by Josep  ... 
doi:10.1109/piers-spring46901.2019.9017336 fatcat:adobca2sg5ga5ccjpuepkrpbsa