
Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements [article]

Torin McDonald, Will Usher, Nate Morrical, Attila Gyulassy, Steve Petruzza, Frederick Federer, Alessandra Angelucci, Valerio Pascucci
2020 arXiv   pre-print
• Steve Petruzza is with the SCI Institute, University of Utah, and Utah State University. • Frederick Federer and Alessandra Angelucci are with the Moran Eye Institute, University of Utah. ...
arXiv:2009.01891v1

ISAVS

Steve Petruzza, Aniketh Venkat, Attila Gyulassy, Giorgio Scorzelli, Frederick Federer, Alessandra Angelucci, Valerio Pascucci, Peer-Timo Bremer
2017 SIGGRAPH Asia 2017 Symposium on Visualization on - SA '17  
doi:10.1145/3139295.3139299 pmid:30148289 pmcid:PMC6105268 dblp:conf/siggraph/PetruzzaVGSFAPB17

Distributed Resources for the Earth System Grid Advanced Management (DREAM) [article]

Luca Cinquini, Steve Petruzza, Jason Jerome Boutte, Sasha Ames, Ghaleb Abdulla, Venkatramani Balaji, Robert Ferraro, Aparna Radhakrishnan, Laura Carriere, Thomas Maxwell, Giorgio Scorzelli, Valerio Pascucci
2020 arXiv   pre-print
The DREAM project was funded more than three years ago to design and implement a next-generation ESGF (Earth System Grid Federation [1]) architecture suitable for managing and accessing data and service resources in a distributed and scalable environment. In particular, the project intended to focus on the computing and visualization capabilities of the stack, which at the time were rather primitive. At the beginning, the team had the general notion that a better ESGF architecture could be built by modularizing each component and redefining its interaction with other components through a well-defined, exposed API. Although this remained the high-level principle guiding the work, the DREAM project was able to accomplish its goals by leveraging new practices in IT that emerged only three or four years ago: the advent of containerization technologies (specifically, Docker), the development of frameworks to manage containers at scale (Docker Swarm and Kubernetes), and their application to the commercial cloud. Thanks to these new technologies, DREAM was able to improve the ESGF architecture (including its computing and visualization services) to a level of deployability and scalability beyond the original expectations.
arXiv:2004.09599v1
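
The abstract above attributes DREAM's gains to modularizing each ESGF component behind a well-defined API and packaging the pieces with Docker, Docker Swarm, and Kubernetes. As a rough sketch of what one such modular component might look like, the Python snippet below exposes a tiny JSON API over HTTP using only the standard library; the /search and /health endpoints, the port, and the catalog contents are illustrative assumptions, not the actual DREAM or ESGF interfaces.

# Minimal sketch of a modular, container-friendly service (illustrative only;
# not the DREAM/ESGF code). A component like this can be built into a Docker
# image and scheduled by Docker Swarm or Kubernetes.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder catalog; a real search component would query a metadata index.
CATALOG = {"datasets": ["cmip6.tas.monthly", "cmip6.pr.daily"]}


class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = {"status": "ok"}          # liveness probe for the orchestrator
        elif self.path.startswith("/search"):
            body = CATALOG                   # component-local, well-defined API
        else:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable from outside its container.
    HTTPServer(("0.0.0.0", 8080), SearchHandler).serve_forever()

Because each component hides its internals behind an HTTP API like this, the orchestrator can scale, replace, or relocate it independently of the rest of the stack, which is the deployability property the abstract highlights.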

Scalable Data Management of the Uintah Simulation Framework for Next-Generation Engineering Problems with Radiation [chapter]

Sidharth Kumar, Alan Humphrey, Will Usher, Steve Petruzza, Brad Peterson, John A. Schmidt, Derek Harris, Ben Isaac, Jeremy Thornock, Todd Harman, Valerio Pascucci, Martin Berzins
2018 Lecture Notes in Computer Science  
The need to scale next-generation industrial engineering problems to the largest computational platforms presents unique challenges. This paper focuses on data-management problems faced by the Uintah simulation framework at a production scale of 260K processes. Uintah provides a highly scalable asynchronous many-task runtime system, which in this work is used to model a 1000 megawatt electric (MWe) ultra-supercritical (USC) coal boiler. At 260K processes, we faced both parallel I/O and visualization challenges; for example, the default file-per-process I/O approach of Uintah did not scale on Mira. In this paper we present a simple-to-implement, restructuring-based parallel I/O technique. We impose a restructuring step that alters the distribution of data among processes. The goal is to distribute the dataset such that each process holds a larger chunk of data, which is then written to a file independently. This approach finds a middle ground between two of the most common parallel I/O schemes, file-per-process I/O and shared-file I/O, in terms of both the total number of generated files and the extent of communication involved during the data aggregation phase. To address scalability issues when visualizing the simulation data, we developed a lightweight renderer using OSPRay, which allows scientists to visualize the data interactively at high quality and to make production movies. Finally, this work presents a highly efficient and scalable radiation model based on the sweeping method, which significantly outperforms previous approaches in Uintah, such as discrete ordinates. The integrated approach allowed the USC boiler problem to run on 260K CPU cores on Mira. S. Kumar and A. Humphrey contributed equally to this work.
doi:10.1007/978-3-319-69953-0_13
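
To make the restructuring-based I/O idea concrete, the sketch below shows the general two-phase aggregation pattern, assuming mpi4py and NumPy are available: ranks are grouped, each group's data is restructured onto one aggregator, and the aggregator writes a single larger file independently. The group size, array contents, and file names are invented for illustration; this is not the Uintah/PIDX implementation.

# Sketch of restructuring-based I/O aggregation (illustrative assumptions:
# mpi4py + NumPy; run with mpiexec). Every AGG_SIZE ranks form a group, so a
# run with P processes writes roughly P/AGG_SIZE files instead of one file
# per process or a single shared file.
import numpy as np
from mpi4py import MPI

AGG_SIZE = 8  # ranks per aggregation group (tunable assumption)

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank's local simulation output (placeholder data).
local = np.full(1024, rank, dtype=np.float64)

# Split COMM_WORLD into aggregation groups; local rank 0 of each group aggregates.
group_id = rank // AGG_SIZE
group = comm.Split(color=group_id, key=rank)

# Phase 1 (restructure): move the group's chunks onto the aggregator, so it
# holds one larger, contiguous piece of the dataset.
chunks = group.gather(local, root=0)

# Phase 2 (write): the aggregator writes its group's data to its own file,
# independently of all other groups.
if group.Get_rank() == 0:
    np.concatenate(chunks).tofile(f"output_group_{group_id:04d}.bin")

group.Free()

At 260K processes with AGG_SIZE = 8, this pattern would produce roughly 32K files, each written by a single process, which illustrates the middle ground between file-per-process and shared-file I/O described in the abstract.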

Table of Contents

2019 2019 15th International Conference on eScience (eScience)  
... Petruzza (University of Utah), Ilya Baldin (University of North Carolina), Laura Christopherson (University of North Carolina), Ryan Mitchell (University of Southern California), Loic Pottier (University ...), ... (Battelle Ecology, Inc.), Christine Laney (Battelle Ecology, Inc.), Ivan Lobo-Padilla (Battelle Ecology, Inc.), Jeremy Sampson (Battelle Ecology, Inc.), John Staarmann (Battelle Ecology, Inc.), and Steve ...
doi:10.1109/escience.2019.00004