
Sustainability, Transformational Leadership, and Social Entrepreneurship

Etayankara Muralidharan, Saurav Pathak
2018 Sustainability  
This article examines the extent to which culturally endorsed transformational leadership theories (CLTs) and the sustainability of society, both considered societal level institutional indicators, impact the emergence of social entrepreneurship. Using 107,738 individual-level responses from 27 countries for the year 2009 obtained from the Global Entrepreneurship Monitor (GEM) survey, and supplementing with country-level data obtained from Global Leadership and Organizational Behavior Effectiveness (GLOBE) and Sustainability Society Foundation (SSF), our findings from multilevel analysis show that transformational CLTs and sustainability conditions of society positively influence the likelihood of individuals becoming social entrepreneurs. Further, the effectiveness of transformational CLTs matters more for social entrepreneurship when the sustainability of society is low, which suggests an interaction between cultural leadership styles and societal sustainability. This article contributes to comparative entrepreneurship research by introducing strong cultural antecedents of social entrepreneurship in transformational CLTs and societal sustainability. We discuss various implications and limitations of our study, and we suggest directions for future research.
doi:10.3390/su10020567 fatcat:r6gp5y742vf2xh6qzeixu3as5a

GLOBE Leadership Dimensions: Implications for Cross-Country Entrepreneurship Research

Etayankara Muralidharan, Saurav Pathak
2018 AIB Insights  
In particular, a recent study (Muralidharan & Pathak, 2018) reports negative moderation effects between transformational CLTs (constructed as a composite out of charismatic, humane-oriented and team-oriented  ... 
doi:10.46697/001c.16839 fatcat:d3fhejrmjvd5la27t7t7gc7v2u

Architecture-Adaptive Code Variant Tuning

Saurav Muralidharan, Amit Roy, Mary Hall, Michael Garland, Piyush Rai
2016 SIGPLAN notices  
Code variants represent alternative implementations of a computation, and are common in high-performance libraries and applications to facilitate selecting the most appropriate implementation for a specific execution context (target architecture and input dataset). Automating code variant selection typically relies on machine learning to construct a model during an offline learning phase that can be quickly queried at runtime once the execution context is known. In this paper, we define a new approach called architecture-adaptive code variant tuning, where the variant selection model is learned on a set of source architectures, and then used to predict variants on a new target architecture without having to repeat the training process. We pose this as a multi-task learning problem, where each source architecture corresponds to a task; we use device features in the construction of the variant selection model. This work explores the effectiveness of multi-task learning and the impact of different strategies for device feature selection. We evaluate our approach on a set of benchmarks and a collection of six NVIDIA GPU architectures from three distinct generations. We achieve performance results that are mostly comparable to the previous approach of tuning for a single GPU architecture without having to repeat the learning phase.
doi:10.1145/2954679.2872411 fatcat:uu72lp6lkjdxxbcv4eh4tybq6y
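
As a rough illustration of the idea in the abstract above, here is a minimal, hypothetical Python sketch: device features are folded into the variant-selection model so that training on source GPUs can inform predictions on an unseen target GPU. The synthetic data, feature names, and use of scikit-learn are assumptions, and the single pooled classifier is only a loose stand-in for the paper's multi-task formulation.

```python
# Hypothetical sketch of architecture-adaptive variant selection.
# Assumptions: synthetic training data, made-up device/input features,
# and scikit-learn as the learning backend (not the paper's actual setup).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Device features per source architecture (e.g., SM count, clock GHz, bandwidth GB/s).
SOURCE_DEVICES = {
    "gpu_a": [14, 1.1, 208.0],
    "gpu_b": [15, 0.9, 288.0],
}

def make_training_set(per_device_samples):
    """Concatenate input features with device features so a single model
    covers all source architectures (a pooled approximation of multi-task
    learning, where each architecture would be its own task)."""
    X, y = [], []
    for device, samples in per_device_samples.items():
        for input_features, best_variant in samples:
            X.append(list(input_features) + SOURCE_DEVICES[device])
            y.append(best_variant)
    return np.array(X), np.array(y)

# Synthetic per-device training data: (input features, best variant index).
training_data = {
    "gpu_a": [((1e5, 0.01), 0), ((1e7, 0.50), 1)],
    "gpu_b": [((1e5, 0.01), 0), ((1e7, 0.50), 2)],
}

X, y = make_training_set(training_data)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Predict the best variant for an unseen target GPU using only its device
# features and the input's features -- no retraining on the target.
target_device = [56, 1.4, 732.0]          # hypothetical new architecture
new_input = [1e6, 0.10]
predicted_variant = model.predict([new_input + target_device])[0]
print("selected variant:", predicted_variant)
```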

A Programmable Approach to Model Compression [article]

Vinu Joseph, Saurav Muralidharan, Animesh Garg, Michael Garland, Ganesh Gopalakrishnan
2019 arXiv   pre-print
Deep neural networks frequently contain far more weights, represented at a higher precision, than are required for the specific task which they are trained to perform. Consequently, they can often be compressed using techniques such as weight pruning and quantization that reduce both model size and inference time without appreciable loss in accuracy. Compressing models before they are deployed can therefore result in significantly more efficient systems. However, while the results are desirable, finding the best compression strategy for a given neural network, target platform, and optimization objective often requires extensive experimentation. Moreover, finding optimal hyperparameters for a given compression strategy typically results in even more expensive, frequently manual, trial-and-error exploration. In this paper, we introduce a programmable system for model compression called Condensa. Users programmatically compose simple operators, in Python, to build complex compression strategies. Given a strategy and a user-provided objective, such as minimization of running time, Condensa uses a novel sample-efficient constrained Bayesian optimization algorithm to automatically infer desirable sparsity ratios. Our experiments on three real-world image classification and language modeling tasks demonstrate memory footprint reductions of up to 65x and runtime throughput improvements of up to 2.22x using at most 10 samples per search. We have released a reference implementation of Condensa at https://github.com/NVlabs/condensa.
arXiv:1911.02497v1 fatcat:mrd3phetvjgptbtz4gjg77dmcm
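
The abstract above describes composing simple compression operators in Python into larger strategies. The sketch below illustrates that style of composition with two toy PyTorch operators (magnitude pruning and a crude weight quantization); it does not reproduce Condensa's actual API or its Bayesian optimization component, and the operator names and toy model are assumptions for illustration.

```python
# Illustrative sketch of composing compression operators in Python.
# This is NOT Condensa's actual API (see the linked repository for that);
# the operator names and the toy model below are assumptions.
import torch
import torch.nn as nn

def prune_magnitude(model, sparsity=0.5):
    """Zero out the smallest-magnitude weights in every Linear layer."""
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight.data
            threshold = w.abs().flatten().kthvalue(
                int(sparsity * w.numel())).values
            module.weight.data = torch.where(
                w.abs() > threshold, w, torch.zeros_like(w))
    return model

def quantize_weights(model, bits=8):
    """Crude uniform quantization of Linear weights (illustration only)."""
    levels = 2 ** bits - 1
    for module in model.modules():
        if isinstance(module, nn.Linear):
            w = module.weight.data
            scale = (w.max() - w.min()) / levels
            module.weight.data = torch.round((w - w.min()) / scale) * scale + w.min()
    return model

def compose(*operators):
    """Build a compression strategy by chaining simple operators."""
    def strategy(model):
        for op in operators:
            model = op(model)
        return model
    return strategy

# Usage: a strategy is just a composition of operators applied to a model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
strategy = compose(lambda m: prune_magnitude(m, sparsity=0.7), quantize_weights)
compressed = strategy(model)
```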

Consequences of Cultural Leadership Styles for Social Entrepreneurship: A Theoretical Framework

Etayankara Muralidharan, Saurav Pathak
2019 Sustainability  
The purpose of this conceptual article is to understand how the interplay of national-level institutions of culturally endorsed leadership styles, government effectiveness, and societal trust affects individual likelihood to become social entrepreneurs. We present an institutional framework comprising cultural leadership styles (normative institutions), government effectiveness (regulatory institutions), and societal trust (cognitive institutions) to predict individual likelihood of social entrepreneurship. Using the insight of culture–entrepreneurship fit and drawing on the institutional configuration perspective, we posit that culturally endorsed implicit leadership theories (CLTs) of charismatic and participatory leadership positively impact the likelihood of individuals becoming social entrepreneurs. Further, we posit that this impact is particularly pronounced when a country's regulatory quality, manifested by government effectiveness, is supportive of social entrepreneurship and when there exist high levels of societal trust. Research on CLTs and their impact on entrepreneurial behavior is limited. We contribute to comparative entrepreneurship research by introducing a cultural antecedent of social entrepreneurship in CLTs and through a deeper understanding of their interplay with national-level institutions to draw the boundary conditions of our framework.
doi:10.3390/su11040965 fatcat:kw6dthi6jraovbcupgzj4l3xfe

Architecture-Adaptive Code Variant Tuning

Saurav Muralidharan, Amit Roy, Mary Hall, Michael Garland, Piyush Rai
2016 SIGARCH Computer Architecture News  
Code variants represent alternative implementations of a computation, and are common in high-performance libraries and applications to facilitate selecting the most appropriate implementation for a specific execution context (target architecture and input dataset). Automating code variant selection typically relies on machine learning to construct a model during an offline learning phase that can be quickly queried at runtime once the execution context is known. In this paper, we define a new approach called architecture-adaptive code variant tuning, where the variant selection model is learned on a set of source architectures, and then used to predict variants on a new target architecture without having to repeat the training process. We pose this as a multi-task learning problem, where each source architecture corresponds to a task; we use device features in the construction of the variant selection model. This work explores the effectiveness of multi-task learning and the impact of different strategies for device feature selection. We evaluate our approach on a set of benchmarks and a collection of six NVIDIA GPU architectures from three distinct generations. We achieve performance results that are mostly comparable to the previous approach of tuning for a single GPU architecture without having to repeat the learning phase.
doi:10.1145/2980024.2872411 fatcat:greecv2bcfemjcdr5ludajncvi

A collection-oriented programming model for performance portability

Saurav Muralidharan, Michael Garland, Bryan Catanzaro, Albert Sidelnik, Mary Hall
2015 SIGPLAN notices  
This paper describes Surge, a collection-oriented programming model that enables programmers to compose parallel computations using nested high-level data collections and operators. Surge exposes a code generation interface, decoupled from the core computation, that enables programmers and autotuners to easily generate multiple implementations of the same computation on various parallel architectures such as multi-core CPUs and GPUs. By decoupling computations from architecture-specific implementation, programmers can target multiple architectures more easily, and generate a search space that facilitates optimization and customization for specific architectures. We express four real-world benchmarks from domains such as sparse linear algebra and machine learning in Surge and, from the same performance-portable specification, generate OpenMP and CUDA C++ implementations. Surge generates efficient, scalable code which achieves up to 1.32x speedup over handcrafted, well-optimized CUDA code.
doi:10.1145/2858788.2688537 fatcat:lzbucu7dgbdkrhwpduxva5cgaq
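
As a loose illustration of the decoupling described in the abstract above, the Python sketch below writes one computation (a sparse matrix–vector product) against abstract map/reduce operators over nested collections and runs it on two interchangeable backends. Surge itself is a C++ system that generates OpenMP and CUDA code; the backend classes and names here are illustrative assumptions only.

```python
# A toy Python stand-in for the collection-oriented idea: the computation
# is written once against abstract map/reduce operators, and separate
# "backends" decide how it actually runs. This is not Surge's C++ interface.
from concurrent.futures import ThreadPoolExecutor

def spmv(backend, rows, x):
    """Sparse matrix-vector product as nested map/reduce: for each row
    (a collection of (col, val) pairs), reduce over the products."""
    return backend.map(
        lambda row: backend.reduce(lambda acc, cv: acc + cv[1] * x[cv[0]], row, 0.0),
        rows)

class SerialBackend:
    def map(self, f, xs):
        return [f(v) for v in xs]
    def reduce(self, f, xs, init):
        acc = init
        for v in xs:
            acc = f(acc, v)
        return acc

class ThreadedBackend(SerialBackend):
    """Same semantics, different implementation of the outer map."""
    def map(self, f, xs):
        with ThreadPoolExecutor() as pool:
            return list(pool.map(f, xs))

# The same specification runs on either backend unchanged.
rows = [[(0, 2.0), (2, 1.0)], [(1, 3.0)]]   # CSR-like nested collection
x = [1.0, 2.0, 3.0]
print(spmv(SerialBackend(), rows, x))        # [5.0, 6.0]
print(spmv(ThreadedBackend(), rows, x))      # [5.0, 6.0]
```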

Architecture-Adaptive Code Variant Tuning

Saurav Muralidharan, Amit Roy, Mary Hall, Michael Garland, Piyush Rai
2016 ACM SIGOPS Operating Systems Review  
Code variants represent alternative implementations of a computation, and are common in high-performance libraries and applications to facilitate selecting the most appropriate implementation for a specific execution context (target architecture and input dataset). Automating code variant selection typically relies on machine learning to construct a model during an offline learning phase that can be quickly queried at runtime once the execution context is known. In this paper, we define a new approach called architecture-adaptive code variant tuning, where the variant selection model is learned on a set of source architectures, and then used to predict variants on a new target architecture without having to repeat the training process. We pose this as a multi-task learning problem, where each source architecture corresponds to a task; we use device features in the construction of the variant selection model. This work explores the effectiveness of multi-task learning and the impact of different strategies for device feature selection. We evaluate our approach on a set of benchmarks and a collection of six NVIDIA GPU architectures from three distinct generations. We achieve performance results that are mostly comparable to the previous approach of tuning for a single GPU architecture without having to repeat the learning phase.
doi:10.1145/2954680.2872411 fatcat:ly4zo74uafgynpqpknfx32hjqe

Going Beyond Classification Accuracy Metrics in Model Compression [article]

Vinu Joseph, Shoaib Ahmed Siddiqui, Aditya Bhaskara, Ganesh Gopalakrishnan, Saurav Muralidharan, Michael Garland, Sheraz Ahmed, Andreas Dengel
2021 arXiv   pre-print
IEEE Micro, 40(5):17–25, 2020. [31] Vinu Joseph, Saurav Muralidharan, and Michael Garland. Condensa: Programmable model compression. https://nvlabs.github.io/condensa/, 2019.  ...  In 2019 32nd International Conference on VLSI Design and 2019 18th International Conference on Embedded Systems (VLSID), pages 215–220, 2019. [30] Vinu Joseph, Ganesh L Gopalakrishnan, Saurav Muralidharan  ... 
arXiv:2012.01604v2 fatcat:76iy4ehbpneixb6bzo7vprwi7q

A Two-Staged Approach to Technology Entrepreneurship: Differential Effects of Intellectual Property Rights

Saurav Pathak, Etayankara Muralidharan
2020 Technology Innovation Management Review  
, 2016; Muralidharan & Pathak, 2017; Muralidharan & Pathak, 2018) .  ...  Mediation Model A Two-Staged Approach to Technology Entrepreneurship: Differential Effects of Intellectual Property Rights Saurav Pathak & Etayankara Muralidharan Implications for policy and practice  ... 
doi:10.22215/timreview/1364 fatcat:sokx2kpj5vbdvbv2hcbudsnrua

Nitro: A Framework for Adaptive Code Variant Tuning

Saurav Muralidharan, Manu Shantharam, Mary Hall, Michael Garland, Bryan Catanzaro
2014 2014 IEEE 28th International Parallel and Distributed Processing Symposium  
Autotuning systems intelligently navigate a search space of possible implementations of a computation to find the implementation(s) that best meet a specific optimization criterion, usually performance. This paper describes Nitro, a programmer-directed autotuning framework that facilitates tuning of code variants, or alternative implementations of the same computation. Nitro provides a library interface that permits programmers to express code variants along with meta-information that aids the system in selecting among the set of variants at run time. Machine learning is employed to build a model through training on this meta-information, so that when a new input is presented, Nitro can consult the model to select the appropriate variant. In experiments with five real-world irregular GPU benchmarks from sparse numerical methods, graph computations and sorting, Nitro-tuned variants achieve over 93% of the performance of variants selected through exhaustive search. Further, we describe optimizations and heuristics in Nitro that substantially reduce training time and other overheads.
doi:10.1109/ipdps.2014.59 dblp:conf/ipps/MuralidharanSHGC14 fatcat:km5kdegmkrfg3cofmudmm23bkm
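
The abstract above outlines a variant-tuning workflow: register alternative implementations plus meta-information, train a model offline, and consult it at run time. The sketch below is a hypothetical Python analogue of that workflow, not Nitro's actual C++ library interface; the classifier choice, feature function, and toy variants are all assumptions.

```python
# Hypothetical sketch of programmer-directed variant tuning.
# Not Nitro's real interface; every name below is illustrative.
import time
from sklearn.tree import DecisionTreeClassifier

class VariantTuner:
    def __init__(self, variants, feature_fn):
        self.variants = variants          # list of callables
        self.feature_fn = feature_fn      # maps an input to a feature vector
        self.model = DecisionTreeClassifier()

    def train(self, training_inputs):
        """Offline phase: time every variant on every training input and
        learn which variant index wins for which feature vector."""
        X, y = [], []
        for inp in training_inputs:
            timings = []
            for variant in self.variants:
                start = time.perf_counter()
                variant(inp)
                timings.append(time.perf_counter() - start)
            X.append(self.feature_fn(inp))
            y.append(min(range(len(timings)), key=timings.__getitem__))
        self.model.fit(X, y)

    def __call__(self, inp):
        """Runtime phase: consult the model, then run the chosen variant."""
        idx = self.model.predict([self.feature_fn(inp)])[0]
        return self.variants[idx](inp)

# Example: two (contrived) sort variants selected by input size.
tuner = VariantTuner(
    variants=[lambda xs: sorted(xs),
              lambda xs: sorted(xs, reverse=True)[::-1]],
    feature_fn=lambda xs: [len(xs)])
tuner.train([list(range(n, 0, -1)) for n in (10, 100, 1000, 5000)])
result = tuner([3, 1, 2])
```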

A collection-oriented programming model for performance portability

Saurav Muralidharan, Michael Garland, Bryan Catanzaro, Albert Sidelnik, Mary Hall
2015 Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming - PPoPP 2015  
This paper describes Surge, a collection-oriented programming model that enables programmers to compose parallel computations using nested high-level data collections and operators. Surge exposes a code generation interface, decoupled from the core computation, that enables programmers and autotuners to easily generate multiple implementations of the same computation on various parallel architectures such as multi-core CPUs and GPUs. By decoupling computations from architecture-specific implementation, programmers can target multiple architectures more easily, and generate a search space that facilitates optimization and customization for specific architectures. We express four real-world benchmarks from domains such as sparse linear algebra and machine learning in Surge and, from the same performance-portable specification, generate OpenMP and CUDA C++ implementations. Surge generates efficient, scalable code which achieves up to 1.32x speedup over handcrafted, well-optimized CUDA code.
doi:10.1145/2688500.2688537 dblp:conf/ppopp/MuralidharanGCSH15 fatcat:itm5vile45f7zaxmdmckqk6rtq

Architecture-Adaptive Code Variant Tuning

Saurav Muralidharan, Amit Roy, Mary Hall, Michael Garland, Piyush Rai
2016 Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS '16  
Code variants represent alternative implementations of a computation, and are common in high-performance libraries and applications to facilitate selecting the most appropriate implementation for a specific execution context (target architecture and input dataset). Automating code variant selection typically relies on machine learning to construct a model during an offline learning phase that can be quickly queried at runtime once the execution context is known. In this paper, we define a new approach called architecture-adaptive code variant tuning, where the variant selection model is learned on a set of source architectures, and then used to predict variants on a new target architecture without having to repeat the training process. We pose this as a multi-task learning problem, where each source architecture corresponds to a task; we use device features in the construction of the variant selection model. This work explores the effectiveness of multi-task learning and the impact of different strategies for device feature selection. We evaluate our approach on a set of benchmarks and a collection of six NVIDIA GPU architectures from three distinct generations. We achieve performance results that are mostly comparable to the previous approach of tuning for a single GPU architecture without having to repeat the learning phase.
doi:10.1145/2872362.2872411 dblp:conf/asplos/MuralidharanRHG16 fatcat:y3hhmp53avfivnuekajgwx2cwq

Abstractions and strategies for adaptive programming

Saurav Muralidharan
2018
The University of Utah Graduate School STATEMENT OF DISSERTATION APPROVAL The dissertation of Saurav Muralidharan has been approved by the following supervisory committee members  ... 
doi:10.26053/0h-tzat-f800 fatcat:7bi3byv3kncxznma4lwdh46wve

Editorial: Insights

Stoyan Tanev, Gregory Sandstrom
2020 Technology Innovation Management Review  
The issue starts with a paper by Saurav Pathak & Etayankara Muralidharan, "A Two-Staged Approach to Technology Entrepreneurship: Differential Effects of Intellectual Property Rights".  ... 
doi:10.22215/timreview/1363 fatcat:6mtl2ojearfdxafdrlyopxd77e
Showing results 1 — 15 out of 28 results