A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2011; you can also visit the original URL. The file type is application/pdf.
Adaptive set pinning
2008
Proceedings of the 13th international conference on Architectural support for programming languages and operating systems - ASPLOS XIII
As part of the trend towards Chip Multiprocessors (CMPs) for the next leap in computing performance, many architectures have explored sharing the last level of cache among different processors for better ...
Furthermore, we show that an adaptive set pinning scheme improves over the benefits obtained by the set pinning scheme by significantly reducing the number of off-chip accesses. ...
Reducing off-chip accesses is the key to a successful shared cache management scheme in a CMP with large shared L2/L3 cache [16]. ...
doi:10.1145/1346281.1346299
dblp:conf/asplos/SrikantaiahKI08
fatcat:dylz67u7bbhp7kp44fegot35fm
Adaptive set pinning
2008
ACM SIGOPS Operating Systems Review
As part of the trend towards Chip Multiprocessors (CMPs) for the next leap in computing performance, many architectures have explored sharing the last level of cache among different processors for better ...
Furthermore, we show that an adaptive set pinning scheme improves over the benefits obtained by the set pinning scheme by significantly reducing the number of off-chip accesses. ...
Reducing off-chip accesses is the key to a successful shared cache management scheme in a CMP with large shared L2/L3 cache [16]. ...
doi:10.1145/1353535.1346299
fatcat:lai53txnvza45af3usd4mc3ncm
Adaptive set pinning
2008
SIGARCH Computer Architecture News
As part of the trend towards Chip Multiprocessors (CMPs) for the next leap in computing performance, many architectures have explored sharing the last level of cache among different processors for better ...
Furthermore, we show that an adaptive set pinning scheme improves over the benefits obtained by the set pinning scheme by significantly reducing the number of off-chip accesses. ...
Reducing off-chip accesses is the key to a successful shared cache management scheme in a CMP with large shared L2/L3 cache [16]. ...
doi:10.1145/1353534.1346299
fatcat:nec5zo4lvrfghnitpdsryxdave
Adaptive set pinning
2008
SIGPLAN Notices
As part of the trend towards Chip Multiprocessors (CMPs) for the next leap in computing performance, many architectures have explored sharing the last level of cache among different processors for better ...
Furthermore, we show that an adaptive set pinning scheme improves over the benefits obtained by the set pinning scheme by significantly reducing the number of off-chip accesses. ...
Reducing off-chip accesses is the key to a successful shared cache management scheme in a CMP with large shared L2/L3 cache [16]. ...
doi:10.1145/1353536.1346299
fatcat:6hctydfbwnet5j5cg45kminvb4
Adaptive Block Pinning Based: Dynamic Cache Partitioning for Multi-core Architectures
2010
International Journal of Computer Science & Information Technology (IJCSIT)
a novel partitioning scheme known as Adaptive Block Pinning, which would result in better utilization of the cache resources in CMPs. ...
The widening speed gap between processors and memory, along with limited on-chip memory bandwidth, makes last-level cache utilization a crucial factor in designing future multicore processors ...
[6] proposed a novel dynamic partitioning scheme for managing shared caches in Chip Multi Processors, known as Adaptive Set Pinning. ...
doi:10.5121/ijcsit.2010.2604
fatcat:tbgzbr2xvjcjhegy52ffylargq
Winning with Pinning in NoC
2009
2009 17th IEEE Symposium on High Performance Interconnects
In Chip Multiprocessors (CMPs), the on-chip interconnect carries data and coherence traffic exchanged between on-chip cache banks. ...
In this paper, we explore circuit pinning, an efficient way of establishing circuits that promotes higher circuit utilization, adapts to changes in communication characteristics, simplifies network control ...
Since cache organizations as well as cache sizes affect the communication traffic on chip, we demonstrate the benefits of circuit pinning through a variety of configurations. ...
doi:10.1109/hoti.2009.15
dblp:conf/hoti/AbousamraMJ09
fatcat:yx6v5t3qw5grjfxladscd3c7ya
Adaptive Zone-Aware Multi-bank on Chip last level L2 Cache Partitioning for Chip Multiprocessors
2010
International Journal of Computer Applications
In this paper, we have discussed a novel dynamic partitioning scheme known as Adaptive Block Pinning which attempts to partition the last-level shared cache in a judicious and optimal manner, thereby increasing ...
Beckmann and Wood [1] gathered the current proposals for managing wire delays and combined them with Chip Multiprocessors. ...
doi:10.5120/1131-1482
fatcat:yunmjobs7fdw5aqx5bxrjszxfu
Study of Various Factors Affecting Performance of Multi-Core Processors
2013
International Journal of Distributed and Parallel systems
64 to 2048 entries on a 4-node, 8-node, 16-node, and 64-node chip multiprocessor, which in turn presents an open area of research on multicore processors with private/shared last-level cache as the future ...
On-chip cache memory is a resource of primary concern, as it can be dominant in controlling overall throughput. ...
Further, the performance gap between processors and memory requires adaptive novel techniques to manage on-chip cache memory judiciously. Figure 2: Single Chip Multiprocessor (CMP) ...
doi:10.5121/ijdps.2013.4404
fatcat:ipcaejvdybaipejw5ehavdham4
Understanding the applicability of CMP performance optimizations on data mining applications
2009
2009 IEEE International Symposium on Workload Characterization (IISWC)
In this paper, we examine the data usage characteristics of a set of parallel data mining applications to determine the applicability of existing chip multiprocessor approaches to these applications. ...
A variety of cache organizations, data management techniques, and hardware optimizations that take advantage of specific data characteristics have been developed to improve application performance. ...
This work was supported in part by the National Science Foundation under grant CCF-0702689. ...
doi:10.1109/iiswc.2009.5306779
dblp:conf/iiswc/JibajaS09
fatcat:5d5ly3sgl5f3lfiviic7oveoxe
Energy reduction in multiprocessor systems using transactional memory
2005
ISLPED '05. Proceedings of the 2005 International Symposium on Low Power Electronics and Design, 2005.
In this work we focus on new energy consumption issues unique to multiprocessor systems: synchronization of accesses to shared memory. ...
We investigate and compare different means of providing atomic access to shared memory, including locks and lock-free synchronization (i.e., transactional memory), with respect to energy as well as performance ...
The first set of bars shows DL1 cache accesses, the second set of bars DL2 cache accesses, and the third set of bars shared memory accesses. ...
doi:10.1109/lpe.2005.195542
fatcat:ohnr3nfnijgabgivvyeipmgcfu
Energy reduction in multiprocessor systems using transactional memory
2005
Proceedings of the 2005 international symposium on Low power electronics and design - ISLPED '05
In this work we focus on new energy consumption issues unique to multiprocessor systems: synchronization of accesses to shared memory. ...
We investigate and compare different means of providing atomic access to shared memory, including locks and lock-free synchronization (i.e., transactional memory), with respect to energy as well as performance ...
The first set of bars shows DL1 cache accesses, the second set of bars DL2 cache accesses, and the third set of bars shared memory accesses. ...
doi:10.1145/1077603.1077683
dblp:conf/islped/MoreshetBH05
fatcat:n2yqcqewbrfxfjoiujxooa6ocy
An Evaluation of OpenMP on Current and Emerging Multithreaded/Multicore Processors
[chapter]
2008
Lecture Notes in Computer Science
Multiprocessors based on simultaneous multithreaded (SMT) or multicore (CMP) processors are continuing to gain a significant share in both highperformance and mainstream computing markets. ...
Our results show that the exploitation of the multiple processor cores on each chip results in significant performance benefits. ...
The threads usually share a single set of resources such as execution units, caches and the TLB. CMPs on the other hand integrate multiple independent processor cores on a chip. ...
doi:10.1007/978-3-540-68555-5_11
fatcat:5sdth4krs5b3hhnsht24ymz5k4
Flex memory: Exploiting and managing abundant off-chip optical bandwidth
2011
2011 Design, Automation & Test in Europe
By combining both techniques, surplus off-chip bandwidth can be utilized and effectively managed, adapting to workload intensity. ...
To further preserve locality and maintain service parallelism for different workloads, a page folding technique is employed to achieve adaptive data mapping in photonics-connected DRAM chips via optical ...
With DWDM, multiple wavelengths can share a waveguide, breaking the limitation of I/O pins and highly boosting the off-chip bandwidth. ...
doi:10.1109/date.2011.5763157
dblp:conf/date/WangZHLL11
fatcat:v2d3w7fjxrfhndfnmukxsdiag4
The Scalable Coherent Interface, IEEE P1596, status and possible applications to data acquisition and physics
1990
IEEE Transactions on Nuclear Science
SCI goals include a minimum bandwidth of 1 GByte/sec per processor in multiprocessor systems with thousands of processors; efficient support of a coherent distributed-cache image of distributed shared ...
The coming generation of supercomputers-on-a-chip has raised computation price/performance expectations, wreaking havoc in the supercomputer industry. ...
doi:10.1109/23.106646
fatcat:h3v4o5r5nrhytgvjrlhzwtnc6a
Nahalal: Cache Organization for Chip Multiprocessors
2007
IEEE computer architecture letters
This paper addresses cache organization in Chip Multiprocessors (CMPs). ...
Nahalal exhibits significant improvements in cache access latency compared to a traditional cache design. ...
They identified the imbalance between the number of accesses to shared cache lines and the number of shared cache lines in the working set, and pointed out the importance of shared lines to overall memory ...
doi:10.1109/l-ca.2007.6
fatcat:bhqhoxnfdzfxtmzti4f3kuokeu
Showing results 1 — 15 out of 1,089 results