Designing reliable algorithms in unreliable memories
2007
Computer Science Review
In this paper we will survey some recent work on reliable computation in the presence of memory faults. ...
An algorithm is resilient to memory faults if, despite the corruption of some memory values before or during its execution, it is nevertheless able to get a correct output at least on the set of uncorrupted ...
Unfortunately, even very few memory faults may jeopardize the correctness of the underlying algorithms, and thus the quest for reliable computation in unreliable memories arises in an increasing number ...
doi:10.1016/j.cosrev.2007.10.001
fatcat:y4723guad5fcdba72a43asc25u
Designing Reliable Algorithms in Unreliable Memories
[chapter]
2005
Lecture Notes in Computer Science
In this paper we will survey some recent work on reliable computation in the presence of memory faults. ...
An algorithm is resilient to memory faults if, despite the corruption of some memory values before or during its execution, it is nevertheless able to get a correct output at least on the set of uncorrupted ...
Unfortunately, even very few memory faults may jeopardize the correctness of the underlying algorithms, and thus the quest for reliable computation in unreliable memories arises in an increasing number ...
doi:10.1007/11561071_1
fatcat:ght6kmcoanbnhchrwbcele5wu4
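The notion of resilience described in the two entries above (a correct output on at least the uncorrupted values, despite faults before or during execution) can be made concrete with a standard software countermeasure. The sketch below is a generic illustration, not an algorithm from the survey: critical values are kept in 2f+1 copies and recovered by majority vote, so up to f corrupted copies cannot change what is read back.

```python
import random
from collections import Counter

class ResilientCell:
    """Keep 2f+1 copies of a value; tolerate up to f corrupted copies."""

    def __init__(self, value, f=1):
        self.copies = [value] * (2 * f + 1)

    def corrupt(self, index, garbage):
        """Simulate a memory fault hitting one copy."""
        self.copies[index] = garbage

    def read(self):
        """Majority vote recovers the stored value as long as at most
        f copies were corrupted; the copies are then scrubbed."""
        value, _ = Counter(self.copies).most_common(1)[0]
        self.copies = [value] * len(self.copies)
        return value

cell = ResilientCell(42, f=1)
cell.corrupt(random.randrange(3), garbage=-1)   # one fault before the read
assert cell.read() == 42
```

This only illustrates the fault model and the kind of guarantee sought, not the techniques surveyed in the paper.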
Resilience in Numerical Methods: A Position on Fault Models and Methodologies
[article]
2014
arXiv
pre-print
Given a selective reliability programming model that requires reliability only when and where needed, such checks can make algorithms reliable despite unbounded faults. ...
We argue instead that numerical algorithms can benefit from a numerical unreliability fault model, where faults manifest as unbounded perturbations to floating-point data. ...
We argue that resilient numerical methods should be designed around an abstract fault model of numerical unreliability, in much the same way C/R is designed around an abstract model of system unreliability ...
arXiv:1401.3013v1
fatcat:jqj3gp44gbcj7ps5ac72hmx3r4
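The position above argues for a fault model in which faults manifest as unbounded perturbations to floating-point data, caught by cheap checks executed in a reliable phase. The following sketch is only a generic illustration of that pattern; the fault site, the retry policy, and the use of the Cauchy–Schwarz bound as the sanity check are assumptions made for the example, not the paper's methodology.

```python
import math
import random

def unreliable_dot(x, y, p_fault=0.1):
    """Dot product whose result may suffer an unbounded perturbation."""
    s = sum(a * b for a, b in zip(x, y))
    if random.random() < p_fault:
        s *= 10.0 ** random.randint(3, 300)   # arbitrarily large corruption
    return s

def checked_dot(x, y, retries=5):
    """Reliable phase: |x.y| <= ||x||*||y|| always holds, so any result
    that is non-finite or violates the bound is rejected and recomputed."""
    bound = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    for _ in range(retries):
        s = unreliable_dot(x, y)
        if math.isfinite(s) and abs(s) <= bound * (1.0 + 1e-12):
            return s
    raise RuntimeError("persistent faults; fall back to fully reliable mode")

x = [random.random() for _ in range(1000)]
y = [random.random() for _ in range(1000)]
print(checked_dot(x, y))
```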
The impact of faulty memory bit cells on the decoding of spatially-coupled LDPC codes
2015
2015 49th Asilomar Conference on Signals, Systems and Computers
Approximate computing is such an alternative design paradigm, where 100% reliable operation requirements are relaxed in order to limit the overhead of classical fault mitigation techniques [5]. ...
... to close to the case where the memory is reliable. ...
doi:10.1109/acssc.2015.7421423
dblp:conf/acssc/MuVABKBFSC15
fatcat:arvalduocvhk5lh7vtvujjc22q
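The underlying fault model here is a memory whose individual bit cells are faulty. A toy injector in that spirit (word width and flip probability are arbitrary) is often all that is needed to study how a decoder or another error-tolerant kernel degrades as cells become less reliable:

```python
import random

def read_from_faulty_memory(words, p_flip=1e-3, width=16, rng=random):
    """Model faulty bit cells: each bit of each stored word is flipped
    independently with probability p_flip on readout."""
    out = []
    for w in words:
        for bit in range(width):
            if rng.random() < p_flip:
                w ^= 1 << bit
        out.append(w)
    return out

stored = [0x0F0F, 0x1234, 0xFFFF]
print(read_from_faulty_memory(stored, p_flip=0.05))
```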
Energy Efficiency through Significance-Based Computing
2014
Computer
Hardware architecture: At the base of the system, we've designed and developed a many-core platform where cores and their respective cache memories can operate in a conventional, fully reliable mode, as ...
In addition, unreliable CPUs should offer a special set of reliable instructions and hardened memory areas that the system software and/or application code can use to implement critical operations, such ...
doi:10.1109/mc.2014.182
fatcat:np6xvscp3jaathoz4ach3j4ivi
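Where no hardened hardware path is available, the effect of the "reliable instructions" mentioned above is often approximated in software by redundant execution with comparison. The decorator below is a generic stand-in for that idea, not the platform described in the article; the retry count is arbitrary.

```python
import functools

def reliable(retries=3):
    """Run a function twice on the (possibly unreliable) substrate and accept
    the result only when both runs agree; otherwise retry or escalate."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for _ in range(retries):
                first = fn(*args, **kwargs)
                second = fn(*args, **kwargs)
                if first == second:
                    return first
            raise RuntimeError("no two runs agreed; run on a reliable core instead")
        return wrapper
    return decorate

@reliable()
def critical_checksum(block):
    return sum(block) & 0xFFFFFFFF

print(critical_checksum(range(1000)))
```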
Fault-tolerant linear solvers via selective reliability
[article]
2012
arXiv
pre-print
Furthermore, they store most of their data unreliably, and spend most of their time in unreliable mode. ...
However, many algorithms only need reliability for certain data and phases of computation. This suggests an algorithm and system codesign approach. ...
Acknowledgment: This work was supported in part by a faculty sabbatical appointment from Sandia National Laboratories and a grant from the U.S. Department of Energy and ...
arXiv:1206.1390v1
fatcat:ttowhmggzrbb5hsawtvli7pgue
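The codesign idea quoted above (reliability only for certain data and phases) is usually illustrated with an outer/inner structure: the expensive inner solve runs in unreliable mode and may return corrupted directions, while the short outer loop evaluates residuals reliably, so faults can only slow convergence rather than silently break the answer. A small numpy sketch along those lines; the fault injection, acceptance test, and test problem are invented for the example and are not the paper's solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_solve(A, r, p_fault=0.2):
    """Unreliable phase: an approximate solve whose result may be corrupted."""
    d = np.linalg.solve(A, r)                  # stands in for a cheap inner solver
    if rng.random() < p_fault:
        d[rng.integers(len(d))] *= 1e6         # silent data corruption
    return d

def selective_reliability_solve(A, b, tol=1e-10, max_outer=50):
    """Reliable phase: residuals and acceptance tests are assumed fault-free."""
    x = np.zeros_like(b)
    for _ in range(max_outer):
        r = b - A @ x                          # computed reliably
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x
        d = inner_solve(A, r)
        if np.all(np.isfinite(d)) and np.linalg.norm(b - A @ (x + d)) < np.linalg.norm(r):
            x = x + d                          # accept only improving updates
    return x

A = np.diag(np.arange(1.0, 11.0)) + 0.1 * rng.standard_normal((10, 10))
b = rng.standard_normal(10)
x = selective_reliability_solve(A, b)
print(np.linalg.norm(b - A @ x))
```

The key property is that every accepted update is certified by a reliably computed residual, which is exactly the kind of data/phase split the abstract describes.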
Synergistic Architecture and Programming Model Support for Approximate Micropower Computing
2015
2015 IEEE Computer Society Annual Symposium on VLSI
Using aggressive voltage scaling can reduce power consumption, but memory operations become unreliable. ...
A compiler pass places data into memory regions with different reliability guarantees according to their tolerance to errors. ...
RM is the reliable memory (SCM), NM is the unreliable memory (6T-SRAM), and finally TM is the tolerant memory that can be optionally activated. ...
doi:10.1109/isvlsi.2015.64
dblp:conf/isvlsi/TagliaviniRBM15
fatcat:do3naxkiibh33lxwddhvyhwiwq
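The compiler-directed placement described above can be mimicked by a simple run-time policy. The sketch below only illustrates the decision, reusing the RM/NM/TM names from the abstract; the tolerance metric and thresholds are assumptions, not the authors' compiler pass.

```python
def place_buffers(buffers, tolerant_memory_on=False):
    """Assign each buffer to RM (reliable SCM), NM (unreliable 6T-SRAM) or
    TM (optionally activated tolerant memory) from its declared error
    tolerance, i.e. the fraction of corrupted elements its consumer absorbs."""
    placement = {}
    for name, tolerance in buffers.items():
        if tolerance == 0.0:
            placement[name] = "RM"      # critical data: reliable region only
        elif tolerant_memory_on and tolerance < 0.01:
            placement[name] = "TM"      # somewhat sensitive data
        else:
            placement[name] = "NM"      # error-tolerant data
    return placement

print(place_buffers({"fft_twiddles": 0.0, "pixel_tile": 0.05, "filter_coeffs": 0.005},
                    tolerant_memory_on=True))
```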
Cooperative Application/OS DRAM Fault Recovery
[chapter]
2012
Lecture Notes in Computer Science
In this paper, we describe work on a cross-layer application / OS framework to handle uncorrected memory errors. ...
Unfortunately, many fault-tolerance methods in use, such as rollback recovery, are unsuitable for many expected errors, for example DRAM failures. ...
The algorithm accomplishes this by dividing its computations into reliable and unreliable phases. ...
doi:10.1007/978-3-642-29740-3_28
fatcat:yatljdnp3zahpkipe6s6t3wroa
A facility reliability problem: Formulation, properties, and algorithm
2009
Naval Research Logistics
In this paper, we study the facility reliability problem: how to design a reliable supply chain network in the presence of random facility disruptions with the option of hardening selected facilities. ...
We consider a facility location problem incorporating two types of facilities, one that is unreliable and another that is reliable (which is not subject to disruption, but is more expensive). ...
Acknowledgments: The research was supported in part by the National Science Foundation (DMI-0457503) and the Office of Naval Research (N00014-05-1-0190). This support is gratefully appreciated. ...
doi:10.1002/nav.20385
fatcat:xxb6oxrx2zh4hojurrurfr6rbi
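A one-customer version of the trade-off studied in this entry makes its structure concrete: serving from an unreliable facility is cheap only while the facility is up, so hardening (or keeping a reliable backup) pays off once the disruption probability is high enough. All numbers below are invented for illustration.

```python
def expected_cost(q, c_unreliable, c_backup, hardening_cost=0.0, hardened=False):
    """Expected service cost for a single customer.
    q: disruption probability of the unreliable facility; on failure the
    customer is reassigned to a reliable (more expensive) backup."""
    if hardened:                       # a hardened facility is never disrupted
        return c_unreliable + hardening_cost
    return (1 - q) * c_unreliable + q * c_backup

q = 0.15
plain = expected_cost(q, c_unreliable=10.0, c_backup=25.0)
hardened = expected_cost(q, c_unreliable=10.0, c_backup=25.0,
                         hardening_cost=2.0, hardened=True)
print(plain, hardened)   # 12.25 vs 12.0: hardening wins at this disruption rate
```

The full problem in the paper optimizes such choices jointly over facility locations and customer assignments; the snippet only shows the elementary cost comparison behind it.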
Reliability-based coded modulation with low-density parity-check codes
2006
IEEE Transactions on Communications
In this letter, we consider the interleaver design in bit-interleaved coded modulation (BICM) with low-density parity-check (LDPC) codes. ...
The design paradigm is to provide more coding protection through iterative decoding to bits that are less protected by modulation (and are thus less reliable at the output of demodulator). ...
The application of low-density parity-check (LDPC) codes [2] in BICM framework was studied in [3], ...
doi:10.1109/tcomm.2006.869865
fatcat:2kj6anqmzvbbhceiywoqp2xm4m
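The design paradigm quoted above, routing the modulation bit positions that come out of the demodulator least reliable to the code bits that get the most protection from iterative decoding, reduces to a simple matching. The sketch below is a generic illustration: the reliability numbers are made up, and using variable-node degree as the proxy for coding protection is an assumption for the example, not the letter's construction.

```python
def reliability_matched_interleaver(bit_reliabilities, vn_degrees):
    """Map modulation bit positions to LDPC code-bit positions so that the
    least reliable modulation bits land on the most protected code bits
    (here approximated by the highest variable-node degree)."""
    mod_order = sorted(range(len(bit_reliabilities)),
                       key=lambda i: bit_reliabilities[i])           # least reliable first
    code_order = sorted(range(len(vn_degrees)),
                        key=lambda j: vn_degrees[j], reverse=True)   # most protected first
    return dict(zip(mod_order, code_order))

# e.g. the four label bits of 16-QAM have unequal demodulator reliabilities
print(reliability_matched_interleaver([0.9, 0.9, 0.6, 0.6], [2, 3, 3, 6]))
```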
An Information Theoretical Framework for Analysis and Design of Nanoscale Fault-Tolerant Memories Based on Low-Density Parity-Check Codes
2007
IEEE Transactions on Circuits and Systems I: Regular Papers
The equivalence of the restoration phase in the TK method and the faulty Gallager B algorithm enabled us to establish a theoretical framework for solving problems in reliable storage on unreliable media using ...
In this paper, we develop a theoretical framework for the analysis and design of fault-tolerant memory architectures. ...
... algorithms have not been exploited so far to improve the reliability of memory systems. ...
doi:10.1109/tcsi.2007.902611
fatcat:wqcjktxr4beehnyxceju3t2unu
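Gallager B, which the abstract identifies with the restoration phase of the TK (Taylor–Kuznetsov) construction, is compact enough to sketch. The decoder below is a textbook-style hard-decision version with an optional per-message fault probability to mimic unreliable message computation; it is a generic illustration, not the analytical framework developed in the paper, and the small Hamming-code example is chosen only to keep it runnable.

```python
import numpy as np

def gallager_b(H, y, b, max_iters=20, p_msg=0.0, seed=0):
    """Hard-decision Gallager-B decoding for a binary code with parity-check
    matrix H (m x n, entries 0/1). `b` is the flipping threshold; with
    probability p_msg each variable-to-check message is flipped, mimicking
    faulty message computation or storage."""
    rng = np.random.default_rng(seed)
    m, n = H.shape
    check_nbrs = [np.flatnonzero(H[i]) for i in range(m)]
    var_nbrs = [np.flatnonzero(H[:, j]) for j in range(n)]

    v2c = np.tile(y, (m, 1))              # messages variable j -> check i
    c2v = np.zeros((m, n), dtype=int)     # messages check i -> variable j
    x_hat = y.copy()

    for _ in range(max_iters):
        # check node update: parity of all other incoming messages
        for i in range(m):
            total = int(np.bitwise_xor.reduce(v2c[i, check_nbrs[i]]))
            for j in check_nbrs[i]:
                c2v[i, j] = total ^ v2c[i, j]

        # tentative decision: flip the channel bit if a strict majority
        # of incoming check messages disagrees with it
        for j in range(n):
            msgs = c2v[var_nbrs[j], j]
            x_hat[j] = (1 - y[j]) if 2 * np.count_nonzero(msgs != y[j]) > len(msgs) else y[j]
        if not np.any(H @ x_hat % 2):
            return x_hat, True

        # variable node update: contradict the channel bit only if at least
        # b of the *other* check messages disagree with it
        for j in range(n):
            for i in var_nbrs[j]:
                others = [k for k in var_nbrs[j] if k != i]
                disagree = int(np.count_nonzero(c2v[others, j] != y[j]))
                bit = (1 - y[j]) if disagree >= b else y[j]
                if p_msg and rng.random() < p_msg:
                    bit ^= 1                      # faulty message
                v2c[i, j] = int(bit)
    return x_hat, False

# (7,4) Hamming code parity-check matrix; all-zero codeword with bit 6 flipped
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([0, 0, 0, 0, 0, 0, 1])
print(gallager_b(H, y, b=2))
```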
Data mapping for unreliable memories
2012
2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
In this paper, we address this issue by investigating the impact of unreliable memories on general DSP systems. ...
Future digital signal processing (DSP) systems must provide robustness at the algorithm and application level against the reliability issues that come along with corresponding implementations in modern ...
..., provided that the corresponding algorithms and system architectures are designed to take such hardware errors into account [10]–[13]. ...
doi:10.1109/allerton.2012.6483283
dblp:conf/allerton/RothBSKB12
fatcat:sxceesnwtfcgdflzycpdvourzm
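A common data-mapping idea for DSP workloads, in the spirit of this entry though not necessarily the paper's specific scheme, is to split each data word by significance: the most significant bits go to reliable cells and the least significant bits to the cheap unreliable ones, so bit-cell faults can only perturb the low-order part of a sample. A toy sketch, with the split point chosen arbitrarily:

```python
def split_by_significance(word, msb_bits=8, total_bits=16):
    """Return (protected MSBs, unprotected LSBs) of a data word."""
    lsb_bits = total_bits - msb_bits
    return word >> lsb_bits, word & ((1 << lsb_bits) - 1)

def reassemble(msbs, lsbs, msb_bits=8, total_bits=16):
    return (msbs << (total_bits - msb_bits)) | lsbs

sample = 0xBEEF
hi, lo = split_by_significance(sample)   # hi -> reliable cells, lo -> unreliable cells
lo ^= 0x0004                             # a bit-cell fault in the unreliable part
print(hex(reassemble(hi, lo)))           # 0xbeeb: the error is confined to the LSBs
```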
Analysis of the Signal Reliability Measure and an Evaluation Procedure
1979
IEEE Transactions on Computers
This is approximately half the Σ_{i=0}^{n-1} 2^(n-i) = 2(2^n − 1) computation steps required by the algorithm in [1].
In order to generate the 2^n elements of S, the algorithm based on (8), (9), and (10) requires 2^n computation steps.
Employing the functional reliability measure results in a less accurate reliability comparison of different designs of a digital circuit. ...
doi:10.1109/tc.1979.1675326
fatcat:cpdls2nzwve5vi7fmyeu5q54mi
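For reference, the operation count quoted in the snippet follows from the finite geometric series:

```latex
\sum_{i=0}^{n-1} 2^{\,n-i}
  = 2^{n} + 2^{n-1} + \cdots + 2^{1}
  = \sum_{k=1}^{n} 2^{k}
  = 2^{n+1} - 2
  = 2\bigl(2^{n} - 1\bigr),
```

so the 2^n steps of the algorithm based on (8), (9), and (10) are indeed roughly half of it.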
Data fusion without knowledge of the ground truth using Tsetlin-like Automata
2016
2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
By virtue of the limited memory requirement of our devised LA, we achieve adaptive behavior at the cost of a negligible loss in accuracy. ...
Our approach leverages a Random Walk (RW) inspired by Tsetlin LA so as to gradually learn the identity of the reliable and unreliable sensors. ...
Our solution can adaptively and in an on-line manner distinguish between reliable sensors and unreliable sensors using finite memory. ...
doi:10.1109/smc.2016.7844941
dblp:conf/smc/YazidiS16
fatcat:i5pidml2rjhojhiuwguskfc47e
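The random-walk learning described above can be caricatured with one bounded counter per sensor: step toward the "reliable" end whenever the sensor agrees with the fused (majority) reading and toward "unreliable" otherwise, so each sensor's identity is learned with finite memory and no ground truth. The sketch below is only an illustration of that Tsetlin-style state machine; the state depth, update rule, and simulated sensor accuracies are arbitrary choices, not the paper's scheme.

```python
import random
from collections import Counter

class SensorJudge:
    """One bounded random walk per sensor: states 1..depth lean 'unreliable',
    states depth+1..2*depth lean 'reliable' (finite memory per sensor)."""

    def __init__(self, sensor_ids, depth=8):
        self.depth = depth
        self.state = {s: depth for s in sensor_ids}   # start on the fence

    def update(self, readings):
        """readings: dict sensor_id -> reported binary value (no ground truth)."""
        fused, _ = Counter(readings.values()).most_common(1)[0]
        for s, v in readings.items():
            step = 1 if v == fused else -1
            self.state[s] = min(2 * self.depth, max(1, self.state[s] + step))
        return fused

    def is_reliable(self, s):
        return self.state[s] > self.depth

judge = SensorJudge(["a", "b", "c", "d", "e"])
truth = 1
for _ in range(200):
    # sensors a-d report the truth 90% of the time, sensor e only 40%
    readings = {s: truth if random.random() < (0.9 if s != "e" else 0.4) else 1 - truth
                for s in judge.state}
    judge.update(readings)
print({s: judge.is_reliable(s) for s in judge.state})
```

Because the fused estimate itself comes from the sensors, no ground truth is needed, which is the point of the entry above.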
Performance analysis of faulty Gallager-B decoding of QC-LDPC codes with applications
2014
Telfor Journal
One possible application of the presented analysis in designing memory architecture with unreliable components is considered. ...
In this paper we evaluate the performance of the Gallager-B algorithm, used for decoding low-density parity-check (LDPC) codes, under unreliable message computation. ...
Furthermore, the application of the described algorithm in the design of memory architecture with unreliable components is considered. ...
doi:10.5937/telfor1401007o
fatcat:5d33ediqhjemhorl26nqaqiove
Showing results 1 — 15 out of 32,601 results