A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit <a rel="external noopener" href="https://link.springer.com/content/pdf/10.1007%2Fs11265-011-0653-3.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
<i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wgplegupdndx5o6decidr2va24" style="color: black;">Journal of Signal Processing Systems</a> </i>
Field-Programmable Technology (FPT) describes electronic systems in which both the hardware and the software can be programmed on an application-by-application basis. Field-Programmable Gate Arrays (FPGAs) are the most common form of FPT, widely used in applications such as signal and image processing, telecommunications, and computer networking. FPT continues to intrigue researchers; current research ranges from transistor-level programmable logic through to high-performance […]ions, and encompasses the design methodologies and design tools needed for such systems.
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s11265-011-0653-3">doi:10.1007/s11265-011-0653-3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7vextmaqsfcuhjiwtais3nk35e">fatcat:7vextmaqsfcuhjiwtais3nk35e</a> </span>
In "Reconfigurable Blocks Based on Balanced Ternary," Paul Beckett and Tayab Memon describe a new programmable logic block that exploits some of the unique characteristics of Silicon-on-Insulator technology to deliver balanced ternary (−1, 0, +1) logic functions and memory cells. In "Using Data Contention in Dual-ported Memories for Security Applications," Tim Güneysu investigates how to protect design IP in a technology where hardware programs are difficult to secure; his paper exploits device-specific behaviour during write collisions in dual-port FPGA memories to support IP protection. For embedded systems that use floating-point arithmetic, the choice has traditionally been between fast hardware and slow software implementations. In "Improving Floating-Point Performance in Less Area: Fractured Floating Point Units (FFPUs)," Neil Hockert and Katherine Compton examine partial hardware support for floating point, which delivers better performance than software floating point at lower area than a full hardware unit. As logic circuits grow larger, the time required for physical design at the logic level discourages broad exploration of high-level architectural alternatives. In "Rapid Synthesis and Simulation of Computational Circuits in an MPPA," David Grant, Graeme Smecher, Guy Lemieux and Rosemary Francis present a tool flow (RVETool) for rapidly compiling computational circuits onto a Massively Parallel Processor Array. In "Automated Mapping of the MapReduce Pattern onto Parallel Computing Platforms," Qiang Liu, Tim Todman, Wayne Luk and George Constantinides explore how to use FPGAs effectively for large computational problems. Having identified that many applications use a "MapReduce" computational pattern, they develop a […]
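The balanced ternary (−1, 0, +1) number system used by Beckett and Memon's logic blocks can be illustrated with a short sketch (illustrative only, not code from the paper): every integer, positive or negative, has a unique representation without a separate sign bit.

```python
def to_balanced_ternary(n):
    """Return the balanced-ternary digits of integer n, most significant first.

    Each digit is in {-1, 0, +1}; digit weights are powers of 3.
    """
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3          # remainder in {0, 1, 2}
        if r == 2:         # a "2" digit becomes -1 with a carry into the next place
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits[::-1]

# 5 = 9 - 3 - 1, so its digits are [1, -1, -1];
# negation is just flipping every digit, e.g. -5 -> [-1, 1, 1].
```

The symmetry around zero (negation by digit inversion) is one reason balanced ternary is attractive for arithmetic-oriented logic.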
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190502180814/https://link.springer.com/content/pdf/10.1007%2Fs11265-011-0653-3.pdf" title="fulltext PDF download">Web Archive [PDF]</a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s11265-011-0653-3">springer.com</a>
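The "MapReduce" computational pattern identified by Liu et al. can be sketched in a few lines (a generic illustration, not the authors' tool flow): an independent per-element map stage, whose results are combined by an associative reduce operator, which is what makes the pattern amenable to parallel hardware.

```python
from functools import reduce

def map_reduce(data, map_fn, reduce_fn, init):
    # Map stage: independent per-element computations (parallelizable in hardware).
    mapped = [map_fn(x) for x in data]
    # Reduce stage: combine partial results with an associative operator,
    # e.g. as an adder tree on an FPGA.
    return reduce(reduce_fn, mapped, init)

# Sum-of-squares example: 1 + 4 + 9 + 16 = 30.
total = map_reduce([1, 2, 3, 4], lambda x: x * x, lambda a, b: a + b, 0)
```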