A copy of this work (PDF) was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2004.
An analysis of Memnet---an experiment in high-speed shared-memory local networking
1988
Symposium proceedings on Communications architectures and protocols - SIGCOMM '88
"An In- Newark, DE, May 1988. [Delp Farber 861 Gary S. Delp and David J. Farber. nel: an experiment in high-speed memory mapped local network interfaces. ...
The The Architecture
and Implemen-
tation of Memnet:
a High-Speed Shared-Memory
Com-
puter Communication
Network. ...
doi:10.1145/52324.52342
dblp:conf/sigcomm/DelpSF88
fatcat:vo34vjln2vgyjetjbdjey2uh3q
MemNet: A Persistent Memory Network for Image Restoration
[article]
2017
arXiv
pre-print
Motivated by the fact that human thoughts have persistency, we propose a very deep persistent memory network (MemNet) that introduces a memory block, consisting of a recursive unit and a gate unit, to ...
Comprehensive experiments demonstrate the necessity of the MemNet and its unanimous superiority on all three tasks over the state of the arts. Code is available at https://github.com/tyshiwo/MemNet. ...
The second is that in DRCN, the weights of the basic modules (i.e., the convolutional layers) are shared, while in MemNet, the weights of the memory blocks are different. ...
arXiv:1708.02209v1
fatcat:bzqzex6dfjhpxeidjjtsrv3l3u
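The MemNet entry above describes a memory block built from a recursive unit (one transform applied repeatedly, producing short-term states) and a gate unit that adaptively fuses those states with the long-term outputs of earlier blocks. A minimal NumPy sketch of that idea, assuming illustrative names and hand-picked weights rather than the authors' implementation:

```python
import numpy as np

def recursive_unit(x, w, recursions=3):
    """Apply one shared transform repeatedly; collect each short-term state.
    The weight w is reused across recursions (as in DRCN-style sharing)."""
    states = []
    h = x
    for _ in range(recursions):
        h = np.maximum(w * h, 0.0)  # toy stand-in for a shared conv + ReLU
        states.append(h)
    return states

def gate_unit(short_term, long_term, gate_weights):
    """Fuse concatenated short- and long-term memories with learned weights
    (a stand-in for the 1x1 convolution of the gate unit)."""
    memories = short_term + long_term
    assert len(memories) == len(gate_weights)
    return sum(g * m for g, m in zip(gate_weights, memories))

# Toy feature map of shape C x H x W
x = np.ones((4, 8, 8))
short = recursive_unit(x, w=0.5)                       # three short-term states
out = gate_unit(short, [x], gate_weights=[0.2, 0.2, 0.2, 0.4])
print(out.shape)  # (4, 8, 8)
```

Stacking several such blocks, each with its own gate weights, gives the dense "persistent memory" connectivity the abstract refers to.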
Multi-Level Feature Fusion Mechanism for Single Image Super-Resolution
[article]
2020
arXiv
pre-print
The correlation among the features of the holistic approach leads to a continuous global memory of information mechanism. ...
Convolutional neural networks (CNNs) have been widely used in Single Image Super-Resolution (SISR), and SISR has recently seen great success. ...
[8] proposed a deep network called MemNet, consisting of cascaded memory blocks that can fuse global features for better visual results. ...
arXiv:2002.05962v1
fatcat:bga2qaxgkbdm5jmvvja4bu7oay
Memory Warps for Learning Long-Term Online Video Representations
[article]
2018
arXiv
pre-print
This is in contrast to prior works that often rely on computationally heavy 3D convolutions, ignore actual motion when aligning features over time, or operate in an off-line mode to utilize future frames ...
We apply our online framework to object detection in videos, obtaining a large 2.3 times speed-up and losing only 0.9% mAP on the ImageNet-VID dataset, compared to prior works that even use future frames ...
Additional results of ablation studies This section provides (i) a comparison between different aggregation schemes for MemNet, (ii) an analysis of the underlying feature representation our memory module ...
arXiv:1803.10861v1
fatcat:iy4fmp352bc4nn66s7td3hjiie
Fast and Memory-Efficient Network Towards Efficient Image Super-Resolution
[article]
2022
arXiv
pre-print
We propose a novel sequential attention branch, where every pixel is assigned an important factor according to local and global contexts, to enhance high-frequency details. ...
In addition, we tailor the residual block for EISR and propose an enhanced residual block (ERB) to further accelerate the network inference. ...
However, the use of skip connection introduces extra memory consumption of feature map size C×H×W, and lowers the inference speed due to additional memory access cost, as is shown experimentally ...
arXiv:2204.08397v1
fatcat:bevbmjebi5czhcenqjkj22furm
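The skip-connection cost noted in the entry above is easy to make concrete: the saved feature map of size C×H×W must stay resident until the addition at the end of the skip. A back-of-envelope helper (the channel count and resolution below are hypothetical examples, not figures from the paper):

```python
def feature_map_bytes(c, h, w, bytes_per_elem=4):
    """Memory held by one C x H x W feature map (fp32 by default)."""
    return c * h * w * bytes_per_elem

# e.g. a 64-channel feature map at 1280x720 resolution:
mb = feature_map_bytes(64, 720, 1280) / 2**20
print(f"{mb:.0f} MiB")  # 225 MiB
```

Every long skip adds one such buffer (plus the memory traffic to read it back), which is why memory-efficient SR designs limit or restructure these connections.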
Fast and Accurate Single Image Super-Resolution via Information Distillation Network
[article]
2018
arXiv
pre-print
However, as the depth and width of the networks increase, CNN-based super-resolution methods have been faced with the challenges of computational complexity and memory consumption in practice. ...
Experimental results demonstrate that the proposed method is superior to the state-of-the-art methods, especially in terms of time performance. ...
The authors also present a very deep end-to-end persistent memory network (MemNet) [23] for image restoration task, which tackles the long-term dependency problem in the previous CNN architectures. ...
arXiv:1803.09454v1
fatcat:dxogi3stpvecdi6fcqexpknrzi
Fast and Accurate Single Image Super-Resolution via Information Distillation Network
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
However, as the depth and width of the networks increase, CNN-based super-resolution methods have been faced with the challenges of computational complexity and memory consumption in practice. ...
Experimental results demonstrate that the proposed method is superior to the state-of-the-art methods, especially in terms of time performance. ...
Acknowledgment This work was supported in part by the National Natural Science Foundation of China under Grant 61472304, 61432014 and U1605252. ...
doi:10.1109/cvpr.2018.00082
dblp:conf/cvpr/HuiWG18
fatcat:5k7jyf534bhanmvpk66exvlzxq
Memory-Efficient Hierarchical Neural Architecture Search for Image Restoration
[article]
2021
arXiv
pre-print
For the outer search space, we design a cell-sharing strategy to save memory, and considerably accelerate the search speed. The proposed HiNAS method is both memory and computation efficient. ...
Experiments show that the architectures found by HiNAS have fewer parameters and enjoy a faster inference speed, while achieving highly competitive performance compared with state-of-the-art methods. ...
As cell sharing reduces memory consumption in the supernet, during search we can use larger batch sizes to speed up convergence and the search itself. ...
arXiv:2012.13212v3
fatcat:4rabc6sow5evhlsiy2btmxm5ga
Lightweight Image Super-Resolution with Information Multi-distillation Network
2019
Proceedings of the 27th ACM International Conference on Multimedia - MM '19
Extensive experiments suggest that the proposed method performs favorably against the state-of-the-art SR algorithms in terms of visual quality, memory footprint, and inference time. ...
Thanks to the powerful representation capabilities of deep networks, numerous previous methods can learn the complex non-linear mapping between low-resolution (LR) image patches and their high-resolution ...
ACKNOWLEDGMENTS This work was supported in part by the National Natural Science ...
doi:10.1145/3343031.3351084
dblp:conf/mm/HuiGYW19
fatcat:uw3jwnrrjbccrbsfmd3uld72bi
Lightweight Feature Fusion Network for Single Image Super-Resolution
2019
IEEE Signal Processing Letters
In this letter, we propose a lightweight feature fusion network (LFFN) that can fully explore multi-scale contextual information and greatly reduce network parameters while maximizing SISR results. ...
SFFM fuses the features from different modules in a self-adaptive learning manner with softmax function, making full use of hierarchical information with a small amount of parameter cost. ...
This ratio can be further decreased to 12.13% by replacing the 3×3 convolution in the spindle block with depthwise convolution. More analysis is described in the experiments. ...
doi:10.1109/lsp.2018.2890770
fatcat:o3qro6k2wrc5pdadv3hvsfxvpm
Single-Image Super-Resolution Neural Network via Hybrid Multi-Scale Features
2022
Mathematics
In this paper, we propose an end-to-end single-image super-resolution neural network by leveraging hybrid multi-scale features of images. ...
By effectively exploiting these multi-scale and local-global features, our network involves far fewer parameters, leading to a large decrease in memory usage and computation during inference. ...
Tai et al. proposed a persistent memory network (MemNet) [9] built on a very deep network. ...
doi:10.3390/math10040653
fatcat:6u2u6b2x5vb6rmc57wbjgnc46e
Non-Local Recurrent Network for Image Restoration
[article]
2018
arXiv
pre-print
The main contributions of this work are: (1) Unlike existing methods that measure self-similarity in an isolated manner, the proposed non-local module can be flexibly integrated into existing deep networks ...
Many classic methods have shown non-local self-similarity in natural images to be an effective prior for image restoration. ...
MemNet [38] builds dense connections among several types of memory blocks, and weights are shared in the same type of memory blocks but are different across various types. ...
arXiv:1806.02919v2
fatcat:mnto5klghbdsxlp3war64qqgza
Iterative Document Representation Learning Towards Summarization with Polishing
[article]
2019
arXiv
pre-print
... to read an article multiple times in order to fully understand and summarize its contents. ...
Experimental results on the CNN/DailyMail and DUC2002 datasets demonstrate that our model significantly outperforms state-of-the-art extractive systems when evaluated by machines and by humans. ...
(..., 2017), where the authors present a deep network called Hybrid MemNet for the single-document summarization task, using a memory network as the document encoder. ...
arXiv:1809.10324v2
fatcat:dbeohuakrvcg5fpuigsy3c6jye
Iterative Document Representation Learning Towards Summarization with Polishing
2018
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
... to read an article multiple times in order to fully understand and summarize its contents. ...
Experimental results on the CNN/DailyMail and DUC2002 datasets demonstrate that our model significantly outperforms state-of-the-art extractive systems when evaluated by machines and by humans. ...
(..., 2017), where the authors present a deep network called Hybrid MemNet for the single-document summarization task, using a memory network as the document encoder. ...
doi:10.18653/v1/d18-1442
dblp:conf/emnlp/ChenGTSZY18
fatcat:4f6chad2crbl5fg4jzcufdlqqu
Single Image Super-Resolution via Cascaded Multi-Scale Cross Network
[article]
2018
arXiv
pre-print
The deep convolutional neural networks have achieved significant improvements in accuracy and speed for single image super-resolution. ...
To improve information flow and to capture sufficient knowledge for reconstructing the high-frequency details, we propose a cascaded multi-scale cross network (CMSC) in which a sequence of subnetworks ...
[22] propose the deepest persistent memory network (MemNet) for image restoration, in which a memory block is applied to achieve persistent memory and multiple memory blocks are stacked with a densely ...
arXiv:1802.08808v1
fatcat:jyvni34d4jhbzpiqntykwfp32a
Showing results 1 — 15 out of 84 results