BlockDrop: Dynamic Inference Paths in Residual Networks
[article]
2019
arXiv
pre-print
We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy ...
Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. ...
Then we introduce our policy network in Sec. 3.2, which learns to dynamically select inference paths conditioned on the input image. ...
arXiv:1711.08393v4
fatcat:hrmjqu2p6raqlbhi65bx4z3x2u
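The mechanism this snippet describes, a lightweight policy network that looks at the image once and emits a keep/drop decision for every residual block before the backbone runs, can be sketched in a few lines. The PyTorch sketch below is illustrative only: the class name `PolicyNet`, the tiny CNN encoder, and the hard 0.5 threshold are assumptions rather than the authors' implementation, and all blocks are assumed to preserve the feature shape.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Tiny CNN mapping an input image to per-block keep probabilities."""
    def __init__(self, num_blocks: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_blocks)

    def forward(self, x):
        return torch.sigmoid(self.head(self.encoder(x)))  # one probability per block

def dynamic_forward(blocks, features, keep_probs, threshold=0.5):
    """Evaluate only the residual blocks the policy keeps (batch size 1 assumed)."""
    h = features
    for i, block in enumerate(blocks):
        if keep_probs[0, i] > threshold:
            h = h + block(h)   # run the residual branch and add the skip connection
        # otherwise the block is skipped and h passes through unchanged
    return h
```

At training time the paper samples these decisions and optimizes the policy with a reward that trades accuracy against the number of blocks used; the sketch only shows the inference-time gating.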
BlockDrop: Dynamic Inference Paths in Residual Networks
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
We introduce BlockDrop, an approach that learns to dynamically choose which layers of a deep network to execute during inference so as to best reduce total computation without degrading prediction accuracy ...
Exploiting the robustness of Residual Networks (ResNets) to layer dropping, our framework selects on-the-fly which residual blocks to evaluate for a given novel image. ...
Then we introduce our policy network in Sec. 3.2, which learns to dynamically select inference paths conditioned on the input image. ...
doi:10.1109/cvpr.2018.00919
dblp:conf/cvpr/WuNKRDGF18
fatcat:b4vxf6d3mjgjro3kvf6rznsvoq
Diversifying Inference Path Selection: Moving-Mobile-Network for Landmark Recognition
[article]
2019
arXiv
pre-print
We find that M^2Net essentially promotes diversity in inference-path (selected block subset) selection, which in turn enhances recognition accuracy. ...
To this end, many methods have been proposed for efficient network learning and for applications on portable mobile devices. ...
Dynamic layer selection: several methods have been proposed to dynamically drop residual layers in residual networks. ...
arXiv:1912.00418v1
fatcat:svznrlrcsjgbhn5ihu5niyv7mi
CoDiNet: Path Distribution Modeling with Consistency and Diversity for Dynamic Routing
[article]
2021
arXiv
pre-print
Dynamic routing networks, aimed at finding the best routing paths in the networks, have achieved significant improvements to neural networks in terms of accuracy and efficiency. ...
From the perspective of space mapping, prevailing dynamic routing methods did not consider how inference paths are distributed in the routing space. ...
[8] found that only short paths of deep residual networks are needed. ...
arXiv:2005.14439v3
fatcat:3uccjhqitvhw5blw5hbrs6r4o4
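The snippet's "routing space" view suggests regularizers over per-block execution-probability vectors: augmented views of one image should route consistently, while different images should route diversely. A rough sketch of such losses, assuming the router outputs one probability vector per sample (the loss names and the margin are assumptions, not the paper's formulation):

```python
import torch
import torch.nn.functional as F

def consistency_loss(p_view1, p_view2):
    """Pull routing vectors of two augmented views of the same image together."""
    return F.mse_loss(p_view1, p_view2)

def diversity_loss(p_batch, margin=1.0):
    """Push routing vectors of different images apart, up to a margin."""
    dists = torch.cdist(p_batch, p_batch)   # pairwise L2 distances between routing vectors
    n = p_batch.size(0)
    off_diag = dists[~torch.eye(n, dtype=torch.bool, device=p_batch.device)]
    return F.relu(margin - off_diag).mean()
```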
URNet: User-Resizable Residual Networks with Conditional Gating Module
[article]
2019
arXiv
pre-print
There are methods to reduce the cost by compressing networks or varying its computational path dynamically according to the input image. ...
We propose User-Resizable Residual Networks (URNet), which allows users to adjust the scale of the network as needed during evaluation. ...
Dynamic path networks: the works in [21, 34, 23, 28, 3] are based on the idea of not using the network's entire feed-forward graph, but of picking a subset of the graph specific to each input. ...
arXiv:1901.04687v2
fatcat:gylmmvjsfzcadpekcpxudnvtca
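What makes URNet "user-resizable" in this snippet is a gate conditioned on both the features and a user-chosen scale in [0, 1], so the same trained network can be run cheaper or fuller on demand. A minimal sketch of such a conditional gate (class and argument names are assumptions, not the paper's module):

```python
import torch
import torch.nn as nn

class ConditionalGate(nn.Module):
    """Decides whether to run one residual block, given features and a user scale."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels + 1, 1)   # +1 input for the user-supplied scale

    def forward(self, x, scale: float):
        summary = self.pool(x).flatten(1)                        # (B, C) channel summary
        s = torch.full((x.size(0), 1), scale, device=x.device)   # broadcast the user scale
        return torch.sigmoid(self.fc(torch.cat([summary, s], dim=1)))  # (B, 1) gate

# Usage sketch: out = x + gate(x, scale) * block(x); a gate near 0 effectively skips the block.
```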
URNet: User-Resizable Residual Networks with Conditional Gating Module
2020
Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34 (AAAI-20)
There are methods to reduce the cost by compressing networks or varying its computational path dynamically according to the input image. ...
URNet can control the amount of computation and its inference path according to the user's demand without significantly degrading accuracy. ...
Generally, the computational graph of a deep neural network is fixed and unchanged at inference time. ...
doi:10.1609/aaai.v34i04.5886
fatcat:hz4sr5vfp5cshfph2dw2yinyp4
SGAD: Soft-Guided Adaptively-Dropped Neural Network
[article]
2018
arXiv
pre-print
Compared with a 32-layer residual neural network, the presented SGAD can reduce the FLOPs by 77% with less than a 1% drop in accuracy on CIFAR-10. ...
Based on the developed guideline and adaptive dropping mechanism, an innovative soft-guided adaptively-dropped (SGAD) neural network is proposed in this paper. ...
SkipNet and BlockDrop: Wu et al. [2018] utilize reinforcement learning to dynamically choose which residual units to execute in a pretrained ResNet for different input samples. ...
arXiv:1807.01430v1
fatcat:2dmg3hovhndo7bvb775u6vz53a
Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach
[article]
2021
arXiv
pre-print
The advances in deep neural networks (DNN) have significantly enhanced real-time detection of anomalous data in IoT applications. ...
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network. ...
Similarly, [28] proposed a BlockDrop scheme that learns to dynamically drop or keep residual blocks of a trained deep residual network (ResNet) during inference, to minimize the number of residual ...
arXiv:2108.03872v1
fatcat:r4d3izj5zvbj5lvhad5yf7d7oq
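The "adaptive model selection scheme ... formulated as a contextual-bandit problem" amounts to choosing, per input context, which of several detection models (e.g., a small on-device model versus a larger edge-hosted one) to run. The sketch below is a generic epsilon-greedy bandit with a linear per-arm reward estimate, a standard baseline under assumed names, not the paper's reinforcement-learning policy network.

```python
import numpy as np

class EpsilonGreedyModelSelector:
    """Contextual bandit that picks one of K anomaly-detection models per context."""
    def __init__(self, num_models, context_dim, epsilon=0.1, lr=0.01):
        self.epsilon = epsilon
        self.lr = lr
        self.weights = np.zeros((num_models, context_dim))   # linear reward estimates

    def select(self, context):
        if np.random.rand() < self.epsilon:
            return np.random.randint(len(self.weights))       # explore a random model
        return int(np.argmax(self.weights @ context))         # exploit the best estimate

    def update(self, arm, context, reward):
        # Reward could combine detection accuracy with inference latency or energy.
        prediction = self.weights[arm] @ context
        self.weights[arm] += self.lr * (reward - prediction) * context
```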
Dynamic Neural Networks: A Survey
[article]
2021
arXiv
pre-print
Dynamic neural networks are an emerging research topic in deep learning. ...
Compared to static models which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable ...
Rather than skipping layers in classic ResNets, dynamic recursive network [65] iteratively executes one block with shared parameters in each residual stage. ...
arXiv:2102.04906v4
fatcat:zelspxwv6nel7kv2yu6ynakyuu
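One alternative to skipping layers that this survey snippet highlights is to re-apply a single weight-shared block a variable number of times per residual stage. A minimal sketch of that loop follows; here the iteration count is passed in as an argument, whereas the cited method predicts it from the input.

```python
import torch.nn as nn

class RecursiveStage(nn.Module):
    """A residual stage that re-applies one shared block `steps` times."""
    def __init__(self, channels: int):
        super().__init__()
        self.shared_block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )

    def forward(self, x, steps: int = 3):
        for _ in range(steps):            # the same parameters are reused at every step
            x = x + self.shared_block(x)  # residual update with shared weights
        return x
```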
Self-Distillation: Towards Efficient and Compact Neural Networks
2021
IEEE Transactions on Pattern Analysis and Machine Intelligence
Moreover, the additional classifiers in self-distillation allow the neural network to work in a dynamic manner, which leads to much higher acceleration. ...
Remarkable achievements have been obtained by deep neural networks in the last several years. ...
Secondly, the success of dynamic inference in this paper shows that the bottleneck of dynamic inference is how to train a high-performance early exit in the neural network, rather than how to control different ...
doi:10.1109/tpami.2021.3067100
pmid:33735074
fatcat:6cymqo72bbbchbcoyy76ih7jqq
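The dynamic acceleration mentioned in this snippet comes from the auxiliary classifiers added for self-distillation: at inference, the network can stop at the first intermediate classifier that is sufficiently confident. A hedged sketch of that exit rule (the stage/head structure and the threshold are assumptions; batch size 1 is assumed):

```python
import torch

def early_exit_inference(stages, exit_heads, x, threshold=0.9):
    """Run stages in order and return from the first confident intermediate classifier."""
    h = x
    for stage, head in zip(stages, exit_heads):
        h = stage(h)
        probs = torch.softmax(head(h), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:       # confident enough: skip the remaining stages
            return pred, conf
    return pred, conf                      # otherwise fall through to the deepest classifier
```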
End-To-End Data-Dependent Routing in Multi-Path Neural Networks
[article]
2021
arXiv
pre-print
Our multi-path networks show superior performance to existing widening and adaptive feature extraction methods, and even to ensembles and deeper networks of similar complexity, in the image recognition task. ...
The conventional widening of networks by adding more filters to each layer introduces a quadratic increase in parameters. ...
This allows those networks to be flexible with respect to the input, making the network more dynamic during inference. ...
arXiv:2107.02450v1
fatcat:6xpkpwd43rfhvfxsezlhzzhja4
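In contrast to widening every layer (which grows parameters quadratically), the multi-path idea keeps several narrow parallel paths and mixes their outputs with input-dependent weights. A minimal soft-routing layer in this spirit is sketched below; the cited paper's end-to-end learned routing may differ in detail, and the names are assumptions.

```python
import torch
import torch.nn as nn

class MultiPathLayer(nn.Module):
    """Parallel narrow paths whose outputs are mixed with input-dependent weights."""
    def __init__(self, channels: int, num_paths: int = 4):
        super().__init__()
        self.paths = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_paths)]
        )
        self.router = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(channels, num_paths))

    def forward(self, x):
        weights = torch.softmax(self.router(x), dim=1)              # (B, P) routing weights
        outs = torch.stack([path(x) for path in self.paths], dim=1)  # (B, P, C, H, W)
        return (weights[:, :, None, None, None] * outs).sum(dim=1)
```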
Pluggable micronetwork for layer configuration relay in a dynamic deep neural surface
2021
IEEE Access
BlockDrop [34] has also leveraged residual units to dynamically select or drop entire residual blocks using the proposed policy network. ...
[29] have utilized residual blocks with gated inference. The proposed gated inference consists of two parts. ...
doi:10.1109/access.2021.3110709
fatcat:eyzxjsneyvhileruyobwcj75la
Fully Dynamic Inference with Deep Neural Networks
[article]
2020
arXiv
pre-print
inference dynamics at the level of layers and individual convolutional filters/channels. ...
On the CIFAR-10 dataset, LC-Net results in up to 11.9× fewer floating-point operations (FLOPs) and up to 3.3% higher accuracy compared with other dynamic inference methods. ...
For example, BlockDrop, a reinforcement learning-based approach, can learn which arbitrary sets of residual blocks to drop in a ResNet architecture [30] . ...
arXiv:2007.15151v1
fatcat:i7nlcqqmw5by7pthmgth36ufxm
Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks
[article]
2020
arXiv
pre-print
Instead of using the same path through the network for every input, DG-Net aggregates features dynamically in each node, which gives the network greater representational ability. ...
In this paper, we address this issue by proposing the Dynamic Graph Network (DG-Net). The network learns instance-aware connectivity, which creates different forward paths for different instances. ...
Dynamic networks: dynamic networks, which adjust the network architecture to the corresponding input, have recently been studied in the computer vision domain. ...
arXiv:2010.01097v1
fatcat:gndgp47sgbbb7fhun6pkkyijxe
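The "instance-aware connectivity" in this snippet can be read as each node weighting its incoming edges with gates predicted from the input, so the effective graph differs per image. A small sketch of one such aggregation node (the class name and sigmoid gating are assumptions):

```python
import torch
import torch.nn as nn

class DynamicAggregationNode(nn.Module):
    """Aggregates features from predecessor nodes with input-dependent edge weights."""
    def __init__(self, channels: int, num_inputs: int):
        super().__init__()
        self.router = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(channels * num_inputs, num_inputs))
        self.op = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, inputs):
        # inputs: list of (B, C, H, W) tensors coming from predecessor nodes
        weights = torch.sigmoid(self.router(torch.cat(inputs, dim=1)))  # (B, num_inputs)
        mixed = sum(w[:, None, None, None] * x
                    for w, x in zip(weights.unbind(dim=1), inputs))
        return self.op(mixed)
```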
Dynamic Channel Pruning: Feature Boosting and Suppression
[article]
2019
arXiv
pre-print
In contrast to channel pruning methods, which permanently remove channels, it preserves the full network structure and accelerates convolution by dynamically skipping unimportant input and output channels ...
We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. ...
Acknowledgements This work is supported in part by the National Key R&D Program of China (No. 2018YFB1004804), the National Natural Science Foundation of China (No. 61806192). ...
arXiv:1810.05331v2
fatcat:jzdmfturw5e75l4f47owyyiljm
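FBS as summarized above predicts a saliency for every output channel from the incoming features, keeps only the most salient fraction, and rescales (boosts) the survivors instead of removing channels permanently. A rough sketch of that gating step follows; the `keep_ratio` parameter and layer names are assumptions, and a real implementation would skip computing the suppressed channels rather than masking them afterwards.

```python
import torch
import torch.nn as nn

class FBSConv(nn.Module):
    """Convolution whose output channels are dynamically boosted or suppressed."""
    def __init__(self, in_ch: int, out_ch: int, keep_ratio: float = 0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.saliency = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(in_ch, out_ch), nn.ReLU())
        self.k = max(1, int(out_ch * keep_ratio))

    def forward(self, x):
        s = self.saliency(x)                                 # (B, out_ch) predicted saliency
        kth = torch.topk(s, self.k, dim=1).values[:, -1:]    # k-th largest saliency per sample
        mask = (s >= kth).float() * s                        # zero the rest, scale the kept
        return self.conv(x) * mask[:, :, None, None]         # boost/suppress output channels
```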
Showing results 1–15 of 31