
Prediction of discrete cosine transformed coefficients in resized pixel blocks

Jin Li, Weiwei Chen, Moncef Gabbouj, Jarmo Takala, Hexin Chen
2011 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
A hybrid model was developed to predict the zero-quantized discrete cosine transform (ZQDCT) coefficients for intra blocks in our previous work.  ...  However, the complicated overhead computations seriously degrade its performance in complexity reduction. This paper proposes a new prediction algorithm with fewer overhead operations.  ...  original codec, and is the overheads in the test models.  ... 
doi:10.1109/icassp.2011.5946586 dblp:conf/icassp/LiCGTC11 fatcat:gehdha7yxfbete5qbozb7tncki
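
The mechanism behind ZQDCT prediction can be illustrated with a minimal sketch: compare a cheap statistic of the residual block against a quantization-dependent threshold, and skip the transform and quantization entirely when the block is predicted all-zero. The function names, the SAD statistic, and the single scalar threshold below are illustrative assumptions; the papers derive tighter, per-coefficient bounds.

```python
import numpy as np
from scipy.fftpack import dct

def zqdct_predict(residual, qp_threshold):
    """Predict whether all quantized DCT coefficients of a residual block
    are zero, using a cheap sum-of-absolute-values test (illustrative)."""
    return np.abs(residual).sum() < qp_threshold  # True -> skip DCT + quantization

def encode_block(residual, qp_threshold, qstep):
    if zqdct_predict(residual, qp_threshold):
        return np.zeros_like(residual)           # all-zero block, transform skipped
    # 2-D DCT-II via two separable 1-D passes, then uniform quantization.
    coeffs = dct(dct(residual.T, norm='ortho').T, norm='ortho')
    return np.round(coeffs / qstep)
```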

ProFormer: Towards On-Device LSH Projection Based Transformers [article]

Chinnadhurai Sankar, Sujith Ravi, Zornitsa Kozareva
2021 arXiv   pre-print
In comparison with a 2-layer BERT model, ProFormer reduced the embedding memory footprint from 92.16 MB to 1.3 KB and requires 16 times less computation overhead, which is very impressive, making it the fastest and smallest on-device model.  ...  For a fair comparison, we also test ProFormer with K = 4, which only occupies 38.4% of the memory footprint of the 2-layer BERT-base model and reduces the computation overhead by 16 times.  ... 
arXiv:2004.05801v2 fatcat:wxykv2vd7ze5hee6okzheamtwi
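
The LSH projection idea behind the memory savings can be sketched in a few lines: replace a stored embedding table with the sign bits of random projections, so the "embedding" is computed on the fly from a fixed seed rather than looked up. The dimensions, the seed, and the dense projection matrix here are assumptions for illustration; an on-device implementation would hash features instead of materializing the matrix.

```python
import numpy as np

def lsh_projection(features, num_bits=1024, seed=0):
    """Map a feature vector to a fixed-size bit vector via random
    hyperplanes (sign of random projections). ProFormer's on-device
    projection is in this spirit; sizes here are illustrative."""
    rng = np.random.default_rng(seed)  # fixed seed -> deterministic, no stored table
    planes = rng.standard_normal((num_bits, features.shape[-1]))
    return (planes @ features > 0).astype(np.float32)  # {0,1}^num_bits
```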

25 kV–50 Hz railway power supply system emulation for power-hardware-in-the-loop testings

Caroline Stackler, Nathan Evans, Luc Bourserie, François Wallart, Florent Morel, Philippe Ladoux
2019 IET Electrical Systems in Transportation  
A procedure is developed to model the network. Then, the model is simplified to reduce the computation requirements and discretised for real-time implementation.  ...  The model is computed in Matlab-Simulink; a SpeedGoat Performance Machine and a linear power supply are used for the real-time implementation. The converter under test and the test bench are presented.  ...  Test results: To illustrate the usefulness of the infrastructure model, some tests have been carried out on the small-scale mock-up of the on-board converter presented in the previous section.  ... 
doi:10.1049/iet-est.2018.5011 fatcat:2m2sx6pmqje5tchuuj2i5ijbw4
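
Discretizing a continuous network model for a fixed-step real-time target typically reduces to replacing derivatives with difference equations. Below is a minimal sketch for a single series R-L catenary segment under forward Euler, with illustrative parameter values that are not the paper's.

```python
def step_rl_line(i, v_in, v_out, R=0.2, L=1.5e-3, Ts=50e-6):
    """One forward-Euler step of a series R-L line segment:
    L di/dt = v_in - v_out - R*i. Ts is the real-time solver step."""
    didt = (v_in - v_out - R * i) / L
    return i + Ts * didt
```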

Accumulated Polar Feature-based Deep Learning for Efficient and Lightweight Automatic Modulation Classification with Channel Compensation Mechanism [article]

Chieh-Fang Teng, Ching-Yao Chou, Chun-Hsiang Chen, An-Yeu Wu
2020 arXiv   pre-print
Moreover, in applying this lightweight NN-CE in a time-varying fading channel, two efficient mechanisms of online retraining are proposed, which can reduce transmission overhead and retraining overhead  ...  To address such an issue, automatic modulation classification (AMC) can help to reduce signaling overhead by blindly recognizing the modulation types without handshaking.  ...  significantly reduces both training overhead and model complexity.  ... 
arXiv:2001.01395v2 fatcat:26oguavs3zehrj45uampfzdi7m
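
The accumulated polar feature idea can be sketched as follows: convert complex IQ samples to amplitude/phase form and accumulate them into a fixed-size 2-D histogram that a small classifier can consume. The normalization and binning scheme below are assumptions, not the paper's exact construction.

```python
import numpy as np

def accumulated_polar_features(iq, num_bins=32):
    """Convert complex IQ samples to polar form and accumulate them into
    a 2-D amplitude/phase histogram (illustrative binning)."""
    amp = np.abs(iq) / (np.abs(iq).max() + 1e-12)   # amplitude normalized to [0, 1]
    phase = (np.angle(iq) + np.pi) / (2 * np.pi)    # phase mapped to [0, 1]
    hist, _, _ = np.histogram2d(amp, phase, bins=num_bins,
                                range=[[0, 1], [0, 1]])
    return hist / len(iq)                           # accumulated (averaged) feature map
```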

Unrolling Loops Containing Task Parallelism [chapter]

Roger Ferrer, Alejandro Duran, Xavier Martorell, Eduard Ayguadé
2010 Lecture Notes in Computer Science  
In these cases, the transformation will try to aggregate the multiple tasks that appear after a classic unrolling phase to reduce the overheads per iteration.  ...  Classic loop unrolling makes it possible to increase the performance of sequential loops by reducing the overheads of the non-computational parts of the loop.  ...  They are general enough to be applied to any task-parallel programming model. Unrolling countable loops is beneficial since it reduces the overhead related to the end-of-loop test and branching.  ... 
doi:10.1007/978-3-642-13374-9_30 fatcat:zip3nicw6nabtpjetdthrbvmom
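
The aggregation idea carries over to any task-parallel setting: instead of spawning one task per iteration, unroll by a factor and spawn one task per chunk, amortizing scheduling and end-of-loop overhead. A minimal Python sketch follows; the paper targets compiler-level transformations (e.g., OpenMP tasks), so this is only an analogy.

```python
from concurrent.futures import ThreadPoolExecutor

def work(i):
    return i * i  # stand-in per-iteration body

# Naive task-parallel loop: one task per iteration -> high scheduling overhead.
def run_naive(n, pool):
    return [pool.submit(work, i) for i in range(n)]

# Unrolled-and-aggregated version: one task covers UNROLL iterations,
# mirroring the paper's idea of merging the tasks a classic unroll exposes.
UNROLL = 4
def run_aggregated(n, pool):
    def chunk(start):
        for i in range(start, min(start + UNROLL, n)):
            work(i)
    return [pool.submit(chunk, s) for s in range(0, n, UNROLL)]

with ThreadPoolExecutor() as pool:
    run_aggregated(100, pool)  # 25 tasks instead of 100
```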

LighTN: Light-weight Transformer Network for Performance-overhead Tradeoff in Point Cloud Downsampling [article]

Xu Wang, Yi Jin, Yigang Cen, Tao Wang, Bowen Tang, Yidong Li
2022 arXiv   pre-print
The results of extensive experiments on classification and registration tasks demonstrate that LighTN can achieve state-of-the-art performance with limited resource overhead.  ...  However, Transformer-based architectures potentially consume too many resources, which is usually unaffordable for low-overhead task networks in the downsampling range.  ...  For a fair comparison, we leverage the official train-test split strategy, with 9840 CAD models for the training stage and 2648 CAD models for the testing stage.  ... 
arXiv:2202.06263v1 fatcat:hw2ekixioza37j27d66rdnsrz4
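
Learned downsampling of this kind boils down to scoring points and keeping the top k. In the sketch below the scores are an arbitrary input so the example stays self-contained; in LighTN they come from a light-weight self-attention network.

```python
import torch

def score_and_downsample(points, scores, k):
    """Keep the k highest-scoring points of a cloud.

    points: (N, 3) tensor; scores: (N,) tensor of learned importances."""
    idx = torch.topk(scores, k).indices
    return points[idx]

# Usage with placeholder scores:
pts = torch.randn(1024, 3)
out = score_and_downsample(pts, torch.randn(1024), k=256)  # (256, 3)
```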

Zero-Quantized Inter DCT Coefficient Prediction for Real-Time Video Coding

Jin Li, Moncef Gabbouj, Jarmo Takala
2012 IEEE transactions on circuits and systems for video technology (Print)  
Simulation results show that the proposed model reduces the complexity of transform and quantization more efficiently than competing techniques.  ...  Moreover, the proposed algorithm can perform the prediction on 1-D transforms in both the pixel domain and the transform domain.  ...  All the overheads in the test models are taken into account for measurement.  ... 
doi:10.1109/tcsvt.2011.2160749 fatcat:oddmr6ntu5hdzekcnpo2fwpla4

Training Large Neural Networks with Constant Memory using a New Execution Algorithm [article]

Bharadwaj Pudipeddi, Maral Mesmakhosroshahi, Jinwen Xi, Sujeeth Bharadwaj
2020 arXiv   pre-print
Widely popular transformer-based NLP models such as BERT and Turing-NLG have enormous capacity, trending to billions of parameters.  ...  L2L is also able to fit models of up to 50 billion parameters on a machine with a single 16GB V100 and 512GB of CPU memory, without requiring any model partitioning.  ...  To show the performance of L2L on models larger than BERT, we performed a test on a transformer-based model with settings used in Turing-NLG (Rajbhandari et al., 2019).  ... 
arXiv:2002.05645v5 fatcat:k2de2clszvbhdk3s4uxtdxakkm
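
The constant-memory execution scheme can be sketched as layer-by-layer staging: keep the whole model in host memory and move one layer at a time to the device, so device memory stays roughly constant in model depth. This is a simplification; the paper also manages optimizer state and a master copy of weights on the host.

```python
import torch

def l2l_forward(layers, x, device="cuda"):
    """Layer-to-layer style forward pass: stage one layer's weights onto
    the accelerator at a time and evict it before the next (sketch)."""
    for layer in layers:
        layer.to(device)          # stage this layer onto the GPU
        x = layer(x.to(device))
        layer.to("cpu")           # evict before staging the next layer
    return x
```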

Transfer Learning-Based Model Protection With Secret Key [article]

MaungMaung AprilPyone, Hitoshi Kiya
2021 arXiv   pre-print
It utilizes a learnable encryption step with a secret key to generate learnable transformed images. Models with pre-trained weights are fine-tuned by using such transformed images.  ...  By taking advantage of transfer learning, the proposed method enables us to train a large protected model, such as a model trained on ImageNet, by using a small subset of a training dataset.  ...  They utilized a block-wise transformation with a key for model protection [12]. However, it was tested only on CIFAR-10 [15], and the protected model was trained from scratch.  ... 
arXiv:2103.03525v1 fatcat:gsafrjy665c43kl2yevehig73q
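
One simple instance of a block-wise transformation with a secret key is a key-seeded permutation of fixed-size pixel blocks, sketched below. The block size and the permutation-only transform are assumptions; the cited schemes use richer per-block operations.

```python
import numpy as np

def blockwise_scramble(img, key, block=4):
    """Shuffle fixed-size pixel blocks with a permutation derived from a
    secret integer key (a simplified block-wise keyed transform)."""
    h, w, c = img.shape
    bh, bw = h // block, w // block
    blocks = (img.reshape(bh, block, bw, block, c)
                 .swapaxes(1, 2)
                 .reshape(bh * bw, block, block, c))
    perm = np.random.default_rng(key).permutation(len(blocks))  # key-dependent
    shuffled = blocks[perm]
    return (shuffled.reshape(bh, bw, block, block, c)
                    .swapaxes(1, 2)
                    .reshape(h, w, c))
```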

Reducing Activation Recomputation in Large Transformer Models [article]

Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, Bryan Catanzaro
2022 arXiv   pre-print
We evaluate our approach on language models up to one trillion parameters in scale and show that our method reduces activation memory by 5x, while reducing execution time overhead from activation recomputation  ...  Training large transformer models is one of the most important computational challenges of modern AI.  ...  Data parallelism introduces some overhead due to the gradient all-reduce required between the data parallel groups. However, for large transformer models, this overhead is not large.  ... 
arXiv:2205.05198v1 fatcat:4laqx6wrzjfwrnnfkshdgkkxxi
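
The baseline trade-off the paper improves on is plain activation checkpointing: store activations only at segment boundaries and recompute the rest during the backward pass. A minimal PyTorch sketch of that baseline follows; the paper's contribution (selective recomputation plus sequence parallelism) goes beyond it.

```python
import torch
from torch.utils.checkpoint import checkpoint_sequential

# 24-layer toy model; only 4 segment-boundary activations are stored,
# the rest are recomputed in backward, trading compute for memory.
model = torch.nn.Sequential(*[torch.nn.Linear(1024, 1024) for _ in range(24)])
x = torch.randn(8, 1024, requires_grad=True)
y = checkpoint_sequential(model, 4, x)
y.sum().backward()
```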

DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion [article]

Arthur Douillard, Alexandre Ramé, Guillaume Couairon, Matthieu Cord
2022 arXiv   pre-print
As a result, they struggle to scale to a large number of tasks without significant overhead. In this paper, we propose a transformer architecture based on a dedicated encoder/decoder framework.  ...  Our model reaches excellent results on CIFAR100 and state-of-the-art performance on the large-scale ImageNet100 and ImageNet1000 while having fewer parameters than concurrent dynamic frameworks.  ...  Our task-specialized model could help reduce these biases.  ... 
arXiv:2111.11326v2 fatcat:6r7rk5j5lbh6tn53tig335vzpq
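
Dynamic token expansion can be sketched as growing one learnable task token per task while the shared encoder stays fixed in size, which is why the per-task overhead stays small. The dimension and zero initialization below are illustrative assumptions.

```python
import torch

class TaskTokens(torch.nn.Module):
    """Grow one learnable task token per new task (DyTox-style sketch)."""
    def __init__(self, dim=384):
        super().__init__()
        self.dim = dim
        self.tokens = torch.nn.ParameterList()

    def add_task(self):
        self.tokens.append(torch.nn.Parameter(torch.zeros(1, 1, self.dim)))

    def forward(self, patch_tokens, task_id):
        # Prepend the task-specific token to the shared patch tokens.
        tok = self.tokens[task_id].expand(patch_tokens.size(0), -1, -1)
        return torch.cat([tok, patch_tokens], dim=1)
```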

Analysis of overhead cost behavior: case study on decision-making approach

Petr Novák, Ján Dvorský, Boris Popesko, Jiří Strouhal
2017 Journal of International Studies  
... cost asymmetric behavior called "sticky costs".  ...  We used the model adjusted in accordance with Anderson et al. (2003), and we kept the model transformed and assembled so that there remained only those variables that had a statistically significant  ...  ACKNOWLEDGEMENT This paper is one of the research outputs of the project GA 14-21654P/P403 "Variability of Cost Groups and Its Projection in the Costing Systems of Manufacturing Enterprises" registered  ... 
doi:10.14254/2071-8330.2017/10-1/5 fatcat:6iaidl2wvvathnegnhcakzi47y

Post-placement temperature reduction techniques

Wei Liu, Alberto Nannarelli, Andrea Calimera, Enrico Macii, Massimo Poncino
2010 2010 Design, Automation & Test in Europe Conference & Exhibition (DATE 2010)  
Experiments on a set of test circuits implemented in STM 65 nm technology show that our methods achieve better peak temperature reduction than directly increasing the circuit's area.  ...  We propose two post-placement techniques to reduce peak temperature by intelligently allocating whitespace in the hotspots.  ...  As an example, the power (left) and thermal (right) profile of test set one is shown in Figure 5.  ... 
doi:10.1109/date.2010.5457127 dblp:conf/date/LiuNCMP10 fatcat:3udntybxfbfqtbre4vgzmyb4fi
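
A hotspot-driven whitespace allocation can be sketched as distributing a fixed area budget over placement bins in proportion to each bin's temperature excess. This is a simplified stand-in for the paper's techniques, with hypothetical inputs.

```python
import numpy as np

def allocate_whitespace(temp, budget, t_ref=None):
    """Distribute a whitespace area budget over placement bins in
    proportion to how far each bin exceeds a reference temperature."""
    t_ref = temp.mean() if t_ref is None else t_ref
    excess = np.clip(temp - t_ref, 0, None)
    if excess.sum() == 0:
        return np.zeros_like(temp)
    return budget * excess / excess.sum()  # whitespace area per bin
```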

An empirical analysis of manufacturing overhead cost drivers

Rajiv D. Banker, Gordon Potter, Roger G. Schroeder
1995 Journal of Accounting & Economics  
Most of the variation in overhead costs, however, is explained by measures of manufacturing transactions, not volume.  ...  When the transaction variables are reduced by one-fourth of their corresponding standard deviations, the predicted overhead costs based on model (A) decrease by 33.2%, with AREAPP contributing a 10.4%  ... 
doi:10.1016/0165-4101(94)00372-c fatcat:rqyt32hhbzexhlenrgwkqhr2qq
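
The sensitivity experiment quoted above has a simple mechanical form: fit a linear cost-driver model, perturb one transaction variable by a fraction of its standard deviation, and report the percentage change in predicted overhead. The sketch below assumes a hypothetical fitted model (X, beta); it is not the paper's model (A).

```python
import numpy as np

def overhead_sensitivity(X, beta, col, sd_fraction=0.25):
    """Percent change in total predicted overhead when one driver column
    is reduced by sd_fraction of its standard deviation."""
    base = (X @ beta).sum()
    X2 = X.copy()
    X2[:, col] -= sd_fraction * X[:, col].std()
    return 100 * ((X2 @ beta).sum() - base) / base
```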

Machine Learning for Sensor Transducer Conversion Routines [article]

Thomas Newton, James T. Meech, Phillip Stanley-Marbell
2021 arXiv   pre-print
We present a Pareto analysis of the tradeoff between accuracy and computational overhead for the models, and models that reduce the computational overhead of the existing industry-standard conversion routines  ...  These results show that machine learning methods for learning conversion routines can produce conversion routines with reduced computational overhead which maintain good accuracy.  ...  Tested models: We evaluated both function-approximation methods and time-series methods.  ... 
arXiv:2108.11374v2 fatcat:qj7yvr6eljbmbfkq6gvncskotm
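
A learned conversion routine in this spirit can be as simple as a low-degree polynomial fit from raw ADC codes to calibrated reference values; accuracy then trades off against the handful of multiply-adds needed to evaluate it. Everything below (data names, degree) is illustrative.

```python
import numpy as np

def fit_conversion(raw, reference, degree=3):
    """Fit a low-degree polynomial mapping raw transducer readings to
    calibrated physical values, a cheap stand-in for an
    industry-standard conversion routine."""
    return np.polynomial.Polynomial.fit(raw, reference, degree)

# Usage: poly = fit_conversion(adc_codes, lab_readings); value = poly(code)
# The Pareto point is then (max |error|, cost of evaluating `degree` terms).
```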