A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning
[article]
2022
arXiv
pre-print
Accordingly, we propose an equivariance learning framework, which encodes tables with a structure-aware self-attention mechanism. ...
Controlled table-to-text generation seeks to generate natural language descriptions for highlighted subparts of a table. ...
Conclusion We propose LATTICE, a structure-aware equivariance learning framework for controlled table-to-text generation. ...
arXiv:2205.03972v1
fatcat:qkf7uiqvfzcjbfmlpjrzakriwm
Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning
2022
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
unpublished
Accordingly, we propose an equivariance learning framework, LATTICE, which encodes tables with a structure-aware self-attention mechanism. ...
Controlled table-to-text generation seeks to generate natural language descriptions for highlighted subparts of a table. ...
Conclusion We propose LATTICE, a structure-aware equivariance learning framework for controlled table-to-text generation. ...
doi:10.18653/v1/2022.naacl-main.371
fatcat:5wh5axtdzran5ijov3bdhaupji
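The LATTICE snippets above mention encoding tables with a structure-aware self-attention mechanism. As a rough illustration only (a minimal numpy sketch of one plausible form of such structure awareness — a mask restricting each cell to attend within its own row or column — not the paper's actual method; all names and sizes are hypothetical):

```python
import numpy as np

rows, cols = 2, 3  # a tiny 2x3 table, one token per cell (illustrative sizes)
n = rows * cols
r = np.repeat(np.arange(rows), cols)  # row index of each cell token
c = np.tile(np.arange(cols), rows)    # column index of each cell token

# Structure-aware mask: a cell attends only to cells sharing its row or column.
mask = (r[:, None] == r[None, :]) | (c[:, None] == c[None, :])

rng = np.random.default_rng(2)
Q = rng.normal(size=(n, 4))
K = rng.normal(size=(n, 4))
V = rng.normal(size=(n, 4))

scores = Q @ K.T / np.sqrt(4)
scores[~mask] = -np.inf  # block attention across unrelated cells
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
out = attn @ V  # each cell's output mixes only same-row/same-column cells
```

Because the mask is defined by row/column membership rather than token position, the attention pattern is unchanged if rows or columns of the table are reordered consistently — the kind of structural invariance the abstract alludes to.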
ACNe: Attentive Context Normalization for Robust Permutation-Equivariant Learning
[article]
2021
arXiv
pre-print
In this paper, we propose Attentive Context Normalization (ACN), a simple yet effective technique to build permutation-equivariant networks robust to outliers. ...
Permutation-equivariant networks have become a popular solution-they operate on individual data points with simple perceptrons and extract contextual information with global pooling. ...
With learned features - Table 6. Finally, we report that our method also works well with two state-of-the-art learned local feature methods, SuperPoint [14] and LF-Net [35]. ...
arXiv:1907.02545v5
fatcat:svyngsl37rchnbdsnyll3qvuxu
ACNe: Attentive Context Normalization for Robust Permutation-Equivariant Learning
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
In this paper, we propose Attentive Context Normalization (ACN), a simple yet effective technique to build permutation-equivariant networks robust to outliers. ...
Permutation-equivariant networks have become a popular solution - they operate on individual data points with simple perceptrons and extract contextual information with global pooling. ...
Robust line fitting - Fig. 1 and Table 1. To generate 2D points on a random line, as well as outliers, we first sample 2D points uniformly within the range [−1, +1]. ...
doi:10.1109/cvpr42600.2020.01130
dblp:conf/cvpr/SunJTTY20
fatcat:uoydzm7tdbcujfxud2ca7uyxoe
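The ACNe snippets above describe the basic permutation-equivariant recipe: a shared perceptron applied to each point, with context gathered by global pooling. A minimal numpy sketch of that generic pattern (not ACNe's attentive normalization itself; weights and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))  # shared per-point perceptron weights (illustrative)

def equivariant_layer(points):
    # Apply the same perceptron to every point, then append a globally
    # pooled context vector to each point's features.
    feats = np.maximum(points @ W, 0.0)        # per-point perceptron with ReLU
    context = feats.max(axis=0, keepdims=True)  # global (max) pooling
    return np.concatenate(
        [feats, np.repeat(context, len(points), axis=0)], axis=1
    )

pts = rng.normal(size=(5, 3))
perm = rng.permutation(5)
out = equivariant_layer(pts)
# Permuting the input rows permutes the output rows identically:
assert np.allclose(equivariant_layer(pts[perm]), out[perm])
```

The final assertion is the defining property: because no operation mixes points except the order-insensitive pooling, reordering the inputs simply reorders the outputs.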
Training Efficiency and Robustness in Deep Learning
[article]
2021
arXiv
pre-print
Finally, we study adversarial robustness in deep learning and approaches to achieve maximal adversarial robustness without training with additional data. ...
Next, we seek improvements to optimization speed in general-purpose optimization methods in deep learning. ...
ad-hoc video search and video-to-text description generation. ...
arXiv:2112.01423v1
fatcat:3yqco7htnjdbng4hx2ilkrnkaq
Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs
[article]
2021
arXiv
pre-print
Recently, it has been shown that simulating a primary visual cortex (V1) at the front of CNNs leads to small improvements in robustness to these image perturbations. ...
Finally, we show that using distillation, it is possible to partially compress the knowledge in the ensemble model into a single model with a V1 front-end. ...
These could include cortical computations such as divisive normalization or gain-control mechanisms to combine the different V1 neuronal populations and generate even stronger improvements in robustness ...
arXiv:2110.10645v2
fatcat:7zi7yskvz5dttjviyulzdbxxne
t-Statistic Based Correlation and Heterogeneity Robust Inference
2010
Journal of business & economic statistics
We develop a general approach to robust inference about a scalar parameter of interest when the data is potentially heterogeneous and correlated in a largely unknown way. ...
One might thus conduct robust large sample inference as follows: partition the data into q ≥ 2 groups, estimate the model for each group, and conduct a standard t-test with the resulting q parameter estimators ...
On a fundamental level, some a priori knowledge about the correlation structure is required in order to be able to learn from the data. ...
doi:10.1198/jbes.2009.08046
fatcat:dg7qtqii3ve75c7k5jioq3nczy
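The snippet above spells out the procedure concretely: partition the data into q ≥ 2 groups, estimate the parameter within each group, and run a standard t-test on the q group estimators. A small numpy sketch of that recipe for a scalar mean (synthetic data and group count are illustrative, and the 2.365 critical value is the standard two-sided 5% point of the t-distribution with q − 1 = 7 degrees of freedom):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic draws; in practice the data may be heterogeneous and correlated,
# with correlation hopefully weak *across* the chosen groups.
data = rng.normal(loc=0.5, scale=1.0, size=800)

q = 8  # number of groups
group_means = data.reshape(q, -1).mean(axis=1)  # one estimator per group

# Standard one-sample t-test on the q group estimators (H0: mean = 0).
t = np.sqrt(q) * group_means.mean() / group_means.std(ddof=1)
reject = abs(t) > 2.365  # two-sided 5% critical value, t with q - 1 = 7 df
```

The point of the construction is that the t-test's validity rests only on the q group estimators being approximately independent and Gaussian, not on the (unknown) dependence structure within groups.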
T-Statistic Based Correlation and Heterogeneity Robust Inference
2007
Social Science Research Network
We develop a general approach to robust inference about a scalar parameter of interest when the data is potentially heterogeneous and correlated in a largely unknown way. ...
One might thus conduct robust large sample inference as follows: partition the data into q ≥ 2 groups, estimate the model for each group, and conduct a standard t-test with the resulting q parameter estimators ...
On a fundamental level, some a priori knowledge about the correlation structure is required in order to be able to learn from the data. ...
doi:10.2139/ssrn.964224
fatcat:wfwf6mg2fvgp3fcggjo5fb5iom
CNN Architectures for Geometric Transformation-Invariant Feature Representation in Computer Vision: A Review
2021
SN Computer Science
Using these methods, it is possible to develop task-oriented solutions to deal with nontrivial transformations. ...
Recently, deep learning techniques have proven very successful in visual recognition tasks but they typically perform poorly with small data or when deployed in environments that deviate from training ...
It increases the robustness of the output feature maps to minor deformations and small variations in image structure. ...
doi:10.1007/s42979-021-00735-0
fatcat:3zrkaan7dncoja4e32u7jgwo4m
Table of Contents
2021
IEEE/ACM Transactions on Audio Speech and Language Processing
A Graph-to-Sequence Learning Framework for Summarizing Opinionated Texts ... P. Wei, J. Zhao, and W. Hu ...
PROTOTYPE-TO-STYLE: Dialogue Generation With Style-Aware Editing on Retrieval Memory ... Overview of the Eighth Dialog System Technology Challenge: DSTC8 ...
doi:10.1109/taslp.2021.3137066
fatcat:ocit27xwlbagtjdyc652yws4xa
Table of Contents
2021
IEEE/ACM Transactions on Audio Speech and Language Processing
PROTOTYPE-TO-STYLE: Dialogue Generation With Style-Aware Editing on Retrieval Memory ...
Learning to Generate Explainable Plots for Neural Story Generation ...
doi:10.1109/taslp.2021.3137064
fatcat:rpka3f2bhjh37c7pkhiowyndhm
Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning
[article]
2021
arXiv
pre-print
We show that, with GTG, R-GCNs generalize better both in terms of in-distribution and out-of-distribution compared to baselines based on Convolutional Neural Networks and Neural Logic Machines on challenging ...
Based on this insight, we propose Grid-to-Graph (GTG), a mapping from grid structures to relational graphs that carry useful spatial relational inductive biases when processed through a Relational Graph ...
Out-of-Distribution Systematic Generalization In Table 1 and Table 2 , we show how policies learned by our relational models can generalize to environments outside of the training distribution. ...
arXiv:2102.04220v1
fatcat:ryjhh6xr2zayrak3mnj6pmoixu
Fanaroff-Riley classification of radio galaxies using group-equivariant convolutional neural networks
[article]
2021
arXiv
pre-print
A CNN must learn explicitly to classify all rotated versions of a particular type of object individually. ...
However, although conventional convolutions are equivariant to translation, they are not equivariant to other isometries of the input image data, such as rotation and reflection. ...
Therefore, by changing the degree of equivariance as a function of layer depth one can control the degree to which local equivariance is enforced. ...
arXiv:2102.08252v2
fatcat:expswjvztzcszlo3jrqvvdws4e
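The snippets above note that ordinary convolutions are equivariant to translation but not to rotation, which group-equivariant CNNs fix by building the rotation group into the layer. A toy numpy sketch of a C4 (90° rotations) lifting layer reduced to global correlations, purely to make the equivariance property checkable (not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(5, 5))  # a single filter (illustrative)

def c4_lift(x):
    # Correlate the input with the filter under each rotation in the cyclic
    # group C4, producing one response per group element.
    return np.array([np.sum(x * np.rot90(w, k)) for k in range(4)])

x = rng.normal(size=(5, 5))
# Rotating the input by 90 degrees cyclically shifts the C4 responses,
# rather than scrambling them - that is rotation equivariance:
assert np.allclose(c4_lift(np.rot90(x)), np.roll(c4_lift(x), 1))
```

In a real group-equivariant CNN the same idea holds per spatial location and per layer, which is what lets the abstract speak of controlling the degree of equivariance as a function of layer depth.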
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges
[article]
2021
arXiv
pre-print
While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not generic, and come with essential pre-defined regularities arising from the underlying ...
This text is concerned with exposing these regularities through unified geometric principles that can be applied throughout a wide spectrum of applications. ...
Acknowledgements This text represents a humble attempt to summarise and synthesise decades of existing knowledge in deep learning architectures, through the geometric lens of invariance and symmetry. ...
arXiv:2104.13478v2
fatcat:odbzfsau6bbwbhulc233cfsrom
Pre-training of Equivariant Graph Matching Networks with Conformation Flexibility for Drug Binding
[article]
2022
arXiv
pre-print
learning tasks: an atom-level prompt-based denoising generative task and a conformation-level snapshot ordering task to seize the flexibility information inside MD trajectories with very fine temporal ...
To tackle this obstacle, we present a novel spatial-temporal pre-training method based on the modified Equivariant Graph Matching Networks (EGMN), dubbed ProtMD, which has two specially designed self-supervised ...
Unlike the naive generative self-supervised learning, a time-series prompt is added to regulate and control the time interval between the source and target conformations. ...
arXiv:2204.08663v3
fatcat:ps3eakq5rbcfla3eolh4av4zu4
Showing results 1 — 15 out of 205 results