9 Hits in 0.63 sec

HexaConv [article]

Emiel Hoogeboom, Jorn W.T. Peters, Taco S. Cohen, Max Welling
2018 arXiv pre-print
We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. ... We have introduced G-HexaConv, an extension of group convolutions for hexagonal pixelations. Hexagonal grids allow 6-fold rotations without the need for interpolation. ... Source code of G-HexaConvs is available on GitHub: https://github.com/ehoogeboom/hexaconv. ...
arXiv:1803.02108v1 fatcat:eyb4bblayjeyjporwhoidcgt4q
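
The excerpt above notes that hexagonal grids admit 6-fold rotations without interpolation. A minimal sketch of that property (illustrative only, not the authors' code; the cube-coordinate convention and the rotation direction are assumptions):

def rotate60(cell):
    """Rotate a hex cell, given in cube coordinates (x, y, z) with x + y + z == 0,
    by 60 degrees about the origin (direction depends on the axis convention)."""
    x, y, z = cell
    assert x + y + z == 0, "valid cube coordinates sum to zero"
    return (-z, -x, -y)

cell = (2, -1, -1)                     # an arbitrary hexagonal lattice cell
orbit = [cell]
for _ in range(5):
    orbit.append(rotate60(orbit[-1]))  # every image is again an exact lattice cell
print(orbit)
assert rotate60(orbit[-1]) == cell     # six rotations return to the start: 6-fold symmetry

Because every rotated cell lands exactly on another lattice cell, p6 group convolutions on a hexagonal lattice never need to resample filter values.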

PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions [article]

Zhengyang Shen, Lingshen He, Zhouchen Lin, Jinwen Ma
2020 arXiv pre-print
In addition, Hoogeboom et al. (2018) proposed HexaConv and showed how one can implement planar convolutions and group convolutions over hexagonal lattices, instead of square ones. ... Using comparable numbers of parameters, our methods perform significantly better than HexaConv (5.38% vs. 8.64% on C10). ... Following HexaConv, we use our PDO-eConvs to establish models that are equivariant to the group p6 (p6m), where n = 4 and k_i = 6, 13, 26 (k_i = 6, 9, 18). ...
arXiv:2007.10408v2 fatcat:idm5xfehgza3few352p4ul7uoq
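
The excerpt above ties partial differential operators to convolutions. As a hedged illustration of that link (not the paper's PDO parametrization; the Laplacian and its standard 5-point stencil are textbook choices used here only as an example), a rotation-invariant differential operator discretizes into an ordinary convolution kernel:

import numpy as np

# Standard 5-point stencil for the Laplacian d^2/dx^2 + d^2/dy^2, a rotation-invariant
# differential operator discretized as a small convolution kernel.
laplacian_stencil = np.array([[0.,  1., 0.],
                              [1., -4., 1.],
                              [0.,  1., 0.]])

def apply_stencil(image, kernel):
    """Plain 'valid' 2-D correlation with a 3x3 kernel (no padding, stride 1)."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

image = np.random.rand(8, 8)
response = apply_stencil(image, laplacian_stencil)   # discrete Laplacian of the image
print(response.shape)                                # (6, 6)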

Invariant Tensor Feature Coding [article]

Yusuke Mukuta, Tatsuya Harada
2019 arXiv pre-print
To obtain local features on which D6 acts orthogonally, we pretrained the CNN with HexaConv [13]. ... HexaConv models the input images on a hexagonal lattice and applies D6 group-equivariant convolutional layers to construct the CNN. ...
arXiv:1906.01857v2 fatcat:oubgq3jwivawvhbfwgbtuh2f2y

Indexed Operations for Non-rectangular Lattices Applied to Convolutional Neural Networks

Mikael Jacquemont, Luca Antiga, Thomas Vuillaume, Giorgia Silvestri, Alexandre Benoit, Patrick Lambert, Gilles Maurin
2019 Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications  
For this experiment and the one on the AID dataset (see Sec. 5.3), we compare our results with the two baseline networks of the HexaConv paper (Hoogeboom et al., 2018). ...
doi:10.5220/0007364303620371 dblp:conf/visapp/JacquemontAVSBL19 fatcat:dlht6xpqmradbkmynpkelzr6f4
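
The indexed operations referenced above implement convolution on non-rectangular (e.g., hexagonal) lattices by storing pixels in a flat vector and gathering neighbours through a precomputed index table. A small sketch of that general mechanism (an assumed minimal version, not the authors' implementation; indexed_conv and its arguments are hypothetical names):

import numpy as np

def indexed_conv(values, neighbor_idx, weights):
    """values: (n_pixels,) flat vector of pixel values on an arbitrary lattice.
    neighbor_idx: (n_out, k) flat indices of the k neighbours of each output pixel.
    weights: (k,) one filter tap per neighbour position.
    Returns (n_out,): out[i] = sum_k weights[k] * values[neighbor_idx[i, k]]."""
    gathered = values[neighbor_idx]   # gather the neighbourhood of every output pixel
    return gathered @ weights         # weighted sum over the k taps

# Toy usage: 7 hexagonal pixels (a centre plus its 6 neighbours), one output
# position whose neighbourhood is the whole patch, and an averaging filter.
values = np.arange(7, dtype=float)
neighbor_idx = np.array([[0, 1, 2, 3, 4, 5, 6]])
weights = np.full(7, 1.0 / 7.0)
print(indexed_conv(values, neighbor_idx, weights))   # [3.]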

Natural Graph Networks [article]

Pim de Haan, Taco Cohen, Max Welling
2020 arXiv pre-print
A Natural Graph Network on these reduced features is exactly equivalent to HexaConv [Hoogeboom et al., 2018]. ... Our goal is to define a general construction for equivariant ...
arXiv:2007.08349v2 fatcat:bble7vlwiza77k7i2jv3tvvcce

Continual Learning for Grounded Instruction Generation by Observing Human Following Behavior

Noriyuki Kojima, Alane Suhr, Yoav Artzi
2021 Transactions of the Association for Computational Linguistics  
Because the CEREALBAR environment is a grid of hexagons, we use HEXACONV (Hoogeboom et al., 2018). ...
doi:10.1162/tacl_a_00428 fatcat:b7h6q4qpf5btdggoqu3x6xmlhq

Fourier Series Expansion Based Filter Parametrization for Equivariant Convolutions [article]

Qi Xie and Qian Zhao and Zongben Xu and Deyu Meng
2021 arXiv pre-print
By replacing the commonly used square lattices with hexagonal lattices, HexaConv succeeded in expanding rotation equivariance to π/3 rotations (p6m) [7]. ... Afterwards, HexaConv [7] expanded rotation equivariance to π/3 rotations (i.e., p6 and p6m group equivariance) by replacing the commonly used square lattices with hexagonal lattices. ...
arXiv:2107.14519v1 fatcat:uspkp2dlmfeavezvehuibpuoiu

Gauge Equivariant Convolutional Networks and the Icosahedral CNN [article]

Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling
2019 arXiv   pre-print
Gauge equivariant convolution on the icosahedron is implemented in three steps: G-Padding, kernel expansion, and 2d convolution / HexaConv (Hoogeboom et al., 2018). ...
arXiv:1902.04615v3 fatcat:o7rb4j7nhjee7bfiatarn7gnsm

Continual Learning for Grounded Instruction Generation by Observing Human Following Behavior [article]

Noriyuki Kojima, Alane Suhr, Yoav Artzi
2021 arXiv pre-print
Because the CEREALBAR environment is a grid of hexagons, we use HEXACONV (Hoogeboom et al., 2018). ...
arXiv:2108.04812v1 fatcat:cbxm3rwqjrbsplanc3ew43fkvy