A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
A Data-scalable Transformer for Medical Image Segmentation: Architecture, Model Efficiency, and Benchmark
[article] 2022 arXiv pre-print
Transformer, as a new generation of neural architecture, has demonstrated remarkable performance in natural language processing and computer vision. However, existing vision Transformers struggle to learn with limited medical data and are unable to generalize on diverse medical image tasks. To tackle these challenges, we present UTNetV2 as a data-scalable Transformer towards generalizable medical image segmentation. The key designs incorporate desirable inductive bias, hierarchical modeling …
arXiv:2203.00131v3
fatcat:dmuh4yga4rahzjjdy4ttg7eei4