A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
LAMPRET: Layout-Aware Multimodal PreTraining for Document Understanding [article]
2021, arXiv pre-print
Document layout comprises both structural and visual (e.g., font sizes) information that is vital but often ignored by machine learning models. The few existing models that do use layout information consider only textual contents and overlook contents in other modalities, such as images. Additionally, the spatial interactions of the presented contents in a layout have never been fully exploited. To bridge this gap, we parse a document into content blocks (e.g., text, table, image) and ...
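The block decomposition mentioned in the abstract can be pictured with a minimal data-structure sketch. The field names and types below (`BlockType`, `bbox`, `font_size`) are illustrative assumptions, not the paper's actual schema or code.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional, Tuple


class BlockType(Enum):
    TEXT = "text"
    TABLE = "table"
    IMAGE = "image"


@dataclass
class ContentBlock:
    """One layout block of a parsed document: its modality, content, and position."""
    block_type: BlockType
    content: str                              # raw text, serialized table, or image path
    bbox: Tuple[float, float, float, float]   # (x0, y0, x1, y1) coordinates on the page
    font_size: Optional[float] = None         # example of a visual attribute for text blocks


@dataclass
class Document:
    """A document represented as an ordered list of content blocks."""
    blocks: List[ContentBlock] = field(default_factory=list)


# Toy two-block document: a heading and a figure
doc = Document(blocks=[
    ContentBlock(BlockType.TEXT, "LAMPRET overview", (50, 700, 550, 730), font_size=18.0),
    ContentBlock(BlockType.IMAGE, "figures/architecture.png", (50, 400, 550, 690)),
])

for block in doc.blocks:
    print(block.block_type.value, block.bbox)
```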
arXiv:2104.08405v1
fatcat:k2ss7rzs5ngqlgsds44ybgvf5i