DCAN: Dual Channel-Wise Alignment Networks for Unsupervised Scene Adaptation
[chapter]
2018
Lecture Notes in Computer Science
Harvesting dense pixel-level annotations to train deep neural networks for semantic segmentation is extremely expensive and unwieldy at scale. While learning from synthetic data where labels are readily available sounds promising, performance degrades significantly when testing on novel realistic data due to domain discrepancies. We present Dual Channel-wise Alignment Networks (DCAN), a simple yet effective approach to reduce domain shift at both pixel-level and feature-level. Exploring …
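The abstract describes channel-wise alignment only at a high level. As an illustration, the following is a minimal PyTorch sketch of one plausible reading, in which the per-channel statistics (mean and standard deviation) of a source feature map are matched to those of a sample from the target domain. The function name `channel_wise_align` and the simple mean/std matching scheme are assumptions made for illustration, not the paper's exact formulation.

```python
import torch

def channel_wise_align(source_feat, target_feat, eps=1e-5):
    """Hypothetical channel-wise alignment: shift per-channel mean/std of
    `source_feat` (N, C, H, W) toward those of `target_feat` (N, C, H, W)."""
    src_mean = source_feat.mean(dim=(2, 3), keepdim=True)
    src_std = source_feat.std(dim=(2, 3), keepdim=True) + eps
    tgt_mean = target_feat.mean(dim=(2, 3), keepdim=True)
    tgt_std = target_feat.std(dim=(2, 3), keepdim=True) + eps
    # Normalize each source channel, then re-scale/shift with the target
    # channel statistics; the spatial layout of the source features is kept.
    return (source_feat - src_mean) / src_std * tgt_std + tgt_mean

# Example: align features of a synthetic (source) image toward a real (target) sample.
source = torch.randn(1, 64, 32, 32)
target = torch.randn(1, 64, 32, 32)
aligned = channel_wise_align(source, target)
```

In this reading, the same statistic-matching idea could be applied both in an image generator (pixel-level) and on intermediate segmentation features (feature-level), which is consistent with the dual alignment the abstract mentions.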
doi:10.1007/978-3-030-01228-1_32
fatcat:gdesitnfhjh7jatknwpypwav3m