A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit <a rel="external noopener" href="http://pdfs.semanticscholar.org/8961/677300a9ee30ca51e1a3cf9815b4a162265b.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Deep Representation Learning with Part Loss for Person Re-Identification
<span title="">2019</span>
<i title="Institute of Electrical and Electronics Engineers (IEEE)">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/dhlhr4jqkbcmdbua2ca45o7kru" style="color: black;">IEEE Transactions on Image Processing</a>
</i>
Learning discriminative representations for unseen person images is critical for person Re-Identification (ReID). Most current approaches learn deep representations through classification tasks, which essentially minimize the empirical classification risk on the training set. As shown in our experiments, such representations easily overfit to a discriminative human body part within the training set. To gain discriminative power on unseen person images, we propose a deep representation learning procedure named Part Loss Networks (PL-Net), which minimizes both the empirical classification risk and the representation learning risk. The representation learning risk is evaluated by the proposed part loss, which automatically detects human body parts and computes the person classification loss on each part separately. Compared with the traditional global classification loss, simultaneously considering the part loss enforces the deep network to learn representations for different parts and to gain discriminative power on unseen persons. Experimental results on three person ReID datasets, i.e., Market1501, CUHK03, and VIPeR, show that our representation outperforms existing deep representations.
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tip.2019.2891888">doi:10.1109/tip.2019.2891888</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/sixozjalnnh6lmcfturmo4jese">fatcat:sixozjalnnh6lmcfturmo4jese</a>
</span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190228005455/http://pdfs.semanticscholar.org/8961/677300a9ee30ca51e1a3cf9815b4a162265b.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/89/61/8961677300a9ee30ca51e1a3cf9815b4a162265b.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tip.2019.2891888">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="external alternate icon"></i>
ieee.org
</button>
</a>