A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2018; you can also visit <a rel="external noopener" href="https://www.spiedigitallibrary.org/journals/Journal-of-Electronic-Imaging/volume-25/issue-6/061624/Global-motion-compensated-visual-attention-based-video-watermarking/10.1117/1.JEI.25.6.061624.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
Global motion compensated visual attention-based video watermarking
<span title="2016-12-20">2016</span>
<i title="SPIE-Intl Soc Optical Eng">
<a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2emxxxtvbjco7e2f75dok676dy" style="color: black;">Journal of Electronic Imaging (JEI)</a>
</i>
Imperceptibility and robustness are two key but complementary requirements of any watermarking algorithm. Low-strength watermarking yields high imperceptibility but exhibits poor robustness. High-strength watermarking schemes achieve good robustness but often suffer from embedding distortions, resulting in poor visual quality in the host media. This paper proposes a unique video watermarking algorithm that offers a fine balance between imperceptibility and robustness using a motion compensated visual attention model (VAM). The proposed VAM includes spatial cues for visual saliency as well as temporal cues. The spatial modeling uses the spatial wavelet coefficients, while the temporal modeling accounts for both local and global motion to arrive at the spatiotemporal VAM for video. The model is then used to develop a video watermarking algorithm, where a two-level watermarking weighting parameter map is generated from the VAM saliency maps and data are embedded into the host image according to the visual attentiveness of each region. By avoiding higher-strength watermarking in the visually attentive regions, the resulting watermarked video achieves high perceived visual quality while preserving high robustness. The proposed VAM outperforms the state-of-the-art video visual attention methods in joint saliency detection and low computational complexity performance. For the same embedding distortion, the proposed visual attention-based watermarking achieves up to 39% (nonblind) and 22% (blind) improvement in robustness against H.264/AVC compression, compared to an existing watermarking methodology that does not use the VAM. The proposed visual attention-based video watermarking results in visual quality similar to that of low-strength watermarking and robustness similar to that of high-strength watermarking.
<span class="external-identifiers">
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/1.jei.25.6.061624">doi:10.1117/1.jei.25.6.061624</a>
<a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/q7gic5pm5rerjls4elofkwuu5q">fatcat:q7gic5pm5rerjls4elofkwuu5q</a>
</span>
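The saliency-weighted embedding idea described in the abstract (lower embedding strength in visually attentive regions, higher strength elsewhere) can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the function names, the strength values, and the simple additive spatial-domain embedding are all assumptions standing in for the paper's wavelet-domain scheme, and the saliency map is taken as precomputed rather than derived from the motion compensated VAM.

```python
# Illustrative sketch of two-level, saliency-weighted watermark embedding.
# Assumptions (not from the paper): additive spatial-domain embedding,
# strength values 0.02 / 0.10, and a precomputed saliency map in [0, 1].
import numpy as np

def two_level_weight_map(saliency, low=0.02, high=0.10, threshold=0.5):
    """Binarize a saliency map into a two-level strength map:
    low embedding strength in attentive (salient) regions, high elsewhere."""
    return np.where(saliency >= threshold, low, high)

def embed(frame, bit, saliency, rng_seed=0):
    """Additive spread-spectrum embedding of a single bit, modulated by
    the two-level weight map (stand-in for wavelet-domain embedding)."""
    rng = np.random.default_rng(rng_seed)
    carrier = rng.choice([-1.0, 1.0], size=frame.shape)  # pseudo-random pattern
    alpha = two_level_weight_map(saliency)
    sign = 1.0 if bit else -1.0
    return frame + sign * alpha * carrier * 255.0

frame = np.full((8, 8), 128.0)
saliency = np.zeros((8, 8))
saliency[:4] = 1.0  # top half is "visually attentive"
wm = embed(frame, True, saliency)
# Embedding distortion is smaller in the salient top half than the bottom half
print(np.abs(wm[:4] - 128).mean() < np.abs(wm[4:] - 128).mean())  # → True
```

The two-level map is what makes the trade-off explicit: attentive regions get near-imperceptible perturbations, while non-attentive regions carry the bulk of the watermark energy for robustness.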
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180720220616/https://www.spiedigitallibrary.org/journals/Journal-of-Electronic-Imaging/volume-25/issue-6/061624/Global-motion-compensated-visual-attention-based-video-watermarking/10.1117/1.JEI.25.6.061624.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext">
<button class="ui simple right pointing dropdown compact black labeled icon button serp-button">
<i class="icon ia-icon"></i>
Web Archive
[PDF]
<div class="menu fulltext-thumbnail">
<img src="https://blobs.fatcat.wiki/thumbnail/pdf/3c/04/3c04a6e5784b34d3030413fec6589ae5e09f326d.180px.jpg" alt="fulltext thumbnail" loading="lazy">
</div>
</button>
</a>
<a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/1.jei.25.6.061624">
<button class="ui left aligned compact blue labeled icon button serp-button">
<i class="external alternate icon"></i>
Publisher / doi.org
</button>
</a>