This paper presents a method for extracting foreground objects from live video through automatic object segmentation. Pixel colour, pixel motion, and image texture (more specifically, texture constraints) serve as the segmentation cues. A cellular neural network that combines pixel colour with frame-to-frame motion is implemented, which separates object boundaries accurately and thereby reduces misclassification. The global motion of

doi:10.18831/djcse.in/2016021001
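The cellular-neural-network formulation in the paper is not reproduced here, but the core idea of fusing colour and motion cues per pixel can be sketched simply: a pixel is labelled foreground only if it both changes between consecutive frames and differs from a reference background colour. The function below is a minimal illustrative sketch with hypothetical thresholds, not the paper's actual method.

```python
import numpy as np

def foreground_mask(prev_frame, curr_frame, bg_color,
                    motion_thresh=20.0, color_thresh=40.0):
    """Combine a motion cue and a colour cue into one foreground mask.

    prev_frame, curr_frame: H x W x 3 float arrays (consecutive frames).
    bg_color: length-3 reference background colour.
    The thresholds are illustrative placeholders, not values from the paper.
    """
    # Motion cue: per-pixel colour change between consecutive frames.
    motion = np.linalg.norm(curr_frame - prev_frame, axis=2) > motion_thresh
    # Colour cue: per-pixel distance from the reference background colour.
    color = np.linalg.norm(curr_frame - np.asarray(bg_color, float),
                           axis=2) > color_thresh
    # A pixel is foreground only when both cues agree, which suppresses
    # misclassifications from noise that trips a single cue.
    return motion & color

# Usage: a moving red patch on a static black background.
prev = np.zeros((4, 4, 3))
curr = prev.copy()
curr[1, 1] = [200.0, 0.0, 0.0]   # object appears at pixel (1, 1)
mask = foreground_mask(prev, curr, bg_color=[0.0, 0.0, 0.0])
```

Requiring both cues to fire mirrors the paper's motivation for combining colour and motion: either cue alone produces ragged boundaries, while their conjunction keeps only pixels that are confidently foreground.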