4,954 Hits in 6.7 sec

Network video image processing for security, surveillance, and situational awareness

Abhijit Mahalanobis, Jamie L. Cannon, Steven R. Stanfill, Robert R. Muise, Mubarak A. Shah, Raghuveer M. Rao, Sohail A. Dianat, Michael D. Zoltowski
<span title="2004-08-10">2004</span> <i title="SPIE"> Digital Wireless Communications VI </i> &nbsp;
Lockheed Martin and the University of Central Florida (UCF) are jointly investigating the use of a network of COTS video cameras and computers for a variety of security and surveillance operations.  ...  The approach leverages the previously developed KNIGHT human detection and tracking system developed at UCF, and Lockheed Martin's automatic target detection and recognition (ATD/R) algorithms.  ...  We employ a novel approach of finding the limits of FOV of a camera as visible in the other cameras that is very fast compared to conventional camera calibration based approaches.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/12.548981">doi:10.1117/12.548981</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/l5sfj5vgqzazpecu4ujyue4z7y">fatcat:l5sfj5vgqzazpecu4ujyue4z7y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170811211933/http://vision.eecs.ucf.edu/papers/spie_mco_04_with_UCF.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7c/06/7c06f572636d5f280d004099252415140d56ff96.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1117/12.548981"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Self-encoded Marker for Optical Prospective Head Motion Correction in MRI [chapter]

Christoph Forman, Murat Aksoy, Joachim Hornegger, Roland Bammer
<span title="">2010</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
For brain MRI, a promising approach recently suggested is to track the patient using an in-bore camera and a checkerboard marker attached to the patient's forehead.  ...  In in-vivo experiments, the motion compensated images in scans with large motion during data acquisition indicate a correlation of 0.982 compared to a corresponding motion-free reference.  ...  Discussion A crucial limitation of existing in-bore tracking systems for prospective motion correction in MRI is the narrow FOV of the camera.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-15705-9_32">doi:10.1007/978-3-642-15705-9_32</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/aensougflvg4xop3cxp5fcmdsm">fatcat:aensougflvg4xop3cxp5fcmdsm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180725173344/https://link.springer.com/content/pdf/10.1007%2F978-3-642-15705-9_32.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/65/21/6521b50647411ae48de438e305169b2e57976545.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/978-3-642-15705-9_32"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Self-encoded marker for optical prospective head motion correction in MRI

Christoph Forman, Murat Aksoy, Joachim Hornegger, Roland Bammer
<span title="">2011</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/kpkfymbkufcnzjfc5ydyokby4y" style="color: black;">Medical Image Analysis</a> </i> &nbsp;
For brain MRI, a promising approach recently suggested is to track the patient using an in-bore camera and a checkerboard marker attached to the patient's forehead.  ...  In in-vivo experiments, the motion compensated images in scans with large motion during data acquisition indicate a correlation of 0.982 compared to a corresponding motion-free reference.  ...  Discussion A crucial limitation of existing in-bore tracking systems for prospective motion correction in MRI is the narrow FOV of the camera.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.media.2011.05.018">doi:10.1016/j.media.2011.05.018</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/21708477">pmid:21708477</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC3164440/">pmcid:PMC3164440</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/jw3w6bp2freftkroodtfmv36pi">fatcat:jw3w6bp2freftkroodtfmv36pi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170809143846/http://www5.informatik.uni-erlangen.de/Forschung/Publikationen/2010/Forman10-SEM.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/df/fb/dffbf8ee9331d6c19d5afd92039bf353d595486e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.media.2011.05.018"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3164440" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

Starkit: RoboCup Humanoid KidSize 2021 Worldwide Champion Team Paper [article]

Egor Davydenko, Ivan Khokhlov, Vladimir Litvinenko, Ilya Ryakin, Ilya Osokin, Azer Babaev
<span title="2021-10-15">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
These features include vision-related matters, such as detection and localization, mechanical and algorithmic novelties.  ...  We give an overview of the approaches that were tried out along with the analysis of their preconditions, perspectives and the evaluation of their performance.  ...  to the large FoV imaging.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.08377v1">arXiv:2110.08377v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/7r7yqetg25cvpoziwwybqdhgga">fatcat:7r7yqetg25cvpoziwwybqdhgga</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211020172246/https://arxiv.org/pdf/2110.08377v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/70/c5/70c5629802936fb37b3048a0a9eca5870438e6cb.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2110.08377v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Multi-camera parallel tracking and mapping with non-overlapping fields of view

Michael J. Tribou, Adam Harmat, David W.L. Wang, Inna Sharf, Steven L. Waslander
<span title="2015-04-23">2015</span> <i title="SAGE Publications"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/uhsvnr5ecvb4die3422lvgaz6q" style="color: black;">The international journal of robotics research</a> </i> &nbsp;
A novel real-time pose estimation system is presented for solving the visual SLAM problem using a rigid set of central cameras arranged such that there is no overlap in their fields-of-view.  ...  The accuracy and performance of the proposed pose estimation system are confirmed for various motion profiles in both indoor and challenging outdoor environments, despite no overlap in the camera fields-of-view  ...  Conclusions In this work, a novel visual SLAM framework based on -manifolds was proposed for multi-camera clusters with non-overlapping FOV.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1177/0278364915571429">doi:10.1177/0278364915571429</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xuuevepsavaihgipjdkjbhrnd4">fatcat:xuuevepsavaihgipjdkjbhrnd4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190227201017/http://pdfs.semanticscholar.org/860d/8199bd8e67f5142a58a81a427abc6fb95ee6.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/86/0d/860d8199bd8e67f5142a58a81a427abc6fb95ee6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1177/0278364915571429"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> sagepub.com </button> </a>

Persistent Objects Tracking Across Multiple Non Overlapping Cameras

Jinman Kang, Isaac Cohen, Gerard Medioni
<span title="">2005</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/wsjivbkuezdvxdnrhihbwjrxlu" style="color: black;">2005 Seventh IEEE Workshops on Applications of Computer Vision (WACV/MOTION&#39;05) - Volume 1</a> </i> &nbsp;
We present an approach for persistent tracking of moving objects observed by non-overlapping and moving cameras.  ...  It provides a rich description of the detected regions, and produces an efficient blob similarity measure for tracking.  ...  Conclusion We have presented a novel approach for persistent tracking of moving objects across non-overlapping views.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/acvmot.2005.92">doi:10.1109/acvmot.2005.92</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/wacv/KangCM05.html">dblp:conf/wacv/KangCM05</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pgrx2d5vkzaf5oohaewyws4dsa">fatcat:pgrx2d5vkzaf5oohaewyws4dsa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20060910005207/http://iris.usc.edu/~icohen/pdf/Motion05.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ca/95/ca95b70a7ab8896f724593ede25a1e8b87509869.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/acvmot.2005.92"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Novel view synthesis using a translating camera

Geetika Sharma, Ankita Kumar, Shakti Kamal, Santanu Chaudhury, J.B. Srivastava
<span title="">2005</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6r4znskbk5h2ngu345slqsm6eu" style="color: black;">Pattern Recognition Letters</a> </i> &nbsp;
We propose a method for synthesis of views corresponding to translational motion of the camera. Our scheme can handle occlusions and changes in visibility in the synthesized views.  ...  Our synthesis scheme can also be used to detect translational pan motion of the camera in a given video sequence. We have also presented experimental results to illustrate this feature of our scheme.  ...  We present a novel approach to camera motion detection based on the geometric relationships between objects in a static scene and the constraints imposed by translational camera motion.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patrec.2004.08.011">doi:10.1016/j.patrec.2004.08.011</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/w36wga7itzha3aci6y4agkcppa">fatcat:w36wga7itzha3aci6y4agkcppa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170809031815/http://eprint.iitd.ac.in/bitstream/2074/1520/1/sharmanov2005.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/e4/4c/e44c8d713d49973631e3288d7135049f4b07340a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.patrec.2004.08.011"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a>

Remote scanning for ultra-large field of view in wide-field microscopy and full-field OCT

Gaelle Recher, Pierre Nassoy, Amaury Badon
<span title="2020-04-06">2020</span> <i title="The Optical Society"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/fxa4hwkh5vf4nejqcsg3pblm5u" style="color: black;">Biomedical Optics Express</a> </i> &nbsp;
Our approach, called remote scanning, is compatible with all camera-based microscopes.  ...  We finally demonstrate that the method is especially suited to image motion-sensitive samples and large biological samples such as millimetric engineered tissues.  ...  Caumont and A. Mombereau for spheroids production, N. Courtois-Allain and F. Saltel for providing us the HUH6 cell line and Pierre Bon for fruitful discussions and critical reading.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1364/boe.383329">doi:10.1364/boe.383329</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/32499945">pmid:32499945</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC7249822/">pmcid:PMC7249822</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oglnfnpfkfdpbjjfhpfb5ctrba">fatcat:oglnfnpfkfdpbjjfhpfb5ctrba</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210717142511/https://hal.archives-ouvertes.fr/hal-03010795/document" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/83/65/83657a7c7f912eb32f00468ae497aaa77bc8d690.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1364/boe.383329"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7249822" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

Learning Spatio-Temporal Topology Of A Multi-Camera Network By Tracking Multiple People

Yunyoung Nam, Junghun Ryu, Yoo-Joo Choi, We-Duke Cho
<span title="2007-06-21">2007</span> <i title="Zenodo"> Zenodo </i> &nbsp;
This paper presents a novel approach for representing the spatio-temporal topology of the camera network with overlapping and non-overlapping fields of view (FOVs).  ...  To track people successfully in multiple camera views, we used the Merge-Split (MS) approach for object occlusion in a single camera and the grid-based approach for extracting the accurate object feature  ...  There are two major approaches for dealing with occlusion using a single camera. The first approach is the Merge-Split (MS) approach that merges the detected occlusion blobs into a single new blob.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5281/zenodo.1334437">doi:10.5281/zenodo.1334437</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/d7syufv5cfcr3jog4nskxtawju">fatcat:d7syufv5cfcr3jog4nskxtawju</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220127114446/https://zenodo.org/record/1334437/files/7797.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/eb/48/eb48ef5b3d21a62d73e40a5e6f4b7f3bcfd81d8e.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5281/zenodo.1334437"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> zenodo.org </button> </a>

Lidar Based Intelligent Obstacle Avoidance System for Autonomous Ground Vehicles

<span title="2020-03-30">2020</span> <i title="Blue Eyes Intelligence Engineering and Sciences Engineering and Sciences Publication - BEIESP"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/3sfifsouvjgadp4gfj54u3z2ku" style="color: black;">International journal of recent technology and engineering</a> </i> &nbsp;
As a first step towards this, researchers have developed a vast number of camera vision-based efficient neural network algorithms for detecting and avoiding obstacles.  ...  Existing lidar sensor-based obstacle detection and avoidance systems like 2D collision cone approaches are not suitable for real-time applications, as they lag in providing accurate and quick responses  ...  This model effectively generates commands for the low-level vehicle control systems. [28] proposed a lidar-video dataset based approach, which provides large-scale, high-quality point clouds scanned by  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.35940/ijrte.f8029.038620">doi:10.35940/ijrte.f8029.038620</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hbvtr47ztfddxdoo3khlzqv4k4">fatcat:hbvtr47ztfddxdoo3khlzqv4k4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200504073126/https://www.ijrte.org/wp-content/uploads/papers/v8i6/F8029038620.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/dd/22/dd220fcc8f61f9eabadce061b4f8fb1f88a40d73.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.35940/ijrte.f8029.038620"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

A Novel Method for Extrinsic Calibration of Multiple RGB-D Cameras Using Descriptor-Based Patterns [article]

Hang Liu, Hengyu Li, Xiahua Liu, Jun Luo, Shaorong Xie, Yu Sun
<span title="2018-09-10">2018</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This letter presents a novel method to estimate the relative poses between RGB-D cameras with minimal overlapping fields of view in a panoramic RGB-D camera system.  ...  The proposed approach relies on descriptor-based patterns to provide well-matched 2D keypoints in the case of a minimal overlapping field of view between cameras.  ...  Planes and lines have large spatial spans; thus, they can be observed by cameras with little or no overlapping FoV.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1807.07856v4">arXiv:1807.07856v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/flilj4v4mze2rmlw4cw4i56kzi">fatcat:flilj4v4mze2rmlw4cw4i56kzi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929153140/https://arxiv.org/pdf/1807.07856v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fb/0c/fb0cf0a4cb6f1351deba58d7058f30f1e3a96105.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1807.07856v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Multi-Camera Topology Recovery from Coherent Motion

Zehavit Mandel, Ilan Shimshoni, Daniel Keren
<span title="">2007</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/oad4cs47avberktz5jsuglsluu" style="color: black;">2007 First ACM/IEEE International Conference on Distributed Smart Cameras</a> </i> &nbsp;
This paper therefore suggests to accomplish the task automatically, using a distributed algorithm. Each camera detects motion locally and transmits the detected motion position to the other cameras.  ...  Each camera determines the number of regions based on the amount of motion detected in its field of view.  ...  Khan, Omar and Shah in [3] find the FOV lines of the cameras. They employ the novel approach of finding the limits of the field of view of a camera as visible in the other cameras.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icdsc.2007.4357530">doi:10.1109/icdsc.2007.4357530</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/icdsc/MandelSK07.html">dblp:conf/icdsc/MandelSK07</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/i6u6hiov3rblrcsdmpztlifzsu">fatcat:i6u6hiov3rblrcsdmpztlifzsu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20080419003015/http://www.cs.haifa.ac.il/~dkeren/mypapers/ICDSC07.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5e/4f/5e4f267216f8026ba8aaeb272461d4b608488d1c.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/icdsc.2007.4357530"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Improved Observation and Communication with a Distributed Compound Vision Surveillance System

Nikolay Semenov, Kimberly Newman
<span title="">2008</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6nsfdn5xgferpfgltbr7tb46gi" style="color: black;">2008 Second International Symposium on Universal Communication</a> </i> &nbsp;
In this article we propose a new concept for the implementation of surveillance that shows significant improvement over the performance of a distributed single-camera system by providing multiple angles  ...  A fixed or rotating camera has limitations as to the coverage area and resolution of the captured image.  ...  Summary and conclusions Insect vision systems provide a novel approach to the design and implementation of cameras for video surveillance.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/isuc.2008.60">doi:10.1109/isuc.2008.60</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/iucs/SemenovN08.html">dblp:conf/iucs/SemenovN08</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u7lhccz76vgcdf6lwqnrudf7z4">fatcat:u7lhccz76vgcdf6lwqnrudf7z4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200320031218/http://ecee.colorado.edu/~ecen4633/papers/ISUC08.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/bb/7e/bb7e1b1559670e582570a7ddb8a2320867545b16.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/isuc.2008.60"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Visual surveillance of human activity [chapter]

Larry Davis, Sandor Fejes, David Harwood, Yaser Yacoob, Ismail Hariatoglu, Michael J. Black
<span title="">1997</span> <i title="Springer Berlin Heidelberg"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/2w3awgokqne6te4nvlofavy5a4" style="color: black;">Lecture Notes in Computer Science</a> </i> &nbsp;
The approach, which is based on two simple geometric observations about directional components of flow fields, allows general camera motion, a large camera field of view (FOV), and scenes with large depth  ...  Figure 1 shows three examples of detecting independent motion from a hand-carried camera. The camera FOV is relatively large (55°) while the scenes contain different degrees of depth variation.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/3-540-63931-4_226">doi:10.1007/3-540-63931-4_226</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/grh42yfnjzgg3dpadlnukrpsmy">fatcat:grh42yfnjzgg3dpadlnukrpsmy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170811041640/http://www.cfar.umd.edu/~yaser/accv_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/28/64/2864fb2a1c2620cb9ca1944c50c9812a5c737263.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/3-540-63931-4_226"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>

Vehicle Surround Capture: Survey of Techniques and a Novel Omni-Video-Based Approach for Dynamic Panoramic Surround Maps

T. Gandhi, M.M. Trivedi
<span title="">2006</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/in6o6x6to5e2dls4y2ff52dy6u" style="color: black;">IEEE transactions on intelligent transportation systems (Print)</a> </i> &nbsp;
A novel approach for synthesizing the DPS using stereo and motion analysis of video images from a pair of omni cameras on the vehicle is developed.  ...  Omni cameras, which give a panoramic view of the surroundings, can be useful for visualizing and analyzing the nearby surroundings of the vehicle.  ...  McCall for the help on the hardware of the car test bed and the final proofreading of this paper, as well as S. Cheng and S. Krotosky for the help on the stereo software.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tits.2006.880635">doi:10.1109/tits.2006.880635</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/irusuaxnzbanvau43t2nzfczne">fatcat:irusuaxnzbanvau43t2nzfczne</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170814105538/http://escholarship.org/uc/item/8x1376gk.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ee/4d/ee4d009a10816bbb931883329a5604110510cf91.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/tits.2006.880635"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>
Showing results 1–15 of 4,954