







8,920 Hits in 8.7 sec

Recovering Motion Fields: An Evaluation of Eight Optical Flow Algorithms

B. Galvin, B. McCane, K. Novins, D. Mason, S. Mills
<span title="">1998</span> <i title="British Machine Vision Association"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6bfo5625nvdfvbgyf7ldi5wmfe" style="color: black;">Procedings of the British Machine Vision Conference 1998</a> </i> &nbsp;
Evaluating the performance of optical flow algorithms has been difficult because of the lack of ground-truth data sets for complex scenes.  ...  The resulting flow maps are used to assist in the comparison of eight optical flow algorithms using three complex, synthetic scenes.  ...  Horn and Schunck state: "One endeavors to recover an optical flow that is close to the motion field" [5] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5244/c.12.20">doi:10.5244/c.12.20</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/bmvc/GalvinMNMM98.html">dblp:conf/bmvc/GalvinMNMM98</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bgqdikys6bekro2i4efmfkwypa">fatcat:bgqdikys6bekro2i4efmfkwypa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170706013307/http://ami.dis.ulpgc.es/biblio/bibliography/documentos/galvin98recovering.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6a/dd/6add4ca7b133ca763023b98886865fa1a8efd833.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5244/c.12.20"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Receptive Field Dynamics Underlying MST Neuronal Optic Flow Selectivity

Chen Ping Yu, William K. Page, Roger Gaborski, Charles J. Duffy
<span title="">2010</span> <i title="American Physiological Society"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/nr2f6gixzrgdlloxlemhfdztdi" style="color: black;">Journal of Neurophysiology</a> </i> &nbsp;
Receptive field dynamics underlying MST neuronal optic flow selectivity. flow informs moving observers about their heading direction.  ...  Our goal was to test the hypothesis that the optic flow responses reflect the sum of the local motion responses.  ...  planar motion, and large-field optic flow.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1152/jn.01085.2009">doi:10.1152/jn.01085.2009</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/20457855">pmid:20457855</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC2867578/">pmcid:PMC2867578</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ld5hxuivvjckhpjw6p7sakmh3i">fatcat:ld5hxuivvjckhpjw6p7sakmh3i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20181030085658/https://www.physiology.org/doi/pdf/10.1152/jn.01085.2009" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2c/54/2c543446734be4c5b5f9d39e0d00443d54885428.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1152/jn.01085.2009"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2867578" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

Motion field and occlusion time estimation via alternate exposure flow

Anita Sellent, Martin Eisemann, Marcus Magnor
<span title="">2009</span> <i title="IEEE"> 2009 IEEE International Conference on Computational Photography (ICCP) </i> &nbsp;
While traditional optical flow algorithms rely on consecutive short-exposed images only, longexposed images capture motion directly in the form of motion blur.  ...  This paper presents an extension to optical flow-based motion estimation using alternating short-and longexposed images.  ...  As the local time derivative needs to be numerically evaluated to solve the optical flow equation, optical flow algorithms work best with pinpoint-sharp images as input, i.e., with images depicting a dynamic  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccphot.2009.5559005">doi:10.1109/iccphot.2009.5559005</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/lqtejof6yzbjvlv72bdlb2tnri">fatcat:lqtejof6yzbjvlv72bdlb2tnri</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170705112744/https://www.cg.cs.tu-bs.de/upload/publications/sellent2009alternate.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f7/ec/f7ece3d88b5b9b7ae3abdadc2b1c056686c0e1ca.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/iccphot.2009.5559005"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

Rapid, topology-based particle tracking for high-resolution measurements of large complex 3D motion fields

Mohak Patel, Susan E. Leggett, Alexander K. Landauer, Ian Y. Wong, Christian Franck
<span title="2018-04-03">2018</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/tnqhc2x2aneavcd3gx5h7mswhm" style="color: black;">Scientific Reports</a> </i> &nbsp;
Consequently, existing algorithms are inefficient when tracking large numbers of particles (tens of thousands or more) undergoing large motion over an extended duration or many image frames.  ...  adaptable to general motion fields of unknown character.  ...  To show T-PT's ability to recover large and high spatial gradient displacement data encountered in an unsteady, complex flow field between two image frames, we simulate a 2D flow around a stack of periodically  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1038/s41598-018-23488-y">doi:10.1038/s41598-018-23488-y</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/29615650">pmid:29615650</a> <a target="_blank" rel="external noopener" href="https://pubmed.ncbi.nlm.nih.gov/PMC5882970/">pmcid:PMC5882970</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vx2eadq4kvdzjhkazbjfupr5wa">fatcat:vx2eadq4kvdzjhkazbjfupr5wa</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200208211531/http://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC5882970&amp;blobtype=pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5a/b9/5ab9a7b7b64aca9377c4c1dec87be22ec286167b.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1038/s41598-018-23488-y"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5882970" title="pubmed link"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> pubmed.gov </button> </a>

Velocity Field Computation in Vibrated Granular Media Using an Optical Flow Based Multiscale Image Analysis Method

Johan Debayle, Ahmed Raihane, Abdelkrim Belhaoua, Olivier Bonnefoy, Gérard Thomas, Jean-Marc Chaix, Jean-Charles Pinoli
<span title="2011-05-03">2011</span> <i title="Slovenian Society for Stereology and Quantitative Image Analysis"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/3sdgkbxhvbbrzegutwnwbzczau" style="color: black;">Image Analysis and Stereology</a> </i> &nbsp;
The differential method based on optical flow conservation consists in describing a dense motion field with vectors associated to each pixel.  ...  An instrumented laboratory device provides sinusoidal vibrations and enables external optical observations of sand motion in 3D transparent boxes.  ...  VALIDATION In general, the performance of the optical flow methods are evaluated on real sequences and synthetic sequences for which motion fields are known.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5566/ias.v28.p35-43">doi:10.5566/ias.v28.p35-43</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hbfyowxm4ba4rkbykibo3cqgci">fatcat:hbfyowxm4ba4rkbykibo3cqgci</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20190429000709/https://www.ias-iss.org/ojs/IAS/article/download/847/750" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/12/e1/12e162dd97cb0d3c418765c0acd41e04917784ad.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5566/ias.v28.p35-43"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Non-parametric and light-field deformable models

C. Mario Christoudias, Louis-Philippe Morency, Trevor Darrell
<span title="">2006</span> <i title="Elsevier BV"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/dsarpzh6wfgadnyijwv5l6gh2i" style="color: black;">Computer Vision and Image Understanding</a> </i> &nbsp;
We demonstrate our light-field appearance model using 50 light-fields of the human head captured from a real-time camera array.  ...  In our experiments we evaluate the performance of each method and provide a comparison with conventional, linear single-and multi-view deformable models.  ...  The shape of a light-field can also be computed using optical flow.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.cviu.2006.06.001">doi:10.1016/j.cviu.2006.06.001</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tawp5wkgxjendmhkwfmv4hnmqe">fatcat:tawp5wkgxjendmhkwfmv4hnmqe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170706093226/http://people.csail.mit.edu/lmorency/Papers/stam_jrnl.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/aa/da/aada64cbeaaaf63783568088314df32287f4268d.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1016/j.cviu.2006.06.001"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> elsevier.com </button> </a>

An Ego-Motion Detection System Employing Directional-Edge-Based Motion Field Representations

Jia Hao, Tadashi Shibata
<span title="">2010</span> <i title="Institute of Electronics, Information and Communications Engineers (IEICE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/xosmgvetnbf4zpplikelekmdqe" style="color: black;">IEICE transactions on information and systems</a> </i> &nbsp;
In this paper, a motion field representation algorithm based on directional edge information has been developed.  ...  The performance of the ego-motion detection system was evaluated under various circumstances, and the effectiveness of this work has been verified.  ...  Optical flow field methods were developed to overcome the difficulties of traveling through unfamiliar environments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1587/transinf.e93.d.94">doi:10.1587/transinf.e93.d.94</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ieybg5fr7jginj6hb4rv6cbxyu">fatcat:ieybg5fr7jginj6hb4rv6cbxyu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170924122745/https://www.jstage.jst.go.jp/article/transinf/E93.D/1/E93.D_1_94/_pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/45/96/4596c3e5fa6c2813188db8178ab49c1f3095df87.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1587/transinf.e93.d.94"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Vision-based navigation through urban canyons

Stefan Hrabar, Gaurav Sukhatme
<span title="">2009</span> <i title="Wiley"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/hozy4uxohvhslew6ka2x3ecogy" style="color: black;">Journal of Field Robotics</a> </i> &nbsp;
Two commonly used vision-based obstacle avoidance techniques (namely stereo vision and optic flow) are implemented on an aerial and a ground-based robotic platform and evaluated for urban canyon navigation  ...  Optic flow is evaluated for its ability to produce a centering response between obstacles, and stereo vision is evaluated for detecting obstacles to the front.  ...  ACKNOWLEDGMENTS This work was supported in part by contract 1277958 [from the Jet Propulsion Laboratory (JPL) to the University of Southern California (USC)] and by National Science Foundation (NSF) grants  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1002/rob.20284">doi:10.1002/rob.20284</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hdg2urcr3zgwppaxlid5v2ajiy">fatcat:hdg2urcr3zgwppaxlid5v2ajiy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20150908004102/http://robotics.usc.edu/publications/media/uploads/pubs/772.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/d3/c3/d3c35f77e20a6047c549cc5d318357e847b2a48f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1002/rob.20284"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> wiley.com </button> </a>

Efficient three-dimensional scene modeling and mosaicing

Tudor Nicosevici, Nuno Gracias, Shahriar Negahdaripour, Rafael Garcia
<span title="">2009</span> <i title="Wiley"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/hozy4uxohvhslew6ka2x3ecogy" style="color: black;">Journal of Field Robotics</a> </i> &nbsp;
The methods are developed within the framework of sequential structure from motion, in which a 3D model of the environment is maintained and updated as new visual information becomes available.  ...  This paper presents an end-to-end solution for creating accurate three-dimensional (3D) textured models using monocular video sequences.  ...  Hanumant Singh of the Woods Hole Oceanographic Institute for providing the JHU pool data and Dr. R. Pam Reid and her team for the coral reef sequences.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1002/rob.20305">doi:10.1002/rob.20305</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/inz4vva7qbb6portitd54ifv5i">fatcat:inz4vva7qbb6portitd54ifv5i</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170705065154/http://eia.udg.edu/~rafa/papers/JFR2009.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/23/85/2385fee8115e99c42428ee8b9634f7ece1983821.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1002/rob.20305"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> wiley.com </button> </a>

Wavelet-Based Optical Flow for Two-Component Wind Field Estimation from Single Aerosol Lidar Data

Pierre Dérian, Christopher F. Mauzey, Shane D. Mayor
<span title="">2015</span> <i title="American Meteorological Society"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/l7zwmdzxdbfwveco4t3yu42uyi" style="color: black;">Journal of Atmospheric and Oceanic Technology</a> </i> &nbsp;
The algorithm, a wavelet-based optical flow estimator named Typhoon, produces dense two-component vector flow fields that correspond to the apparent motion of microscale aerosol features.  ...  The flow fields, estimated every 17 s, were compared with measurements from an independent Doppler lidar.  ...  To address these issues, this study proposes to evaluate a recently developed motion estimation algorithm dedicated to fluid flows.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1175/jtech-d-15-0010.1">doi:10.1175/jtech-d-15-0010.1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u5saiyrxs5ddtolbwtus5nro54">fatcat:u5saiyrxs5ddtolbwtus5nro54</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170812052124/http://lidar.csuchico.edu/publications/Derian_et_al_2015_JTECH.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c2/8b/c28b042ca672b4596471cd2d383547a407f75ebe.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1175/jtech-d-15-0010.1"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> Publisher / doi.org </button> </a>

Evaluating Superpixels in Video: Metrics Beyond Figure-Ground Segmentation

Peer Neubert, Peter Protzel
<span title="">2013</span> <i title="British Machine Vision Association"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/6bfo5625nvdfvbgyf7ldi5wmfe" style="color: black;">Procedings of the British Machine Vision Conference 2013</a> </i> &nbsp;
Our evaluation is based on two recently published datasets coming with ground truth optical flow fields.  ...  We discuss how these ground optical truth fields can be used to evaluate segmentation algorithms and compare several existing superpixel algorithms.  ...  Two novel Criteria for Evaluating Superpixel Segmentations based on Ground Truth Optical Flow Given an image pair I 1 and I 2 from an image sequence, the optical flow is the vector field describing the  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5244/c.27.54">doi:10.5244/c.27.54</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/bmvc/NeubertP13.html">dblp:conf/bmvc/NeubertP13</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/kogfplnmqjet5fczlsg7blgvsi">fatcat:kogfplnmqjet5fczlsg7blgvsi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180721074243/http://www.bmva.org/bmvc/2013/Papers/paper0054/paper0054.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/04/cb/04cbd9edb0cde8ad701bbec606afde4bdd4b6c98.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5244/c.27.54"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Robust Reference Frame Extraction from Unsteady 2D Vector Fields with Convolutional Neural Networks [article]

Byungsoo Kim, Tobias Günther
<span title="2019-03-25">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The proposed network is evaluated on an unseen numerical fluid flow simulation.  ...  Robust feature extraction is an integral part of scientific visualization.  ...  Without prior smoothing the linear method removes almost no ambient motion, keeping the dominant downstream ambient motion of the input flow.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.10255v1">arXiv:1903.10255v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/qvgts6zycre57c223hgsc7q7cq">fatcat:qvgts6zycre57c223hgsc7q7cq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200826165800/https://arxiv.org/pdf/1903.10255v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/36/88/36886dc3065195326fe9cfdd5524682af43b13c9.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.10255v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

An Efficient Numerical Approach for Field Infrared Smoke Transmittance Based on Grayscale Images

<span title="2017-12-29">2017</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/smrngspzhzce7dy6ofycrfxbim" style="color: black;">Applied Sciences</a> </i> &nbsp;
In addition, an image processing algorithm is used to extract the gray values of certain pixel points from grayscale images, and the positions of the selected points are discussed.  ...  Smoke transmittance is one of the most important parameters which can evaluate the obscuration performance of smoke.  ...  of China (Grant No. 11672041).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/app8010040">doi:10.3390/app8010040</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/xlmj2hznqnefvefeunvwa43szm">fatcat:xlmj2hznqnefvefeunvwa43szm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20180728040224/https://res.mdpi.com/def5020027ac5513330d9a3939fd17a614a330af921c76a07b1bef0d9b4e1f2f4cb7e753b33943a036734ccce927e5494de0f9e4401df917d3af80ece06ee1cc5b3e200a066598c18457707fc11f3ec939a5fe32fd0d2760d94741e1d8c586a58cfe58de0b975c9bdf5efa5c0279b0bda4f9fed06f8536b8a29bdac9fff51c77955a9f2f981fe223a33e3c8c8859c69d8341dd?filename=&amp;attachment=1" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/2f/db/2fdbb840a91875d1e0166a00453cb07b55e2c635.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/app8010040"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>

Emulating the Visual Receptive-Field Properties of MST Neurons with a Template Model of Heading Estimation

John A. Perrone, Leland S. Stone
<span title="1998-08-01">1998</span> <i title="Society for Neuroscience"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/s7bticdwizdmhll4taefg57jde" style="color: black;">Journal of Neuroscience</a> </i> &nbsp;
The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to those  ...  We have proposed previously a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors  ...  Lastly, their model predicts that the response of an MST neuron to its preferred stimulus or a piece of it (optic flow caused by the relative motion between an object and the observer) will be largely  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1523/jneurosci.18-15-05958.1998">doi:10.1523/jneurosci.18-15-05958.1998</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/9671682">pmid:9671682</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/g2gun5jnendwjbmoeckwzwwj4q">fatcat:g2gun5jnendwjbmoeckwzwwj4q</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170820065841/http://www.jneurosci.org/content/jneuro/18/15/5958.full.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/c4/c9/c4c9e752e011fe7d89072431903cc2ec64b36189.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1523/jneurosci.18-15-05958.1998"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Coordinated control of an underwater glider fleet in an adaptive ocean sampling field experiment in Monterey Bay

Naomi E. Leonard, Derek A. Paley, Russ E. Davis, David M. Fratantoni, Francois Lekien, Fumin Zhang
<span title="2010-09-21">2010</span> <i title="Wiley"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/hozy4uxohvhslew6ka2x3ecogy" style="color: black;">Journal of Field Robotics</a> </i> &nbsp;
The results demonstrate an innovative tool for ocean sampling and provide a proof-of-concept for an important field robotics endeavor that integrates coordinated motion control with adaptive sampling.  ...  One of the central goals of the field experiment was to test and demonstrate newly developed techniques for coordinated motion control of autonomous vehicles carrying environmental sensors to efficiently  ...  Acknowledgments We acknowledge John Lund of WHOI and Jeffrey Sherman of SIO for their important contributions to glider operations.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1002/rob.20366">doi:10.1002/rob.20366</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dtdlzmmwpraulghiifoq24gx3y">fatcat:dtdlzmmwpraulghiifoq24gx3y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170808160704/http://terpconnect.umd.edu/~dpaley/papers/jfr10.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/43/45/4345bec4e5639dbc344d978b9d96c6dede9cce93.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1002/rob.20366"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> wiley.com </button> </a>
Showing results 1–15 of 8,920