http://www.cs.wisc.edu/computer-vision/pubs.html
Our work emphasizes the need for (1) controlling camera motion through efficient processing of the image stream, and (2) designing provably-correct strategies, i.e., strategies whose success can be accurately characterized in terms of the geometry of the viewed object. For each task, efficiency is achieved by extracting from each image only the information necessary to move the camera differentially, assuming a dense sequence of images, and using 2D rather than 3D information to control camera motion. Provable correctness is achieved by controlling camera motion based on the occluding contour's dynamic shape and maintaining specific task-dependent geometric constraints that relate the camera's motion to the differential geometry of the object.</blockquote><LI> <B><A NAME="cvpr93-kutulakos"> Toward Global Surface Reconstruction by Purposive Viewpoint Adjustment</A></B><br> K. N. Kutulakos and C. R. Dyer, <CITE> Proc. Computer Vision and Pattern Recognition Conf.</CITE>, 1993, 726-727. (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvpr93-kutulakos.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvpr93-kutulakos.ps.gz">10K gzip'ed postscript</A>)<P><blockquote>We consider the following problem: How should an observer change viewpoint in order to generate a dense image sequence of an arbitrary smooth surface so that it can be incrementally reconstructed using the occluding contour and the epipolar parameterization? We present a collection of qualitative behaviors that, when integrated appropriately, purposefully control viewpoint based on the appearance of the surface in order to provably solve this problem.</blockquote><LI> <B><A NAME="tr1124-kutulakos"> Object Exploration By Purposive, Dynamic Viewpoint Adjustment</A></B><br> K. N. Kutulakos, C. R. Dyer, and V. J. Lumelsky, Computer Sciences Department Technical Report 1124, November 1992.
(<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1124-kutulakos.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1124-kutulakos.ps.gz">110K gzip'ed postscript</A>)<P><blockquote> We present a viewing strategy for exploring the surface of an unknown object (i.e., making all of its points visible) by purposefully controlling the motion of an active observer. It is based on a simple relation between (1) the instantaneous direction of motion of the observer, (2) the visibility of points projecting to the occluding contour, and (3) the surface normal at those points: If the dot product of the surface normal at such points and the observer's velocity is positive, the visibility of the points is guaranteed under an infinitesimal viewpoint change. We show that this leads to an object exploration strategy in which the observer <EM>purposefully</EM> controls its motion based on the occluding contour in order to impose structure on the set of surface points explored, make its representation simple and qualitative, and provably solve the exploration problem for smooth generic surfaces of arbitrary shape. Unlike previous approaches where exploration is cast as a discrete process (i.e., asking where to look next?) and where the successful exploration of arbitrary objects is not guaranteed, our approach demonstrates that dynamic viewpoint control through directed observer motion leads to a qualitative exploration strategy that is provably-correct, depends only on the dynamic appearance of the occluding contour, and does not require the recovery of detailed three-dimensional shape descriptions from every position of the observer.</blockquote><LI> <B><A NAME="icra94-kutulakos"> Provable Strategies for Vision-Guided Exploration in Three Dimensions</A></B><BR> K. N. Kutulakos, C. R. Dyer, and V. J. Lumelsky, <CITE>Proc. 1994 IEEE Int. Conf. Robotics and Automation</CITE>, 1994, 1365-1372.
(<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/icra94-kutulakos.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/icra94-kutulakos.ps.gz">210K gzip'ed postscript</A>)<P><blockquote>An approach is presented for exploring an unknown, arbitrary surface in three-dimensional (3D) space by a mobile robot. The main contributions are (1) an analysis of the capabilities a robot must possess and the trade-offs involved in the design of an exploration strategy, and (2) two provably-correct exploration strategies that exploit these trade-offs and use visual sensors (e.g., cameras and range sensors) to plan the robot's motion. No such analysis existed previously for the case of a robot moving freely in 3D space. The approach exploits the notion of the <EM>occlusion boundary</EM>, i.e., the points separating the visible from the occluded parts of an object. The occlusion boundary is a collection of curves that "slide" over the surface when the robot's position is continuously controlled, inducing the visibility of surface points over which they slide. The paths generated by our strategies force the occlusion boundary to slide over the entire surface. The strategies provide a basis for integrating motion planning and visual sensing under a common computational framework.</blockquote><LI> <B><A NAME="icra93-kutulakos"> Vision-Guided Exploration: A Step toward General Motion Planning in Three Dimensions</A></B><br> K. N. Kutulakos, V. J. Lumelsky, and C. R. Dyer, <CITE> Proc. 1993 IEEE Int. Conf. on Robotics and Automation</CITE>, 1993, 289-296.
(<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/icra93-kutulakos.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/icra93-kutulakos.ps.gz">50K gzip'ed postscript</A>)<BR> (Longer version appears as Computer Sciences Department <CITE>Technical Report 1111</CITE> (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1111-kutulakos.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1111-kutulakos.ps.gz">90K gzip'ed postscript</A>).)<P><blockquote>We present an approach for solving the path planning problem for a mobile robot operating in an unknown, three-dimensional environment containing obstacles of arbitrary shape. The main contributions of this paper are (1) an analysis of the type of sensing information that is necessary and sufficient for solving the path planning problem in such environments, and (2) the development of a framework for designing a provably-correct algorithm to solve this problem. Working from first principles, without any assumptions about the environment of the robot or its sensing capabilities, our analysis shows that the ability to explore the obstacle surfaces (i.e., to make all their points visible) is intrinsically linked with the ability to plan the motion of the robot. We argue that current approaches to the path planning problem with incomplete information simply do not extend to the general three-dimensional case, and that qualitatively different algorithms are needed.</blockquote></UL><HR><P><H2><A NAME="motion">Motion Analysis</A></H2><UL><LI><B><A NAME="iccv95-seitz"> Complete Scene Structure from Four Point Correspondences</A></B><BR> S. M. Seitz and C. R. Dyer, <CITE>Proc. 5th Int. Conf. Computer Vision</CITE>, 1995, 330-337.
(<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/iccv95-seitz.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/iccv95-seitz.ps.gz">250K gzip'ed postscript</A>)<P><blockquote>A new technique is presented for computing 3D scene structure from point and line features in monocular image sequences. Unlike previous methods, the technique guarantees the completeness of the recovered scene, ensuring that every scene feature that is detected in each image is reconstructed. The approach relies on the presence of four or more reference features whose correspondences are known in all the images. Under an orthographic or affine camera model, the parallax of the reference features provides constraints that simplify the recovery of the rest of the visible scene. An efficient recursive algorithm is described that uses a unified framework for point and line features. The algorithm integrates the tasks of feature correspondence and structure recovery, ensuring that all reconstructible features are tracked. In addition, the algorithm is immune to outliers and feature-drift, two weaknesses of existing structure-from-motion techniques. Experimental results are presented for real images.</blockquote><LI> <B><A NAME="nram94-seitz"> Detecting Irregularities in Cyclic Motion</A></B><BR> S. M. Seitz and C. R. Dyer, <CITE>Proc. Workshop on Motion of Non-Rigid and Articulated Objects</CITE>, 1994, 178-185. (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/nram94-seitz.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/nram94-seitz.ps.gz">910K gzip'ed postscript</A>)<P><blockquote>Real cyclic motions tend not to be perfectly even, i.e., the period varies slightly from one cycle to the next, because of physically important changes in the scene. A generalization of period is defined for cyclic motions that makes periodic variation explicit.
This representation, called the period trace, is compact and purely temporal, describing the evolution of an object or scene without reference to spatial quantities such as position or velocity. By delimiting cycles and identifying correspondences across cycles, the period trace provides a means of temporally registering a cyclic motion. In addition, several purely temporal motion features are derived, relating to the nature and location of irregularities. Results are presented using real image sequences, and applications to athletic and medical motion analysis are discussed.</blockquote><LI> <B><A NAME="cvpr94-seitz"> Affine Invariant Detection of Periodic Motion</A></B><BR> S. M. Seitz and C. R. Dyer, <CITE>Proc. Computer Vision and Pattern Recognition Conf.</CITE>, 1994, 970-975. (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvpr94-seitz.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvpr94-seitz.ps.gz">1M gzip'ed postscript</A>)<BR> (Different version appears as Computer Sciences Department <CITE>Technical Report 1225</CITE> (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1225-seitz.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1225-seitz.ps.gz">890K gzip'ed postscript</A>).)<P><blockquote>Current approaches for detecting periodic motion assume a stationary camera and place limits on an object's motion. These approaches rely on the assumption that a periodic motion projects to a set of periodic image curves, an assumption that is invalid in general. Using affine-invariance, we derive necessary and sufficient conditions for an image sequence to be the projection of a periodic motion. No restrictions are placed on either the motion of the camera or the object. Our algorithm is shown to be provably-correct for noise-free data and is extended to be robust with respect to occlusions and noise.
The extended algorithm is evaluated with real and synthetic image sequences.</blockquote><LI> <B><A NAME="cvgip93-allmen"> Computing Spatiotemporal Relations for Dynamic Perceptual Organization</A></B><BR> M. Allmen and C. R. Dyer, <CITE>Computer Vision, Graphics and Image Processing: Image Understanding</CITE><B> 58</B>, 1993, 338-351. (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvgip93-allmen.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvgip93-allmen.ps.gz">200K gzip'ed postscript</A>)<BR> (Earlier version appeared as Computer Sciences Department <CITE>Technical Report 1130</CITE> (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1130-allmen.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1130-allmen.ps.gz">200K gzip'ed postscript</A>).) <P><blockquote>To date, the overwhelming use of motion in computational vision has been to recover the three-dimensional structure of the scene. We propose that there are other, more powerful, uses for motion. Toward this end, we define dynamic perceptual organization as an extension of the traditional (static) perceptual organization approach. Just as static perceptual organization groups coherent features in an image, dynamic perceptual organization groups coherent motions through an image sequence. Using dynamic perceptual organization, we propose a new paradigm for motion understanding and show why it can be done independently of the recovery of scene structure and scene motion. The paradigm starts with a spatiotemporal cube of image data and organizes the paths of points so that interactions between the paths and perceptual motions such as common, relative, and cyclic are made explicit. The results of this can then be used for high-level motion recognition tasks.</blockquote><LI> <B><A NAME="qv93-waldon"> Dynamic Shading, Motion Parallax and Qualitative Shape</A></B><BR> S. Waldon and C. R. Dyer, <CITE>Proc.
IEEE Workshop on Qualitative Vision</CITE>, 1993, 61-70. (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/qv93-waldon.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/qv93-waldon.ps.gz">140K gzip'ed postscript</A>)<P><blockquote>We address the problem of qualitative shape recovery from moving surfaces. Our analysis is unique in that we consider specular interreflections and explore the effects of both motion parallax and changes in shading. To study this situation we define an image flow field called the reflection flow field, which describes the motion of reflection points and the motion of the surface. From a kinematic analysis, we show that the reflection flow is qualitatively different from the motion parallax because it is discontinuous at or near parabolic curves. We also show that when the gradient of the reflected image is strong, gradient-based flow measurement techniques approximate the reflection flow field and not</blockquote></UL>
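The visibility condition at the core of the exploration papers above (stated in the Technical Report 1124 abstract) reduces to a sign check on a dot product: a point on the occluding contour stays visible under an infinitesimal viewpoint change when the dot product of its surface normal and the observer's velocity is positive. A minimal sketch, assuming NumPy; the function name and vector encoding are illustrative, not from the papers:

```python
import numpy as np

def stays_visible(surface_normal, observer_velocity):
    """Sign test from the exploration strategies: a point projecting to the
    occluding contour remains visible under an infinitesimal viewpoint change
    when dot(normal, velocity) > 0."""
    n = np.asarray(surface_normal, dtype=float)
    v = np.asarray(observer_velocity, dtype=float)
    return float(np.dot(n, v)) > 0.0

# Example: normal along +x; an observer velocity with a positive x component
# satisfies the condition, one with a negative x component does not.
print(stays_visible([1.0, 0.0, 0.0], [0.5, 1.0, 0.0]))   # True
print(stays_visible([1.0, 0.0, 0.0], [-0.5, 1.0, 0.0]))  # False
```

The strategies described above choose observer motions so that this condition holds for the points currently on the occluding contour, sweeping the occlusion boundary over the entire surface.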