
http://www.cs.wisc.edu/computer-vision/pubs.html

the motion parallax. We conclude from these analyses that reliable qualitative shape information is generally available only at discontinuities in the image flow field.</blockquote>

<LI> <B><A NAME="thesis-allmen">Image Sequence Description using Spatiotemporal Flow Curves: Toward Motion-Based Recognition</A></B><BR>
     Ph.D. Dissertation, M. C. Allmen, Computer Sciences Department Technical Report 1040, August 1991.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/thesis-allmen.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/thesis-allmen.ps.gz">1.1M gzip'ed postscript</A>)<P>
<blockquote>Recovering a hierarchical motion description of a long image sequence is one way to recognize objects and their motions. Intermediate-level and high-level motion analysis, i.e., recognizing a coordinated sequence of events such as walking and throwing, has been formulated previously as a process that follows high-level object recognition. This thesis develops an alternative approach to intermediate-level and high-level motion analysis. It does not depend on complex object descriptions and can therefore be computed prior to object recognition. Toward this end, a new computational framework for low and intermediate-level processing of long sequences of images is presented.<P>
Our new computational framework uses spatiotemporal (ST) surface flow and ST flow curves. As contours move, their projections into the image also move. Over time, these projections sweep out ST surfaces. Thus, these surfaces are direct representations of object motion. ST surface flow is defined as the natural extension of optical flow to ST surfaces. For every point on an ST surface, the instantaneous velocity of that point on the surface is recovered. It is observed that arc length of a rigid contour does not change if that contour is moved in the direction of motion on the ST surface. Motivated by this observation, a function measuring arc length change is defined. The direction of motion of a contour undergoing motion parallel to the image plane is shown to be perpendicular to the gradient of this function.<P>
ST surface flow is then used to recover ST flow curves. ST flow curves are defined such that the tangent at a point on the curve equals the ST surface flow at that point. ST flow curves are then grouped so that each cluster represents a temporally-coherent structure, i.e., structures that result from an object or surface in the scene undergoing motion. Using these clusters of ST flow curves, separate moving objects in the scene can be hypothesized and occlusion and disocclusion between them can be identified.<P>
The problem of detecting cyclic motion, while recognized by the psychology community, has received very little attention in the computer vision community. In order to show the representational power of ST flow curves, cyclic motion is detected using ST flow curves without prior recovery of complex object descriptions.</blockquote>
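<P>To make the definition of an ST flow curve concrete (its tangent at every point equals the ST surface flow there), here is a minimal numerical sketch; it is not code from the thesis. It assumes a hypothetical <CODE>st_surface_flow(x, y, t)</CODE> routine returning the recovered instantaneous velocity, and grows a curve by forward Euler integration:</P>
<PRE>
import numpy as np

def trace_st_flow_curve(st_surface_flow, x0, y0, t0, n_steps=100, dt=1.0):
    """Trace one ST flow curve from the point (x0, y0, t0).

    st_surface_flow is a hypothetical callable returning the image-plane
    velocity (vx, vy) of the ST surface at (x, y, t).  The curve is grown
    so that its tangent equals the ST surface flow at every step.
    """
    curve = [(x0, y0, t0)]
    x, y, t = x0, y0, t0
    for _ in range(n_steps):
        vx, vy = st_surface_flow(x, y, t)
        # Forward Euler step: advance along the flow, one frame at a time.
        x, y, t = x + vx * dt, y + vy * dt, t + dt
        curve.append((x, y, t))
    return np.array(curve)
</PRE>
<P>Curves traced this way from nearby seed points could then be clustered into the temporally-coherent structures described above.</P>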
</UL>
<HR><P>
<H2><A NAME="shape">3D Shape Representation</A></H2>
<UL>
<LI><img alt="o" src="http://www.cs.wisc.edu/~dyer/images/new.gif"> <B><A NAME="sigg96-seitz">View Morphing</A></B><BR>
     S. M. Seitz and C. R. Dyer, <CITE>Proc. SIGGRAPH 96</CITE>, 1996, to appear.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/sigg96-seitz.ps">4.2M postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/sigg96-seitz.ps.gz">1.6M gzip'ed postscript</A>)<P>
<blockquote>Image morphing techniques can generate compelling 2D transitions between images. However, differences in object pose or viewpoint often cause unnatural distortions in image morphs that are difficult to correct manually. Using basic principles of projective geometry, this paper introduces a simple extension to image morphing that correctly handles 3D projective camera and scene transformations. The technique, called <I>view morphing</I>, works by prewarping two images prior to computing a morph and then postwarping the interpolated images. Because no knowledge of 3D shape is required, the technique may be applied to photographs and drawings, as well as rendered scenes. The ability to synthesize changes both in viewpoint and image structure affords a wide variety of interesting 3D effects via simple image transformations.</blockquote>
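<P>The prewarp/morph/postwarp pipeline described in the abstract can be outlined in a few lines. The sketch below is illustrative rather than the paper's implementation: the prewarp homographies <CODE>H0</CODE> and <CODE>H1</CODE> (which align the two image planes) and the postwarp homography <CODE>Hs</CODE> are assumed to be given, and a plain cross-dissolve stands in for the correspondence-based morph step:</P>
<PRE>
import cv2

def view_morph(img0, img1, H0, H1, Hs, s):
    """Synthesize an in-between view for interpolation parameter s in [0, 1].

    H0, H1: assumed prewarp homographies that make the image planes parallel.
    Hs:     assumed postwarp homography placing the interpolated view.
    The blend below is a stand-in for a full correspondence-based morph.
    """
    h, w = img0.shape[:2]
    # 1. Prewarp: bring both images into a common parallel-plane configuration.
    pre0 = cv2.warpPerspective(img0, H0, (w, h))
    pre1 = cv2.warpPerspective(img1, H1, (w, h))
    # 2. Morph: interpolate between the prewarped images.
    mid = cv2.addWeighted(pre0, 1.0 - s, pre1, s, 0.0)
    # 3. Postwarp: map the interpolated image into the desired in-between view.
    return cv2.warpPerspective(mid, Hs, (w, h))
</PRE>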
<LI><img alt="o" src="http://www.cs.wisc.edu/~dyer/images/new.gif"> <B><A NAME="icpr96-seitz">Toward Image-Based Scene Representation Using View Morphing</A></B><BR>
     S. M. Seitz and C. R. Dyer, <CITE>Proc. 13th Int. Conf. Pattern Recognition, Vol. I, Track A: Computer Vision</CITE>, 1996, 84-89.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/icpr96-seitz.ps">1.2M postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/icpr96-seitz.ps.gz">486K gzip'ed postscript</A>)
     (Longer version appears as Computer Sciences Department <CITE>Technical Report 1298</CITE> (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1298-seitz.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/tr1298-seitz.ps.gz">552K gzip'ed postscript</A>).)<P>
<blockquote>The question of which views may be inferred from a set of basis images is addressed. Under certain conditions, a discrete set of images implicitly describes scene appearance for a continuous range of viewpoints. In particular, it is demonstrated that two basis views of a static scene determine the set of all views on the line between their optical centers. Additional basis views further extend the range of predictable views to a two- or three-dimensional region of viewspace. These results are shown to apply under perspective projection subject to a generic visibility constraint called monotonicity. In addition, a simple scanline algorithm is presented for actually generating these views from a set of basis images. The technique, called <I>view morphing</I>, may be applied to both calibrated and uncalibrated images. At a minimum, two basis views and their fundamental matrix are needed. Experimental results are presented on real images. This work provides a theoretical foundation for image-based representations of 3D scenes by demonstrating that perspective view synthesis is a theoretically well-posed problem.</blockquote>

<LI> <B><A NAME="rvs95-seitz">Physically-Valid View Synthesis by Image Interpolation</A></B><BR>
     S. M. Seitz and C. R. Dyer, <CITE>Proc. Workshop on Representation of Visual Scenes</CITE>, 1995, 18-25.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/rvs95-seitz.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/rvs95-seitz.ps.gz">500K gzip'ed postscript</A>)<P>
<blockquote>Image warping is a popular tool for smoothly transforming one image to another. ``Morphing'' techniques based on geometric image interpolation create compelling visual effects, but the validity of such transformations has not been established. In particular, does 2D interpolation of two views of the same scene produce a sequence of physically valid in-between views of that scene? In this paper, we describe a simple image rectification procedure which guarantees that interpolation does in fact produce valid views, under generic assumptions about visibility and the projection process. Towards this end, it is first shown that two basis views are sufficient to predict the appearance of the scene within a specific range of new viewpoints. Second, it is demonstrated that interpolation of the rectified basis images produces exactly this range of views. Finally, it is shown that generating this range of views is a theoretically well-posed problem, requiring neither knowledge of camera positions nor 3D scene reconstruction. A scanline algorithm for view interpolation is presented that requires only four user-provided feature correspondences to produce valid orthographic views. The quality of the resulting images is demonstrated with interpolations of real imagery.</blockquote>
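<P>The common core of the two abstracts above is that, after rectification, physically valid in-between views come from linear interpolation of corresponding points along matching scanlines. A minimal sketch of that step, assuming hypothetical precomputed correspondence arrays <CODE>p0</CODE> and <CODE>p1</CODE>:</P>
<PRE>
import numpy as np

def interpolate_rectified(p0, p1, s):
    """Linearly interpolate matched points between two rectified basis views.

    p0, p1: (N, 2) arrays of corresponding pixel coordinates in the two
            rectified images (assumed given, e.g. one pair per scanline).
    s:      interpolation parameter in [0, 1]; s=0 reproduces view 0.
    Rectification puts corresponding points on the same scanline, so this
    blend sweeps out views along the line between the two optical centers.
    """
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    return (1.0 - s) * p0 + s * p1

# A matched pair on scanline y=42 moves linearly with s:
print(interpolate_rectified([[10.0, 42.0]], [[30.0, 42.0]], 0.5))  # [[20. 42.]]
</PRE>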
<LI> <B><A NAME="pami93-eggert">The Scale Space Aspect Graph</A></B><BR>
     D. W. Eggert, K. W. Bowyer, C. R. Dyer, H. I. Christensen and D. B. Goldgof, <CITE>IEEE Trans. Pattern Analysis and Machine Intelligence</CITE> <B>15</B>, 1993, 1114-1130.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/pami93-eggert.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/pami93-eggert.ps.gz">280K gzip'ed postscript</A>)<BR>
     (An earlier version of this paper appeared in <CITE>Proc. Computer Vision and Pattern Recognition Conf.</CITE>, 1992, 335-340 (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvpr92-eggert.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvpr92-eggert.ps.gz">250K gzip'ed postscript</A>).)<P>
<blockquote>Currently the aspect graph is computed from the theoretical standpoint of perfect resolution in object shape, the viewpoint and the projected image. This means that the aspect graph may include details that an observer could never see in practice. Introducing the notion of scale into the aspect graph framework provides a mechanism for selecting a level of detail that is "large enough" to merit explicit representation. This effectively allows control over the number of nodes retained in the aspect graph. This paper introduces the concept of the scale space aspect graph, defines three different interpretations of the scale dimension, and presents a detailed example for a simple class of objects, with scale defined in terms of the spatial extent of features in the image.</blockquote>

<LI> <B><A NAME="cvgip92-seales">Viewpoint from Occluding Contour</A></B><BR>
     W. B. Seales and C. R. Dyer, <CITE>Computer Vision, Graphics and Image Processing: Image Understanding</CITE> <B>55</B>, 1992, 198-211.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvgip92-seales.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/cvgip92-seales.ps.gz">290K gzip'ed postscript</A>)<P>
<blockquote>In this paper we present the geometry and the algorithms for organizing a viewer-centered representation of the occluding contour of polyhedra. The contour is computed from a polyhedral boundary model as it would appear under orthographic projection into the image plane from every viewpoint on the view sphere. Using this representation, we show how to derive constraints on regions in viewpoint space from the relationship between detected image features and our precomputed contour model. Such constraints are based on both qualitative (viewpoint extent) and quantitative (angle measurements and relative geometry) information that has been precomputed about how the contour appears in the image plane as a set of projected curves and T-junctions from self-occlusion. The results we show from an experimental system demonstrate that features of the occluding contour can be computed in a model-based framework, and their geometry constrains the viewpoints from which a model will project to a set of occluding contour features in an image.</blockquote>
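<P>Operationally, viewpoint recovery of this kind amounts to intersecting constraint regions on a discretized view sphere: each matched occluding-contour feature rules out the viewpoints inconsistent with its precomputed appearance. The sketch below is a schematic illustration under an assumed input format (hypothetical boolean masks over view-sphere cells, one per feature), not the authors' system:</P>
<PRE>
import numpy as np

def intersect_viewpoint_constraints(constraint_masks):
    """Intersect per-feature viewpoint constraints on a tessellated view sphere.

    constraint_masks: list of boolean arrays of identical shape, one per
    detected image feature; True marks view-sphere cells consistent with
    that feature's precomputed contour appearance (a hypothetical format).
    Returns the cells consistent with every matched feature.
    """
    consistent = np.asarray(constraint_masks[0], dtype=bool).copy()
    for mask in constraint_masks[1:]:
        consistent &= np.asarray(mask, dtype=bool)  # each feature narrows the region
    return consistent
</PRE>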
<LI> <B><A NAME="ecai92-seales">An Occlusion-Based Representation of Shape for Viewpoint Recovery</A></B><BR>
     W. B. Seales and C. R. Dyer, <CITE>Proc. 10th European Conf. on Artificial Intelligence</CITE>, 1992, 816-820.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/ecai92-seales.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/ecai92-seales.ps.gz">80K gzip'ed postscript</A>)<P>
<blockquote>In this paper we present the geometry and the algorithms for organizing and using a viewer-centered representation of the occluding contour of polyhedra. The representation is computed from a polyhedral model under orthographic projection for all viewing directions. Using this representation, we derive constraints on viewpoint correspondences between image features and model contours. Our results show that the occluding contour, computed in a model-based framework, can be used to strongly constrain the viewpoints where a 3D model matches the occluding contour features of the image.</blockquote>

<LI> <B><A NAME="thesis-seales">Appearance Models of Three-Dimensional Shape for Machine Vision and Graphics</A></B><BR>
     Ph.D. Dissertation, W. B. Seales, Computer Sciences Department Technical Report 1042, August 1991.
     (<A HREF="ftp://ftp.cs.wisc.edu/computer-vision/thesis-seales.ps">postscript</A> or <A HREF="ftp://ftp.cs.wisc.edu/computer-vision/thesis-seales.ps.gz">460K gzip'ed postscript</A>)<P>
<blockquote>A fundamental problem common to both computer graphics and model-based computer vision is how to efficiently model the appearance of a shape. Appearance is obtained procedurally by applying a projective transformation to a three-dimensional object-centered shape representation. This thesis presents a viewer-centered representation that is based on the visual event, a viewpoint where a specific change in the structure of the projected model occurs. We present and analyze the basis of this viewer-centered representation and the algorithms for its construction. Variations of this visual-event-based representation are applied to two specific problems: hidden line/surface display, and the solution for model pose given an image contour.<P>
