http://www.cs.cornell.edu/Info/Projects/zeno/Papers/welcome.html

MIME-Version: 1.0
Server: CERN/3.0
Date: Monday, 25-Nov-96 00:30:32 GMT
Content-Type: text/html
Content-Length: 11296
Last-Modified: Tuesday, 29-Oct-96 01:32:17 GMT

<html><head><title>Zeno Papers</title></head>
<body BGCOLOR="#BBBBBB">
<table>
 <tr>
  <td> <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/logo-small.gif"></td>
  <td> <h2>Papers from Zeno Research</h2></td>
 </tr>
</table>
<img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/alum.gif">

<h4>Compressed Domain Transcoding of MPEG</h4>
<ul>
    <i>Brian C. Smith, Soam Acharya</i><br>
    <b>Abstract</b><br>
    Current compression formats optimize for either compression or editing.
    For example, motion JPEG (MJPEG) provides excellent random access and
    moderate overall compression, while MPEG optimizes for compression at
    the expense of random access. Converting from one format to another, a
    process called transcoding, is often desirable over the life of a video
    segment. In this paper, we show how to transcode MPEG video to
    motion-JPEG without fully decompressing the MPEG source. Our compressed
    domain transcoding technique differs from previous work in two ways: it
    is optimized for software implementation, and we report the measured
    performance of a working implementation of our compressed domain
    transcoder instead of just counting the number of multiplies needed to
    transcode. Our experiments show that our compressed domain transcoder
    is 1.5 to 3 times faster than an optimized spatial domain transcoder,
    and offers another benefit: a single parameter can trade transcoding
    speed against the quality of the resulting images. This speed/quality
    trade-off is important to many real-time applications.
    <br><br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/Papers/tc.pdf">Acrobat</a> (280K)
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/Papers/tc.ps.gz">Gzipped postscript</a> (281K)
    <br>
</ul>
<hr>

<h4>CU-SeeMe VR: Immersive Desktop Teleconferencing</h4>
To appear in <i>ACM Multimedia '96</i>
<ul>
    <i>Jefferson Han, Brian C. Smith</i><br>
    <b>Abstract</b><br>
    Current video-conferencing systems provide a <i>video-in-a-window</i>
    user interface. This paper presents a video-conferencing application
    called CU-SeeMe VR that provides a richer interface. CU-SeeMe VR is a
    distributed video-conferencing system that allows users to connect to
    3D worlds and interact with each other using live video and audio
    embedded in a virtual space. This paper describes a prototype
    implementation of CU-SeeMe VR, including the user interface, system
    architecture, and a detailed look at the enabling technologies. Future
    directions and metaphors for this space are discussed.
    <br><br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/Papers/Vr/vr.htm">HTML version</a>
    <br><br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/Papers/Vr/vr.pdf">Acrobat version</a> (211K)
    <br>
</ul>
<hr>

<h4>Compressed Domain Processing of JPEG-encoded Images</h4>
To appear in <i>Real-Time Imaging Journal</i>
<ul>
    <i>Brian C. Smith, Lawrence A. Rowe</i>, July, 1996<br>
    <b>Abstract</b><br>
    This paper addresses the problem of processing motion-JPEG video data
    in the compressed domain. The operations covered are those where a
    pixel in the output image is an arbitrary linear combination of pixels
    in the input image, which includes convolution, scaling, rotation,
    translation, morphing, de-interlacing, image composition, and
    transcoding. This paper further develops an approximation technique
    called condensation to improve performance and evaluates condensation
    in terms of processing speed and image quality. Using condensation,
    motion-JPEG video can be processed at near real-time rates on current
    generation workstations.
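    The operation class this abstract describes, where each output pixel is
    a linear combination of input pixels, can move into the compressed
    domain precisely because the DCT underlying JPEG is itself a linear
    transform. A minimal sketch of that identity in Python (NumPy only; the
    8x8 block size matches JPEG, but the blend weights and data are
    illustrative, not taken from the paper):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal type-II DCT basis, the transform used by JPEG."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def dct2(block):
    """2-D DCT of an n-by-n block (transform rows, then columns)."""
    C = dct_matrix(block.shape[0])
    return C @ block @ C.T

rng = np.random.default_rng(42)
a = rng.random((8, 8))   # two 8x8 pixel blocks
b = rng.random((8, 8))

# A linear pixel operation: a cross-dissolve (image composition).
blend_pixels = 0.7 * a + 0.3 * b

# The same operation applied directly to the DCT coefficients,
# i.e. without decompressing back to the pixel domain.
blend_coeffs = 0.7 * dct2(a) + 0.3 * dct2(b)

# Linearity of the DCT guarantees both paths produce the same result.
assert np.allclose(dct2(blend_pixels), blend_coeffs)
```

    Condensation, as the abstract notes, is an approximation layered on top
    of this exact identity to cut the number of coefficient operations.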
    <br><br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/Papers/rtij96.pdf">Acrobat version</a> (931K)
    <br>
</ul>
<hr>

<h4>Massively Distributed Video File Server Simulation: Investigating
Intelligent Caching Schemes</h4>
<ul>
    <i>Alexander Castro, C. Edward Lazzerini, Vivekananda Kolla</i>,
    December, 1995<br>
    <b>Abstract</b><br>
    This paper, the final report for
    <a href="http://www.cs.cornell.edu/Info/Courses/Fall-95/CS631">CS631</a>,
    a graduate multimedia systems course, presents the results of a
    simulation study that compares the effectiveness of different caching
    schemes within the DVFS architecture.
    <br><br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/DVFS/EdAlex/EdAlex.html">HTML version</a>
    <br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/DVFS/EdAlex/EdAlex.pdf">Acrobat version</a> (34K)
    <br>
</ul>
<hr>

<h4>A Survey of Compressed Domain Processing Techniques</h4>
<i>Reconnecting Science and Humanities in Digital Libraries,
University of Kentucky</i>
<ul>
    <i>Brian C.
Smith</i>, Oct 1995<br>
    <b>Abstract</b><br>
    This short paper surveys current techniques for processing compressed
    multimedia data, including compressed audio, video, and images.
    <br><br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/Papers/cdpsurvey/paper.html">HTML version</a>
    <br>
    <img src="http://www.cs.cornell.edu/Info/Projects/zeno/images/redball.gif">
    <a href="http://www.cs.cornell.edu/Info/Projects/zeno/Papers/cdpsurvey/paper.pdf">Acrobat version</a> (160K)
    <br>
</ul>
<hr>

<h4>A Resolution Independent Video Language</h4>
Presented at <a href="http://www.acm.org/sigmm/MM95">ACM Multimedia 95</a>.
<ul>
    <i>Jonathan Swartz, Brian C. Smith</i>,
    November, 1995<br>
    <b>Abstract</b><br>
    As common as video processing is, programmers still implement video
    programs as manipulations of arrays of pixels. This paper presents a
    language extension called Rivl (pronounced "rival") where video is a
    first class data type. Programs in Rivl use high level operators that
    are independent of video resolution and format, increasing a program's
