
mjpeg_howto.txt

Other example:

 > lavrec -f a -i t -q 80 -d 2 -C europe-west:SE20 test.avi

Should start recording now.

 -f a    use AVI as output format
 -i t    use tuner input
 -q 80   set the quality to 80% of the captured image
 -d 2    the size of the pictures is half size (352x288)
 -C      choose TV channels; this and the corresponding -it and -iT
         (video source: TV tuner) can currently be used on the Marvel
         G200/G400 and the Matrox Millennium G200/G400 with Rainbow
         Runner extension (BTTV support is under construction). For
         more information on how to make the TV tuner parts of these
         cards work, see the Marvel/Linux project at:
         http://marvel.sourceforge.net

Last example:

 > lavrec -f a -i p -g 352x288 -q 80 -s -l 70 -R l --software-encoding test03.avi

The two new options are -g 352x288, which sets the size of the video
to be recorded when using --software-encoding, and --software-encoding
itself, which enables software encoding of the recorded images. With
this option you can also record from a bttv-based card, though the
processor load is high. This option only works for generic
video4linux cards (such as the Brooktree 848/878 based cards); it
does not work for zoran-based cards.

3.2.  Other recording hints

All lavtools accept a file description like file*.avi, so you do not
have to name each file explicitly, although that is also possible.
Note: more options are described in the man page, but with these you
should be able to get started.

Here are some hints for sensible settings. Turn the quality to 80% or
more for -d 2 capture. At full resolution, a quality as low as 40%
already looks visually "perfect". -d 2 is already better than VHS
video (by a *lot*!). For a Marvel you should not set the quality
higher than 50 when you record at full size (-d 1). If you use higher
settings (-q 60) you are more likely to encounter problems: higher
settings will result in frame drops.
If you are aiming to create VCDs there is little to be gained by
recording at full resolution, as you need to reduce to -d 2
resolution later anyway.

You can record at sizes other than the obvious -d 1/2/4. You can use
combinations with half horizontal size and full vertical size: -d 21.
For NTSC this records at a size of 352x480. This helps if you want to
create SVCDs: scaling the 352 pixels up to 480 is much less visible
to the eye than the opposite combination, -d 12, which keeps the full
horizontal resolution and half the vertical resolution, for a size of
720x288 (NTSC).

3.3.  Some information about the typical lavrec output while recording

0.06.14:22 int: 00040 lst:0 ins:0 del:0 ae:0 td1=0.014 td2=0.029

The first part shows the time lavrec has been recording. int: the
interval between two frames. lst: the number of lost frames. ins and
del: the number of frames inserted and deleted for sync correction.
ae: the number of audio errors. td1 and td2: the audio/video time
difference.

o  (int) the frame interval should be around 33 (NTSC) or 40
   (PAL/SECAM). If it is very different, you will likely get a bad
   recording and/or many lost frames.

o  (lst) lost frames are bad and mean that something is not working
   well during recording (too slow a hard disk, too high CPU usage,
   ...). Try recording with a greater decimation and possibly a
   lower quality.

o  (ins, del) inserted OR deleted frames on their own are normal ->
   sync correction. If you have many lost AND inserted frames, you
   are asking too much of your machine. Use less demanding options
   or try a different sound card.

o  (ae) audio errors are never good; this should be 0.

o  (td1, td2) the time difference always floats around 0, unless
   sync correction is disabled (--synchronization!=2; 2 is the
   default).

3.4.  Notes about "interlace field order - what can go wrong and how
to fix it"

Firstly, what does it mean for interlace field order to be wrong?
The whole mjpegtools image processing chain is frame-oriented. Since
it is video material that is captured, each frame comprises a top
field (the 0th, 2nd, 4th and so on lines) and a bottom field (the
1st, 3rd, 5th and so on lines).

3.4.1.  There are three bad things that can happen with fields

1.  This is really only an issue for movies in PAL video, where each
    film frame is sent as a pair of fields. These can be sent top or
    bottom field first and sadly it is not always the same, though
    bottom-first appears to be usual. If you capture with the wrong
    field order (you start capturing each frame with a bottom rather
    than a top field, or vice versa) the frames of the movie get
    split *between* frames in the stream. Played back on a TV, where
    each field is displayed on its own, this is harmless: the
    sequence of fields played back is exactly the same as the
    sequence of fields broadcast. Unfortunately, played back on a
    computer monitor, where both fields of a frame appear at once,
    it looks *terrible*, because each frame effectively mixes two
    moments in time 1/25 sec apart.

2.  The two fields can simply be swapped somehow, so that top gets
    treated as bottom and bottom as top. Juddering and "slicing" are
    the result. This occasionally seems to happen due to hardware
    glitches in the capture card.

3.  Somewhere in capturing/processing, the *order* in time of the
    two fields in each frame can get mislabeled. This is not good,
    as it means that when playback eventually takes place, a field
    containing an image sampled earlier in time comes after an image
    sampled later. Weird "juddering" effects are the result.

3.4.2.  How can I recognize if I have one of these problems?

1.  This can be hard to spot. If you have mysteriously flickery
    pictures during playback, try encoding a snippet with the
    reverse field order forced (see below).
    If things improve drastically, you know what the problem was and
    what the solution is!

2.  The two fields are simply swapped, so that top gets treated as
    bottom and bottom as top. Juddering and "slicing" are the
    result. This occasionally seems to happen due to hardware
    glitches in the capture card. The problem looks like this:

                       (figure: Interlacing problem)

3.  Somewhere in capturing/processing, the *order* in time of the
    two fields in each frame has been mislabeled, so that on
    playback a field sampled earlier in time comes after one sampled
    later. Weird "juddering" effects are the result.

If you use glav or lavplay, be sure that you also use the
-F/--flicker option. This disables some things that make the picture
look better. If you want to look at the video you can also use
yuvplay:

 > lav2yuv | ... | yuvplay

If there is a field order problem you should see it with yuvplay.

3.4.3.  How can you fix it?

1.  To fix this one, the fields need to be "shifted" through the
    frames. Use yuvcorrect's -T BOTT_FORWARD/TOP_FORWARD to shift
    the way fields are allocated to frames. You can find out the
    current field order of an MJPEG file by looking at the first few
    lines of debug output from:

     > lav2yuv -v 2 the_mjpeg_file > /dev/null

    Or re-record, exchanging -f a for -F A or vice versa.

2.  This isn't too bad either. Use a tool that simply swaps the top
    and bottom fields a second time; yuvcorrect can do this with
    -T LINE_SWITCH.

3.  This is easy to fix. Either tell a tool someplace to relabel the
    fields, or simply tell the player to play back in swapped order.
    The latter can be done "indirectly" by telling mpeg2enc, when
    encoding, to reverse the flag (-z b|t) that tells the decoder
    which field order to use.
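The top-field/bottom-field interleaving described above can be
illustrated with plain shell tools. This is only an illustration,
not part of mjpegtools: numbered text lines stand in for the
scanlines of one interlaced frame, and awk plays the role of the
field splitter.

```shell
# Illustration only: scanline0..scanline5 stand in for the lines of
# one interlaced frame. Even lines (0,2,4) form the top field, odd
# lines (1,3,5) the bottom field.
printf 'scanline%d\n' 0 1 2 3 4 5 > frame.txt

awk 'NR % 2 == 1' frame.txt > top_field.txt      # lines 0, 2, 4
awk 'NR % 2 == 0' frame.txt > bottom_field.txt   # lines 1, 3, 5

cat top_field.txt      # scanline0 scanline2 scanline4
cat bottom_field.txt   # scanline1 scanline3 scanline5
```

A frame with the wrong field order is what you get when the two
output files above are reassembled with the roles of top and bottom
exchanged.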
In order to determine exactly what type of interlacing problem you
have, you need to extract some frames from the recorded stream and
take a look at them:

 > mkdir pnm
 > lav2yuv -f 40 video.avi | y4mtoppm | pnmsplit - pnm/image%d.pnm
 > rm pnm/image?.pnm
 > cd pnm
 > xv

First we create a directory where we store the images. lav2yuv -f 40
writes only the first 40 frames to stdout. The mjpegtools y4mtoppm
converts the frames to pnm images, and pnmsplit splits each picture
into its two fields, producing two single pictures per frame. Then
we remove the first 10 images, because pnmsplit does not support
%0xd numbering: without a leading zero in the number, the files
would be sorted in the wrong order, leading to confusing playback.

Use your favorite graphics program (xv for example) to view the
pictures. As each picture only contains one field of the two, they
will appear vertically scaled. If you look at the pictures you
should see the movie slowly advancing.

If you have a film, you should always see two consecutive pictures
that are nearly the same (because each film frame is split into two
fields for broadcasting). You can observe this easily if you get
comb effects when you pause the film, because both fields are then
displayed at the same time. The two pictures that belong together
should have an even number followed by the next odd number. So if
you take a look at the pictures: 4 and 5 are nearly identical, 5 and
6 differ (have movement), 6 and 7 are identical, 7 and 8 differ, and
so on.

To fix this problem you have to use yuvcorrect's -T BOTT_FORWARD or
TOP_FORWARD. You may also find that the field order (top/bottom) is
still wrong; in that case use yuvcorrect a second time with
-M LINE_SWITCH, or use the mpeg2enc -z (b|t) option.
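The sorting problem caused by pnmsplit's unpadded image%d.pnm names
is easy to see, and to repair, with standard shell tools. The
directory and file names below are only examples; empty files stand
in for the extracted field images.

```shell
# Create files numbered the way pnmsplit names them (no zero padding).
mkdir -p pnm_demo
for i in 1 2 10 11; do : > "pnm_demo/image$i.pnm"; done
ls pnm_demo   # lexicographic: image1, image10, image11, image2 - wrong order!

# Rename to zero-padded names so alphabetical order matches frame order.
for f in pnm_demo/image*.pnm; do
  n=${f#pnm_demo/image}; n=${n%.pnm}
  mv "$f" "pnm_demo/$(printf 'image%05d.pnm' "$n")"
done
ls pnm_demo   # image00001.pnm image00002.pnm image00010.pnm image00011.pnm
```

With zero-padded names you can keep all the extracted fields instead
of deleting the first ten.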
To see if you guessed correctly, extract the frames again,
reordering them using yuvcorrect:

 > lav2yuv -f 40 video.avi | yuvcorrect -T OPTION | y4mtoppm |
   pnmsplit - pnm/image%d.pnm

where "OPTION" is what you think will correct the problem. This is
for material converted from film; material produced directly for TV
is addressed below.

3.4.4.  Hey, what about NTSC movies?

Movies are broadcast in NTSC using "3:2 pulldown", which means that
half the captured frames contain fields from one movie frame and
half contain fields from two frames. To undo this effect for
efficient MPEG encoding you need to use yuvkineco.

If you have an interlaced source like a TV camera, each frame
consists of two fields that are recorded at different points in time
and shown one after the other. Spotting the problem here is harder.
You need to find something moving horizontally from left to right.
When you extract the fields, the object should move in small steps
from left to right, not one large step forward, a small step back, a
large step forward, a small step back... You have to use the same
options mentioned above to correct the problem.

Do not expect the field order to always be the same (top- or
bottom-first). It may change between channels, between films, and it
may even change within a film. If it changes constantly you may have
to encode with mpeg2enc -I 1 or even -I 2.

You can only have these problems if you record at full size!

4.  Creating videos from other sources

Here are some hints and descriptions of how to create videos from
other sources such as images and other video types.

You might also be interested in taking a look at the Transcoding of
existing MPEG-2 section.

4.1.  Creating videos from images

You can use jpeg2yuv to create a yuv stream from separate JPEG
images. This stream is sent to stdout, so it can either be saved to
a file, encoded directly to an mpeg video using mpeg2enc, or used
for anything else.
Saving a yuv stream can be done like this:

 > jpeg2yuv -f 25 -I p -j image%05d.jpg > result.yuv

This creates the file result.yuv containing the yuv video data at 25
FPS. The -f option sets the frame rate. Note that image%05d.jpg
means that the jpeg files are named image00000.jpg, image00001.jpg
and so on (05 means five digits, 04 means four digits, etc.). The
-I p is needed for specifying the interlacing; you have to check
which type you have. If you don't have interlacing, just choose p
for progressive.

If you want to encode an mpeg video directly from jpeg images
without saving a separate video file, type:

 > jpeg2yuv -f 25 -I p -j image%05d.jpg | mpeg2enc -o mpegfile.m1v

This does the same as above but saves an mpeg video rather than a
yuv video. See the mpeg2enc section for details on how to use
mpeg2enc.

You can also use yuvscaler between jpeg2yuv and mpeg2enc. If you
want to create an SVCD from your source images:

 > jpeg2yuv -f 25 -I p -j image%05d.jpg | yuvscaler -O SVCD |
   mpeg2enc -f 4 -o video.m2v

You can use the -b option to set the number of the image to start
with. The number of images to be processed can be specified with
-n. For example, if your first image is image01.jpg rather than
image00.jpg, and you only want 60 images to be processed, type:

 > jpeg2yuv -b 1 -f 25 -I p -n 60 -j image*.jpg | yuv2lav -o
   stream_without_sound.avi

Then add the sound to the stream:

 > lavaddwav stream_without_sound.avi sound.wav stream_with_sound.avi

For ppm input there is the ppmtoy4m utility. There is a man page for
ppmtoy4m that should be consulted for additional information. So, to
create an mpeg video, try this:

 > cat *.ppm | ppmtoy4m -o 75 -n 60 -F 25:1 | mpeg2enc -o output.m1v

This cats each *.ppm file to ppmtoy4m. There the first 75 frames
(pictures) are ignored and the next 60 are encoded by mpeg2enc to
output.m1v. You can also run it without the -o and -n options.
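If your source images are not already named in the sequential
image%05d.jpg pattern that jpeg2yuv expects, you can renumber them
first. This is only a sketch with hypothetical file names (DSC_*.jpg,
a photos/ and a seq/ directory); it copies rather than renames so
the originals stay untouched, and empty files stand in for real
JPEGs.

```shell
# Hypothetical input files - any set of JPEGs will do; the shell glob
# expands them in sorted order.
mkdir -p photos seq
: > photos/DSC_0101.jpg
: > photos/DSC_0102.jpg
: > photos/DSC_0200.jpg

# Copy them to the sequential image%05d.jpg names jpeg2yuv expects.
i=0
for f in photos/*.jpg; do
  cp "$f" "seq/$(printf 'image%05d.jpg' "$i")"
  i=$((i + 1))
done
ls seq   # image00000.jpg image00001.jpg image00002.jpg
```

After this, a command like jpeg2yuv -f 25 -I p -j seq/image%05d.jpg
should pick up the whole sequence starting at frame 0.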
The -F option sets the frame rate; the default is NTSC (30000:1001),
so for PAL you have to use -F 25:1.

Other picture formats can also be used if there is a converter to
ppm:

 > ls *.tga | xargs -n1 tgatoppm | ppmtoy4m | yuvplay

A list of filenames (ls *.tga) is given to xargs, which executes
tgatoppm with one (-n1) argument per call and feeds the output into
ppmtoy4m. This time the video is only shown on the screen. The xargs
is only needed if the converter (tgatoppm) can only operate on a
single image at a time.

If you want to use the ImageMagick 'convert' tool (a Swiss Army Knife)
