<P><A href="mailto:svm-light@ls8.cs.uni-dortmund.de">Please send me email</A> and let me know that you got svm-light. I will put you on my mailing list to inform you about new versions and bug-fixes. SVM<I><SUP>light</SUP></I> comes with a quadratic programming tool for solving small intermediate quadratic programming problems. It is based on the method of Hildreth and D'Espo and solves small quadratic programs very efficiently. Nevertheless, if for some reason you want to use another solver, the new version still comes with an interface to PR_LOQO. The <A TARGET="_top" HREF="http://www.first.gmd.de/~smola/">PR_LOQO optimizer</A> was written by <A TARGET="_top" HREF="http://www.first.gmd.de/~smola/">A. Smola</A>. It can be requested from <A TARGET="_top" HREF="http://www.kernel-machines.org/code/prloqo.tar.gz">http://www.kernel-machines.org/code/prloqo.tar.gz</A>.</P>
<H2>Installation</H2>
<P>To install SVM<I><SUP>light</SUP></I> you need to download <TT>svm_light.tar.gz</TT>. Create a new directory:</P>
<DIR><P><TT>mkdir svm_light</TT></P></DIR>
<P>Move <TT>svm_light.tar.gz</TT> to this directory and unpack it with</P>
<DIR><P><TT>gunzip -c svm_light.tar.gz | tar xvf -</TT></P></DIR>
<P>Now execute</P>
<DIR><P><TT>make</TT> or <TT>make all</TT></P></DIR>
<P>which compiles the system and creates the two executables</P>
<DIR><TT>svm_learn (learning module)</TT><BR><TT>svm_classify (classification module)</TT></DIR>
<P>If you do not want to use the built-in optimizer but PR_LOQO instead, create a subdirectory in the svm_light directory with</P>
<DIR><P><TT>mkdir pr_loqo</TT></P></DIR>
<P>and copy the files <TT>pr_loqo.c</TT> and <TT>pr_loqo.h</TT> into it. Now execute</P>
<DIR><P><TT>make svm_learn_loqo</TT></P></DIR>
<P>If the system does not compile properly, check this <A href="http://www.cs.cornell.edu/People/tj/svm_light/svm_light_faq.html">FAQ</A>.</P>
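<P>Put together, the whole installation might look like the following shell session (a sketch only; it assumes <TT>svm_light.tar.gz</TT> has already been downloaded to the current directory):</P>
<PRE>
# create a directory and unpack the sources into it
mkdir svm_light
mv svm_light.tar.gz svm_light/
cd svm_light
gunzip -c svm_light.tar.gz | tar xvf -

# compile the system; this creates the two executables
# svm_learn (learning module) and svm_classify (classification module)
make all
</PRE>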
<H2>How to use</H2>
<P>This section explains how to use the SVM<I><SUP>light</SUP></I> software. A good introduction to the theory of SVMs is Chris Burges' <A TARGET="_top" HREF="http://www.kernel-machines.org/papers/Burges98.ps.gz">tutorial</A>.</P>
<P>SVM<I><SUP>light</SUP></I> consists of a learning module (<TT>svm_learn</TT>) and a classification module (<TT>svm_classify</TT>). The classification module can be used to apply the learned model to new examples. See also the examples below for how to use <TT>svm_learn</TT> and <TT>svm_classify</TT>.</P>
<P><TT>svm_learn</TT> is called with the following parameters:</P>
<DIR><P><TT>svm_learn [options] example_file model_file</TT></P></DIR>
<P>Available options are:</P>
<DIR><PRE>General options:
         -?          - this help
         -v [0..3]   - verbosity level (default 1)
Learning options:
         -z {c,r,p}  - select between classification (c), regression (r), and
                       preference ranking (p) (see [<A href="#References">Joachims, 2002c</A>])
                       (default classification)
         -c float    - C: trade-off between training error
                       and margin (default [avg. x*x]^-1)
         -w [0..]    - epsilon width of tube for regression
                       (default 0.1)
         -j float    - Cost: cost-factor, by which training errors on
                       positive examples outweigh errors on negative
                       examples (default 1) (see [<A href="#References">Morik et al., 1999</A>])
         -b [0,1]    - use biased hyperplane (i.e. x*w+b) instead
                       of unbiased hyperplane (i.e. x*w, b=0) (default 1)
         -i [0,1]    - remove inconsistent training examples
                       and retrain (default 0)
Performance estimation options:
         -x [0,1]    - compute leave-one-out estimates (default 0)
                       (see [5])
         -o ]0..2]   - value of rho for XiAlpha-estimator and for pruning
                       leave-one-out computation (default 1.0)
                       (see [<A href="#References">Joachims, 2002a</A>])
         -k [0..100] - search depth for extended XiAlpha-estimator
                       (default 0)
Transduction options (see [<A href="#References">Joachims, 1999c</A>], [<A href="#References">Joachims, 2002a</A>]):
         -p [0..1]   - fraction of unlabeled examples to be classified
                       into the positive class (default is the ratio of
                       positive and negative examples in the training data)
Kernel options:
         -t int      - type of kernel function:
                        0: linear (default)
                        1: polynomial (s a*b+c)^d
                        2: radial basis function exp(-gamma ||a-b||^2)
                        3: sigmoid tanh(s a*b + c)
                        4: user defined kernel from kernel.h
         -d int      - parameter d in polynomial kernel
         -g float    - parameter gamma in rbf kernel
         -s float    - parameter s in sigmoid/poly kernel
         -r float    - parameter c in sigmoid/poly kernel
         -u string   - parameter of user defined kernel
Optimization options (see [<A href="#References">Joachims, 1999a</A>], [<A href="#References">Joachims, 2002a</A>]):
         -q [2..]    - maximum size of QP-subproblems (default 10)
         -n [2..q]   - number of new variables entering the working set
                       in each iteration (default n = q). Set n&lt;q to prevent
                       zig-zagging.
         -m [5..]    - size of cache for kernel evaluations in MB (default 40)
                       The larger the faster...
         -e float    - eps: Allow that error for termination criterion
                       [y [w*x+b] - 1] = eps (default 0.001)
         -h [5..]    - number of iterations a variable needs to be
                       optimal before considered for shrinking (default 100)
         -f [0,1]    - do final optimality check for variables removed by
                       shrinking. Although this test is usually positive, there
                       is no guarantee that the optimum was found if the test is
                       omitted. (default 1)
         -y string   - if this option is given, reads alphas from the file with
                       the given name and uses them as starting point.
                       (default 'disabled')
         -# int      - terminate optimization if there is no progress after this
                       number of iterations. (default 100000)
Output options:
         -l char     - file to write predicted labels of unlabeled examples
                       into after transductive learning
         -a char     - write all alphas to this file after learning (in the
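<P>As a concrete example of the option syntax, the following commands train a classifier with an RBF kernel and then apply it to new examples (a sketch only; the file names <TT>train.dat</TT>, <TT>test.dat</TT>, <TT>model</TT>, and <TT>predictions</TT> are placeholders, and the option values are merely illustrative):</P>
<PRE>
# train with an RBF kernel (-t 2), kernel width gamma=0.5 (-g),
# error/margin trade-off C=1.0 (-c), and a 100 MB kernel cache (-m)
svm_learn -t 2 -g 0.5 -c 1.0 -m 100 train.dat model

# apply the learned model to new examples; predictions are
# written to the output file
svm_classify test.dat model predictions
</PRE>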
