<dd>Evaluate a kernel function on two feature vectors.<p>The kernel function <var>K</var> is defined by <var>kp</var>, which may have attributes corresponding to the kernel attributes defined for structure models. If the object passed in lacks any of these attributes, the following defaults are used: <var>kp.kernel_type</var>=0, <var>kp.poly_degree</var>=3, <var>kp.rbf_gamma</var>=1.0, <var>kp.coef_lin</var>=1.0, <var>kp.coef_const</var>=1.0, <var>kp.custom</var>=<code>'empty'</code>. Since a structure model already carries these attributes, you can simply pass the structure model itself as <var>kp</var> to evaluate the model's own kernel. The kernel evaluated on the two feature vectors <var>sv1</var> and <var>sv2</var> is returned as a float.</dd></dl>
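For reference, these attributes select among the standard SVM<sup><i>light</i></sup> kernels (the <code>-t</code>, <code>-d</code>, <code>-g</code>, <code>-s</code>, and <code>-r</code> options). The pure-Python sketch below only illustrates what the non-custom kernel types compute, assuming <var>sv1</var> and <var>sv2</var> are lists of (index, value) pairs; the names <code>kernel_sketch</code> and <code>sparse_dot</code> are invented for illustration, and this is not the C routine that SVM<sup><i>python</i></sup> actually calls.<pre>
from math import exp, tanh

def sparse_dot(a, b):
    """Dot product of two sparse vectors given as (index, value) lists."""
    bvals = dict(b)
    return sum([v * bvals.get(k, 0.0) for k, v in a])

def kernel_sketch(kp, sv1, sv2):
    """Illustrative stand-in for the built-in kernel evaluation."""
    s = getattr(kp, 'coef_lin', 1.0)
    c = getattr(kp, 'coef_const', 1.0)
    ktype = getattr(kp, 'kernel_type', 0)
    if ktype == 0:    # linear: a*b
        return sparse_dot(sv1, sv2)
    if ktype == 1:    # polynomial: (s*a*b + c)^d
        return (s * sparse_dot(sv1, sv2) + c) ** getattr(kp, 'poly_degree', 3)
    if ktype == 2:    # radial basis function: exp(-gamma*||a-b||^2)
        dist2 = (sparse_dot(sv1, sv1) - 2.0 * sparse_dot(sv1, sv2)
                 + sparse_dot(sv2, sv2))
        return exp(-getattr(kp, 'rbf_gamma', 1.0) * dist2)
    if ktype == 3:    # sigmoid: tanh(s*a*b + c)
        return tanh(s * sparse_dot(sv1, sv2) + c)
    # kernel_type 4 selects the custom kernel named by kp.custom in the C code.
    raise ValueError("custom kernels are not sketched here")
</pre>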
<a class="bookmark" name="parameters"><h2>Special Parameters</h2></a>It is possible to define special parameters that control how SVM<sup><i>python</i></sup> interacts with the Python module. These parameters are defined in a dictionary <var>svmpython_parameters</var> declared at the top level of the Python module. Keys in this dictionary are parameter names (strings) that map to their values. Any parameter that is not defined keeps the default value indicated in the list below. If <var>svmpython_parameters</var> is absent entirely, none of the parameters are changed from their defaults.<dl><dt><code><b>index_from_one</b></code> = <code>True</code></dt><dd>Normally, as in the C code, feature word indices start at 1, and the first meaningful entry of <var>sm.w</var> is at index 1 (the 0-th entry is unused). This parameter controls how <var>sm.w</var> and feature word vectors are indexed. If it is set to <code>False</code>, all of these values are indexed from 0 instead, which may be a more natural setting for some applications.</dd></dl><a class="bookmark" name="example"><h2>Example Module <code>multiclass</code></h2></a>What follows is the listing for a Python module "multiclass" contained in the file <code>multiclass.py</code>. Run under SVM<sup><i>python</i></sup>, this code is more or less equivalent to SVM<sup><i>multiclass</i></sup>.<pre>
"""A module for SVM^python for multiclass learning."""

# The svmlight package lets us use some useful portions of the C code.
import svmlight

# These parameters are set to their default values, so this declaration
# is technically unnecessary.
svmpython_parameters = {"index_from_one": True}

def read_struct_examples(filename, sparm):
    # This reads example files of the type read by SVM^multiclass.
    examples = []
    sparm.num_features = sparm.num_classes = 0
    # Open the file and read each example.
    for line in file(filename):
        # Get rid of comments.
        if line.find("#") >= 0: line = line[:line.find("#")]
        tokens = line.split()
        # If the line is empty, who cares?
        if not tokens: continue
        # Get the target.
        target = int(tokens[0])
        sparm.num_classes = max(target, sparm.num_classes)
        # Get the features.
        tokens = [tuple(t.split(":")) for t in tokens[1:]]
        features = [(int(k), float(v)) for k, v in tokens]
        if features:
            sparm.num_features = max(features[-1][0], sparm.num_features)
        # Add the example to the list.
        examples.append((features, target))
    # Print out some very useful statistics.
    print len(examples), "examples read with", sparm.num_features,
    print "features and", sparm.num_classes, "classes"
    return examples

def loss(y, ybar, sparm):
    # We use zero-one loss.
    if y == ybar: return 0
    return 1

def init_struct_model(sample, sm, sparm):
    # In the corresponding C code, the counting of features and
    # classes was done in the model initialization, not here.
    sm.size_psi = sparm.num_features * sparm.num_classes
    print "size_psi set to", sm.size_psi

def classify_struct_example(x, sm, sparm):
    # I am a very bad man.  There is no class 0, of course.
    return find_most_violated_constraint(x, 0, sm, sparm)

def find_most_violated_constraint(x, y, sm, sparm):
    # Get all the wrong classes.
    classes = [c+1 for c in range(sparm.num_classes) if c+1 != y]
    # Get the psi vectors for each example in each class.
    vectors = [(psi(x, c, sm, sparm), c) for c in classes]
    # Get the predictions for each psi vector.
    predictions = [(svmlight.classify_example(sm, p), c) for p, c in vectors]
    # Return the class associated with the maximum prediction!
    return max(predictions)[1]

def psi(x, y, sm, sparm):
    # Just increment the feature index to the appropriate stack position.
    return svmlight.create_svector([(f + (y-1)*sparm.num_features, v)
                                    for f, v in x])

# The default action of printing out all the losses or labels is
# irritating for the 300 training examples and 2200 testing examples
# in the sample task.
def print_struct_learning_stats(sample, sm, cset, alpha, sparm):
    predictions = [classify_struct_example(x, sm, sparm) for x, y in sample]
    losses = [loss(y, ybar, sparm) for (x, y), ybar in zip(sample, predictions)]
    print "Average loss:", float(sum(losses)) / len(losses)

def print_struct_testing_stats(sample, sm, sparm, teststats):
    pass
</pre><hr>Thomas Finley, 2005</body></html>