From: GzLi (笑梨), Board: DataMining
Subject: Uploaded svmlight 4.0
Posted at: Nanjing University Lily BBS (Tue May  7 18:15:31 2002), site-internal post

SVMlight 
Support Vector Machine
Author: Thorsten Joachims <thorsten@joachims.org> 
Cornell University 
Department of Computer Science 

Developed at: 
University of Dortmund, Informatik, AI-Unit 
Collaborative Research Center on 'Complexity Reduction 
in Multivariate Data' (SFB475) 

Version: 4.00 
Date: 11.02.2002
  

Overview
SVMlight is an implementation of Support Vector Machines (SVMs) in C.
The main features of the program are the following:

- fast optimization algorithm
- working set selection based on steepest feasible descent
- "shrinking" heuristic
- caching of kernel evaluations
- use of folding in the linear case
- solves both classification and regression problems
- computes XiAlpha-estimates of the error rate, the precision, and the recall
- efficiently computes Leave-One-Out estimates of the error rate, the precision, and the recall
- includes an algorithm for approximately training large transductive SVMs (TSVMs)
- can train SVMs with cost models
- handles many thousands of support vectors
- handles several tens of thousands of training examples
- supports standard kernel functions and lets you define your own
- uses a sparse vector representation (see the input-format sketch below)

There is also another regression support vector machine based on SVMlight
available at the AI-Unit: mySVM.
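
To make the sparse input representation concrete, here is a minimal sketch of
the kind of plain-text training file SVMlight reads: one example per line,
consisting of a target value followed by feature:value pairs for the non-zero
features, listed with increasing feature numbers. The file name and numbers
below are made up for illustration, and the C program is not part of SVMlight;
check the documentation shipped with version 4.00 for the exact format and
options.

#include <stdio.h>

/* Illustrative only: write two training examples in a sparse
   "target featurenumber:value ..." line format, one example per line.
   Only non-zero features are listed, in increasing feature order. */
int main(void)
{
    FILE *fp = fopen("train.dat", "w");   /* hypothetical file name */
    if (fp == NULL)
        return 1;

    /* class +1: non-zero features 1 and 7 */
    fprintf(fp, "+1 1:0.43 7:0.12\n");
    /* class -1: non-zero features 2, 7 and 9 */
    fprintf(fp, "-1 2:0.98 7:0.30 9:0.05\n");

    fclose(fp);
    return 0;
}

Training and classification are then done with the svm_learn and svm_classify
programs from the distribution, along the lines of "svm_learn train.dat model"
followed by "svm_classify test.dat model predictions"; the option set for
version 4.00 is listed in its documentation.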

Description
SVMlight is an implementation of Vapnik's Support Vector Machine [Vapnik,
1995] for the problems of pattern recognition and regression. The
optimization algorithm used in SVMlight is described in [Joachims, 1999a].
The algorithm has scalable memory requirements and can handle problems with
many thousands of support vectors efficiently.
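
For reference, the quadratic program that such an SVM optimizer solves is the
standard soft-margin dual (textbook background, not anything specific to the
algorithm of [Joachims, 1999a]):

\max_{\alpha} \; \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j y_i y_j K(x_i, x_j)
\quad \text{subject to} \quad 0 \le \alpha_i \le C, \quad \sum_{i=1}^{n} \alpha_i y_i = 0 .

The resulting classifier is f(x) = \mathrm{sign}\big(\sum_i \alpha_i y_i K(x_i, x) + b\big),
and the training examples with \alpha_i > 0 are the support vectors, which is
why the number of support vectors drives memory use and running time.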

This version also provides methods for efficiently assessing the
generalization performance. It includes two efficient estimation methods for
both the error rate and precision/recall. XiAlpha-estimates [Joachims, 2000a,
Joachims, 2000b] can be computed at essentially no computational expense, but
they are conservatively biased. Leave-one-out testing provides almost
unbiased estimates. SVMlight exploits the fact that the outcomes of most
leave-one-out iterations (often more than 99%) are predetermined and need not
be computed [Joachims, 2000b].
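
As a rough illustration of the idea behind the XiAlpha-estimate (a sketch of
the counting rule described in [Joachims, 2000a], not SVMlight's internal
code): after training, an example is counted as a potential leave-one-out
error when 2*alpha_i*R^2 + xi_i >= 1, where alpha_i is its dual coefficient,
xi_i its slack, and R an upper bound related to the norm of the mapped
training vectors. The function and array names below are made up.

#include <stddef.h>

/* Sketch of the XiAlpha error-rate estimate: the fraction of training
   examples i with 2*alpha[i]*R*R + xi[i] >= 1.  The alphas and slacks
   are assumed to come from an already trained soft-margin SVM. */
double xialpha_error_estimate(const double *alpha, const double *xi,
                              size_t n, double R)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++) {
        if (2.0 * alpha[i] * R * R + xi[i] >= 1.0)
            count++;
    }
    return (double)count / (double)n;   /* conservatively biased upwards */
}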

Furthermore, this version includes an algorithm for training large-scale
transductive SVMs. The algorithm proceeds by solving a sequence of
optimization problems that lower-bound the solution, using a form of local
search. A detailed description of the algorithm can be found in
[Joachims, 1999c].
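
For reference, the transductive training problem being approximated can be
written roughly as follows (the standard TSVM formulation; see [Joachims,
1999c] for the exact variant and cost factors used), with the labels y^*_j of
the unlabeled test examples acting as optimization variables alongside the
hyperplane:

\min_{y^*, w, b, \xi, \xi^*} \;
  \frac{1}{2}\|w\|^2 + C \sum_i \xi_i + C^* \sum_j \xi^*_j

subject to y_i (w \cdot x_i + b) \ge 1 - \xi_i for the labeled examples,
y^*_j (w \cdot x^*_j + b) \ge 1 - \xi^*_j for the unlabeled examples, and
\xi_i, \xi^*_j \ge 0. Roughly speaking, the local search switches tentative
labels y^*_j between re-solves, so each intermediate step only requires
solving a standard inductive SVM problem.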

SVMlight can also train SVMs with cost models (see [Morik et al., 1999]). 

The code has been used on a wide range of problems, including text
classification [Joachims, 1999c][Joachims, 1998a], several image recognition
tasks, and medical applications. Many of these tasks have sparse instance
vectors. This implementation exploits that sparseness, which leads to a very
compact and efficient representation.
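
To illustrate what such a sparse representation looks like (an illustrative
sketch, not SVMlight's actual data structures), an example can be stored as a
list of (feature number, value) pairs for its non-zero features only, and a
linear kernel then reduces to a merge-style dot product over two sorted lists:

#include <stddef.h>

/* One non-zero feature of an example: its number and its value. */
typedef struct {
    long   fnum;    /* feature number, stored in increasing order */
    double weight;  /* feature value */
} SparseFeature;

/* Dot product of two sparse vectors given as arrays of non-zero features
   sorted by feature number; zero entries are never touched, so the cost is
   proportional to the number of non-zero features, not the dimension. */
double sparse_dot(const SparseFeature *a, size_t na,
                  const SparseFeature *b, size_t nb)
{
    double sum = 0.0;
    size_t i = 0, j = 0;
    while (i < na && j < nb) {
        if (a[i].fnum == b[j].fnum) {
            sum += a[i].weight * b[j].weight;
            i++;
            j++;
        } else if (a[i].fnum < b[j].fnum) {
            i++;
        } else {
            j++;
        }
    }
    return sum;
}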
 
 -- Thus spoke GzLi:
Joy and pain are coming and going both      
Be kind to yourself and others. 
welcome to DataMining  http://DataMining.bbs.lilybbs.net
welcome to Matlab http://bbs.sjtu.edu.cn/cgi-bin/bbsdoc?board=Matlab

※ Source: Nanjing University Lily BBS bbs.nju.edu.cn [FROM: 211.80.38.29]
