TinySVM: Support Vector Machines
Last update: 2002/08/20
TinySVM
TinySVM is an implementation of Support Vector Machines (SVMs) [Vapnik 95], [Vapnik 98]
for the problem of pattern recognition. Support Vector Machines are a new generation of
learning algorithms based on recent advances in statistical learning theory, and they have
been applied to a large number of real-world applications, such as text categorization
and hand-written character recognition.
Support Intel C++ compiler for Linux 6.0 (e.g. % env CC=icc CXX=icc ./configure --disable-shared; make)
Support Borland C++ (use src/Makefile.bcc32)
Support Microsoft Visual Studio .NET/C++ (use src/Makefile.msvc)
Change extension of source files from .cc to .cpp
2002-03-08
Support Mac OS X
2002-01-05
Fix fatal bug in cache.h. (Thanks to Hideki Isozaki)
2001-12-07
Support One-Class SVM (experimental); use the -s option.
2001-09-03
Support RBF, Neural, and ANOVA kernels
Script bindings for Perl/Ruby are rewritten with SWIG
Python and Java interfaces are available
2001-08-25: 0.04
Fix memory leak bug in Classify::classify().
Implement a simple garbage collector (reference counting) to handle training data efficiently.
The following new options are added:
-I: svm_learn creates an optional file (MODEL.idx)
which lists the alpha and G (gradient) values of all training
examples. Each line of MODEL.idx corresponds to one training instance.
-M model_file: carry out incremental training, using model_file
and model_file.idx as the initial condition. This option can reduce
the computational cost of re-training.
Sample:
% svm_learn -I train model
% ls
model model.idx
% cat new_train >> train
% svm_learn -M model train model2
-W: when the linear kernel is used, the single weight vector w (of w * x + b)
is obtained directly instead of a list of alpha values.
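For example, a linear model might be trained as follows (this command line is only
illustrative; it assumes that -W takes no argument and that -t 0 selects the linear
kernel, as in SVM_light):
% svm_learn -t 0 -W train model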
The following new API functions are added:
BaseExample::readSVindex()
BaseExample::writeSVindex()
BaseExample::set()
Example::rebuildSVindex()
Model::compress()
Add new API functions to the Perl/Ruby interfaces, each of which
corresponds to the new C++ API above.
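As a rough illustration of how these functions might fit together from C++, here is a
minimal sketch of training followed by an incremental update. It is only a sketch: the
header name, the option-string form of learn(), the behaviour of read() on a second call,
and the exact signatures of the *SVindex()/compress()/classify() calls are assumptions,
not taken from the TinySVM sources.

// Hypothetical sketch only: "tinysvm.h", the option-string overload of learn(),
// and the signatures of readSVindex()/writeSVindex()/compress() are assumed.
#include "tinysvm.h"

int main() {
  TinySVM::Example e;
  e.read("train");                        // training data in SVM_light format
  TinySVM::Model *m = e.learn("-t 1 -d 2 -c 1");
  m->write("model");
  e.writeSVindex("model.idx");            // assumed: save alpha/G, as the -I option does

  // Later: append new examples and continue from the saved state.
  e.read("new_train");                    // assumed: read() appends to the example set
  e.readSVindex("model.idx");             // assumed: restore alpha/G as the initial condition
  TinySVM::Model *m2 = e.learn("-t 1 -d 2 -c 1");
  m2->compress();                         // assumed: discard per-example data to shrink the model
  m2->write("model2");

  double score = m2->classify("201:1.2 3148:1.8 3983:1");  // assumed: SVM_light-style feature string
  (void)score;

  delete m;
  delete m2;
  return 0;
}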
TinySVM accepts the same representation of training data as SVM_light uses.
This format can handle large, sparse feature vectors.
The format of the training and test data files is:
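Following the SVM_light convention, each line gives the class label followed by
feature:value pairs (the concrete numbers below are only illustrative):

<class> <feature>:<value> <feature>:<value> ... <feature>:<value>

+1 201:1.2 3148:1.8 3983:1
-1 874:0.3 3652:1.1 3983:1

The class label is +1 or -1, feature indices are positive integers listed in increasing
order, and any feature that is not listed is treated as zero.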
"svm_learn" accepts two arguments --- file name of training data
and model file in which the SVs and their weights (alpha) are
stored after training. Try --help option for finding out other options.
% svm_learn -t 1 -d 2 -c 1 train.svmdata model
TinySVM - tiny SVM package
Copyright (C) 2000 Taku Kudoh All rights reserved.
1000 examples, cache size: 1524
.................... 1000 1000 0.0165 1286/ 64.3% 1286/ 64.3%
............
Checking optimality of inactive variables re-activated: 0
Done! 1627 iterations
Number of SVs (BSVs) 719 (4)
Empirical Risk: 0.002 (2/1000)
L1 Loss: 4.22975
CPU Time: 0:00:01
% ls -al model
-rw-r--r-- 1 taku-ku is-stude 26455 Dec 7 13:40
During the learning iterations TinySVM prints progress messages like the ones shown above.
The columns have the following meaning:
2nd column: the total number of iterations processed.
3rd column: the size of the current working set. It becomes smaller
as the shrinking process proceeds.
4th column: the current cache size.
5th column: the maximum KKT violation value. When this value reaches the
termination criterion (default 0.001), the first stage of the
learning process is complete.
7th column: the cache hit rate during the last 1000 iterations.
"svm_classify" accepts two arguments --- file name of test data and model file generated by svm_learn.
"svm_classify" simply displays the accuracy of given test data.
You can also employ interactive classification by giving "-" as file name of test example.
Try --help option for finding out other options.
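For example (the file names are placeholders):
% svm_classify test.svmdata model
Giving "-" instead of the test file name starts the interactive mode described above:
% svm_classify - model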
"svm_model" displays the estimated margin, VC dimension and number of SVs
of given some model file.
Try --help option for finding out other options.
% svm_model model
File Name: model
Margin: 0.181666
Number of SVs: 719
Number of BSVs: 4
Size of training data: 1000
L1 Loss (Empirical Risk): 4.16917
Estimated VC dimension: 728.219
Estimated xi-alpha(2): 573
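For context (the exact formulas used by svm_model are an assumption, not taken from its
sources): in standard SVM theory the VC dimension of the learned classifier is commonly
estimated as R^2 * ||w||^2 + 1, where R is the radius of the smallest sphere enclosing
the training data and 1 / ||w|| is the margin. The xi-alpha(2) value corresponds to
Joachims' xi-alpha estimator with rho = 2, i.e. the number of training examples with
2 * alpha_i * R^2 + xi_i >= 1, which upper-bounds the number of leave-one-out errors.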