Weights and bias
Posted: 03 October 2017 04:25 PM
Novice
Total Posts:  5
Joined  2016-11-04

Dear reader,

I am working with a linear support vector machine and would like to know the weights and bias of the support vectors. In a previous post from Jean-Michel (29 July 2010), I saw that it was possible to retrieve this information from the pipeline. However, this no longer seems to be possible. Is there any way to extract this information?

Regards,

Esther

Posted: 04 October 2017 07:12 AM   [ # 1 ]
Administrator
Total Posts:  367
Joined  2008-04-26

Dear Esther,

In the perClass 3.0 release (2012), direct access to low-level model internals was removed. Why do you need this information?

Kind Regards,

Pavel

Posted: 04 October 2017 08:47 AM   [ # 2 ]
Novice
Total Posts:  5
Joined  2016-11-04

Dear Pavel,

I’m interested in which feature contributes most to discriminating between two classes with a linear support vector machine. For that purpose, I would like to know the weights and biases. I use the perClass Academic 5.0 release.

I’ve attached the script I tried (based on the post mentioned above), but it gives me the following error:

Error using sdppl/subsref (line 28)
Pipeline {} operator obsolete. Use () to access pipeline parts

Error in Weights (line 12)
p{2}

When I use p(2) instead, I can see the options of the SVM, but the weights and offset are not present in the list.

Thank you very much for your help!

Kind regards,

Esther

File Attachments
Weights.m  (File Size: 1KB - Downloads: 4)
Posted: 04 October 2017 10:18 AM   [ # 3 ]
Administrator
Total Posts:  367
Joined  2008-04-26

Dear Esther,

if you’re interested in the most contributing features, the internals of the SVM model will not help you, because an SVM expresses its output in terms of (support) objects, not features. What you can do for a linear SVM, however, is approximate it with an affine projection and look at the relative importance of the features:

>> load medical
>> a
'medical D/ND' 6400 by 11 sddata, 3 classes: 'disease'(1495) 'no-disease'(4267) 'noise'(638)

>> b=a(:,1:10,1:2)
'medical D/ND' 5762 by 10 sddata, 2 classes: 'disease'(1495) 'no-disease'(4267)

>> c=randsubset(b,300)
'medical D/ND' 600 by 10 sddata, 2 classes: 'disease'(300) 'no-disease'(300)

>> p=sdsvc(c,'linear')
....................
C=1e+03 err=0.140 SVs=264
sequential pipeline       10x1 'Support vector machine+Decision'
 1 Support vector machine  10x1  linear
 2 Decision                 1x1  threshold on 'disease'

>> pa=sdconvert(p,'affine',b)
Approximation error: 5.29233e-05
Affine projection pipeline 10x1

>> pa.weights

ans =

    0.0482
    0.0012
    0.0014
    0.0071
    0.0013
    0.0640
    0.3699
    0.2329
    0.2701
    0.0039

>> P2=pa*p(2)
sequential pipeline       10x1 'Affine projection+Decision'
 1 Affine projection       10x1
 2 Decision                 1x1  threshold on 'disease'

>> sdconfmat(b.lab,b*p,'norm')

ans =

 True        | Decisions
 Labels      | diseas  no-dis | Totals
-----------------------------------------
 disease     |  0.807   0.193 |  1.00
 no-disease  |  0.204   0.796 |  1.00
-----------------------------------------

>> sdconfmat(b.lab,b*P2,'norm')

ans =

 True        | Decisions
 Labels      | diseas  no-dis | Totals
-----------------------------------------
 disease     |  0.807   0.193 |  1.00
 no-disease  |  0.204   0.796 |  1.00
-----------------------------------------

I use the medical data set here because it has more features than the fruit one.
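For reference, the same idea of ranking features by the magnitude of a linear SVM's weights can be sketched outside perClass. The snippet below uses scikit-learn with a synthetic data set purely as an illustration (none of these names belong to perClass): a linear SVM's decision function is f(x) = w·x + b, so after feature scaling, |w_j| indicates how much feature j contributes to the discrimination.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic two-class data with 10 features, a few of them informative
X, y = make_classification(n_samples=600, n_features=10,
                           n_informative=3, random_state=0)

# Linear SVM: decision function is f(x) = w.x + b
clf = LinearSVC(C=1.0, dual=False).fit(X, y)
w = clf.coef_.ravel()   # weight vector, one entry per feature
b = clf.intercept_[0]   # bias (offset) term

# Rank features by absolute weight: largest |w_j| contributes most
ranking = np.argsort(np.abs(w))[::-1]
print("weights:", np.round(w, 3))
print("bias:", round(float(b), 3))
print("most contributing feature index:", ranking[0])
```

Note that on unscaled data the raw weights mix feature importance with feature scale, so standardizing the features first is advisable before reading off a ranking.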

By the way, the subset selection in your code example:

subset(a,'lab','apple','lab','banana');

can be shortened to

b=a(:,:,'/apple|banana')

Does it help?

Pavel

Posted: 04 October 2017 02:40 PM   [ # 4 ]
Novice
Total Posts:  5
Joined  2016-11-04

Yes, thank you!
