
📄 neuralnet.java

📁 Java source code from the data mining toolkit YALE; a useful reference for researchers studying and improving learning algorithms. (Despite the original listing's mention of LIBSVM, this file implements a simple feed-forward neural net.)
💻 Java
/*
 *  YALE - Yet Another Learning Environment
 *  Copyright (C) 2001-2004
 *      Simon Fischer, Ralf Klinkenberg, Ingo Mierswa, 
 *          Katharina Morik, Oliver Ritthoff
 *      Artificial Intelligence Unit
 *      Computer Science Department
 *      University of Dortmund
 *      44221 Dortmund,  Germany
 *  email: yale-team@lists.sourceforge.net
 *  web:   http://yale.cs.uni-dortmund.de/
 *
 *  This program is free software; you can redistribute it and/or
 *  modify it under the terms of the GNU General Public License as 
 *  published by the Free Software Foundation; either version 2 of the
 *  License, or (at your option) any later version. 
 *
 *  This program is distributed in the hope that it will be useful, but
 *  WITHOUT ANY WARRANTY; without even the implied warranty of
 *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 *  General Public License for more details.
 *
 *  You should have received a copy of the GNU General Public License
 *  along with this program; if not, write to the Free Software
 *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
 *  USA.
 */
package edu.udo.cs.yale.operator.learner.nn;

import edu.udo.cs.yale.example.Example;
import edu.udo.cs.yale.example.Attribute;
import edu.udo.cs.yale.operator.learner.Model;
import edu.udo.cs.yale.operator.learner.SimpleModel;

/** This neural net consists of an input layer, a hidden layer and an output layer.
 *  The output layer does nothing except (as all layers do) append an additional 1
 *  to the input vector. <br>
 *  For some unknown reason the sigmoid transformation does not work. Hence, only linear
 *  functions can be learned.
 *  @author simon
 *  @version 22.06.2001
 */
public class NeuralNet extends SimpleModel {
    
    private Layer inputLayer;
    private WeightedLayer[] layer;
    private double lambda;

    protected NeuralNet() { 
	super(); 
    }

    public NeuralNet(Attribute label, int input, int hidden, int output, double lambda) {
	super(label);
	layer = new WeightedLayer[2];
	layer[1] = new WeightedLayer(hidden+1, output, new IdTrafo(), null);
	layer[0] = new WeightedLayer(input+1, hidden, new IdTrafo(), layer[1]);
	//layer[1] = new WeightedLayer(hidden+1, output, new SigmoidTrafo(0.5), null);
	//layer[0] = new WeightedLayer(input+1, hidden, new SigmoidTrafo(0.5), layer[1]);
	inputLayer = new InputLayer(input, null, layer[0]);
	this.lambda = lambda;
    }

    /** Applies all layers to the input and returns the output layer's output. Remember
     *  that if the actual output vector's dimension is <i>n</i>, this method will return
     *  a vector of dimension <i>n+1</i>. You can ignore <tt>o[n]</tt> for it is a constant 1. */
    public double[] o(double[] in) {
	inputLayer.input(in);
	return layer[1].e();
    }

    public static double error(double[] t, double[] e) {
	double sum = 0;
	for (int i = 0; i < t.length; i++) {
	    double dif = e[i] - t[i];
	    sum += dif*dif;
	}
	return sum/2;
    }

    /** Train the net with a given pair of input <tt>in</tt> and output <tt>t</tt>. */
    public void learn(double[] in, double[] t) {
	inputLayer.input(in);

	// adjust the output layer
	double[]   e     = layer[1].e();                      // output with transformation
	double[]   u     = layer[1].u();                      // output without transformation
	double[][] w     = layer[1].weights();                // weights
	double[]   s     = layer[0].e();                      // input vector
	double[]   delta = new double[layer[1].neurons()-1];  // discrepancy; not for the last output (which is const=1)
	// for all output layer neurons
	for (int p = 0; p < delta.length; p++) {
	    delta[p] = -(e[p] - t[p]) * layer[1].getTrafo().phiDerivation(u[p]);
	    // for all weights
	    for (int q = 0; q < layer[0].neurons(); q++) {
		w[p][q] += lambda * delta[p] * s[q];
	    }
	}

	// adjust the hidden layer (the "Schlange" variables are the tilde-marked
	// counterparts of the output-layer quantities above)
	double[]   uSchlange = layer[0].u();
	double[][] wSchlange = layer[0].weights();
	double[]   sSchlange = inputLayer.e();
	double[]   deltaSchlange = new double[layer[0].neurons()-1];
	for (int p = 0; p < deltaSchlange.length; p++) {
	    // the derivative must come from the hidden layer's own transfer function
	    double phiPrime = layer[0].getTrafo().phiDerivation(uSchlange[p]);
	    // back-propagate the output deltas; note that w already contains the
	    // freshly updated output weights at this point
	    deltaSchlange[p] = 0;
	    for (int i = 0; i < layer[1].neurons()-1; i++) {
		deltaSchlange[p] += delta[i] * w[i][p];
	    }
	    deltaSchlange[p] *= phiPrime;
	    for (int q = 0; q < sSchlange.length; q++) {
		wSchlange[p][q] += lambda * deltaSchlange[p] * sSchlange[q];
	    }
	}

    }

    public String toString() {
	//return layer[1].toString() + "\n" + layer[0].toString();
	String str = super.toString();
	if (inputLayer != null)
	    str += " [NeuralNet: "+inputLayer.neurons()+"/"+layer[0].neurons()+"/"+layer[1].neurons()+" neurons]";
	return str;
    }

    public void writeModel(String filename) {
	throw new UnsupportedOperationException("writeModel(String) in NeuralNet not supported!");
    }

    public double predict(Example example) {
	double[] s = new double[example.getNumberOfAttributes()];
	for (int j = 0; j < s.length; j++)
	    s[j] = example.getValue(j);
	return o(s)[0];
    }

}
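The `learn()` method above performs gradient descent on the half squared error computed by `error()`, with the update `delta = -(e - t) * phi'(u)` and `w += lambda * delta * s`. As a rough, self-contained sketch of that same delta rule (a single linear neuron with a constant-1 bias input, mirroring the `IdTrafo` case in which the net actually works; the toy data and class name are illustrative, not part of YALE):

```java
// Standalone sketch of the delta-rule update used in NeuralNet.learn():
//   delta = -(e - t) * phi'(u);  w += lambda * delta * s
// For the linear transfer function (IdTrafo), phi'(u) == 1.
public class DeltaRuleSketch {

    /** Half squared error, as in NeuralNet.error(). */
    static double halfSquaredError(double[] t, double[] e) {
        double sum = 0;
        for (int i = 0; i < t.length; i++) {
            double dif = e[i] - t[i];
            sum += dif * dif;
        }
        return sum / 2;
    }

    /** Trains one linear neuron (weight plus a constant-1 bias input, like the
     *  appended 1 in each layer) on the toy target y = 2x + 1; returns {w, b}. */
    static double[] train(double lambda, int epochs) {
        double[] x = {0, 1, 2, 3};
        double[] t = {1, 3, 5, 7};
        double w = 0, b = 0;
        for (int epoch = 0; epoch < epochs; epoch++) {
            for (int i = 0; i < x.length; i++) {
                double out = w * x[i] + b;     // forward pass, linear trafo
                double delta = -(out - t[i]);  // same sign convention as learn()
                w += lambda * delta * x[i];    // w += lambda * delta * s
                b += lambda * delta;           // bias sees the appended constant 1
            }
        }
        return new double[]{w, b};
    }

    public static void main(String[] args) {
        double[] wb = train(0.05, 200);
        System.out.println("w = " + wb[0] + ", b = " + wb[1]);
    }
}
```

With a small learning rate and a few hundred epochs the weights approach `w = 2, b = 1`, which is exactly the behavior the two-layer net with `IdTrafo` exhibits on linearly representable targets.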
