
📄 RegressionSplitEvaluator.java

📁 A collection of data mining algorithms implemented in Java, including clustering, classification, preprocessing, and more
💻 JAVA
  public Object [] getKey(){

    Object [] key = new Object[KEY_SIZE];
    key[0] = m_Template.getClass().getName();
    key[1] = m_ClassifierOptions;
    key[2] = m_ClassifierVersion;
    return key;
  }

  /**
   * Gets the data types of each of the result columns produced for a
   * single run. The number of result fields must be constant
   * for a given SplitEvaluator.
   *
   * @return an array containing objects of the type of each result column.
   * The objects should be Strings, or Doubles.
   */
  public Object [] getResultTypes() {
    int addm = (m_AdditionalMeasures != null)
      ? m_AdditionalMeasures.length
      : 0;
    Object [] resultTypes = new Object[RESULT_SIZE+addm];
    Double doub = new Double(0);
    int current = 0;
    resultTypes[current++] = doub;

    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;

    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;

    // Timing stats
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;
    resultTypes[current++] = doub;

    resultTypes[current++] = "";

    // add any additional measures
    for (int i = 0; i < addm; i++) {
      resultTypes[current++] = doub;
    }
    if (current != RESULT_SIZE+addm) {
      throw new Error("ResultTypes didn't fit RESULT_SIZE");
    }
    return resultTypes;
  }

  /**
   * Gets the names of each of the result columns produced for a single run.
   * The number of result fields must be constant
   * for a given SplitEvaluator.
   *
   * @return an array containing the name of each result column
   */
  public String [] getResultNames() {
    int addm = (m_AdditionalMeasures != null)
      ? m_AdditionalMeasures.length
      : 0;
    String [] resultNames = new String[RESULT_SIZE+addm];
    int current = 0;
    resultNames[current++] = "Number_of_instances";

    // Sensitive stats - certainty of predictions
    resultNames[current++] = "Mean_absolute_error";
    resultNames[current++] = "Root_mean_squared_error";
    resultNames[current++] = "Relative_absolute_error";
    resultNames[current++] = "Root_relative_squared_error";
    resultNames[current++] = "Correlation_coefficient";

    // SF stats
    resultNames[current++] = "SF_prior_entropy";
    resultNames[current++] = "SF_scheme_entropy";
    resultNames[current++] = "SF_entropy_gain";
    resultNames[current++] = "SF_mean_prior_entropy";
    resultNames[current++] = "SF_mean_scheme_entropy";
    resultNames[current++] = "SF_mean_entropy_gain";

    // Timing stats
    resultNames[current++] = "Elapsed_Time_training";
    resultNames[current++] = "Elapsed_Time_testing";
    resultNames[current++] = "UserCPU_Time_training";
    resultNames[current++] = "UserCPU_Time_testing";

    // Classifier defined extras
    resultNames[current++] = "Summary";

    // add any additional measures
    for (int i = 0; i < addm; i++) {
      resultNames[current++] = m_AdditionalMeasures[i];
    }
    if (current != RESULT_SIZE+addm) {
      throw new Error("ResultNames didn't fit RESULT_SIZE");
    }
    return resultNames;
  }

  /**
   * Gets the results for the supplied train and test datasets. Now performs
   * a deep copy of the classifier before it is built and evaluated (just in case
   * the classifier is not initialized properly in buildClassifier()).
   *
   * @param train the training Instances.
   * @param test the testing Instances.
   * @return the results stored in an array. The objects stored in
   * the array may be Strings, Doubles, or null (for the missing value).
   * @throws Exception if a problem occurs while getting the results
   */
  public Object [] getResult(Instances train, Instances test)
    throws Exception {

    if (train.classAttribute().type() != Attribute.NUMERIC) {
      throw new Exception("Class attribute is not numeric!");
    }
    if (m_Template == null) {
      throw new Exception("No classifier has been specified");
    }

    ThreadMXBean thMonitor = ManagementFactory.getThreadMXBean();
    boolean canMeasureCPUTime = thMonitor.isThreadCpuTimeSupported();
    if (!thMonitor.isThreadCpuTimeEnabled())
      thMonitor.setThreadCpuTimeEnabled(true);

    int addm = (m_AdditionalMeasures != null) ? m_AdditionalMeasures.length : 0;
    Object [] result = new Object[RESULT_SIZE+addm];
    long thID = Thread.currentThread().getId();
    long CPUStartTime = -1, trainCPUTimeElapsed = -1, testCPUTimeElapsed = -1,
         trainTimeStart, trainTimeElapsed, testTimeStart, testTimeElapsed;

    Evaluation eval = new Evaluation(train);
    m_Classifier = Classifier.makeCopy(m_Template);

    trainTimeStart = System.currentTimeMillis();
    if (canMeasureCPUTime)
      CPUStartTime = thMonitor.getThreadUserTime(thID);
    m_Classifier.buildClassifier(train);
    if (canMeasureCPUTime)
      trainCPUTimeElapsed = thMonitor.getThreadUserTime(thID) - CPUStartTime;
    trainTimeElapsed = System.currentTimeMillis() - trainTimeStart;

    testTimeStart = System.currentTimeMillis();
    if (canMeasureCPUTime)
      CPUStartTime = thMonitor.getThreadUserTime(thID);
    eval.evaluateModel(m_Classifier, test);
    if (canMeasureCPUTime)
      testCPUTimeElapsed = thMonitor.getThreadUserTime(thID) - CPUStartTime;
    testTimeElapsed = System.currentTimeMillis() - testTimeStart;
    thMonitor = null;

    m_result = eval.toSummaryString();

    // The results stored are all per instance -- can be multiplied by the
    // number of instances to get absolute numbers
    int current = 0;
    result[current++] = new Double(eval.numInstances());

    result[current++] = new Double(eval.meanAbsoluteError());
    result[current++] = new Double(eval.rootMeanSquaredError());
    result[current++] = new Double(eval.relativeAbsoluteError());
    result[current++] = new Double(eval.rootRelativeSquaredError());
    result[current++] = new Double(eval.correlationCoefficient());

    result[current++] = new Double(eval.SFPriorEntropy());
    result[current++] = new Double(eval.SFSchemeEntropy());
    result[current++] = new Double(eval.SFEntropyGain());
    result[current++] = new Double(eval.SFMeanPriorEntropy());
    result[current++] = new Double(eval.SFMeanSchemeEntropy());
    result[current++] = new Double(eval.SFMeanEntropyGain());

    // Timing stats
    result[current++] = new Double(trainTimeElapsed / 1000.0);
    result[current++] = new Double(testTimeElapsed / 1000.0);
    if (canMeasureCPUTime) {
      result[current++] = new Double((trainCPUTimeElapsed / 1000000.0) / 1000.0);
      result[current++] = new Double((testCPUTimeElapsed / 1000000.0) / 1000.0);
    } else {
      result[current++] = new Double(Instance.missingValue());
      result[current++] = new Double(Instance.missingValue());
    }

    if (m_Classifier instanceof Summarizable) {
      result[current++] = ((Summarizable)m_Classifier).toSummaryString();
    } else {
      result[current++] = null;
    }

    for (int i = 0; i < addm; i++) {
      if (m_doesProduce[i]) {
        try {
          double dv = ((AdditionalMeasureProducer)m_Classifier).
            getMeasure(m_AdditionalMeasures[i]);
          if (!Instance.isMissingValue(dv)) {
            Double value = new Double(dv);
            result[current++] = value;
          } else {
            result[current++] = null;
          }
        } catch (Exception ex) {
          System.err.println(ex);
        }
      } else {
        result[current++] = null;
      }
    }

    if (current != RESULT_SIZE+addm) {
      throw new Error("Results didn't fit RESULT_SIZE");
    }
    return result;
  }

  /**
   * Returns the tip text for this property
   * @return tip text for this property suitable for
   * displaying in the explorer/experimenter gui
   */
  public String classifierTipText() {
    return "The classifier to use.";
  }

  /**
   * Get the value of Classifier.
   *
   * @return Value of Classifier.
   */
  public Classifier getClassifier() {
    return m_Template;
  }

  /**
   * Sets the classifier.
   *
   * @param newClassifier the new classifier to use.
   */
  public void setClassifier(Classifier newClassifier) {
    m_Template = newClassifier;
    updateOptions();
    System.err.println("RegressionSplitEvaluator: In set classifier");
  }

  /**
   * Updates the options that the current classifier is using.
   */
  protected void updateOptions() {
    if (m_Template instanceof OptionHandler) {
      m_ClassifierOptions = Utils.joinOptions(((OptionHandler)m_Template)
                                              .getOptions());
    } else {
      m_ClassifierOptions = "";
    }
    if (m_Template instanceof Serializable) {
      ObjectStreamClass obs = ObjectStreamClass.lookup(m_Template
                                                       .getClass());
      m_ClassifierVersion = "" + obs.getSerialVersionUID();
    } else {
      m_ClassifierVersion = "";
    }
  }

  /**
   * Set the Classifier to use, given it's class name. A new classifier will be
   * instantiated.
   *
   * @param newClassifierName the Classifier class name.
   * @throws Exception if the class name is invalid.
   */
  public void setClassifierName(String newClassifierName) throws Exception {
    try {
      setClassifier((Classifier)Class.forName(newClassifierName)
                    .newInstance());
    } catch (Exception ex) {
      throw new Exception("Can't find Classifier with class name: "
                          + newClassifierName);
    }
  }

  /**
   * Gets the raw output from the classifier
   * @return the raw output from the classifier
   */
  public String getRawResultOutput() {
    StringBuffer result = new StringBuffer();

    if (m_Classifier == null) {
      return "<null> classifier";
    }
    result.append(toString());
    result.append("Classifier model: \n" + m_Classifier.toString() + '\n');

    // append the performance statistics
    if (m_result != null) {
      result.append(m_result);

      if (m_doesProduce != null) {
        for (int i = 0; i < m_doesProduce.length; i++) {
          if (m_doesProduce[i]) {
            try {
              double dv = ((AdditionalMeasureProducer)m_Classifier).
                getMeasure(m_AdditionalMeasures[i]);
              if (!Instance.isMissingValue(dv)) {
                Double value = new Double(dv);
                result.append(m_AdditionalMeasures[i] + " : " + value + '\n');
              } else {
                result.append(m_AdditionalMeasures[i] + " : " + '?' + '\n');
              }
            } catch (Exception ex) {
              System.err.println(ex);
            }
          }
        }
      }
    }
    return result.toString();
  }

  /**
   * Returns a text description of the split evaluator.
   *
   * @return a text description of the split evaluator.
   */
  public String toString() {
    String result = "RegressionSplitEvaluator: ";

    if (m_Template == null) {
      return result + "<null> classifier";
    }
    return result + m_Template.getClass().getName() + " "
      + m_ClassifierOptions + "(version " + m_ClassifierVersion + ")";
  }

} // RegressionSplitEvaluator
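
The listing above is only the lower half of the class; the fields it refers to (m_Template, m_Classifier, m_AdditionalMeasures, m_doesProduce, KEY_SIZE, RESULT_SIZE, and so on) are declared earlier in the file. These methods are normally driven by Weka's Experimenter framework, but they can also be called directly. The following is a minimal usage sketch, not part of the original source: it assumes the pre-3.6 weka.classifiers / weka.experiment API that this file is written against, a hypothetical ARFF file name, and a simple ordered train/test split. It pairs getResultNames() with getResult(train, test) to print one value per result column.

import java.io.BufferedReader;
import java.io.FileReader;

import weka.classifiers.functions.LinearRegression;
import weka.core.Instances;
import weka.experiment.RegressionSplitEvaluator;

public class RegressionSplitEvaluatorDemo {

  public static void main(String[] args) throws Exception {
    // Hypothetical ARFF file; the class attribute must be numeric,
    // otherwise getResult() throws "Class attribute is not numeric!".
    Instances data = new Instances(new BufferedReader(new FileReader("cpu.arff")));
    data.setClassIndex(data.numAttributes() - 1);

    // Simple 66%/34% train/test split (assumed ratio, for illustration only).
    int trainSize = (int) Math.round(data.numInstances() * 0.66);
    Instances train = new Instances(data, 0, trainSize);
    Instances test  = new Instances(data, trainSize, data.numInstances() - trainSize);

    // setClassifier() stores the scheme as the template and records its
    // options and serialVersionUID (see updateOptions() above).
    RegressionSplitEvaluator se = new RegressionSplitEvaluator();
    se.setClassifier(new LinearRegression());

    // getResult() copies the template, builds the copy on the training data,
    // evaluates it on the test data, and returns one entry per column
    // reported by getResultNames() / getResultTypes().
    String[] names  = se.getResultNames();
    Object[] values = se.getResult(train, test);
    for (int i = 0; i < names.length; i++) {
      System.out.println(names[i] + " = " + values[i]);
    }
  }
}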
