⭐ 虫虫下载站

📄 PrecisionRecallEvaluation.java

📁 An open-source Java toolkit for natural language processing. LingPipe already offers a rich set of features.
💻 JAVA
📖 Page 1 of 3
/*
 * LingPipe v. 3.5
 * Copyright (C) 2003-2008 Alias-i
 *
 * This program is licensed under the Alias-i Royalty Free License
 * Version 1 WITHOUT ANY WARRANTY, without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the Alias-i
 * Royalty Free License Version 1 for more details.
 *
 * You should have received a copy of the Alias-i Royalty Free License
 * Version 1 along with this program; if not, visit
 * http://alias-i.com/lingpipe/licenses/lingpipe-license-1.txt or contact
 * Alias-i, Inc. at 181 North 11th Street, Suite 401, Brooklyn, NY 11211,
 * +1 (718) 290-9170.
 */

package com.aliasi.classify;

import com.aliasi.stats.Statistics;

/**
 * A <code>PrecisionRecallEvaluation</code> collects and reports a
 * suite of descriptive statistics for binary classification tasks.
 * The basis of a precision-recall evaluation is a matrix of counts
 * of reference and response classifications.  Each cell in the matrix
 * corresponds to a method returning a long integer count.
 *
 * <blockquote>
 * <font size='-1'>
 * <table border='1' cellpadding='10'>
 * <tr><td colspan='2' rowspan='2' bordercolor='white'>&nbsp;</td>
 *     <td colspan='2' align='center'><b><i>Response</i></b></td>
 *     <td rowspan='2' align='center' valign='bottom'><i>Reference Totals</i></td>
 * </tr>
 * <tr>
 *     <td align='center'><i>true</i></td>
 *     <td align='center'><i>false</i></td></tr>
 * <tr><td rowspan='2'><i><b>Refer<br>-ence</b></i></td><td align='right'><i>true</i></td>
 *     <td>{@link #truePositive()} (TP)</td><td>{@link #falseNegative()} (FN)</td>
 *     <td>{@link #positiveReference()} (TP+FN)</td>
 * </tr>
 * <tr><td align='right'><i>false</i></td>
 *     <td>{@link #falsePositive()} (FP)</td><td>{@link #trueNegative()} (TN)</td>
 *     <td>{@link #negativeReference()} (FP+TN)</td>
 * </tr>
 * <tr><td colspan='2' align='right'><i>Response Totals</i></td><td>{@link #positiveResponse()} (TP+FP)</td>
 *     <td>{@link #negativeResponse()} (FN+TN)</td>
 *     <td>{@link #total()} (TP+FN+FP+TN)</td>
 * </tr>
 * </table>
 * </font>
 * </blockquote>
 *
 * The most basic statistic is accuracy, which is the number of
 * correct responses divided by the total number of cases.
 *
 * <blockquote><code>
 * <b>accuracy</b>()
 * = correct() / total()
 * </code></blockquote>
 *
 * This class derives its name from the following four statistics,
 * which are illustrated in the four tables below.
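The accuracy definition above can be checked with a few lines of standalone arithmetic. This is a hypothetical sketch over the TP/FN/FP/TN cell names from the count matrix, not the LingPipe class itself; the class name and example counts are invented for illustration.

```java
// Standalone sketch of the count matrix and the accuracy statistic.
// Cell names follow the table above: TP, FN (reference true), FP, TN (reference false).
public class AccuracyDemo {

    // accuracy() = correct() / total() = (TP + TN) / (TP + FN + FP + TN)
    static double accuracy(long tp, long fn, long fp, long tn) {
        long correct = tp + tn;
        long total = tp + fn + fp + tn;
        return (double) correct / total;
    }

    public static void main(String[] args) {
        // Example matrix: TP=9, FN=1, FP=2, TN=8 -> accuracy = 17/20 = 0.85
        System.out.println(accuracy(9, 1, 2, 8));
    }
}
```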
 *
 * <blockquote><code>
 * <b>recall</b>()
 * = truePositive() / positiveReference()
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>precision</b>()
 * = truePositive() / positiveResponse()
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>rejectionRecall</b>()
 * = trueNegative() / negativeReference()
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>rejectionPrecision</b>()
 * = trueNegative() / negativeResponse()
 * </code></blockquote>
 *
 * Each measure is defined to be the green count divided by the green
 * plus red count in the corresponding table:
 *
 * <blockquote>
 *
 * <table border='0' cellpadding='10'>
 *
 * <tr><td>
 *
 * <table border='1' cellpadding='3'>
 * <tr><td colspan='2' rowspan='2' bordercolor='white' valign='top'>
 *        <b>Recall</b>
 *     </td>
 *     <td colspan='3' align='center'><i>Response</i></td></tr>
 * <tr>
 *     <td>True</td>
 *     <td>False</td></tr>
 * <tr><td rowspan='3'><i>Refer<br>-ence</i></td><td>True</td>
 *     <td bgcolor='green'><b><big>+</big></b></td><td bgcolor='red'><b><big>-</big></b></td></tr>
 * <tr><td>False</td>
 *     <td>&nbsp;</td><td>&nbsp;</td></tr>
 * </table>
 *
 * </td><td>
 *
 * <table border='1' cellpadding='3'>
 * <tr><td colspan='2' rowspan='2' bordercolor='white' valign='top'>
 *        <b>Precision</b>
 *     </td>
 *     <td colspan='3' align='center'><i>Response</i></td></tr>
 * <tr>
 *     <td>True</td>
 *     <td>False</td></tr>
 * <tr><td rowspan='3'><i>Refer<br>-ence</i></td><td>True</td>
 *     <td bgcolor='green'><b><big>+</big></b></td><td>&nbsp;</td></tr>
 * <tr><td>False</td>
 *     <td bgcolor='red'><b><big>-</big></b></td><td>&nbsp;</td></tr>
 * </table>
 *
 * </td></tr>
 * <tr><td>
 *
 * <table border='1' cellpadding='3'>
 * <tr><td colspan='2' rowspan='2' bordercolor='white' valign='top'>
 *        <b>Rejection <br>Recall</b>
 *     </td>
 *     <td colspan='3' align='center'><i>Response</i></td></tr>
 * <tr>
 *     <td>True</td>
 *     <td>False</td></tr>
 * <tr><td rowspan='3'><i>Refer<br>-ence</i></td><td>True</td>
 *     <td>&nbsp;</td><td>&nbsp;</td></tr>
 * <tr><td>False</td>
 *     <td bgcolor='red'><b><big>-</big></b></td><td bgcolor='green'><b><big>+</big></b></td></tr>
 * </table>
 *
 * </td><td>
 *
 * <table border='1' cellpadding='3'>
 * <tr><td colspan='2' rowspan='2' bordercolor='white' valign='top'>
 *        <b>Rejection <br>Precision</b>
 *     </td>
 *     <td colspan='3' align='center'><i>Response</i></td></tr>
 * <tr>
 *     <td>True</td>
 *     <td>False</td></tr>
 * <tr><td rowspan='3'><i>Refer<br>-ence</i></td><td>True</td>
 *     <td>&nbsp;</td><td bgcolor='red'><b><big>-</big></b></td></tr>
 * <tr><td>False</td>
 *     <td>&nbsp;</td><td bgcolor='green'><b><big>+</big></b></td></tr>
 * </table>
 *
 * </td></tr></table>
 * </blockquote>
 *
 * This picture clearly illustrates the relevant
 * dualities.  Precision is the dual to recall if the reference and
 * response are switched (the matrix is transposed).  Similarly,
 * rejection recall is dual to recall with true and false labels
 * switched (reflection around each axis in turn); rejection precision is
 * similarly dual to precision.
 *
 * <P>Precision and recall may be combined by weighted harmonic
 * averaging using the f-measure statistic, with
 * <code>&beta;</code> between 0 and infinity being the relative
 * weight of precision, with 1 being a neutral value.
 *
 * <blockquote><code>
 * <b>fMeasure</b>() = fMeasure(1)
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>fMeasure</b>(&beta;)
 * = (1 + &beta;<sup><sup>2</sup></sup>) * {@link #precision()} * {@link #recall()}
 * / ({@link #recall()} + &beta;<sup><sup>2</sup></sup> * {@link #precision()})
 * </code></blockquote>
 *
 * <P>There are four traditional measures of binary classification,
 * defined below.
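The four named statistics and the F-measure above reduce to simple ratios over the matrix cells. The following is an illustrative standalone sketch (hypothetical class and counts, not the LingPipe implementation):

```java
// Standalone sketch of recall, precision, their rejection duals, and fMeasure.
public class PrfDemo {

    static double recall(long tp, long fn) {             // TP / (TP+FN)
        return (double) tp / (tp + fn);
    }

    static double precision(long tp, long fp) {          // TP / (TP+FP)
        return (double) tp / (tp + fp);
    }

    static double rejectionRecall(long tn, long fp) {    // TN / (FP+TN)
        return (double) tn / (fp + tn);
    }

    static double rejectionPrecision(long tn, long fn) { // TN / (FN+TN)
        return (double) tn / (fn + tn);
    }

    // fMeasure(beta) = (1 + beta^2) * p * r / (r + beta^2 * p)
    static double fMeasure(double beta, double p, double r) {
        double b2 = beta * beta;
        return (1.0 + b2) * p * r / (r + b2 * p);
    }

    public static void main(String[] args) {
        long tp = 9, fn = 1, fp = 2;
        double p = precision(tp, fp);  // 9/11
        double r = recall(tp, fn);     // 9/10
        // F1 matches the closed form 2*TP / (2*TP + FP + FN) = 18/21
        System.out.println(fMeasure(1.0, p, r));
    }
}
```

Note that `fMeasure(1, p, r)` is exactly the harmonic mean `2pr/(p+r)`.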
 *
 * <blockquote><code>
 * <b>fowlkesMallows</b>()
 * = truePositive() / (precision() * recall())<sup><sup>(1/2)</sup></sup>
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>jaccardCoefficient</b>()
 * = truePositive() / (total() - trueNegative())
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>yulesQ</b>()
 * = (truePositive() * trueNegative() - falsePositive() * falseNegative())
 * <br>/ (truePositive() * trueNegative() + falsePositive() * falseNegative())
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>yulesY</b>()
 * = ((truePositive() * trueNegative())<sup><sup>(1/2)</sup></sup>
 *    - (falsePositive() * falseNegative())<sup><sup>(1/2)</sup></sup>)
 * <br>/ ((truePositive() * trueNegative())<sup><sup>(1/2)</sup></sup>
 *    + (falsePositive() * falseNegative())<sup><sup>(1/2)</sup></sup>)
 * </code></blockquote>
 *
 * <P>Replacing precision and recall with their definitions,
 * <code>TP/(TP+FP)</code> and <code>TP/(TP+FN)</code>:
 *
 * <font size='-1'>
 * <pre>
 *      F<sub><sub>1</sub></sub>
 *      = 2 * (TP/(TP+FP)) * (TP/(TP+FN))
 *        / (TP/(TP+FP) + TP/(TP+FN))
 *      = 2 * (TP*TP / (TP+FP)(TP+FN))
 *        / (TP*(TP+FN)/(TP+FP)(TP+FN) + TP*(TP+FP)/(TP+FN)(TP+FP))
 *      = 2 * (TP / (TP+FP)(TP+FN))
 *        / ((TP+FN)/(TP+FP)(TP+FN) + (TP+FP)/(TP+FN)(TP+FP))
 *      = 2 * TP
 *        / ((TP+FN) + (TP+FP))
 *      = 2*TP / (2*TP + FP + FN)</pre></font>
 *
 * Thus the F<sub><sub>1</sub></sub>-measure is very closely related to the Jaccard
 * coefficient, <code>TP/(TP+FP+FN)</code>.  Like the Jaccard
 * coefficient, the F measure does not vary with varying true
 * negative counts.  Rejection precision and recall do vary with
 * changes in true negative count.
 *
 * <P>Basic reference and response likelihoods are computed by
 * frequency.
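The Jaccard coefficient and Yule's Q and Y can be sketched directly from their definitions. This is hypothetical illustration code (invented class name and counts), using the corrected Yule denominators with <code>FP*FN</code>:

```java
// Standalone sketch of the Jaccard coefficient and Yule's Q and Y.
public class AgreementDemo {

    // jaccardCoefficient() = TP / (total - TN) = TP / (TP+FP+FN)
    static double jaccardCoefficient(long tp, long fn, long fp, long tn) {
        long total = tp + fn + fp + tn;
        return (double) tp / (total - tn);
    }

    // yulesQ() = (TP*TN - FP*FN) / (TP*TN + FP*FN)
    static double yulesQ(long tp, long fn, long fp, long tn) {
        return (double) (tp * tn - fp * fn) / (tp * tn + fp * fn);
    }

    // yulesY() = (sqrt(TP*TN) - sqrt(FP*FN)) / (sqrt(TP*TN) + sqrt(FP*FN))
    static double yulesY(long tp, long fn, long fp, long tn) {
        double a = Math.sqrt((double) tp * tn);
        double b = Math.sqrt((double) fp * fn);
        return (a - b) / (a + b);
    }

    public static void main(String[] args) {
        // TP=9, FN=1, FP=2, TN=8: Q = (72-2)/(72+2) = 35/37
        System.out.println(yulesQ(9, 1, 2, 8));
        System.out.println(yulesY(9, 1, 2, 8));
    }
}
```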
 *
 * <blockquote><code>
 * <b>referenceLikelihood</b>() = positiveReference() / total()
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>responseLikelihood</b>() = positiveResponse() / total()
 * </code></blockquote>
 *
 * An algorithm that chose responses at random according to the
 * response likelihood would have the following accuracy against
 * test cases chosen at random according to the reference likelihood:
 *
 * <blockquote><code>
 * <b>randomAccuracy</b>()
 * = referenceLikelihood() * responseLikelihood()
 * + (1 - referenceLikelihood()) * (1 - responseLikelihood())
 * </code></blockquote>
 *
 * The two summands arise from the likelihood of a true positive and the
 * likelihood of a true negative.  From random accuracy, the
 * &kappa;-statistic is defined by dividing out the random accuracy
 * from the accuracy, giving a measure of performance
 * above a baseline expectation.
 *
 * <blockquote><code>
 * <b>kappa</b>()
 * = <i>kappa</i>(accuracy(),randomAccuracy())
 * </code></blockquote>
 *
 * <blockquote><code>
 * <i><b>kappa</b></i>(p,e)
 * = (p - e) / (1 - e)
 * </code></blockquote>
 *
 * <P>There are two alternative forms of the &kappa;-statistic, both
 * of which attempt to correct for putative bias in the estimation of
 * random accuracy.
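The chain from likelihoods through random accuracy to kappa can be traced numerically. A hypothetical standalone sketch (invented class name and counts, not the LingPipe API):

```java
// Standalone sketch of random accuracy and the kappa statistic.
public class KappaDemo {

    // kappa(p, e) = (p - e) / (1 - e)
    static double kappa(double p, double e) {
        return (p - e) / (1 - e);
    }

    public static void main(String[] args) {
        long tp = 9, fn = 1, fp = 2, tn = 8;
        double total = tp + fn + fp + tn;     // 20
        double accuracy = (tp + tn) / total;  // 0.85
        double refLik = (tp + fn) / total;    // referenceLikelihood() = 0.50
        double respLik = (tp + fp) / total;   // responseLikelihood()  = 0.55
        // randomAccuracy() = true-positive chance + true-negative chance
        double randomAccuracy =
            refLik * respLik + (1 - refLik) * (1 - respLik); // 0.275 + 0.225 = 0.5
        System.out.println(kappa(accuracy, randomAccuracy)); // (0.85-0.5)/(1-0.5)
    }
}
```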
 * The first involves computing the random accuracy
 * by taking the average of the reference and response likelihoods to
 * be the baseline reference and response likelihood, and squaring the
 * result to get the so-called unbiased random accuracy and the
 * unbiased &kappa;-statistic:
 *
 * <blockquote><code>
 * <b>randomAccuracyUnbiased</b>()
 * = avgLikelihood()<sup><sup>2</sup></sup>
 * + (1 - avgLikelihood())<sup><sup>2</sup></sup>
 * <br>
 * avgLikelihood() = (referenceLikelihood() + responseLikelihood()) / 2
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>kappaUnbiased</b>()
 * = <i>kappa</i>(accuracy(),randomAccuracyUnbiased())
 * </code></blockquote>
 *
 * <P>Kappa can also be adjusted for the prevalence of positive
 * reference cases, which leads to the following simple definition:
 *
 * <blockquote><code>
 * <b>kappaNoPrevalence</b>()
 * = (2 * accuracy()) - 1
 * </code></blockquote>
 *
 * <P>Pearson's &chi;<sup><sup>2</sup></sup> statistic is provided by
 * the following method:
 *
 * <blockquote><code>
 * <b>chiSquared</b>()
 * = total() * phiSquared()
 * </code></blockquote>
 *
 * <blockquote><code>
 * <b>phiSquared</b>()
 * = ((truePositive()*trueNegative()) - (falsePositive()*falseNegative()))<sup><sup>2</sup></sup>
 * <br>/ ((truePositive()+falseNegative()) * (falsePositive()+trueNegative())
 *    * (truePositive()+falsePositive()) * (falseNegative()+trueNegative()))
 * </code></blockquote>
 *
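The unbiased and no-prevalence kappa variants and the &chi;²/&phi;² statistics above can likewise be checked with standalone arithmetic. Again a hypothetical sketch (invented class name and counts); the &phi;² numerator uses the squared difference <code>(TP*TN - FP*FN)²</code>:

```java
// Standalone sketch of kappaUnbiased, kappaNoPrevalence, phiSquared and chiSquared.
public class ChiSquaredDemo {

    // phiSquared() = (TP*TN - FP*FN)^2 / ((TP+FN)(FP+TN)(TP+FP)(FN+TN))
    static double phiSquared(long tp, long fn, long fp, long tn) {
        double num = (double) tp * tn - (double) fp * fn;
        return num * num
            / ((double) (tp + fn) * (fp + tn) * (tp + fp) * (fn + tn));
    }

    // chiSquared() = total() * phiSquared()
    static double chiSquared(long tp, long fn, long fp, long tn) {
        return (tp + fn + fp + tn) * phiSquared(tp, fn, fp, tn);
    }

    // kappaUnbiased(): baseline from the averaged likelihood, squared
    static double kappaUnbiased(double accuracy, double refLik, double respLik) {
        double avg = (refLik + respLik) / 2;
        double e = avg * avg + (1 - avg) * (1 - avg);
        return (accuracy - e) / (1 - e);
    }

    // kappaNoPrevalence() = 2 * accuracy - 1
    static double kappaNoPrevalence(double accuracy) {
        return 2 * accuracy - 1;
    }

    public static void main(String[] args) {
        // TP=9, FN=1, FP=2, TN=8: phi^2 = (72-2)^2 / (10*10*11*9) = 4900/9900
        System.out.println(phiSquared(9, 1, 2, 8));
        System.out.println(chiSquared(9, 1, 2, 8));
    }
}
```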
