
📄 page_99.html

📁 How to Mine Your Web Site's Content. The only book in this field.
Page 99

… automatic interaction detection), ID3 (Iterative Dichotomizer 3), C4.5, or C5.0 will segment a data set into statistically significant clusters of classes based on a desired output. As noted, some of these tools generate "decision trees" that provide a graphical breakdown of a data set (a sort of map of significant clusters), while others produce IF/THEN rules, which segment a data set into classes that can point out important ranges and features. Such a rule has two parts, a condition (IF) and a result (THEN), and is represented as a statement:

If Customer code is 03
And Number of purchases made this year is 06
And ZIP Code is 79905
Then PROSPECT will purchase Product X

Rule's probability: .88
The rule exists in 13,000 records.
Significance level: Error probability < 0.13
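As a concrete illustration, a rule of this form could be encoded and used to score prospect records roughly as sketched below. This is a minimal sketch, not taken from the book: the Rule class, field names, and record layout are hypothetical, chosen only to mirror the example above.

```python
# Minimal sketch (not from the book) of representing and applying an induced
# IF/THEN rule such as the one above. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class Rule:
    conditions: dict      # field -> required value (the IF/AND part)
    conclusion: str       # the THEN part
    probability: float    # confidence reported for the rule
    support: int          # number of records the rule covers

    def matches(self, record: dict) -> bool:
        """True when every condition holds for the record."""
        return all(record.get(field) == value
                   for field, value in self.conditions.items())

# The example rule from the text, encoded directly.
rule = Rule(
    conditions={"customer_code": "03",
                "purchases_this_year": 6,
                "zip_code": "79905"},
    conclusion="PROSPECT will purchase Product X",
    probability=0.88,
    support=13_000,
)

prospect = {"customer_code": "03", "purchases_this_year": 6, "zip_code": "79905"}
if rule.matches(prospect):
    print(f"{rule.conclusion} (p = {rule.probability})")
```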
A Measure of Information

Most of these data mining tools evolved from two lines of research and development: one in statistical decision trees, the other in machine learning, exemplified by ID3. Computer scientist J. Ross Quinlan, with his ID3 system, introduced and popularized the concept behind today's data mining decision tree tools. There are two main types of decision trees: binary and multiple-branch. A binary decision tree splits from a node in two directions, with each node representing a yes-or-no question. ID3 is a program that can build trees automatically from given positive and negative instances; each leaf of a decision tree asserts a positive or negative concept.

To classify a particular input, we start at the top of the tree and follow the assertions down until we reach an answer. In the ID3 algorithm, for example, the sequence of fields selected as predictors is determined by a measure of entropy. This information measure is best for the attribute whose value, when known, minimizes the departure of your guess, or prediction, from the actual value of the dependent variable. Other fields are then examined for their ability to further differentiate values in the descendant nodes of the resulting tree. The …
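The entropy-based selection described above can be made concrete with a short sketch. This is not the book's code: the toy records and field names are invented, and the functions simply compute Shannon entropy and ID3-style information gain so you can see which attribute a tree would branch on first.

```python
# Minimal sketch of ID3-style attribute selection: pick the field whose known
# value most reduces uncertainty (entropy) about the dependent variable.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy, in bits, of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in Counter(labels).values())

def information_gain(records, attribute, target):
    """Reduction in entropy of `target` obtained by splitting on `attribute`."""
    base = entropy([r[target] for r in records])
    remainder = 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[target] for r in records if r[attribute] == value]
        remainder += len(subset) / len(records) * entropy(subset)
    return base - remainder

# Toy purchase data with hypothetical fields; ID3 would branch first on the
# attribute with the highest gain, then recurse on the descendant nodes.
records = [
    {"customer_code": "03", "zip_code": "79905", "buys_x": "yes"},
    {"customer_code": "03", "zip_code": "79905", "buys_x": "yes"},
    {"customer_code": "07", "zip_code": "10001", "buys_x": "no"},
    {"customer_code": "07", "zip_code": "79905", "buys_x": "no"},
]
for attr in ("customer_code", "zip_code"):
    print(attr, round(information_gain(records, attr, "buys_x"), 3))
```

Running this prints a gain of 1.0 for customer_code and about 0.311 for zip_code, so an ID3-style tree would split on customer_code first.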
