
revisions.txt

    regardless of how many labels are placed in an example or where they are
    found.
  - when evaluating performance, SNoW will now consider its prediction correct
    if it finds the target ID it predicted anywhere in the example.
    previously, the label had to be the first feature ID in the example, and
    additional labels beyond that were ignored.
  - a significant memory consumption optimization affecting the way examples
    are stored in memory has been implemented.  it's not very noticeable with
    '-M -' (which is the default), but it's very noticeable with '-M +'.
  - a small performance optimization affecting training has also been
    implemented.  when reading a labeled example, target IDs are automatically
    moved to the beginning of the example.  this way, when determining if a
    given target ID is active in the example during training, only the
    beginning of the example needs to be searched.
  - the '-S' thick separator parameter now takes two arguments in the form
    '-S <p>[,<n>]' where p and n are floating point values.  a target node's
    prediction is correct if its ID is active and its activation is greater
    than or equal to threshold + p.  it is also correct if the ID is not
    active and the activation is less than threshold - n.  if n is not
    specified, it is set equal to p.  (a small code sketch follows the
    v3.0.5 entries below.)
  - bug fixes:
    - the constraint classification algorithm (a.k.a. "ordered targets mode",
      '-O +') was not operating correctly in conjunction with '-M +' (when
      all examples are stored in memory).  now it is.
    - '-test' mode was not handling the '-e percent:<k>' flag correctly (or
      at all).  now it is.
    - there were various bugs in the network specification parsing algorithm.
      these bugs did not affect the way properly structured network
      specifications were parsed.  however, now that the bugs are fixed, SNoW
      handles improperly structured network specifications more gracefully
      (i.e. it doesn't seg fault).
    - well, maybe it's not really a bug, but i think it's better for
      percentage eligibility to declare features 'pending' instead of
      'discarded' when pruning after the first cycle.  that way, setting
      '-a +' can still have meaning.

2/28/03 (v3.0.3)
  - fixed a bug in average example size calculation: the fixed feature should
    not be counted, but it was being counted.
  - added support for a single target.  testing output will report "positive"
    to indicate an activation larger than the threshold, and "negative" to
    indicate an activation smaller than the threshold.

5/19/03 (v3.0.4)
  - got rid of the dynamic casting in Network::TrainingComplete(), which was
    fussy in MSVC++.  in order to get rid of it, it was necessary to make
    ClearTotalPrior() a virtual function and add it to all learning
    algorithms.
  - the '-m' multipleLabels flag was not affecting -test mode correctly.
    when set to '-', it now only counts an example as correct if the
    predicted target is the first target found in the example.  the default
    setting ('+') was not changed.

5/21/03 (v3.0.5)
  - fixed a bug in the '-m' bug fix from v3.0.4.  this bug could potentially
    cause an out-of-bounds array access when testing an example that contains
    only target IDs.  examples that contained at least one feature ID that is
    not a target were unaffected.
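the thick-separator test described under '-S' above is a simple asymmetric
decision rule.  a minimal C++ sketch, with illustrative names that are not
SNoW's actual internals:

    // correctness test for one target node under '-S <p>[,<n>]':
    // an active target's activation must reach threshold + p; an inactive
    // target's activation must stay below threshold - n (n defaults to p
    // when unspecified)
    bool predictionCorrect(bool targetActive, double activation,
                           double threshold, double p, double n) {
        if (targetActive)
            return activation >= threshold + p;
        return activation < threshold - n;
    }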
1/6/04 (v3.1.0)
  - the suffix added to the conjunction output file has been changed from
    '.blowup' to '.conjunctions'.  (see the -g command line parameter.)
  - added the '-L' command line parameter for limiting the number of target
    IDs displayed during output with various '-o' settings.
  - algorithmic additions and modifications:
    - the gradient descent algorithm has been added.  specify '-G +' on the
      command line to enable it in conjunction with either perceptron or
      winnow.
    - the Perceptron threshold relative update rule (see the '-t' command
      line parameter) has been changed to:

        w_i += (learning_rate + threshold - activation)
               / #_of_active_features_in_example

      which more closely resembles the concept behind the Winnow threshold
      relative update rule.
    - the naive Bayes sigmoid function (which previously made no sense and
      didn't even work as intended anyway because of a bug discussed below)
      has been changed to simply return the activation of the target.
    - the target confidence update formula has been changed to:

        confidence = confidence / (1 + 2 / (100 + mistakes))

      we feel this formula is better than the old formula because it does
      not depend on the number of examples seen so far, only on the number
      of mistakes.  (both of these update rules are sketched in code after
      this version's entries.)
    - confidence is no longer normalized after every 1000 examples during
      training, since the new confidence formula won't make confidences get
      too small too quickly.
  - compilation related changes:
    - added preprocessor directives in Target.h to make SNoW's hash_maps
      work with GCC 3.0 and higher.
    - rewrote the Makefile to improve its flexibility and ease of
      maintenance.  a user can now simply type 'gmake CXX=mycplusplus' to
      force compilation with a particular compiler and 'gmake SERVER=1' to
      include server functionality in the build.  a developer can now simply
      type 'gmake dist' to create the SNoW_vX.X.X.tar.gz distribution
      tarball in the parent directory.
  - bug fixes:
    - the Winnow and Perceptron threshold relative update rules were
      incorrectly dependent on feature strengths.  now, neither involves a
      feature strength (although strengths are still used to calculate dot
      products regardless of the update rule).
    - the feature discarding routines weren't actually discarding any
      features.  now they do it as advertised.
    - the "threshold" variable in the LearningAlgorithm class used to be
      used by the NaiveBayes class to store the result of a calculation
      based on a target's prior probability.  since each target has a
      different prior probability, there should have been one NaiveBayes
      object instantiated for each target learned by Naive Bayes, but there
      wasn't.
      in case you're interested, the calculation whose result was stored in
      that "threshold" variable was related to the Naive Bayes sigmoid
      function, which used to be (2 * activation / prior).  it used to be
      the case that this "prior" variable took the same setting during every
      target's sigmoid activation calculation (which isn't such a big deal,
      but it also wasn't the intended behavior).  but it's all a moot point
      in this version of the code, since we've finally decided to eliminate
      sigmoid activations in naive Bayes (see the algorithmic modification
      bullet above).
  - performance improvements:
    - the target confidence calculation in the Winnow and Perceptron
      Update() functions is now only performed on targets that are not
      alone in their cloud.  users who use more than one target per cloud
      will actually see a performance penalty of one comparison per call to
      Update(), but we believe users ignore SNoW's cloud mechanism much more
      often than not.
    - wherever post-increment appeared in the code and was found to be
      semantically equivalent to pre-increment, it was replaced with
      pre-increment.  this isn't such a big deal for scalars, but it can
      save a lot of time for STL iterators.
    - removed the ClearTotalPrior() function from all LearningAlgorithm
      classes, since it was called in a loop but didn't serve any
      discernibly useful purpose.
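both v3.1.0 update rules above can be written down compactly.  a minimal C++
sketch, assuming illustrative names and signatures that are not SNoW's
actual internals:

    #include <cstddef>
    #include <vector>

    // threshold relative Perceptron update: the same correction is added to
    // the weight of every active feature, independent of feature strengths;
    // assumes at least one active feature
    void thresholdRelativeUpdate(std::vector<double>& weights,
                                 const std::vector<std::size_t>& activeFeatures,
                                 double learningRate, double threshold,
                                 double activation) {
        const double delta = (learningRate + threshold - activation)
                             / static_cast<double>(activeFeatures.size());
        for (std::size_t f : activeFeatures)
            weights[f] += delta;
    }

    // revised confidence update: depends only on the mistake count, not on
    // the number of examples seen so far
    double updatedConfidence(double confidence, unsigned long mistakes) {
        return confidence / (1.0 + 2.0 / (100.0 + mistakes));
    }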
1/30/04 (v3.1.1)
  - bug fix: an important line of code in the procedure that reads a target
    representation from a network file was mistakenly removed while updating
    the code for the prior release.  this caused a segmentation fault when
    testing a network that contains more than one target running the same
    learning algorithm.

3/7/04 (v3.1.2)
  - bug fix: an improper use of ostrstream caused some garbage in the error
    file output after the prediction of each example.
  - by popular demand, the new winnow sigmoid function has been removed, and
    the old, more standard sigmoid function is back.

4/1/04 (v3.1.3)
  - bug fix: the -z "raw mode" command line parameter was not behaving as
    the user's manual says it would (which is how it was intended to
    behave).  now it works just as advertised.

6/21/04 (v3.1.4)
  - added the softmax normalization option to SNoW's output during testing.
    there is both a new output mode called '-o softmax' on the command line,
    and a new column of output in the '-o allactivations' output mode.  (a
    sketch of the normalization appears after the v3.2.1 entries below.)
  - added a conservative version of the constraint classification algorithm.
    in this version of the algorithm, a maximum of one update per example is
    made.  if the first label in the example does not agree with the target
    with the highest activation, then the latter will be demoted and the
    former will be promoted.
  - a new "interactive" mode has been added to SNoW in which the user is
    given precise control over the promotion and demotion decisions for
    each example.
  - bug fix: when the user does not specify any algorithms on the command
    line, a default algorithm is supposed to be instantiated and used for
    all targets.  this wasn't working correctly.

8/1/04 (v3.2.0)
  - SNoW is now a class library: Snow (and other classes) can be used in
    c++ code.  For an example, see Main.cpp.
  - Makefile still creates an executable that is backward compatible with
    older versions of SNoW.

8/12/05 (v3.2.1)
  - added Voted Perceptron option
  - made many small changes to match Nick Rizzolo's branch
  - fixed bug in server mode (stringstream ctor arg was 'in' instead of
    'out')
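for reference, the '-o softmax' normalization added in v3.1.4 (see above)
maps each target's activation a_i to exp(a_i) / sum_j exp(a_j).  a minimal
C++ sketch; subtracting the maximum activation first is a standard numerical
stability step and an assumption here, not something confirmed from SNoW's
source:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // normalize target activations into a probability-like distribution;
    // assumes at least one activation
    std::vector<double> softmax(const std::vector<double>& activations) {
        // shifting by the max keeps exp() from overflowing
        const double mx = *std::max_element(activations.begin(),
                                            activations.end());
        std::vector<double> out(activations.size());
        double sum = 0.0;
        for (std::size_t i = 0; i < activations.size(); ++i) {
            out[i] = std::exp(activations[i] - mx);
            sum += out[i];
        }
        for (double& v : out)
            v /= sum;
        return out;
    }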
