
nbest-optimize.1

       -no-reorder
              Do not reorder the hypotheses for alignment, and start the
              alignment with the reference words.  The default is to first
              align hypotheses by order of decreasing scores (according to
              the initial score weighting) and then the reference, which
              is more compatible with how nbest-lattice(1) operates.

       -noise noise-tag
              Designate noise-tag as a vocabulary item that is to be
              ignored in aligning hypotheses with each other (the same as
              the -pau- word).  This is typically used to identify a
              noise marker.

       -noise-vocab file
              Read several noise tags from file, instead of, or in
              addition to, the single noise tag specified by -noise.

       -hidden-vocab file
              Read a subvocabulary from file and constrain word
              alignments to only group those words that are either all
              inside or all outside the subvocabulary.  This may be used
              to keep ``hidden event'' tags from aligning with regular
              words.

       -init-lambdas 'w1 w2 ...'
              Initialize the score weights to the values specified (zeros
              are filled in for missing values).  The default is to set
              the initial acoustic model weight to 1, the language model
              weight from -rescore-lmw, the word transition weight from
              -rescore-wtw, and all remaining weights to zero.  Prefixing
              a value with an equal sign (`=') holds the value constant
              during optimization.  (All values should be enclosed in
              quotes to form a single command-line argument.)
              Hypotheses are aligned using the initial weights; thus, it
              makes sense to reoptimize with initial weights from a
              previous optimization in order to obtain alignments closer
              to the optimum.

       -alpha a
              Controls the error function smoothness; the sigmoid slope
              parameter is set to a.

       -epsilon e
              The step size used in gradient descent (the multiple of the
              gradient vector).

       -min-loss x
              Sets the loss function for a sample effectively to zero
              when its value falls below x.

       -max-delta d
              Ignores the contribution of a sample to the gradient if the
              derivative exceeds d.  This helps avoid numerical problems.

       -maxiters m
              Stops optimization after m iterations.  In Amoeba search,
              this limits the total number of points in the parameter
              space that are evaluated.

       -max-bad-iters n
              Stops optimization after n iterations during which the
              actual (non-smoothed) error has not decreased.

       -max-amoeba-restarts r
              Perform only up to r repeated Amoeba searches.  The default
              is to search until D searches give the same results, where
              D is the dimensionality of the problem.

       -max-time T
              Abort the search if a new lower-error point isn't found in
              T seconds.

       -epsilon-stepdown s
       -min-epsilon m
              If s is a value greater than zero, the learning rate will
              be multiplied by s every time the error does not decrease
              after a number of iterations specified by -max-bad-iters.
              Training stops when the learning rate falls below m in this
              manner.

       -converge x
              Stops optimization when the (smoothed) loss function
              changes relatively by less than x from one iteration to the
              next.

       -quickprop
              Use the approximate second-order method known as
              "QuickProp" (Fahlman 1989).

       -init-amoeba-simplex 's1 s2 ...'
              Defines the step size for the initial Amoeba simplex.  One
              value for each non-fixed search dimension should be
              specified, plus optionally a value for the posterior
              scaling parameter (which is searched as an added
              dimension).

       -print-hyps file
              Write the best word hypotheses to file after optimization.

       -write-rover-control file
              Writes a control file for nbest-rover to file, reflecting
              the names of the input directories and the optimized
              parameter values.  The format of file is described in
              nbest-scripts(1).  The file is rewritten for each new
              minimal-error weight combination found.

       --     Signals the end of options, such that following
              command-line arguments are interpreted as additional score
              files even if they start with `-'.

       scoredir ...
              Any additional arguments name directories containing
              further score files.  In each directory, there must exist
              one file named after the sentence ID it corresponds to (the
              file may also end in ``.gz'' and contain compressed data).
              The total number of score dimensions is thus 3 (for the
              standard scores from the N-best list) plus the number of
              additional score directories specified.

SEE ALSO
       nbest-lattice(1), nbest-scripts(1), nbest-format(5).

       S. Katagiri, C.-H. Lee, & B.-H. Juang, "A Generalized
       Probabilistic Descent Method", in Proceedings of the Acoustical
       Society of Japan, Fall Meeting, pp. 141-142, 1990.

       S. E. Fahlman, "Faster-Learning Variations on Back-Propagation:
       An Empirical Study", in D. Touretzky, G. Hinton, & T. Sejnowski
       (eds.), Proceedings of the 1988 Connectionist Models Summer
       School, pp. 38-51, Morgan Kaufmann, 1989.

       W. H. Press, B. P. Flannery, S. A. Teukolsky, & W. T. Vetterling,
       Numerical Recipes in C: The Art of Scientific Computing,
       Cambridge University Press, 1988.

BUGS
       Gradient-based optimization is not supported (yet) in 1-best mode
       or in conjunction with the -combine-linear or -non-negative
       options; use simplex-search optimization instead.

       The N-best directory in the control file output by
       -write-rover-control is inferred from the first N-best filename
       specified with -nbest-files, and will therefore only work if all
       N-best lists are placed in the same directory.

       The -insertion-weight and -word-weights options only affect the
       word error computation, not the construction of hypothesis
       alignments.  Also, they only apply to sausage-based, not 1-best
       error optimization.
       (1-best errors may be explicitly specified using the -errors
       option.)

AUTHORS
       Andreas Stolcke <stolcke@speech.sri.com>
       Dimitra Vergyri <dverg@speech.sri.com>

       Copyright 2000-2006 SRI International

SRILM Tools          $Date: 2006/11/20 20:39:15 $        nbest-optimize(1)
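The learning-rate schedule described under -epsilon, -epsilon-stepdown, -min-epsilon, -max-bad-iters, and -maxiters can be sketched as follows.  This is an illustrative Python toy, not SRILM code: the sigmoid-of-squared-distance loss (its slope standing in for -alpha) and all function names are invented stand-ins for the smoothed word-error that nbest-optimize actually minimizes.

```python
import math

# Toy stand-in for the smoothed error: a sigmoid (slope alpha, cf.
# -alpha) of the squared distance from an optimum at w = 1, shifted so
# that the minimum value is 0.
def loss(w, alpha=2.0):
    return 1.0 / (1.0 + math.exp(-alpha * (w - 1.0) ** 2)) - 0.5

def grad(w, alpha=2.0, h=1e-6):
    # Central-difference numerical gradient (illustration only).
    return (loss(w + h, alpha) - loss(w - h, alpha)) / (2.0 * h)

def optimize(w, epsilon=0.5, stepdown=0.5, min_epsilon=1e-6,
             max_bad_iters=5, maxiters=2000):
    """Gradient descent with the stepdown schedule described above:
    epsilon (-epsilon) scales the gradient step; after max_bad_iters
    (-max-bad-iters) iterations without an error decrease, epsilon is
    multiplied by stepdown (-epsilon-stepdown); training stops once
    epsilon falls below min_epsilon (-min-epsilon) or after maxiters
    (-maxiters) iterations."""
    best = loss(w)
    bad = 0
    for _ in range(maxiters):
        w -= epsilon * grad(w)        # step along the negative gradient
        cur = loss(w)
        if cur < best:
            best, bad = cur, 0        # error decreased: reset the counter
        else:
            bad += 1
            if bad >= max_bad_iters:  # error stalled for n iterations
                epsilon *= stepdown   # shrink the learning rate by s
                bad = 0
                if epsilon < min_epsilon:
                    break             # rate fell below m: stop training
    return w
```

Starting from w = 0, the sketch converges to the optimum near w = 1; the stepdown only kicks in once the error stops improving, which mirrors why -epsilon-stepdown and -min-epsilon are documented as a pair.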
