
📄 nn_var_usg.c

📁 Fast Fourier Transform program code; students of signal processing should take note
💻 C
  DNT; fprintf( fp, "-nnr report <%d>               (reporting style     )", nc->report);
  DNT; fprintf( fp, "-nnv verbose <%d>              (verbosity 0/1/2     )", nc->verbose);
  DNT; fprintf( fp, "-nndv decverbose <%d>          (verbosity in decoding)", nc->decverbose);
  DNT; fprintf( fp, "-nndwrite verbosity <%d>       (write decoding info to file)", nc->decwrite);
  DNT; fprintf( fp, "-nndecfile file               (file for decoding info)");

  NLNE; fprintf( fp, " Neural net initialization:");
  DNT; fprintf( fp, "-nnread read <%d>              (whether to read wts )", nc->read);
  DNT; fprintf( fp, "-nnin infile                  (weights from (instead of default))");
  DNT; fprintf( fp, "-nnout outfile                (weights out (instead of default))");
  DNT; fprintf( fp, "-nninit rule <%d>              (how to init wts     )", nc->init_rule);
  DNT; fprintf( fp, "-nndef_w def_w <%9.3g> (default initial weight)", nc->def_w);
  DNT; fprintf( fp, "-nndef_b def_b <%9.3g> (default initial bias)", nc->def_b);
  DNT; fprintf( fp, "-nnsigmaw0 sigma <%9.3g>(initial random wts  )", nc->sigma_w0);
  DNT; fprintf( fp, "-nnwseed wseed <%ld>        (weight randomization)", nc->wseed);

  NLNE; fprintf( fp, " Neural net training:");
  DNT; fprintf( fp, "-nntrain train <%d>            (whether to train    )", nc->train);
  DNT; fprintf( fp, "-nnn n <%d>                  (training number     )", nc->train_n);
  DNT; fprintf( fp, "-nntn n <%d>                (test number         )", nc->test_n);
  DNT; fprintf( fp, "-nntrseed trseed <%ld>      (defines training set)", nc->trseed);
  DNT; fprintf( fp, "-nnteseed teseed <%ld>    (test set            )", nc->teseed);
  DNT; fprintf( fp, "-nnregularize r <%d>           (type of regularization 0/1/2)", nc->regularize);
  DNT; fprintf( fp, "-nna1 a1 <%9.3g>       (regularization of bias)", nc->alpha[1]);
  DNT; fprintf( fp, "-nna2 a2 <%9.3g>       (regularization of inps)", nc->alpha[2]);
  DNT; fprintf( fp, "-nna3 a3 <%9.3g>       (regularization of 2nd type inps)", nc->alpha[3]);

  NLNE; fprintf( fp, " Neural net optimizer:");
  DNT; fprintf( fp, "-nnopt opt <%d>                (macopt1 or 2        )", nc->opt);
  DNT; fprintf( fp, "-nnloops loops <%d>            (Number of macopt runs)", nc->LOOP);
  DNT; fprintf( fp, "-nnitmax itmax <%d>          (max no line searches)", nc->itmax);
  DNT; fprintf( fp, "-nntolmin tolmin <%9.3g>(final tolerance in training)", nc->tolmin);
  DNT; fprintf( fp, "-nntol0 tol0 <%9.3g>   (initial tolerance in training)", nc->tol0);
  DNT; fprintf( fp, "-nnrich rich <%d>              (expensive optimizer?)", nc->rich);
  DNT; fprintf( fp, "-nneos eos <%d>                (termination condition is that step is small)", nc->end_on_step);
  DNT; fprintf( fp, "-cg cg <%d>                    (whether to check gradient, on how many)", nc->CG);
  DNT; fprintf( fp, "-nneps epsilon <%9.3g> (epsilon for check gradient)", nc->epsilon);
  DNT; fprintf( fp, "-nnevalH evalH <%d>            (evaluate hard performance measures)", nc->evalH);

  NLNE; fprintf( fp, " Neural net decoding procedure:");
  DNT; fprintf( fp, "-nnhp hp <%d>                  (1=if threshold exceeded; 2=sort)", nc->hitlist_policy);
  DNT; fprintf( fp, "-nnhpt t <%9.3g>       (hitlist threshold   )", nc->hitlist_thresh);
  DNT; fprintf( fp, "-nnhpn n <%d>                 (number to aim to hit)", nc->hitlist_n);
  DNT; fprintf( fp, "-nnhpl l <%9.3g>       (-                   )", nc->hitlist_low);
  DNT; fprintf( fp, "-nndecodits its <%d>          (max number of iterations to do when decoding)", nc->decodits);
  DNT; fprintf( fp, "-nndecodn n <%d>            (number of examples to try to decode)", nc->decodn);
  DNT; fprintf( fp, "-nndecodseed seed <%ld>   (seed for decoding tests)", nc->decodseed);
  DNT; fprintf( fp, "-nnthresh thresh <%9.3g>(hard decision boundary)", net->thresh);
