This book systematically introduces mixed-programming methods and techniques for MATLAB 7.0. It consists of 13 chapters. Chapters 1 and 2 cover the fundamentals of MATLAB; Chapter 3 gives a brief overview of MATLAB mixed programming; Chapters 4 through 9 each present a typical mixed-programming approach, including C-MEX, the MATLAB Engine, MAT data-file sharing, Mideva, Matrix and Add-in. Chapters 10 and 11 cover mixed programming of MATLAB with Delphi and with Excel, Chapter 12 introduces MATLAB COM Builder, and Chapter 13 presents a comprehensive application example based on image processing. The book is organized by the specific mixed-programming methods and is illustrated with examples throughout; each chapter focuses on the essence and key points of one approach, interspersed with the authors' years of experience using MATLAB. It is suitable both for beginners studying on their own and for advanced MATLAB users, and can serve as a teaching reference for courses such as advanced mathematics, computer science, electronic engineering, numerical analysis and information engineering, as well as a reference for researchers in those fields. The accompanying CD-ROM is detailed and rich in examples, containing the source files of the MATLAB examples, functions/commands with annotations, and program examples.
Upload time: 2013-12-24
Uploader: 一诺88
The text file QMLE contains the quasi-maximum likelihood estimation procedure and the Information Matrix test for a univariate GARCH(1,1) model.
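For orientation, the core of such a procedure is maximizing the Gaussian quasi log-likelihood of the GARCH(1,1) variance recursion. The MATLAB sketch below is a minimal illustration, not the uploaded QMLE code itself; the parameter names (omega, alpha, beta), the fminsearch optimiser and the initialisation are assumptions.

function theta = garch11_qmle_sketch(e)
% Minimal sketch: quasi-ML estimation of GARCH(1,1) for a zero-mean series e,
% with h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1}.
% Usage (hypothetical): theta_hat = garch11_qmle_sketch(returns - mean(returns))
theta0 = [0.1*var(e); 0.1; 0.8];                 % initial [omega; alpha; beta]
theta  = fminsearch(@(p) qnegloglik(p, e), theta0);
end

function L = qnegloglik(p, e)
omega = abs(p(1)); alpha = abs(p(2)); beta = abs(p(3));
T = numel(e); h = zeros(T, 1); h(1) = var(e);    % conditional variance recursion
for t = 2:T
    h(t) = omega + alpha*e(t-1)^2 + beta*h(t-1);
end
L = 0.5*sum(log(h) + e.^2 ./ h);                 % negative Gaussian quasi log-likelihood (up to a constant)
end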
Tags: estimating likelihood performing the
Upload time: 2014-11-22
Uploader: zhenyushaw
This toolbox was designed as a teaching aid, a purpose MATLAB is particularly good for since its source code is relatively legible and simple to modify. It is still reasonably fast if used with the supplied optimiser. However, if you really want to speed things up, you should consider compiling the matrix composition routine for H into a MEX function. Then again, if you really want to speed things up you probably shouldn't be using MATLAB anyway: get hold of a dedicated C program once you understand the algorithm.
Tags: particularly designed teaching toolbox
Upload time: 2016-11-25
Uploader: hustfanenze
PRINCIPLE: The UVE algorithm detects and eliminates from a PLS model (including from 1 to A components) those variables that do not carry any relevant information for modelling Y. The criterion used to trace the uninformative variables is the reliability of the regression coefficients, c_j = mean(b_j)/std(b_j), obtained by jackknifing. The cutoff level below which c_j is considered too small, indicating that variable j should be removed, is estimated using a matrix of random variables. The predictive power of PLS models built on the retained variables only is then evaluated over all 1 to A dimensions (yielding RMSECVnew).
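A minimal MATLAB sketch of the reliability criterion described above, assuming B holds the jackknifed PLS regression coefficients (one row per left-out segment), with the first p columns belonging to the real variables and the remaining columns to the appended random variables. The names B and p and the max-based cutoff are illustrative assumptions, not the toolbox interface.

c      = mean(B) ./ std(B);          % reliability c_j = mean(b_j)/std(b_j) for every column
c_real = c(1:p);                     % real variables
c_rand = c(p+1:end);                 % appended random (noise) variables
cutoff = max(abs(c_rand));           % cutoff estimated from the random variables
keep   = abs(c_real) > cutoff;       % variables retained for the new PLS model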
Tags: from eliminates PRINCIPLE algorithm
Upload time: 2016-11-27
Uploader: 凌云御清风
Batch version of the back-propagation algorithm. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iter]=batbp(NetDef,W1,W2,PHI,Y,trparms) trains the network with backpropagation. The activation functions must be either linear or tanh. The network architecture is defined by the matrix NetDef, consisting of two rows: the first row specifies the hidden layer while the second specifies the output layer.
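A hypothetical call, using the signature quoted above. The data, the weight initialisation, the 'H'/'L' encoding of NetDef (tanh hidden units, linear output) and the contents of trparms are assumptions; the exact conventions should be taken from the toolbox help.

PHI = randn(2, 200);                                   % example inputs: 2 inputs x 200 samples
Y   = sin(PHI(1,:).*PHI(2,:)) + 0.05*randn(1, 200);    % example targets: 1 output x 200 samples
NetDef = ['HHHHH'; 'L----'];                           % 5 tanh hidden units, 1 linear output (assumed encoding)
W1 = 0.1*randn(5, size(PHI,1)+1);                      % initial hidden-layer weights, incl. bias column
W2 = 0.1*randn(1, 5+1);                                % initial output-layer weights, incl. bias column
trparms = [500 1e-4 0.01 0];                           % placeholder training parameters, see the toolbox help
[W1, W2, critvec, iter] = batbp(NetDef, W1, W2, PHI, Y, trparms);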
Tags: back-propagation corresponding input-output algorithm
Upload time: 2016-12-27
Uploader: exxxds
Train a two-layer neural network with the Levenberg-Marquardt method. If desired, it is possible to use regularization by weight decay. Pruned (i.e., not fully connected) networks can also be trained. Given a set of corresponding input-output pairs and an initial network, [W1,W2,critvec,iteration,lambda]=marq(NetDef,W1,W2,PHI,Y,trparms) trains the network with the Levenberg-Marquardt method. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer and the second row specifies the output layer.
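With NetDef, W1, W2, PHI and Y defined as in the batbp example earlier in this list, a hypothetical invocation looks as follows; trparms is again a placeholder whose exact format (including the weight-decay setting) should be taken from the toolbox help.

[W1, W2, critvec, iteration, lambda] = marq(NetDef, W1, W2, PHI, Y, trparms);   % same arguments as batbp, extra lambda output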
Tags: Levenberg-Marquardt desired network neural
Upload time: 2016-12-27
Uploader: jcljkh
Train a two-layer neural network with a recursive prediction error algorithm ("recursive Gauss-Newton"). Pruned (i.e., not fully connected) networks can also be trained. The activation functions can be either linear or tanh. The network architecture is defined by the matrix NetDef, which has two rows: the first row specifies the hidden layer while the second specifies the output layer.
Tags: recursive prediction algorithm Gauss-Newton
Upload time: 2016-12-27
Uploader: ljt101007
Mapack can be used for matrix computations. Mapack is a .NET class library for basic linear algebra computations. It supports the following matrix operations and properties: Multiplication, Addition, Subtraction, Determinant, Norm1, Norm2, Frobenius Norm, Infinity Norm, Rank, Condition, Trace, Cholesky, LU, QR, Singular Value Decomposition, Least Squares solver, Eigenproblem solver, and Equation System solver. The algorithms were adapted from Mapack for COM, LAPACK and the Java Matrix Package.
Tags: Mapack computations supports algebra
Upload time: 2017-01-26
Uploader: tb_6877751
The software can simulate the space-time code of [1] for QPSK modulation with different numbers of states. Examples of generator matrices with up to 256 states are provided. A variable signal-to-noise ratio (SNR) can be applied to produce bit error rate (BER) or frame error rate (FER) curves.
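For orientation only, the sketch below shows how a BER curve over a range of SNR values is typically produced in MATLAB. It uses uncoded QPSK over an AWGN channel, not the space-time trellis code implemented by this software, and all names and values are illustrative.

EbN0_dB = 0:2:10;  nBits = 2e5;  ber = zeros(size(EbN0_dB));
for k = 1:numel(EbN0_dB)
    bits = rand(1, nBits) > 0.5;                           % random bit stream
    s = (1 - 2*bits(1:2:end)) + 1j*(1 - 2*bits(2:2:end));  % Gray-mapped QPSK symbols
    s = s / sqrt(2);                                       % unit-energy symbols
    EsN0 = 2 * 10^(EbN0_dB(k)/10);                         % 2 bits per QPSK symbol
    n = sqrt(1/(2*EsN0)) * (randn(size(s)) + 1j*randn(size(s)));
    r = s + n;                                             % AWGN channel
    bhat = zeros(1, nBits);
    bhat(1:2:end) = real(r) < 0;  bhat(2:2:end) = imag(r) < 0;   % symbol-wise detection
    ber(k) = mean(bhat ~= bits);                           % bit error rate at this SNR
end
semilogy(EbN0_dB, ber, '-o'); grid on
xlabel('E_b/N_0 (dB)'); ylabel('BER')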
Tags: modulation different software simulate
Upload time: 2014-01-22
Uploader: qq1604324866
SuperLU is a general purpose library for the direct solution of large, sparse, nonsymmetric systems of linear equations on high performance machines. The library is written in C and is callable from either C or Fortran. The library routines will perform an LU decomposition with partial pivoting and triangular system solves through forward and back substitution. The LU factorization routines can handle non-square matrices but the triangular solves are performed only for square matrices. The matrix columns may be preordered (before factorization) either through library or user supplied routines. This preordering for sparsity is completely separate from the factorization. Working precision iterative refinement subroutines are provided for improved backward stability. Routines are also provided to equilibrate the system, estimate the condition number, calculate the relative backward error, and estimate error bounds for the refined solutions.
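Not SuperLU itself, but the same workflow can be illustrated with MATLAB's built-in sparse factorization: preorder the columns for sparsity, factor with partial pivoting, then solve by forward and back substitution. The test matrix, sizes and ordering routine below are arbitrary choices for the sketch.

A = sprandn(1000, 1000, 0.01) + 10*speye(1000);   % arbitrary sparse, reasonably conditioned test matrix
b = randn(1000, 1);
q = colamd(A);                                    % column preordering for sparsity (before factorization)
[L, U, P] = lu(A(:, q));                          % LU with partial pivoting: P*A(:,q) = L*U
y = L \ (P*b);                                    % forward substitution
x = zeros(1000, 1);
x(q) = U \ y;                                     % back substitution, then undo the column ordering
norm(A*x - b)                                     % residual check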
Tags: nonsymmetric solution SuperLU general
Upload time: 2017-02-20
Uploader: lepoke