
profiledata.pm

Implementation code for the DDNS protocol module in the network part of a video surveillance system; corrections are welcome.
Page 1 of 2
package DBI::ProfileData;

use strict;

=head1 NAME

DBI::ProfileData - manipulate DBI::ProfileDumper data dumps

=head1 SYNOPSIS

The easiest way to use this module is through the dbiprof frontend
(see L<dbiprof> for details):

  dbiprof --number 15 --sort count

This module can also be used to roll your own profile analysis:

  # load data from dbi.prof
  $prof = DBI::ProfileData->new(File => "dbi.prof");

  # get a count of the records (unique paths) in the data set
  $count = $prof->count();

  # sort by longest overall time
  $prof->sort(field => "longest");

  # sort by longest overall time, least to greatest
  $prof->sort(field => "longest", reverse => 1);

  # exclude records with key2 eq 'disconnect'
  $prof->exclude(key2 => 'disconnect');

  # exclude records with key1 matching /^UPDATE/i
  $prof->exclude(key1 => qr/^UPDATE/i);

  # remove all records except those where key1 matches /^SELECT/i
  $prof->match(key1 => qr/^SELECT/i);

  # produce a formatted report with the given number of items
  $report = $prof->report(number => 10);

  # clone the profile data set
  $clone = $prof->clone();

  # get access to hash of header values
  $header = $prof->header();

  # get access to sorted array of nodes
  $nodes = $prof->nodes();

  # format a single node in the same style as report()
  $text = $prof->format($nodes->[0]);

  # get access to Data hash in DBI::Profile format
  $Data = $prof->Data();

=head1 DESCRIPTION

This module offers the ability to read, manipulate and format
DBI::ProfileDumper profile data.

Conceptually, a profile consists of a series of records, or nodes,
each of which has a set of statistics and a set of keys.
Each record
must have a unique set of keys, but there is no requirement that every
record have the same number of keys.

=head1 METHODS

The following methods are supported by DBI::ProfileData objects.

=cut

our $VERSION = sprintf("2.%06d", q$Revision: 10007 $ =~ /(\d+)/o);

use Carp qw(croak);
use Symbol;
use Fcntl qw(:flock);

use DBI::Profile qw(dbi_profile_merge);

# some constants for use with node data arrays
sub COUNT     () { 0 };
sub TOTAL     () { 1 };
sub FIRST     () { 2 };
sub SHORTEST  () { 3 };
sub LONGEST   () { 4 };
sub FIRST_AT  () { 5 };
sub LAST_AT   () { 6 };
sub PATH      () { 7 };

my $HAS_FLOCK = (defined $ENV{DBI_PROFILE_FLOCK})
    ? $ENV{DBI_PROFILE_FLOCK}
    : do { local $@; eval { flock STDOUT, 0; 1 } };

=head2 $prof = DBI::ProfileData->new(File => "dbi.prof")

=head2 $prof = DBI::ProfileData->new(File => "dbi.prof", Filter => sub { ... })

=head2 $prof = DBI::ProfileData->new(Files => [ "dbi.prof.1", "dbi.prof.2" ])

Creates a new DBI::ProfileData object.  Takes either a single file
through the File option or a list of Files in an array ref.  If
multiple files are specified then the header data from the first file
is used.

=head3 Files

Reference to an array of file names to read.

=head3 File

Name of file to read. Takes precedence over C<Files>.

=head3 DeleteFiles

If true, the files are deleted after being read.

Actually the files are renamed with a C<.deleteme> suffix before being read,
and then, after reading all the files, they're all deleted together.

The files are locked while being read which, combined with the rename, makes it
safe to 'consume' files that are still being generated by L<DBI::ProfileDumper>.

=head3 Filter

The C<Filter> parameter can be used to supply a code reference that can
manipulate the profile data as it is being read. This is most useful for
editing SQL statements so that slightly different statements in the raw data
will be merged and aggregated in the loaded data.
For example:

  Filter => sub {
      my ($path_ref, $data_ref) = @_;
      s/foo = '.*?'/foo = '...'/ for @$path_ref;
  }

Here's an example that performs some normalization on the SQL. It converts all
numbers to C<N> and all quoted strings to C<S>.  It can also convert digits to
N within names. Finally, it summarizes long "IN (...)" clauses.

It's aggressive and simplistic, but it's often sufficient, and serves as an
example that you can tailor to suit your own needs:

  Filter => sub {
      my ($path_ref, $data_ref) = @_;
      local $_ = $path_ref->[0]; # whichever element contains the SQL Statement
      s/\b\d+\b/N/g;             # 42 -> N
      s/\b0x[0-9A-Fa-f]+\b/N/g;  # 0xFE -> N
      s/'.*?'/'S'/g;             # single quoted strings (doesn't handle escapes)
      s/".*?"/"S"/g;             # double quoted strings (doesn't handle escapes)
      # convert names like log_20001231 into log_NNNNNNNN, controlled by $opt{n}
      s/([a-z_]+)(\d{$opt{n},})/$1.('N' x length($2))/ieg if $opt{n};
      # abbreviate massive "in (...)" statements and similar
      s!(([NS],){100,})!sprintf("$2,{repeated %d times}",length($1)/2)!eg;
  }

It's often better to perform this kind of normalization in the DBI while the
data is being collected, to avoid too much memory being used by storing profile
data for many different SQL statements.
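To make the effect of the normalization regexes above concrete, here is a minimal standalone sketch (not part of the module; the sample SQL statement is invented). It applies two of the substitutions from the Filter example to one statement. Note that the digits inside C<log_20001231> survive, because C<\b\d+\b> requires a word boundary and the underscore is a word character:

```perl
use strict;
use warnings;

# sample statement (invented for illustration)
my $sql = "SELECT * FROM log_20001231 WHERE id = 42 AND name = 'bob'";

for ($sql) {
    s/\b\d+\b/N/g;   # 42 -> N  (log_20001231 untouched: no \b before its digits)
    s/'.*?'/'S'/g;   # single quoted strings -> 'S'
}

print "$sql\n";   # SELECT * FROM log_20001231 WHERE id = N AND name = 'S'
```

With this kind of filter installed, the two raw statements C<... id = 42 ...> and C<... id = 99 ...> collapse to the same path key and their timings are merged into one node.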
See L<DBI::Profile>.

=cut

sub new {
    my $pkg = shift;
    my $self = {
                Files        => [ "dbi.prof" ],
                Filter       => undef,
                DeleteFiles  => 0,
                LockFile     => $HAS_FLOCK,
                _header      => {},
                _nodes       => [],
                _node_lookup => {},
                _sort        => 'none',
                @_
               };
    bless $self, $pkg;

    # File (singular) overrides Files (plural)
    $self->{Files} = [ $self->{File} ] if exists $self->{File};

    $self->_read_files();
    return $self;
}

# read files into _header and _nodes
sub _read_files {
    my $self = shift;
    my $files  = $self->{Files};
    my $read_header = 0;
    my @files_to_delete;

    my $fh = gensym;
    foreach (@$files) {
        my $filename = $_;

        if ($self->{DeleteFiles}) {
            my $newfilename = $filename . ".deleteme";
            if ($^O eq 'VMS') {
                # VMS default filesystem can only have one period
                $newfilename = $filename . 'deleteme';
            }
            # will clobber an existing $newfilename
            rename($filename, $newfilename)
                or croak "Can't rename($filename, $newfilename): $!";
            # On a versioned filesystem we want old versions to be removed
            1 while (unlink $filename);
            $filename = $newfilename;
        }

        open($fh, "<", $filename)
          or croak("Unable to read profile file '$filename': $!");

        # lock the file in case it's still being written to
        # (we'll be forced to wait till the write is complete)
        flock($fh, LOCK_SH) if $self->{LockFile};

        if (-s $fh) {   # not empty
            $self->_read_header($fh, $filename, $read_header ? 0 : 1);
            $read_header = 1;
            $self->_read_body($fh, $filename);
        }
        close($fh); # and release lock

        push @files_to_delete, $filename
            if $self->{DeleteFiles};
    }

    for (@files_to_delete) {
        # for versioned file systems
        1 while (unlink $_);
        if (-e $_) {
            warn "Can't delete '$_': $!";
        }
    }

    # discard node_lookup now that all files are read
    delete $self->{_node_lookup};
}

# read the header from the given $fh named $filename.  Discards the
# data unless $keep.
sub _read_header {
    my ($self, $fh, $filename, $keep) = @_;

    # get profiler module id
    my $first = <$fh>;
    chomp $first;
    $self->{_profiler} = $first if $keep;

    # collect variables from the header
    local $_;
    while (<$fh>) {
        chomp;
        last unless length $_;
        /^(\S+)\s*=\s*(.*)/
          or croak("Syntax error in header in $filename line $.: $_");
        # XXX should compare new with existing (from previous file)
        # and warn if they differ (different program or path)
        $self->{_header}{$1} = unescape_key($2) if $keep;
    }
}

sub unescape_key {  # inverse of escape_key() in DBI::ProfileDumper
    local $_ = shift;
    s/(?<!\\)\\n/\n/g; # expand \n, unless it's a \\n
    s/(?<!\\)\\r/\r/g; # expand \r, unless it's a \\r
    s/\\\\/\\/g;       # \\ to \
    return $_;
}

# reads the body of the profile data
sub _read_body {
    my ($self, $fh, $filename) = @_;
    my $nodes = $self->{_nodes};
    my $lookup = $self->{_node_lookup};
    my $filter = $self->{Filter};

    # build up node array
    my @path = ("");
    my (@data, $path_key);
    local $_;
    while (<$fh>) {
        chomp;
        if (/^\+\s+(\d+)\s?(.*)/) {
            # it's a key
            my ($key, $index) = ($2, $1 - 1);
            $#path = $index;      # truncate path to new length
            $path[$index] = unescape_key($key); # place new key at end
        }
        elsif (s/^=\s+//) {
            # it's data - fill in the node array with the path in index 0
            # (the optional minus is to make it more robust against systems
            # with unstable high-res clocks - typically due to poor NTP config
            # or kernel SMP behaviour, i.e. min time may be -0.000008)
            @data = split / /, $_;

            # corrupt data?
            croak("Invalid number of fields in $filename line $.: $_")
                unless @data == 7;
            croak("Invalid leaf node characters $filename line $.: $_")
                unless m/^[-+ 0-9eE\.]+$/;

            # hook to enable pre-processing of the data - such as mangling SQL
            # so that slightly different statements get treated as the same
            # and so merged in the results
            $filter->(\@path, \@data) if $filter;

            # elements of @path can't have NULLs in them, so this
            # forms a unique string per @path.  If there's some way I
            # can get this without arbitrarily stripping out a
            # character I'd be happy to hear it!
            $path_key = join("\0",@path);

            # look for previous entry
            if (exists $lookup->{$path_key}) {
                # merge in the new data
                dbi_profile_merge($nodes->[$lookup->{$path_key}], \@data);
            } else {
                # insert a new node - nodes are arrays with data in 0-6
                # and path data after that
                push(@$nodes, [ @data, @path ]);

                # record node in %seen
                $lookup->{$path_key} = $#$nodes;
            }
        }
        else {
            croak("Invalid line type syntax error in $filename line $.: $_");
        }
    }
}

=head2 $copy = $prof->clone();

Clone a profile data set creating a new object.

=cut

sub clone {
    my $self = shift;

    # start with a simple copy
    my $clone = bless { %$self }, ref($self);

    # deep copy nodes
    $clone->{_nodes}  = [ map { [ @$_ ] } @{$self->{_nodes}} ];

    # deep copy header
    $clone->{_header} = { %{$self->{_header}} };

    return $clone;
}

=head2 $header = $prof->header();

Returns a reference to a hash of header values.
These are the key
value pairs included in the header section of the DBI::ProfileDumper
data format.  For example:

  $header = {
    Path    => [ '!Statement', '!MethodName' ],
    Program => 't/42profile_data.t',
