inviso_lfm.sgml

<!doctype erlref PUBLIC "-//Stork//DTD erlref//EN">
<erlref>
  <header>
    <title>inviso_lfm</title>
    <prepared></prepared>
    <docno></docno>
    <date></date>
    <rev></rev>
  </header>
  <module>inviso_lfm</module>
  <modulesummary>An Inviso Off-Line Logfile Merger</modulesummary>
  <description>
    <p>Implements an off-line logfile merger, merging binary trace-log files from several nodes together in chronological order. The logfile merger can also do pid-to-alias translations.</p>

    <p>The logfile merger is supposed to be called from the Erlang shell or from a higher-layer trace tool. For it to work, all logfiles and trace information files (containing the pid-alias associations) must be located in a file system accessible from this node and organized according to the API description.</p>

    <p>The logfile merger starts a process, the output process, which in turn starts one reader process for every node whose logfiles shall be merged. Note that the reason for having one process per node is not remote communication (the logfile merger is an off-line utility) but the need to sort the logfile entries from the different nodes in chronological order.</p>

    <p>The logfile merger can be customized both in the implementation of the reader processes and in the output that the output process generates for every logfile entry.</p>
  </description>

  <funcs>
    <func>
      <name>merge(Files, OutFile) -></name>
      <name>merge(Files, WorkHFun, InitHandlerData) -></name>
      <name>merge(Files, BeginHFun, WorkHFun, EndHFun, InitHandlerData)
	-> {ok, Count} | {error, Reason}</name>
      <fsummary>Merge logfiles into one file in chronological order
      </fsummary>
      <type>
	<v>Files = [FileDescription]</v>
	<v>&nbsp;FileDescription = FileSet | {reader,RMod,RFunc,FileSet}
	</v>
	<v>&nbsp;&nbsp;FileSet = {Node,LogFiles} | {Node,[LogFiles]}</v>
	<v>&nbsp;&nbsp;&nbsp;Node = atom()</v>
	<v>&nbsp;&nbsp;&nbsp;LogFiles =
	  [{trace_log,[FileName]}] | [{trace_log,[FileName]},{ti_log,TiFileSpec}]</v>
	<v>&nbsp;&nbsp;&nbsp;&nbsp;TiFileSpec = [string()] - a list of exactly one string.</v>
	<v>&nbsp;&nbsp;&nbsp;&nbsp;FileName = string()</v>
	<v>&nbsp;&nbsp;RMod = RFunc = atom()</v>
	<v>OutFile = string()</v>
	<v>BeginHFun = fun(InitHandlerData) ->
	  {ok, NewHandlerData} | {error, Reason}</v>
	<v>WorkHFun = fun(Node, LogEntry, PidMappings, HandlerData) ->
	  {ok, NewHandlerData}</v>
	<v>&nbsp;LogEntry = tuple()</v>
	<v>&nbsp;PidMappings = term()</v>
	<v>EndHFun = fun(HandlerData) -> ok | {error, Reason}</v>
	<v>Count = int()</v>
	<v>Reason = term()</v>
      </type>
      <desc>
	<p>Merges the logfiles in <c>Files</c> together into one file in chronological order. The logfile merger consists of an output process and one or several reader processes.</p>

	<p>Returns <c>{ok, Count}</c> where <c>Count</c> is the total number of log entries processed, if successful.</p>

	<p>When specifying <c>LogFiles</c>, currently the standard reader-process only supports:</p>
	<list>
	  <item>one single file</item>
	  <item>a list of wraplog files, following the naming convention <c>&lt;Prefix&gt;&lt;Nr&gt;&lt;Suffix&gt;</c>.</item>
	</list>
	<p>Note that (when using the standard reader process) it is possible to give a list of <c>LogFiles</c>. The list must be sorted starting with the oldest. This will cause several trace-logs (from the same node) to be merged together in the same <c>OutFile</c>. The reader process will simply start reading the next file (or wrapset) when the previous is done.</p>
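	<p>As a concrete illustration (node and file names here are hypothetical), a <c>Files</c> argument merging a trace-log plus ti-file from one node with a sorted list of trace-logs from another could look like this:</p>

```erlang
%% Hypothetical node and file names; merge/2 uses the default output process.
Files = [{node1@host,
          [{trace_log, ["node1.log"]},
           {ti_log, ["node1.ti"]}]},
         {node2@host,
          [[{trace_log, ["node2_a.log"]}],      % oldest log first
           [{trace_log, ["node2_b.log"]}]]}],
{ok, Count} = inviso_lfm:merge(Files, "merged.txt").
```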

	<p><c>FileDescription == {reader,RMod,RFunc,FileSet}</c> indicates that <c>spawn(RMod, RFunc, [OutputPid,LogFiles])</c> shall create a reader process.</p>

	<p>The output process is customized with <c>BeginHFun</c>, <c>WorkHFun</c> and <c>EndHFun</c>. If using <c>merge/2</c> a default output process configuration is used, basically creating a text file and writing the output line by line. <c>BeginHFun</c> is called once before requesting log entries from the reader processes. <c>WorkHFun</c> is called for every log entry (trace message) <c>LogEntry</c>. Here the log entry typically gets written to the output. <c>PidMappings</c> is the translations produced by the reader process. <c>EndHFun</c> is called when all reader processes have terminated.</p>
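	<p>As a sketch of the customization (names are illustrative; the handler data is simply an open io-device here), <c>merge/5</c> could be called with handler funs that open a text file, print every log entry to it, and finally close it:</p>

```erlang
%% Illustrative handler funs; HandlerData is the file handle of the output file.
BeginHFun = fun(FileName) ->
                file:open(FileName, [write])    % {ok, FD} | {error, Reason}
            end,
WorkHFun = fun(Node, LogEntry, PidMappings, FD) ->
               io:format(FD, "~w ~w ~p~n", [Node, PidMappings, LogEntry]),
               {ok, FD}
           end,
EndHFun = fun(FD) -> file:close(FD) end,
{ok, Count} =
    inviso_lfm:merge(Files, BeginHFun, WorkHFun, EndHFun, "merged.txt").
```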

	<p>Currently the standard reader can only handle one ti-file per <c>LogFiles</c>. Further, the current inviso meta tracer is not capable of wrapping ti-files. (A wrapped ti-log would most likely be worthless anyway, since alias associations made at the beginning would be erased but still referenced in the trace-log.)</p>

	<p>The standard reader process is implemented in the module <c>inviso_lfm_tpreader</c> (trace port reader). It understands Erlang linked in trace-port driver generated trace-logs and <c>inviso_rt_meta</c> generated trace information files.</p>
      </desc>
    </func>
  </funcs>

  <section>
    <title>Writing Your Own Reader Process</title>

    <p>Writing a reader process is not that difficult. It must:</p>
    <list>
      <item>Export an init-like function accepting two arguments: the pid of the output process and the <c>LogFiles</c> component. <c>LogFiles</c> is actually used only by the reader processes, making it possible to redefine <c>LogFiles</c> when implementing your own reader process.</item>

      <item>Respond to <c>{get_next_entry, OutputPid}</c> messages with <c>{next_entry, self(), PidMappings, NowTimeStamp, Term}</c> or <c>{next_entry, self(), {error,Reason}}</c>.</item>

      <item>Terminate normally when no more log entries are available.</item>

      <item>Terminate on an incoming EXIT-signal from <c>OutputPid</c>.</item>
    </list>

    <p>The reader process must of course understand the format of a logfile written by the runtime component.</p>
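    <p>A minimal reader-process skeleton following the protocol above could look like this. The parsing of the actual logfiles (<c>read_entries/1</c> below) is left as a hypothetical stub, since it depends entirely on the logfile format:</p>

```erlang
-module(my_reader).
-export([init/2]).

%% Started by the output process via {reader, my_reader, init, FileSet},
%% i.e. spawn(my_reader, init, [OutputPid, LogFiles]).
init(OutputPid, LogFiles) ->
    process_flag(trap_exit, true),          % turn EXIT-signals into messages
    Entries = read_entries(LogFiles),
    loop(OutputPid, Entries).

%% No more log entries available: terminate normally.
loop(_OutputPid, []) ->
    ok;
loop(OutputPid, [{TimeStamp, Term} | Rest]) ->
    receive
        {get_next_entry, OutputPid} ->
            PidMappings = [],               % no alias translations in this sketch
            OutputPid ! {next_entry, self(), PidMappings, TimeStamp, Term},
            loop(OutputPid, Rest);
        {'EXIT', OutputPid, _Reason} ->
            exit(normal)                    % output process gone, stop
    end.

%% Hypothetical stub: parse LogFiles into a list of {NowTimeStamp, Term}.
read_entries(_LogFiles) ->
    [].
```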
  </section>

  <authors>
    <aname>Lennart &Ouml;hman</aname>
    <email>support@erlang.ericsson.se</email>
  </authors>
</erlref>
