perlipc.pod
may see alarms delivered even after calling C<alarm(0)> as the latter
stops the raising of alarms but does not cancel the delivery of alarms
raised but not yet caught.  Do not depend on the behaviors described in
this paragraph as they are side effects of the current implementation and
may change in future versions of Perl.

=item Interrupting IO

When a signal is delivered (e.g. INT from a control-C) the operating
system breaks into IO operations like C<read> (used to implement Perl's
E<lt>E<gt> operator).  On older Perls the handler was called
immediately (and as C<read> is not "unsafe" this worked well).  With
the "deferred" scheme the handler is not called immediately, and if
Perl is using the system's C<stdio> library that library may restart the
C<read> without returning to Perl and giving it a chance to call the
%SIG handler.  If this happens on your system the solution is to use the
C<:perlio> layer to do IO - at least on those handles which you want
to be able to break into with signals.  (The C<:perlio> layer checks
the signal flags and calls %SIG handlers before resuming IO operations.)

Note that the default in Perl 5.7.3 and later is to automatically use
the C<:perlio> layer.

Note that some networking library functions like gethostbyname() are
known to have their own implementations of timeouts which may conflict
with your timeouts.  If you are having problems with such functions,
you can try using the POSIX sigaction() function, which bypasses the
Perl safe signals (note that this means subjecting yourself to
possible memory corruption, as described above).
Instead of setting C<$SIG{ALRM}>:

    local $SIG{ALRM} = sub { die "alarm" };

try something like the following:

    use POSIX qw(SIGALRM);
    POSIX::sigaction(SIGALRM,
                     POSIX::SigAction->new(sub { die "alarm" }))
        or die "Error setting SIGALRM handler: $!\n";

Another way to disable the safe signal behavior locally is to use
the C<Perl::Unsafe::Signals> module from CPAN (which will affect
all signals).

=item Restartable system calls

On systems that supported it, older versions of Perl used the
SA_RESTART flag when installing %SIG handlers.  This meant that
restartable system calls would continue rather than returning when
a signal arrived.  In order to deliver deferred signals promptly,
Perl 5.7.3 and later do I<not> use SA_RESTART.  Consequently,
restartable system calls can fail (with $! set to C<EINTR>) in places
where they previously would have succeeded.

Note that the default C<:perlio> layer will retry C<read>, C<write>,
and C<close> as described above, and that interrupted C<wait> and
C<waitpid> calls will always be retried.

=item Signals as "faults"

Certain signals, e.g. SEGV, ILL, and BUS, are generated as a result of
virtual memory or other "faults".  These are normally fatal and there is
little a Perl-level handler can do with them, so Perl now delivers them
immediately rather than attempting to defer them.

=item Signals triggered by operating system state

On some operating systems certain signal handlers are supposed to "do
something" before returning.  One example can be CHLD or CLD, which
indicates that a child process has completed.  On some operating systems
the signal handler is expected to C<wait> for the completed child
process.  On such systems the deferred signal scheme will not work for
those signals (it does not do the C<wait>).
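A handler that performs the required reaping avoids this problem; here is a minimal sketch (the handler name C<reaper> is my own choice, not something mandated by Perl):

```perl
use POSIX ":sys_wait_h";    # for WNOHANG

# Reap every child that has finished.  The loop matters: the kernel
# coalesces CHLD signals, so one delivery may stand for several
# completed child processes.
sub reaper {
    while ((my $kid = waitpid(-1, WNOHANG)) > 0) {
        # $? holds the reaped child's wait status here, if needed
    }
    $SIG{CHLD} = \&reaper;  # reinstall, for SysV-style signal semantics
}
$SIG{CHLD} = \&reaper;
```

With such a handler installed there are no un-waited-for children left behind, so the operating system has no reason to keep re-issuing the signal.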
Again the failure will look like a loop, as the operating system will
re-issue the signal because there are un-waited-for completed child
processes.

=back

If you want the old signal behaviour back regardless of possible
memory corruption, set the environment variable C<PERL_SIGNALS> to
C<"unsafe"> (a new feature since Perl 5.8.1).

=head1 Using open() for IPC

Perl's basic open() statement can also be used for unidirectional
interprocess communication by either appending or prepending a pipe
symbol to the second argument to open().  Here's how to start
something up in a child process you intend to write to:

    open(SPOOLER, "| cat -v | lpr -h 2>/dev/null")
        || die "can't fork: $!";
    local $SIG{PIPE} = sub { die "spooler pipe broke" };
    print SPOOLER "stuff\n";
    close SPOOLER || die "bad spool: $! $?";

And here's how to start up a child process you intend to read from:

    open(STATUS, "netstat -an 2>&1 |")
        || die "can't fork: $!";
    while (<STATUS>) {
        next if /^(tcp|udp)/;
        print;
    }
    close STATUS || die "bad netstat: $! $?";

If one can be sure that a particular program is a Perl script that is
expecting filenames in @ARGV, the clever programmer can write something
like this:

    % program f1 "cmd1|" - f2 "cmd2|" f3 < tmpfile

and irrespective of which shell it's called from, the Perl program will
read from the file F<f1>, the process F<cmd1>, standard input (F<tmpfile>
in this case), the F<f2> file, the F<cmd2> command, and finally the F<f3>
file.  Pretty nifty, eh?

You might notice that you could use backticks for much the same
effect as opening a pipe for reading:

    print grep { !/^(tcp|udp)/ } `netstat -an 2>&1`;
    die "bad netstat" if $?;

While this is true on the surface, it's much more efficient to process the
file one line or record at a time because then you don't have to read the
whole thing into memory at once.  It also gives you finer control of the
whole process, letting you kill off the child process early if you'd
like.

Be careful to check both the open() and the close() return values.
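When close() on a pipe fails but C<$!> is false, the trouble was the child's exit status rather than a system-call error, and C<$?> holds the raw wait(2) status. A small sketch of taking it apart, using false(1) as a stand-in for a failing command:

```perl
open(my $fh, "false |") or die "can't fork: $!";  # false(1) just exits nonzero
1 while <$fh>;                # drain any output before closing
unless (close $fh) {
    if ($!) {
        warn "real close(2) error: $!";
    } else {
        my $exit   = $? >> 8;     # child's exit code
        my $signal = $? & 127;    # signal that killed it, if any
        warn "child exited with status $exit (signal $signal)\n";
    }
}
```

The same C<$? E<gt>E<gt> 8> / C<$? & 127> decoding applies to the close() calls in the examples above.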
If you're I<writing> to a pipe, you should also trap SIGPIPE.  Otherwise,
think of what happens when you start up a pipe to a command that doesn't
exist: the open() will in all likelihood succeed (it only reflects the
fork()'s success), but then your output will fail--spectacularly.  Perl
can't know whether the command worked, because your command is actually
running in a separate process whose exec() might have failed.  Therefore,
while readers of bogus commands return just a quick end of file, writers
to bogus commands will trigger a signal they'd better be prepared to
handle.  Consider:

    open(FH, "|bogus") or die "can't fork: $!";
    print FH "bang\n"  or die "can't write: $!";
    close FH           or die "can't close: $!";

That won't blow up until the close, and it will blow up with a SIGPIPE.
To catch it, you could use this:

    $SIG{PIPE} = 'IGNORE';
    open(FH, "|bogus") or die "can't fork: $!";
    print FH "bang\n"  or die "can't write: $!";
    close FH           or die "can't close: status=$?";

=head2 Filehandles

Both the main process and any child processes it forks share the same
STDIN, STDOUT, and STDERR filehandles.  If both processes try to access
them at once, strange things can happen.  You may also want to close
or reopen the filehandles for the child.  You can get around this by
opening your pipe with open(), but on some systems this means that the
child process cannot outlive the parent.

=head2 Background Processes

You can run a command in the background with:

    system("cmd &");

The command's STDOUT and STDERR (and possibly STDIN, depending on your
shell) will be the same as the parent's.  You won't need to catch
SIGCHLD because of the double-fork taking place (see below for more
details).

=head2 Complete Dissociation of Child from Parent

In some cases (starting server processes, for instance) you'll want to
completely dissociate the child process from the parent.  This is
often called daemonization.
A well behaved daemon will also chdir()
to the root directory (so it doesn't prevent unmounting the filesystem
containing the directory from which it was launched) and redirect its
standard file descriptors from and to F</dev/null> (so that random
output doesn't wind up on the user's terminal).

    use POSIX 'setsid';

    sub daemonize {
        chdir '/'               or die "Can't chdir to /: $!";
        open STDIN, '/dev/null' or die "Can't read /dev/null: $!";
        open STDOUT, '>/dev/null'
                                or die "Can't write to /dev/null: $!";
        defined(my $pid = fork) or die "Can't fork: $!";
        exit if $pid;
        setsid                  or die "Can't start a new session: $!";
        open STDERR, '>&STDOUT' or die "Can't dup stdout: $!";
    }

The fork() has to come before the setsid() to ensure that you aren't a
process group leader (the setsid() will fail if you are).  If your
system doesn't have the setsid() function, open F</dev/tty> and use the
C<TIOCNOTTY> ioctl() on it instead.  See tty(4) for details.

Non-Unix users should check their Your_OS::Process module for other
solutions.

=head2 Safe Pipe Opens

Another interesting approach to IPC is making your single program go
multiprocess and communicate between (or even amongst) yourselves.  The
open() function will accept a file argument of either C<"-|"> or C<"|-">
to do a very interesting thing: it forks a child connected to the
filehandle you've opened.  The child is running the same program as the
parent.  This is useful for safely opening a file when running under an
assumed UID or GID, for example.  If you open a pipe I<to> minus, you can
write to the filehandle you opened and your kid will find it in his
STDIN.  If you open a pipe I<from> minus, you can read from the filehandle
you opened whatever your kid writes to his STDOUT.
    use English '-no_match_vars';
    my $sleep_count = 0;

    do {
        $pid = open(KID_TO_WRITE, "|-");
        unless (defined $pid) {
            warn "cannot fork: $!";
            die "bailing out" if $sleep_count++ > 6;
            sleep 10;
        }
    } until defined $pid;

    if ($pid) {                     # parent
        print KID_TO_WRITE @some_data;
        close(KID_TO_WRITE) || warn "kid exited $?";
    } else {                        # child
        ($EUID, $EGID) = ($UID, $GID);  # suid progs only
        open(FILE, "> /safe/file")
            || die "can't open /safe/file: $!";
        while (<STDIN>) {
            print FILE;             # child's STDIN is parent's KID_TO_WRITE
        }
        exit;                       # don't forget this
    }

Another common use for this construct is when you need to execute
something without the shell's interference.  With system(), it's
straightforward, but you can't use a pipe open or backticks safely.
That's because there's no way to stop the shell from getting its hands on
your arguments.  Instead, use lower-level control to call exec()
directly.

Here's a safe backtick or pipe open for read:

    # add error processing as above
    $pid = open(KID_TO_READ, "-|");

    if ($pid) {                     # parent
        while (<KID_TO_READ>) {
            # do something interesting
        }
        close(KID_TO_READ) || warn "kid exited $?";
    } else {                        # child
        ($EUID, $EGID) = ($UID, $GID);  # suid only
        exec($program, @options, @args)
            || die "can't exec program: $!";
        # NOTREACHED
    }

And here's a safe pipe open for writing:

    # add error processing as above
    $pid = open(KID_TO_WRITE, "|-");
    $SIG{PIPE} = sub { die "whoops, $program pipe broke" };

    if ($pid) {                     # parent
        for (@data) {
            print KID_TO_WRITE;
        }
        close(KID_TO_WRITE) || warn "kid exited $?";
    } else {                        # child
        ($EUID, $EGID) = ($UID, $GID);
        exec($program, @options, @args)
            || die "can't exec program: $!";
        # NOTREACHED
    }

Since Perl 5.8.0, you can also use the list form of C<open> for pipes:
the syntax

    open KID_PS, "-|", "ps", "aux"
        or die $!;

forks the ps(1) command (without spawning a shell, as there are more than
three arguments to open()), and reads its standard output via the
C<KID_PS> filehandle.
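Reading from such a handle works like reading from any other filehandle. For instance, to collect the PIDs from the ps(1) output above (the column layout assumed here is that of a typical BSD-style C<ps aux>, so treat the field split as illustrative only):

```perl
open(KID_PS, "-|", "ps", "aux")
    or die "can't run ps: $!";

my @pids;
while (<KID_PS>) {
    next if $. == 1;            # skip ps's header line
    my ($user, $pid) = split ' ';
    push @pids, $pid;           # PID is the second column in "ps aux"
}
close(KID_PS) || warn "kid exited $?";
```

Because open() received a list, the arguments go straight to execvp(2): an argument containing shell metacharacters is passed to the command untouched instead of being interpreted by a shell.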
The corresponding syntax to write to command
pipes (with C<"|-"> in place of C<"-|">) is also implemented.

Note that these operations are full Unix forks, which means they may not
be correctly implemented on alien systems.  Additionally, these are not
true multithreading.  If you'd like to learn more about threading, see
the F<modules> file mentioned below in the SEE ALSO section.

=head2 Bidirectional Communication with Another Process

While this works reasonably well for unidirectional communication, what
about bidirectional communication?  The obvious thing you'd like to do
doesn't actually work:

    open(PROG_FOR_READING_AND_WRITING, "| some program |")

and if you forget to use the C<use warnings> pragma or the B<-w> flag,
then you'll miss out entirely on the diagnostic message:

    Can't do bidirectional pipe at -e line 1.

If you really want to, you can use the standard open2() library function
to catch both ends.  There's also an open3() for tridirectional I/O so
you can also catch your child's STDERR, but doing so would then require
an awkward select() loop and wouldn't allow you to use normal Perl input
operations.

If you look at its source, you'll see that open2() uses low-level
primitives like Unix pipe() and exec() calls to create all the
connections.  While it might have been slightly more efficient by using
socketpair(), it would have then been even less portable than it already
is.  The open2() and open3() functions are unlikely to work anywhere
except on a Unix system or some other one purporting to be POSIX
compliant.

Here's an example of using open2():

    use FileHandle;
    use IPC::Open2;
    $pid = open2(*Reader, *Writer, "cat -u -n");
    print Writer "stuff\n";
    $got = <Reader>;

The problem with this is that Unix buffering is really going to
ruin your day.  Even though your C<Writer> filehandle is auto-flushed,
and the process on the other end will get your data in a timely manner,
you can't usually do anything to force it to give it back to you
in a similarly quick fashion.
In this case, we could, because we
gave I<cat> a B<-u> flag to make it unbuffered.  But very few Unix
commands are designed to operate over pipes, so this seldom works
unless you yourself wrote the program on the other end of the
double-ended pipe.

A solution to this is the nonstandard F<Comm.pl> library.  It uses
pseudo-ttys to make your program behave more reasonably: