loop iteration proceeds without executing later lines in the
block. If the 'break' statement occurs in a 'while' loop, control
passes past the loop without executing later lines (except the
'finally' block if the 'break' occurs in a 'try'). If a 'break'
occurs in a 'while' block, the 'else' block is not executed.
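This interaction of 'break' with a loop's 'else' block can be sketched in function form (the helper 'run_loop()' is invented for illustration, in modern Python spelling):

```python
def run_loop(n, stop_at=None):
    # A loop's 'else' block runs only when the 'while' test becomes
    # false "naturally"; a 'break' skips the 'else' block entirely.
    events = []
    while n > 0:
        if n == stop_at:
            events.append("broke at %d" % n)
            break
        n -= 1
    else:
        events.append("no break")
    return events
```

Calling 'run_loop(3)' lets the test run to falsity, so the 'else' block records "no break"; calling 'run_loop(3, stop_at=2)' breaks out early and the 'else' block never executes.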
If a 'while' statement's expression is to go from being true
to being false, typically some name in the expression will be
re-bound within the 'while' block. At times an expression will
depend on an external condition, such as a file handle or a
socket, or it may involve a call to a function whose Boolean
value changes over invocations. However, probably the most
common Python idiom for 'while' statements is to rely on a
'break' to terminate a block. Some examples:
>>> command = ''
>>> while command != 'exit':
...     command = raw_input('Command > ')
...     # if/elif block to dispatch on various commands
...
Command > someaction
Command > exit
>>> while socket.ready():
...     socket.getdata() # do something with the socket
... else:
...     socket.close()   # cleanup (e.g. close socket)
...
>>> while 1:
...     command = raw_input('Command > ')
...     if command == 'exit': break
...     # elif's for other commands
...
Command > someaction
Command > exit
TOPIC -- Functions, Simple Generators, and the 'yield' Statement
--------------------------------------------------------------------
Both functions and object methods allow a kind of nonlocality in
terms of program flow, but one that is quite restrictive. A
function or method is called from another context, enters at its
top, executes any statements encountered, then returns to the
calling context as soon as a 'return' statement is reached (or
the function body ends). The invocation of a function or method
is basically a strictly linear nonlocal flow.
Python 2.2 introduced a flow control construct, called
generators, that enables a new style of nonlocal branching. If a
function or method body contains the statement 'yield', then it
becomes a -generator function-, and invoking the function returns
a -generator iterator- instead of a simple value. A generator
iterator is an object that has a '.next()' method that returns
values. Any instance object can have a '.next()' method, but a
generator iterator's method is special in having "resumable
execution."
In a standard function, once a 'return' statement is encountered,
the Python interpreter discards all information about the
function's flow state and local name bindings. The returned value
might contain some information about local values, but the flow
state is always gone. A generator iterator, in contrast,
"remembers" the entire flow state, and all local bindings,
between each invocation of its '.next()' method. A value is
returned to a calling context each place a 'yield' statement is
encountered in the generator function body, but the calling
context (or any context with access to the generator iterator) is
able to jump back to the flow point where this last 'yield'
occurred.
In the abstract, generators seem complex, but in practice they
prove quite simple. For example:
>>> from __future__ import generators # not needed in 2.3+
>>> def generator_func():
...     for n in [1,2]:
...         yield n
...     print "Two yields in for loop"
...     yield 3
...
>>> generator_iter = generator_func()
>>> generator_iter.next()
1
>>> generator_iter.next()
2
>>> generator_iter.next()
Two yields in for loop
3
>>> generator_iter.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration
The object 'generator_iter' in the example can be bound in
different scopes, and passed to and returned from functions,
just like any other object. Any context invoking
'generator_iter.next()' jumps back into the last flow point
where the generator function body yielded.
In a sense, a generator iterator allows you to perform jumps
similar to the "GOTO" statements of some (older) languages, but
still retains the advantages of structured programming. The most
common usage for generators, however, is simpler than this. Most
of the time, generators are used as "iterators" in a loop
context; for example:
>>> for n in generator_func():
...     print n
...
1
2
Two yields in for loop
3
In recent Python versions, the 'StopIteration' exception is used
to signal the end of a 'for' loop. The generator iterator's
'.next()' method is implicitly called as many times as possible
by the 'for' statement. The name indicated in the 'for'
statement is repeatedly re-bound to the values the 'yield'
statement(s) return.
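These implicit calls can be made explicit. The sketch below uses the modern spelling 'next(generator_iter)' (equivalent to 'generator_iter.next()' in Python 2.2) and a variant of 'generator_func()' with the 'print' omitted; the 'for' loop above behaves roughly like this hand-written loop:

```python
def generator_func():
    # Same generator as in the earlier example, minus the 'print'.
    for n in [1, 2]:
        yield n
    yield 3

# What 'for n in generator_func(): ...' does behind the scenes:
collected = []
generator_iter = iter(generator_func())
while True:
    try:
        n = next(generator_iter)   # .next() in Python 2
    except StopIteration:          # the 'for' statement catches this itself
        break
    collected.append(n)
```

After the loop, 'collected' holds every yielded value, in order, just as the 'for' version prints them.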
TOPIC -- Raising and Catching Exceptions
--------------------------------------------------------------------
Python uses exceptions quite broadly and probably more naturally
than any other programming language. In fact there are certain
flow control constructs that are awkward to express by means
other than raising and catching exceptions.
There are two general purposes for exceptions in Python. On the
one hand, Python actions can be invalid or disallowed in various
ways. You are not allowed to divide by zero; you cannot open (for
reading) a filename that does not exist; some functions require
arguments of specific types; you cannot use an unbound name on
the right side of an assignment; and so on. The exceptions raised
by these types of occurrences have names of the form
'[A-Z].*Error'. Catching -error- exceptions is often a useful way
to recover from a problem condition and restore an application to
a "happy" state. Even if such error exceptions are not caught in
an application, their occurrence provides debugging clues since
they appear in tracebacks.
The second purpose for exceptions is for circumstances a
programmer wishes to flag as "exceptional." But understand
"exceptional" in a weak sense--not as something that indicates
a programming or computer error, but simply as something
unusual or "not the norm." For example, Python 2.2+ iterators
raise a 'StopIteration' exception when no more items can be
generated. Most such implied sequences are not infinite
length, however; it is merely the case that they contain a
(large) number of items, and they run out only once at the end.
It's not "the norm" for an iterator to run out of items, but it
is often expected that this will happen eventually.
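A small sketch (using the modern built-in 'next()' rather than the '.next()' method) makes the point concrete:

```python
# An iterator over a finite sequence: running out is unusual
# relative to each individual call, but fully expected eventually.
it = iter([10, 20])
values = [next(it), next(it)]  # the "normal" case
try:
    next(it)                   # the sequence has run out
except StopIteration:          # "not the norm," but no error occurred
    values.append("exhausted")
```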
In a sense, raising an exception can be similar to executing a
'break' statement--both cause control flow to leave a block.
For example, compare:
>>> n = 0
>>> while 1:
...     n = n+1
...     if n > 10: break
...
>>> print n
11
>>> n = 0
>>> try:
...     while 1:
...         n = n+1
...         if n > 10: raise "ExitLoop"
... except:
...     print n
...
11
In two closely related ways, exceptions behave differently than
do 'break' statements. In the first place, exceptions could be
described as having "dynamic scope," which in most contexts is
considered a sin akin to "GOTO," but here is quite useful. That
is, you never know at compile time exactly where an exception
might get caught (if not anywhere else, it is caught by the
Python interpreter). It might be caught in the exception's block,
or a containing block, and so on; or it might be in the local
function, or something that called it, or something that called
the caller, and so on. An exception is a -fact- that winds its
way through execution contexts until it finds a place to settle.
The upward propagation of exceptions is quite opposite to the
downward propagation of lexically scoped bindings (or even to the
earlier "three-scope rule").
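A small call chain illustrates this dynamic scope (the names are invented, and the modern 'except ... as' spelling is used): the exception is raised three calls deep, and the intermediate function needs no knowledge of it.

```python
class AppError(Exception):
    pass

def inner():
    raise AppError("raised three calls down")

def middle():
    return inner()   # no 'try' here; the exception passes through

def outer():
    try:
        return middle()
    except AppError as exc:   # caught wherever some caller chooses
        return "handled: %s" % exc
```

Nothing in 'middle()' mentions 'AppError' at all; the exception winds upward through it until 'outer()' settles it.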
The corollary of exceptions' dynamic scope is that, unlike
'break', they can be used to exit gracefully from deeply nested
loops. The "Zen of Python" offers a caveat here: "Flat is better
than nested." And indeed it is so: if you find yourself nesting
loops -too- deeply, you should probably refactor (e.g., break
loops into utility functions). But if you are nesting -just
deeply enough-, dynamically scoped exceptions are just the thing
for you. Consider the following small problem: A "Fermat triple"
is here defined as a triple of integers (i,j,k) such that "i**2 +
j**2 == k**2". Suppose that you wish to determine if any Fermat
triples exist with all three integers inside a given numeric
range. An obvious (but entirely nonoptimal) solution is:
>>> def fermat_triple(beg, end):
...     class EndLoop(Exception): pass
...     range_ = range(beg, end)
...     try:
...         for i in range_:
...             for j in range_:
...                 for k in range_:
...                     if i**2 + j**2 == k**2:
...                         raise EndLoop, (i,j,k)
...     except EndLoop, triple:
...         # do something with 'triple'
...         return i,j,k
...
>>> fermat_triple(1,10)
(3, 4, 5)
>>> fermat_triple(120,150)
>>> fermat_triple(100,150)
(100, 105, 145)
By raising the 'EndLoop' exception in the middle of the nested
loops, it is possible to catch it again outside of all the
loops. A simple 'break' in the inner loop would only break out
of the most deeply nested block, which is pointless. One might
devise some system for setting a "satisfied" flag and testing
for this at every level, but the exception approach is much
simpler. Since the 'except' block does not actually -do-
anything extra with the triple, it could have just been
returned inside the loops; but in the general case, other
actions can be required before a 'return'.
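For comparison, the same search can be flattened in the spirit of "Flat is better than nested" by using 'itertools.product()' (added in Python 2.6, after this discussion was written); with only one loop level, a plain 'return' suffices and no exception is needed:

```python
from itertools import product

def fermat_triple_flat(beg, end):
    range_ = range(beg, end)
    # One flat loop over all (i,j,k) combinations; 'return'
    # replaces the raised-and-caught exception.
    for i, j, k in product(range_, repeat=3):
        if i**2 + j**2 == k**2:
            return (i, j, k)
    return None   # no triple in this range
```

The exception-based version remains the right tool when extra work must happen at the catch site before returning.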
It is not uncommon to want to leave nested loops when something
has "gone wrong" in the sense of a "*Error" exception.
Sometimes you might only be in a position to discover a problem
condition within nested blocks, but recovery still makes better
sense outside the nesting. Some typical examples are problems
in I/O, calculation overflows, missing dictionary keys or list
indices, and so on. Moreover, it is useful to assign 'except'
statements to the calling position that really needs to handle
the problems, then write support functions as if nothing can go
wrong. For example:
>>> try:
...     result = complex_file_operation(filename)
... except IOError:
...     print "Cannot open file", filename
The function 'complex_file_operation()' should not be burdened
with trying to figure out what to do if a bad 'filename' is given
to it--there is really nothing to be done in that context.
Instead, such support functions can simply propagate their
exceptions upwards, until some caller takes responsibility for
the problem.
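A runnable sketch of this division of labor (the names 'read_config()' and 'load_or_default()' are invented stand-ins for 'complex_file_operation()' and its caller):

```python
def read_config(filename):
    # Support function, written as if nothing can go wrong: a missing
    # file raises an IOError, which simply propagates upwards.
    f = open(filename)
    try:
        return f.read()
    finally:
        f.close()   # cleanup runs whether or not the read succeeds

def load_or_default(filename):
    # The caller is the one place that knows how to handle the problem.
    try:
        return read_config(filename)
    except IOError:
        return "# using default configuration"
```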
The 'try' statement has two forms. The 'try/except/else' form is
more commonly used, but the 'try/finally' form is useful for
"cleanup handlers."
In the first form, a 'try' block must be followed by one or more
'except' blocks. Each 'except' may specify an exception or tuple
of exceptions to catch; the last 'except' block may omit an
exception (tuple), in which case it catches every exception that
is not caught by an earlier 'except' block. After the 'except'
blocks, you may optionally specify an 'else' block. The 'else'
block is run only if no exception occurred in the 'try' block.
For example:
>>> def except_test(n):
...     try: x = 1/n
...     except IOError: print "IO Error"
...     except ZeroDivisionError: print "Zero Division"
...     except: print "Some Other Error"
...     else: print "All is Happy"
...
>>> except_test(1)
All is Happy
>>> except_test(0)
Zero Division
>>> except_test('x')
Some Other Error
An 'except' test will match either the exception actually
listed or any descendent of that exception. It tends to make
sense, therefore, in defining your own exceptions to inherit
from related ones in the [exceptions] module. For example:
>>> class MyException(IOError): pass
>>> try:
...     raise MyException
... except IOError:
...     print "got it"
...
got it
In the "try/finally" form of the 'try' statement, the 'finally'
statement acts as general cleanup code. If no exception occurs in
the 'try' block, the 'finally' block runs, and that is that. If
an exception -was- raised in the 'try' bl