This scheme isn't foolproof (programmers can still copy-and-paste themselves into trouble), but it's more reliable than the original design. As for Airplane::defaultFly, it's protected because it's truly an implementation detail of Airplane and its derived classes. Clients using airplanes should care only that they can be flown, not how the flying is implemented.

It's also important that Airplane::defaultFly is a nonvirtual function. This is because no subclass should redefine this function, a truth to which Item 37 is devoted. If defaultFly were virtual, you'd have a circular problem: what if some subclass forgets to redefine defaultFly when it's supposed to?

Some people object to the idea of having separate functions for providing interface and default implementation, such as fly and defaultFly above. For one thing, they note, it pollutes the class namespace with a proliferation of closely related function names. Yet they still agree that interface and default implementation should be separated. How do they resolve this seeming contradiction? By taking advantage of the fact that pure virtual functions must be redeclared in subclasses, but they may also have implementations of their own.
Here's how the Airplane hierarchy could take advantage of the ability to define a pure virtual function:

class Airplane {
public:
  virtual void fly(const Airport& destination) = 0;
  ...
};

void Airplane::fly(const Airport& destination)
{
  default code for flying an airplane to
  the given destination
}

class ModelA: public Airplane {
public:
  virtual void fly(const Airport& destination)
  { Airplane::fly(destination); }
  ...
};

class ModelB: public Airplane {
public:
  virtual void fly(const Airport& destination)
  { Airplane::fly(destination); }
  ...
};

class ModelC: public Airplane {
public:
  virtual void fly(const Airport& destination);
  ...
};

void ModelC::fly(const Airport& destination)
{
  code for flying a ModelC airplane to
  the given destination
}

This is almost exactly the same design as before, except that the body of the pure virtual function Airplane::fly takes the place of the independent function Airplane::defaultFly. In essence, fly has been broken into its two fundamental components. Its declaration specifies its interface (which derived classes must use), while its definition specifies its default behavior (which derived classes may use, but only if they explicitly request it). In merging fly and defaultFly, however, you've lost the ability to give the two functions different protection levels: the code that used to be protected (by being in defaultFly) is now public (because it's in fly).

Finally, we come to Shape's nonvirtual function, objectID. When a member function is nonvirtual, it's not supposed to behave differently in derived classes. In fact, a nonvirtual member function specifies an invariant over specialization, because it identifies behavior that is not supposed to change, no matter how specialized a derived class becomes.
As such,

The purpose of declaring a nonvirtual function is to have derived classes inherit a function interface as well as a mandatory implementation.

You can think of the declaration for Shape::objectID as saying, "Every Shape object has a function that yields an object identifier, and that object identifier is always computed in the same way. That way is determined by the definition of Shape::objectID, and no derived class should try to change how it's done." Because a nonvirtual function identifies an invariant over specialization, it should never be redefined in a subclass, a point that is discussed in detail in Item 37.

The differences in declarations for pure virtual, simple virtual, and nonvirtual functions allow you to specify with precision what you want derived classes to inherit: interface only, interface and a default implementation, or interface and a mandatory implementation, respectively. Because these different types of declarations mean fundamentally different things, you must choose carefully among them when you declare your member functions. If you do, you should avoid the two most common mistakes made by inexperienced class designers.

The first mistake is to declare all functions nonvirtual. That leaves no room for specialization in derived classes; nonvirtual destructors are particularly problematic (see Item 14). Of course, it's perfectly reasonable to design a class that is not intended to be used as a base class. Item M34 gives an example of a case where you might want to. In that case, a set of exclusively nonvirtual member functions is appropriate. Too often, however, such classes are declared either out of ignorance of the differences between virtual and nonvirtual functions or as a result of an unsubstantiated concern over the performance cost of virtual functions (see Item M24).
The fact of the matter is that almost any class that's to be used as a base class will have virtual functions (again, see Item 14).

If you're concerned about the cost of virtual functions, allow me to bring up the rule of 80-20 (see Item M16), which states that in a typical program, 80 percent of the runtime will be spent executing just 20 percent of the code. This rule is important, because it means that, on average, 80 percent of your function calls can be virtual without having the slightest detectable impact on your program's overall performance. Before you go gray worrying about whether you can afford the cost of a virtual function, then, take the simple precaution of making sure that you're focusing on the 20 percent of your program where the decision might really make a difference.

The other common problem is to declare all member functions virtual. Sometimes this is the right thing to do; witness Protocol classes (see Item 34), for example. However, it can also be a sign of a class designer who lacks the backbone to take a firm stand. Some functions should not be redefinable in derived classes, and whenever that's the case, you've got to say so by making those functions nonvirtual. It serves no one to pretend that your class can be all things to all people if they'll just take the time to redefine all your functions. Remember that if you have a base class B, a derived class D, and a member function mf, then each of the following calls to mf must work properly:

D *pd = new D;
B *pb = pd;

pb->mf();     // call mf through a
              // pointer-to-base

pd->mf();     // call mf through a
              // pointer-to-derived

Sometimes, you must make mf a nonvirtual function to ensure that everything behaves the way it's supposed to (see Item 37). If you have an invariant over specialization, don't be afraid to say so!
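The requirement that both calls behave consistently can be checked with a minimal compilable sketch of the B/D/mf example. The names follow the text; the string return values are illustrative stand-ins I've added so the dispatch is observable.

```cpp
#include <cassert>
#include <string>

class B {
public:
    virtual ~B() {}
    // Virtual, so derived classes may specialize it.
    virtual std::string mf() const { return "mf for a B"; }
};

class D : public B {
public:
    // A legitimate redefinition: mf is virtual in B.
    virtual std::string mf() const { return "mf for a D"; }
};
```

Because mf is virtual, a D object behaves like a D through either pointer type: both pb->mf() and pd->mf() invoke D::mf, so both calls "work properly" in the sense above.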
Item 37: Never redefine an inherited nonvirtual function.

There are two ways of looking at this issue: the theoretical way and the pragmatic way. Let's start with the pragmatic way. After all, theoreticians are used to being patient.

Suppose I tell you that a class D is publicly derived from a class B and that there is a public member function mf defined in class B. The parameters and return type of mf are unimportant, so let's just assume they're both void. In other words, I say this:

class B {
public:
  void mf();
  ...
};

class D: public B { ... };

Even without knowing anything about B, D, or mf, given an object x of type D,

D x;              // x is an object of type D

you would probably be quite surprised if this,

B *pB = &x;       // get pointer to x
pB->mf();         // call mf through pointer

behaved differently from this:

D *pD = &x;       // get pointer to x
pD->mf();         // call mf through pointer

That's because in both cases you're invoking the member function mf on the object x. Because it's the same function and the same object in both cases, it should behave the same way, right?

Right, it should. But it might not. In particular, it won't if mf is nonvirtual and D has defined its own version of mf:

class D: public B {
public:
  void mf();      // hides B::mf; see Item 50
  ...
};

pB->mf();         // calls B::mf
pD->mf();         // calls D::mf

The reason for this two-faced behavior is that nonvirtual functions like B::mf and D::mf are statically bound (see Item 38). That means that because pB is declared to be of type pointer-to-B, nonvirtual functions invoked through pB will always be those defined for class B, even if pB points to an object of a class derived from B, as it does in this example.

Virtual functions, on the other hand, are dynamically bound (again, see Item 38), so they don't suffer from this problem.
If mf were a virtual function, a call to mf through either pB or pD would result in an invocation of D::mf, because what pB and pD really point to is an object of type D.

The bottom line, then, is that if you are writing class D and you redefine a nonvirtual function mf that you inherit from class B, D objects will likely exhibit schizophrenic behavior. In particular, any given D object may act like either a B or a D when mf is called, and the determining factor will have nothing to do with the object itself, but with the declared type of the pointer that points to it. References exhibit the same baffling behavior as do pointers.

So much for the pragmatic argument. What you want now, I know, is some kind of theoretical justification for not redefining inherited nonvirtual functions. I am pleased to oblige.

Item 35 explains that public inheritance means isa, and Item 36 describes why declaring a nonvirtual function in a class establishes an invariant over specialization for that class. If you apply these observations to the classes B and D and to the nonvirtual member function B::mf, then

- Everything that is applicable to B objects is also applicable to D objects, because every D object isa B object;
- Subclasses of B must inherit both the interface and the implementation of mf, because mf is nonvirtual in B.

Now, if D redefines mf, there is a contradiction in your design. If D really needs to implement mf differently from B, and if every B object, no matter how specialized, really has to use the B implementation for mf, then it's simply not true that every D isa B. In that case, D shouldn't publicly inherit from B. On the other hand, if D really has to publicly inherit from B, and if D really needs to implement mf differently from B, then it's just not true that mf reflects an invariant over specialization for B. In that case, mf should be virtual.
Finally, if every D really isa B, and if mf really corresponds to an invariant over specialization for B, then D can't honestly need to redefine mf, and it shouldn't try to do so.

Regardless of which argument applies, something has to give, and under no conditions is it the prohibition on redefining an inherited nonvirtual function.

Item 38: Never redefine an inherited default parameter value.

Let's simplify this discussion right from the start. A default parameter can exist only as part of a function, and you can inherit only two kinds of functions: virtual and nonvirtual. Therefore, the only way to redefine a default parameter value is to redefine an inherited function. However, it's always a mistake to redefine an inherited nonvirtual function (see Item 37), so we can safely limit our discussion here to the situation in which you inherit a virtual function with a default parameter value.

That being the case, the justification for this Item becomes quite straightforward: virtual functions are dynamically bound, but default parameter values are statically bound.

What's that? You say you're not up on the latest object-oriented lingo, or perhaps the difference between static and dynamic binding has slipped your already overburdened mind? Let's review, then.

An object's static type is the type you declare it to have in the program text. Consider this class hierarchy:

enum ShapeColor { RED, GREEN, BLUE };

// a class for geometric shapes
class Shape {
public:
  // all shapes must offer a function to draw themselves
  virtual void draw(ShapeColor color = RED) const = 0;
  ...
};

class Rectangle: public Shape {
public:
  // notice the different default parameter value - bad!
  virtual void draw(ShapeColor color = GREEN) const;
  ...
};

class Circle: public Shape {
public:
  virtual void draw(ShapeColor color) const;
  ...
};

Now consider these pointers:

Shape *ps;                    // static type = Shape*
Shape *pc = new Circle;       // static type = Shape*
Shape *pr = new Rectangle;    // static type = Shape*

In this example, ps, pc, and pr are all declared to be of type pointer-to-Shape, so they all have that as their static type. Notice that it makes absolutely no difference what they're really pointing to; their static type is Shape* regardless.

An object's dynamic type is determined by the type of the object to which it currently refers. That is, its dynamic type indicates how it will behave. In the example above, pc's dynamic type is Circle*, and pr's dynamic type is Rectangle*. As for ps, it doesn't really have a dynamic type, because it doesn't refer to any object (yet).

Dynamic types, as their name suggests, can change as a program runs, typically through assignments:
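Both points can be seen in a compilable sketch of the hierarchy above. As an illustrative modification of my own, draw returns the color it was asked to use (instead of actually drawing) so that the binding behavior is observable.

```cpp
#include <cassert>

enum ShapeColor { RED, GREEN, BLUE };

class Shape {
public:
    virtual ~Shape() {}
    // Pure virtual with a default parameter value.
    virtual ShapeColor draw(ShapeColor color = RED) const = 0;
};

class Rectangle : public Shape {
public:
    // notice the different default parameter value - bad!
    virtual ShapeColor draw(ShapeColor color = GREEN) const { return color; }
};

class Circle : public Shape {
public:
    virtual ShapeColor draw(ShapeColor color) const { return color; }
};
```

Assigning a Shape* to point at a Circle and then at a Rectangle changes its dynamic type at runtime; yet a call such as pr->draw() through a Shape* picks up Shape's default argument (RED), because default parameter values are bound to the static type, even though the virtual call itself dispatches to Rectangle::draw.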