Kuro5hin.org: technology and culture, from the trenches

Another Argument For Choosing Composition Over Inheritance In Object Oriented Programming

By Carnage4Life in Technology
Mon Jul 15, 2002 at 04:17:50 AM EST
Tags: Software (all tags)

The three primary aspects of object oriented programming are (i) encapsulation, (ii) inheritance and (iii) polymorphism. In recent years, however, experienced practitioners of object oriented programming have warned against relying on one of these core aspects: inheritance. These practitioners have highlighted the hazards of inheritance and argued that it is overused in most of the situations where it appears.

This article gives a brief overview of why inheritance in object oriented applications should be avoided when possible and describes a specific situation which highlights the need to be careful when using inheritance in object oriented programming languages.


Although introductory texts on Object Oriented Programming (OOP) are quick to tout the benefits of inheritance, they typically fail to teach the lessons learned outside of academia about the shortcomings of this aspect of OOP. Disadvantages of using object inheritance include the following:
  1. Large Inheritance Hierarchy: Overuse of inheritance can lead to inheritance hierarchies that are several levels deep. Such large inheritance hierarchies typically become difficult to manage and maintain because a derived class is vulnerable to changes made in any of its ancestor classes, which often leads to fragility. There are also performance considerations: instantiating such classes involves calling constructors across the entire inheritance hierarchy, and such objects have above-average memory requirements. An example of such a class is the javax.swing.JFrame class in the Java Swing library, which has an inheritance depth of six levels.

  2. Fragile Superclasses: Classes that have been subclassed cannot be altered at will in subsequent versions because this may negatively impact derived classes. In C++ this is especially problematic because changes in a superclass typically force a recompile of the child classes. Java's use of dynamic resolution removes the need for recompilation, but does not eliminate the need to avoid making significant changes in base classes.

  3. Breaks Encapsulation: Inheritance in OOP is primarily a mechanism for reusing source code as opposed to a mechanism for reusing binary objects. This transparent nature of OOP inheritance relies on the author of the derived class also being the author of the base class, or at least having access to its implementation details. This violates another tenet of Object Oriented Programming: encapsulation.
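The hierarchy depth mentioned in point 1 can be checked with a short reflective walk (a sketch, not from the original article; the class name HierarchyDepth is invented):

```java
import javax.swing.JFrame;

public class HierarchyDepth {
    public static void main(String[] args) {
        // Walk JFrame's superclass chain without instantiating anything,
        // so this runs fine even on a headless machine.
        for (Class<?> c = JFrame.class; c != null; c = c.getSuperclass()) {
            System.out.println(c.getName());
        }
    }
}
```

This prints six class names, from javax.swing.JFrame down to java.lang.Object, matching the depth cited above.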

The Problem: Virtual Method Calls In Constructor

Consider the following C++ program:

#include <iostream>
using namespace std;

class BaseClass {
public:
    BaseClass() {
        cout << "BaseClass Constructor Called" << endl;
        doStuff();
    }

    virtual void doStuff() {
        cout << "BaseClass doStuff() Called" << endl;
    }
};

class DerivedClass : public BaseClass {
public:
    DerivedClass() {
        cout << "DerivedClass Constructor Called" << endl;
    }

    virtual void doStuff() {
        cout << "DerivedClass doStuff() Called" << endl;
    }
};

int main() {
    DerivedClass dc;
    return 0;
}
and the equivalent Java program:

class BaseClass {
    public BaseClass() {
        System.out.println("BaseClass Constructor Called");
        doStuff();
    }

    public void doStuff() {
        System.out.println("BaseClass doStuff() Called");
    }
}

public class DerivedClass extends BaseClass {
    public DerivedClass() {
        System.out.println("DerivedClass Constructor Called");
    }

    public void doStuff() {
        System.out.println("DerivedClass doStuff() Called");
    }

    public static void main(String[] args) {
        DerivedClass dc = new DerivedClass();
    }
}

It may surprise some people that the two programs produce different output. The C++ program outputs:
BaseClass Constructor Called
BaseClass doStuff() Called
DerivedClass Constructor Called
while the Java program outputs
BaseClass Constructor Called
DerivedClass doStuff() Called
DerivedClass Constructor Called
Note that the Java program was compiled and run using J2SE v1.3 while the C++ program was compiled using GCC 2.95.3-5 both on Windows XP.

Why The Difference?

A good explanation of the C++ behavior can be found in this C++ Q & A on ATL functions and vtables. Basically, during construction of a C++ object, the virtual function table points to the functions of the base class and not of the derived class, so virtual function calls made in the base class's constructor are effectively non-virtual. This makes a certain amount of sense: at the point during construction when the base class's constructor is being invoked, the derived class doesn't exist yet, so there shouldn't be a vtable pointing to its methods. However, it leads to inconsistent behavior, because in this one special case virtual function calls are treated as non-virtual.

In Java, all instance method calls are virtual and dynamically bound, so virtual calls in a base class's constructor actually execute the derived class's versions. This can lead to problems because, as pointed out earlier, the derived class's part of the object doesn't yet exist when the base constructor is executing: its fields still hold their default values. Unfortunately, without access to the source code of a base class or extremely diligent documentation, there is no way to tell whether a base class calls virtual methods in its constructor, which can lead to complications in languages like Java (and C#, which exhibits the same behavior).
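A minimal sketch of that hazard (class and field names are illustrative): the base constructor's virtual call runs the derived override before the derived class's field initializers have run, so the override sees default values.

```java
class Base {
    final String observed;

    Base() {
        observed = peek();   // virtual call during construction
    }

    String peek() { return "base"; }
}

class Derived extends Base {
    String name = "ready";   // assigned only AFTER Base() has finished

    @Override
    String peek() { return "name=" + name; }
}
```

Here `new Derived().observed` is `"name=null"`, not `"name=ready"`: the override ran while `name` still held its default value.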


Favor object composition when designing classes. Object composition is not only more resilient to changes in the classes being reused, it also enhances encapsulation instead of diminishing it. In fact, several design patterns use object composition to good effect without needing inheritance.
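As a sketch of that advice (the class names are invented for illustration), reuse by composition forwards through the contained object's public interface rather than inheriting its internals:

```java
class Engine {
    String start() { return "engine started"; }
}

// Car has-a Engine instead of is-a Engine: changes to Engine's internals
// can't break Car as long as Engine's public interface stays stable.
class Car {
    private final Engine engine = new Engine();

    String drive() {
        return engine.start() + "; car driving";   // explicit delegation
    }
}
```

Because Car never runs inside Engine's constructor and never sees Engine's private state, the virtual-call-in-constructor pitfall above simply cannot arise here.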

This article was inspired by a mail thread initiated by Paul Tallett.



Related Links
o encapsulation
o inheritance
o polymorphism
o javax.swing.JFrame
o C++ Q & A on ATL functions and vtables
o object composition
o design patterns
o Also by Carnage4Life

Another Argument For Choosing Composition Over Inheritance In Object Oriented Programming | 121 comments (94 topical, 27 editorial, 1 hidden)
Large Inheritance Hierarchy.... (3.66 / 3) (#5)
by morkeleb on Sun Jul 14, 2002 at 09:36:58 AM EST

That's it! I would have been satisfied with that argument alone - but found the other stuff interesting as well.

I'm happy I'm not alone in thinking inheritance isn't the greatest thing to come along since velcro replaced shoe laces. In the C++ courses I took in college, it seemed like the coolest thing in the world, or perhaps that's just the way my professors presented it. I didn't start to have a problem with it until I started working with the MFC on my own and at work. Although if you don't understand it, learning how to use Java or the MFC can be a nightmare. But I almost never use it now in programs that I write.
"If I read a book and it makes my whole body so cold no fire can ever warm me, I know that is poetry." - Emily Dickinson
that's stupid (none / 0) (#7)
by boxed on Sun Jul 14, 2002 at 09:52:10 AM EST

MFC is the crappiest piece of OO code ever written. If you think that's what OO is about of course you don't like it! Poor design is always poor design though, and avoiding inheritance when it's the logical thing just because you've seen MFC code is stupid, and will come back to haunt you.

[ Parent ]
I've actually gotten along fine without using it.. (none / 0) (#11)
by morkeleb on Sun Jul 14, 2002 at 10:08:56 AM EST

I haven't actually encountered a situation yet where, given a choice between using inheritance and object composition in software design, object composition would not work as well as inheritance. Since it's far easier to document and maintain classes that use object composition, I use that instead.

And of course the MFC sucks! However, it's used everywhere in companies that I have worked for, so I use it. The true test of any language feature such as inheritance should not be how well it performs in a well-designed, well-thought-out, well-documented project. It should be how well it holds up in a train wreck like the MFC. Because I have discovered that is generally the kind of situation you're going to be faced with at least 70-80% of the time in a real-work situation.
"If I read a book and it makes my whole body so cold no fire can ever warm me, I know that is poetry." - Emily Dickinson
[ Parent ]
you just said C++ sucks (none / 0) (#12)
by boxed on Sun Jul 14, 2002 at 10:15:00 AM EST

The true test of any language feature such as inheritance should not be how well it performs in a well-designed, well-thought-out, well-documented project. It should be how well it holds up in a train wreck like the MFC.
EVERY feature in C++ has a large potential to totally fuck up a project if used incorrectly. You're basically saying C++ as a whole is crap. You should read The Design and Evolution of C++ by Bjarne Stroustrup. C++ was designed to enable programmers to write good code. It was not created to stop them from fucking up.

[ Parent ]
An argument could be made that.... (4.50 / 2) (#21)
by morkeleb on Sun Jul 14, 2002 at 10:55:20 AM EST

C++ really does suck, and that the freedom it gives to programmers is a mistake. Maybe languages should be built from the ground up to keep programmers from fucking up (Ada, Pascal...). Why is software that controls pre-natal heart monitors and aircraft control written in languages like Ada? I don't know if you've programmed in Ada, but compared to C++ it is a lot like wearing a strait jacket. The compiler will not let you get away with things that C++ compilers permit routinely, such as skipping bounds checking on arrays, silently converting integers to floats, or using uninitialized pointers. And I have actually read the book to which you refer. I've also worked with other C++ programmers who were in the process of learning the language on the fly from some book like "Mastering C++ in 21 Days" (which incidentally isn't a bad reference if you already know what you're doing), and whose knowledge of the reasons for doing things in the language was not as deep as that of more experienced programmers. Doesn't matter - they've got just as much power and freedom to totally fuck up as a C++ guru does.

C++ wasn't designed to take into account clueless programmers? Or overworked programmers? Or tired programmers? Or lazy programmers? Well maybe it should have been - because a lot of important software is being written by them all the time. Other programming languages have been written to take into account that.

I'm not going to stop using it though. It's fast. There are tons of development libraries and compilers available for C++. It's been around forever. But I don't have to use every blessed feature of the language.
"If I read a book and it makes my whole body so cold no fire can ever warm me, I know that is poetry." - Emily Dickinson
[ Parent ]
your points are moot (none / 0) (#89)
by mitch0 on Tue Jul 16, 2002 at 05:27:30 AM EST

I'm really depressed by your views...

Creating languages to enable clueless programmers to write "important" applications is just plain stupid. First of all, clueless programmers should not be called programmers at all... It's like having clueless doctors: would you be happy to take medication from one of those?

Besides, "safe" languages (like Ada) won't save you from stupidity, and even a good programmer can screw up in those languages as well. One of the space probes exploded like 10 seconds after ignition because of a bug in the control software written in Ada.

oh well

[ Parent ]

Myth. (none / 0) (#90)
by i on Tue Jul 16, 2002 at 05:46:26 AM EST

The bug was not in the Ada program. The bug was in the hardware specifications.

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
all programmers make mistakes (5.00 / 1) (#91)
by kubalaa on Tue Jul 16, 2002 at 09:47:52 AM EST

If you're hunting rabbits, which would you prefer:
  1. A gun. Don't shoot yourself in the foot.
  2. Dynamite with an automatic timer set on 30 seconds. You can't turn the timer off, but you can reset it. So if you don't want to die, you have to remember to reset the timer every 29 seconds. And don't stand too close to the rabbits when you throw it.
Option 1 is Scheme (or Python, or Smalltalk, or any other cool high-level language); option 2 is C++.
The point is, you can have a powerful language which doesn't make it easy to
fuck things up, you can have a language with some safety features which doesn't
get in your way, and everybody makes mistakes.

[ Parent ]
to be fair (none / 0) (#106)
by kubalaa on Wed Jul 17, 2002 at 10:18:56 AM EST

Sometimes you really need dynamite. But only when absolutely necessary.

[ Parent ]
Yeps, I agree with that (3.40 / 5) (#8)
by marcos on Sun Jul 14, 2002 at 09:57:16 AM EST

I've always disliked inheritance. When I first started programming, I was told to learn C++ in about 2 months. After that, I was put in a large project, and told to work on the UI.

The project was designed with multiple and deep inheritance, and it was very difficult to figure out exactly the relationship of one class object to another, particularly for me, a novice programmer. Here, a class would be instantiated inside another class; there, it would be inherited from.

And the real problems came when the design changed. Because certain large classes were inherited by other classes, it was difficult to change any function without breaking something else. So we kept adding functions to classes. The result was a bunch of very large classes that were inherited here and there, almost randomly, it seemed.

I left the company after a while, and my successor inherited some very messy code.

Polymorphism (4.00 / 4) (#20)
by RQuinn on Sun Jul 14, 2002 at 10:51:24 AM EST

I admit I have not spent much time using OOP/Inheritance/Polymorphism, but doesn't polymorphism require inheritance? Are you saying OOP is only good for encapsulation?

Polymorphism only requires (4.50 / 4) (#22)
by i on Sun Jul 14, 2002 at 10:55:51 AM EST

interface inheritance which is free of many problems that plague implementation inheritance (like this one).
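For instance (a hypothetical sketch, not from the comment), implementing a shared interface gives polymorphism while inheriting no implementation at all:

```java
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    private final double s;
    Square(double s) { this.s = s; }
    public double area() { return s * s; }
}
```

Code written against Shape works with either class, yet neither can be broken by changes to a base class's implementation, because there is no implementation to inherit.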

and we have a contradicton according to our assumptions and the factor theorem

[ Parent ]
Polymorphism (5.00 / 1) (#81)
by Ghost Ganz on Mon Jul 15, 2002 at 05:12:42 PM EST

P. is, in essence, that you can treat different kinds of objects in the same way. It requires that these objects have the same interface.

"Same interface" in Java means they have the same superclass or implement the same Interface.

In dynamically typed languages like Smalltalk "same interface" simply means "has the same methods". So polymorphism doesn't require inheritance. It doesn't even require pre-defined interfaces.

[Bottle 'B' is for the monkeys only]
[ Parent ]

Language type dependent (none / 0) (#107)
by Chris Rathman on Wed Jul 17, 2002 at 03:18:27 PM EST

I've tinkered with polymorphism in a number of different languages. For static languages like Java and Eiffel, type polymorphism is enforced via inheritance. For dynamic languages like Smalltalk, the type of variable is dependent upon the messages that it responds to. As long as a variable responds to a requested message, it is considered to be type compatible. In Smalltalk, inheritance is simply a convenience mechanism for sharing code and dispatching.

Since the discussion centers on the use of inheritance, I should also note that inheritance in many languages serves dual purposes. One is the definition of types; the other is code inclusion. An interesting perspective along these lines would be Sather, where the concepts are wholly segregated (unlike in languages like Java). Type inheritance in Sather acts more like interfaces in Java: multiple inheritance with no implementation defined. Unlike Java interfaces, though, Sather allows you to include code from other classes, independently of the type inheritance.

[ Parent ]
Nope, I don't believe it. (4.66 / 6) (#32)
by mjfgates on Sun Jul 14, 2002 at 11:40:07 AM EST

Your first point isn't an argument against OO design, it's an argument against BAD OO design. Yes, it's possible to turn a program into a maze of teeny little objects that don't do anything but pass parameters back and forth... but you can do that just as well with regular old functions. The performance worries you bring up are similarly not limited to designs that use inheritance; if I have one object, and it contains another by value, both objects' constructors still have to get called.

Your second point has nothing to do with OO design in particular. Yes, changing a class may break its subclasses-- but changing a non-OO function may break its callers, and changing a PODS will almost certainly break its users.

Your third point is simply an unsubstantiated assertion. As counterexample, I offer my frequent use of classes that inherit from JFrame, a class whose source I hope never to see.

The C++ and Java programs you present show an interesting difference between the two languages, and to my mind, a design flaw in Java that I'll have to watch for. However, this does not seem to support the idea that object composition is somehow superior to inheritance; if you contained the "base" class in the "derived" class by value, you'd get the same sequence of calls that your current C++ program makes.

In short, I think the argument you're making is simply incorrect. Object composition can be a useful tool, but it should not be used as a replacement for inheritance in all or even most cases.

Rules of thumb? (4.85 / 7) (#33)
by bodrius on Sun Jul 14, 2002 at 11:41:09 AM EST

The problem, I think, is figuring out decent rules-of-thumb to decide when to use inheritance and when to use composition.

I don't think "favor composition" is a good rule-of-thumb, at least not any more than "favor inheritance", which apparently causes so many problems. Rather, I think that composition is easier to justify than inheritance, but should be left to justify itself in every situation.

Personally, I decide based on the semantics of the class. If it's really obvious and natural to say that X is-a Y object, then X should inherit Y object. This usually keeps things simple and manageable for me, because I'm not inheriting just for the sake of code reuse.

I think that avoids most of the problems:

a) Large Hierarchies: Hierarchies don't have to be large because is-a relationships tend to be hard to justify.
b) Fragility: When Y changes, X changes because it SHOULD change... when code reuse is a side-effect of the is-a relationship, the fragility problem is less severe.
c) Encapsulation: There is no breaking of encapsulation since the derived object is dealing with "itself" as a type. A thorough knowledge of its own interface is to be expected, and since the superclass functionality would typically have to be duplicated (in a proper is-a relationship), it makes sense to be familiar with the interface to that functionality.

Code reuse seems to me an overrated advantage of inheritance; overrated because it comes at a high price (confusion, maintenance headaches, etc.), and because it has been promoted over other advantages of inheritance.

You claim that inheritance in OOP is primarily a way to reuse source code. I disagree with that; I think that inheritance is primarily a way to enforce is-a relationships that percolate changes in source code to the lower hierarchies.

Therefore, I don't think it contradicts encapsulation when properly used.

When people think of inheritance as "code reuse", it becomes normal to inherit just to have access to two or three methods, usually with no effect on the state of the current object.

Then they tend to do silly things like inheriting a Tree data structure from the GUI that uses it, or vice versa.
Freedom is the freedom to say 2+2=4, everything else follows...

Well said..... (5.00 / 2) (#45)
by daystar on Sun Jul 14, 2002 at 05:30:51 PM EST

My attitude is along the lines of "Use inheritance if you've got a reason for it, otherwise, composition is fine." I use inheritance when I want polymorphism, but I've hardly ever had "inheritance for code reuse" work out. It SEEMS like a good idea, I've just never seen good results in my professional life when it's done (by me or anybody else).

I think that inheriting for the sake of code reuse requires a lot of planning on the base class writer's part, and no one ever does that, so when you need it you wind up modifying the base class to do what your derived class wants, and then you've got a horribly over-coupled mess.

Although, now that I think about it, I'm a C++ guy and I've never worked with Java (well, C# recently, which is pretty close...), and in Java if you use inheritance you're getting polymorphism whether you want it or not. I suppose that would make it much more likely that poorly used inheritance would give you surprising results. Just a thought.

There is no God, and I am his prophet.
[ Parent ]

Java and its effects (none / 0) (#48)
by bodrius on Sun Jul 14, 2002 at 07:14:03 PM EST

I don't know if the results of compulsory polymorphism would be that surprising to a Java programmer or not. The behavior is, after all, the expected behavior in the language. He/she would have to program aware of the polymorphic behavior, perhaps with the advantage that there is no question of whether a method is polymorphic.

Perhaps that means Java/C# programmers have to be more aware of the class hierarchy they operate in than other programmers, but in pure OO languages everything is in a hierarchy anyway.

An interesting side effect of certain Java limitations is that, IMHO, it discourages the obsession with code re-use and forces the programmer to be more aware of the "is-a" relationships.

Specifically, single-inheritance forces programmers to select their inheritance relationships very carefully.

Interfaces force the programmer to rewrite code (even if it's just delegating the call to an internal object) for the sake of clarity. In order to reuse code, they have to get used to object-composition because inheritance is expensive (you only get one).

This also presents some obstacles to legitimate code reuse. Maybe an argument could be made that if it makes the programmer think twice before making a design decision for the wrong reasons it's worth it.
Freedom is the freedom to say 2+2=4, everything else follows...
[ Parent ]

Code reuse vs. Interfaces (5.00 / 1) (#49)
by daystar on Sun Jul 14, 2002 at 07:50:59 PM EST

Java definitely has a bias against code reuse, for better or worse. Needing to reimplement the same code every time you used an interface seemed like madness to me. All of my java friends say it's never bothered them, and insist that multiple inheritance is an impenetrable quagmire. I say that MI has never bothered me.... You can see how quickly THAT discussion degenerates.

Still, C++ allows you to inherit for code reuse and/or polymorphism, whereas Java gives you polymorphism by default (I wonder: do Java programmers ever make methods "final" to avoid this behaviour? I know they CAN, do they ever feel the need?). It's not a HUGE difference between the two languages (and I think that both languages are "right", given their different aims), but I think it matters in discussions like this.

I think that Herb Sutter's books handle this whole discussion much better than we (or the original poster) have. Bastard. :-)

There is no God, and I am his prophet.
[ Parent ]

Funny you should mention the "final" thing (none / 0) (#70)
by ennui on Mon Jul 15, 2002 at 10:46:18 AM EST

Before the advent of Java2, one of the most useful classes in the Java class library, java.util.Vector, had most of the public methods declared final. Why? Partially because Vector is advertised as thread-safe, and overriding thread-safe methods potentially makes them un-thread-safe. (If memory serves, this was cited as the Official Reason.) The remaining 90% of reason is that the first version of Vector was poorly designed, and relied upon certain behavior of its public methods. The folks who wrote it knew that subclasses of Vector 1.x were potentially trouble, and gave strong hints to that effect by making all those methods final.

Java2 corrected Vector's shortcomings, added it to the collections framework, and somewhat ironically introduced a subclass of Vector, the Stack. Final public methods in the Java class library (with certain exceptions where there's no safe way around it, like in awt) have become something of a rara avis. Final methods to this day are often the designer's way of saying "this class is poorly designed, and things will probably break if you extend it" on some level.

In programs I've written, if I need a collection of some specific object for a specific reason I'll often subclass Vector, override the methods and constructors I don't want, and add methods that throw exceptions if they're not given a reference to what I expect to store. This works amazingly well on many levels, as my subclass can do all the things Vector does without me rewriting all its methods as I'd have to with a composite design (there's the code reuse :), and it's good OO design, 'cause if anything has an is-a relationship with Vector, it's a Vector of a particular object with bell, whistle, and gimcrack methods for the specific objects in the Vector subclass. OO design rocks, I get excited just thinking about it.
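A rough sketch of the pattern described above (the class name and the single overridden method are illustrative; a real version would also guard addElement, insertElementAt, and the other mutators):

```java
import java.util.Vector;

// A Vector that only accepts Strings, rejecting everything else at runtime.
class StringVector extends Vector<Object> {
    @Override
    public boolean add(Object o) {
        if (!(o instanceof String)) {
            throw new IllegalArgumentException("StringVector stores only Strings");
        }
        return super.add(o);
    }
}
```

The subclass keeps all of Vector's behavior for free while narrowing what callers may store.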

"You can get a lot more done with a kind word and a gun, than with a kind word alone." -- Al Capone
[ Parent ]

Favorite OO Language (4.00 / 2) (#36)
by X-Nc on Sun Jul 14, 2002 at 01:42:29 PM EST

I think that Eiffel is a hair nicer than Smalltalk (based on what little I know of the two). I just wish ISC wasn't dropping everything to only support MS .NET. For favorite language, though, I'll go with OO COBOL.

Ok, stop laughing.

No, I'm serious. Stop laughing!

There is such a thing as OO COBOL and it really does kick ass. There's something funny about computer technologies and predefined assumptions... Did you know that the language that really is ideal for the web is COBOL? It was designed for just this kind of work, and it does it better than any other language. But everyone assumes that it's an old, dead language.

It's funny how when MS was calling Linux "30 year old technology" the reality was soon presented that it was a "technology that has been growing and evolving for 30 years." If you spend 5 minutes looking you'll see that COBOL is in just the same situation.

Ah, what does it matter... No one's stopped laughing long enough to read this anyway.

Aaahhhh!!!! My K5 subscription expired. Now I can't spell anymore.

References? (none / 0) (#39)
by bodrius on Sun Jul 14, 2002 at 03:35:04 PM EST

Any good, modern references to OO COBOL?

I don't think I'm going to change languages anytime soon (I'm using mostly Java, some C++, and now .NET), but I'm curious.

I was pleasantly surprised with FORTRAN 90 (I had to for a class), and I wonder if it's a similar case.
Freedom is the freedom to say 2+2=4, everything else follows...
[ Parent ]

COBOL References (none / 0) (#43)
by X-Nc on Sun Jul 14, 2002 at 04:58:31 PM EST

I have the Draft Standard for COBOL 2002 but can't remember the URL I got it from. There is a decent book on OO COBOL, Introduction to Object COBOL. Some good starting points on the web are - If you're interested in taking a look at what can be done with the current state of the language, theKompany recently released a product called Kobol which is a very nice compiler that runs on WinXX and Linux. It's kinda like the old Turbo Pascal or Turbo C.

Aaahhhh!!!! My K5 subscription expired. Now I can't spell anymore.
[ Parent ]
COBOL References found (none / 0) (#50)
by X-Nc on Sun Jul 14, 2002 at 08:13:00 PM EST

Ok, found some of the actual references. Hope this inspires more investigation into this great and highly under appreciated language.

Aaahhhh!!!! My K5 subscription expired. Now I can't spell anymore.
[ Parent ]
OO COBOL... (none / 0) (#42)
by Danse on Sun Jul 14, 2002 at 04:30:08 PM EST

It would have to be so much different from COBOL in order for me to like it, that it probably wouldn't have COBOL in the name. I'd heard of OO COBOL, but after going through the trauma of learning COBOL, I couldn't bring myself to investigate it. I like the languages I use for web development. PHP is very nice. ASP works well too. If I had to go back to COBOL syntax, I'd probably go insane.

An honest debate between Bush and Kerry
[ Parent ]
Conclusion misses larger picture (4.63 / 11) (#37)
by tmoertel on Sun Jul 14, 2002 at 02:10:42 PM EST

I don't think that a golden rule like, "Favor object composition when designing classes," does justice to the reality that choosing whether to use inheritance to solve a particular problem depends in large part on the circumstances surrounding the problem. In some circumstances, by no means inconsiderable, inheritance results in lower costs and complexity. In particular, this rule ignores the difference between inheriting for interface purposes and inheriting for implementation purposes. The former kind of inheritance is usually beneficial, and I would hate to see the application that didn't take advantage of it; the second kind can be beneficial in circumstances where a fixed behavior is part of a class's contract with the outside world.
Inheritance in OOP [is] primarily a mechanism for reusing source code as opposed to a mechanism for reusing binary objects.
No, it isn't. Inheritance is primarily a mechanism for expressing "is-a" relationships. Programmers who use it for something else, such as a cheap way of borrowing source code from other classes, are usually fundamentally confused. Claiming that this fundamentally confused practice is the primary way of using inheritance seems contrary to fact and a poor argument against inheritance.
The Problem: Virtual Method Calls In Constructor
How does this problem support your argument against inheritance? It seems more like an argument against the way Java calls methods from constructors. Because Java does a dumb thing in the face of certain uses of inheritance doesn't mean inheritance should be avoided in general. Inheritance ought to be used when it makes sense to do so, and such times do exist.

My blog | LectroTest

[ Disagree? Reply. ]

how is C++ way good? (4.50 / 2) (#40)
by startled on Sun Jul 14, 2002 at 03:58:48 PM EST

"How does this problem support your argument against inheritance? It seems more like an argument against the way Java calls methods from constructors. Because Java does a dumb thing in the face of certain uses of inheritance doesn't mean inheritance should be avoided in general."

I agree that Java's way of handling it can lead to some serious problems, but the C++ special case of making these calls non-virtual for the purposes of the constructor also seems poor. Either way, the behavior should usually only cause problems when you're incorrectly using constructors to perform actions beyond their proper scope, but it's still a "gotcha" in both languages.

I don't know most other OO languages-- is there one that handles this problem better? I'm not sure how you'd restrict constructor actions to things that don't reference possibly uninitialized member variables (Java problem), but that's probably because I'm thinking only in Java and C++ mindsets.
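One defensive pattern that sidesteps the gotcha in both languages (a sketch; the names are invented and nothing here is mandated by either language) is to keep constructors free of virtual calls and run the overridable setup in a separate step after construction completes:

```java
class Widget {
    boolean configured = false;

    protected Widget() {
        // deliberately no virtual calls here
    }

    void init() {
        configure();   // safe: the object is fully constructed by now
    }

    protected void configure() {   // subclasses may override freely
        configured = true;
    }

    // A factory ties the two steps together so callers can't forget init():
    static Widget create() {
        Widget w = new Widget();
        w.init();
        return w;
    }
}
```

By the time configure() runs, every field initializer and constructor in the hierarchy has finished, so an override sees a fully initialized object.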

[ Parent ]
Because it's sane (4.60 / 5) (#51)
by tmoertel on Sun Jul 14, 2002 at 09:14:25 PM EST

... the C++ special case of making [method calls] non-virtual for the purposes of the constructor also seems poor.
The thing to realize is that this behavior isn't bizarre but utterly rational. It's not a "special case" because it's the only choice that makes sense.

If you design an object system where programmers are allowed to define the internal representations of objects, you must provide a means to let programmers initialize their objects. If you further allow programmers to derive classes of objects from others, you must make some decisions regarding the ordering of object initialization. Do you initialize the base portions first and then the derived portions? Or do you use some other ordering, perhaps most-derived first or parallel initialization -- or even go so far as to specify an indeterminate ordering? I'll argue that the former makes the most sense, leads to the simplest semantics, and most importantly agrees with most people's intuition.

Given, then, that you decide on the former, does it make sense to allow virtual method calls during object construction? Nope. To allow it would be to allow interaction with portions of objects whose internal representations are undefined. That's just goofy.

Now, as the object system's designer, you could declare that the internal representations of derived portions won't be undefined because, by your decree, all members will be pre-initialized to "safe" nulls, zeroes, and so on. Then, somebody might argue, you could allow for virtual method calls during object construction and such an arrangement would be sane.

But that would only be faking sanity. Even though the constituent members would be set to some defined initial state, that state wouldn't necessarily correspond to a sensible initial internal representation. You could, as the designer, attempt to close this gap by placing upon programmers the burden of requiring all of their objects to be designed such that their initial internal representations would be satisfied by an all-null, all-zero, etc. member state. But then you would have removed from programmers the freedom to choose whatever internal representations make the most sense for the problems they are trying to solve. You would have placed part of your burden as an object-system designer on your programmers, causing more harm than good.

Thus I contend that C++'s behavior is sensible. People who criticize its behavior in this regard would do well to consider the other options. They are worse.
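The hazard being argued about here is easy to see concretely in Java, which (unlike C++) does dispatch virtually during construction. A minimal sketch -- all class and field names are mine, purely illustrative:

```java
// Java dispatches virtually during construction, so the subclass override
// runs before the subclass's own fields are initialized.
class Base {
    Base() {
        describe(); // virtual dispatch, even inside the constructor
    }
    void describe() { }
}

class Derived extends Base {
    static String seenDuringCtor = "unset";
    String label = "ready"; // assigned only after Base's constructor returns

    @Override
    void describe() {
        seenDuringCtor = label; // label is still null at this point
    }
}

public class CtorDemo {
    public static void main(String[] args) {
        Derived d = new Derived();
        System.out.println(Derived.seenDuringCtor); // null
        System.out.println(d.label);                // ready
    }
}
```

The override observes `label` before Java's field initializers have run, which is exactly the "interaction with undefined internal representations" described above.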

My blog | LectroTest

[ Disagree? Reply. ]

[ Parent ]
It's a good compromise (none / 0) (#77)
by Carnage4Life on Mon Jul 15, 2002 at 12:58:16 PM EST

I consider the C++ choice the preferable one to have made, but it is still a special case that creates inconsistency. Of course the alternative, the Java model, favors consistency at the cost of increasing the likelihood of pitfalls occurring in application programming.

[ Parent ]
Reply misses larger picture (none / 0) (#113)
by Robb on Fri Jul 26, 2002 at 11:26:49 AM EST

    Inheritance in OOP [is] primarily a mechanism for reusing source code as opposed to a mechanism for reusing binary objects.
No, it isn't. Inheritance is primarily a mechanism for expressing "is-a" relationships. Programmers who use it for something else, such as a cheap way of borrowing source code from other classes, are usually fundamentally confused.

Inheritance should primarily be a mechanism for expressing "is-a" relationships, but in my experience it is frequently misused.

Most programmers focus on implementing something that works, and in general they do this in the quickest way possible, sparing little thought in the process for what other options they might have and what tradeoffs they are making. In other words, the majority of code I have seen produced in industry is simply "written" rather than "engineered".

Most code never gets reviewed by really good programmers. Consequently, the original programmers rarely have their assumptions questioned and/or their poor judgement corrected and go on making the same "mistakes" with inheritance over and over.

[ Parent ]

OOP scoop site? (4.00 / 5) (#41)
by kaltan on Sun Jul 14, 2002 at 04:07:40 PM EST

I miss technical articles on K5; I'd really like to have more submissions like this one. Is there a K5-like site which addresses more such topics?

Set the Wayback Machine, Mr. Peabody (5.00 / 6) (#47)
by ucblockhead on Sun Jul 14, 2002 at 07:07:19 PM EST

That site would be Kuro5hin.org before the political hacks took it over last September.
This is k5. We're all tools - duxup
[ Parent ]
LTU (4.66 / 3) (#54)
by mlinksva on Mon Jul 15, 2002 at 12:38:57 AM EST

It isn't a scoop site, but you may enjoy Lambda the Ultimate.
imagoodbitizen adobe unisys badcitizens
[ Parent ]
This article is not about OOP... (4.81 / 11) (#44)
by trixx on Sun Jul 14, 2002 at 05:20:38 PM EST

This simply looks like an article about shortcomings of the C++ and Java approaches to OOP, or even in some cases, shortcomings of specific implementations of those languages.

I think a well designed OOP language like Eiffel handles your problems quite well, given a good implementation. If you are more interested in the Eiffel approach to OOP, read "Object Oriented Software Construction, 2nd ed." by Bertrand Meyer, Prentice Hall. It discusses some of these issues and expands on the analysis I'll do below:

1. About large inheritance hierarchy.

There's nothing inherently wrong with large inheritance structures if you are solving a problem with a lot of similar, but not completely identical, kinds of objects. A well designed hierarchy can be resistant to changes in the base classes. Meyer's "Design by Contract" should help a lot in achieving that. The basic idea is to be explicit and clear about the interactions and interfaces between classes and between the components of a class. That way, you can change anything you want as long as you don't touch the interface. And when you do touch the interface, you know exactly what was broken.

In any kind of programming, changing interfaces breaks things, so this is nothing new. Besides, inheritance is a fairly clean form of interface that helps with this problem.

The "performance" and "memory requirements" problems are not only language specific, but also implementation specific. There are good OO language implementations where you can have huge inheritance hierarchies without having performance drawbacks.

2. About fragility of superclasses

I don't understand how this differs from what you said in (1), so the above explanation (about keeping explicit interfaces and changing only implementations) applies. As for compilation order, that's again an implementation issue. The way SmallEiffel compiles may surprise you: it compiles almost the entire system, but the language design allows it to do that extremely fast.

3. About breaking encapsulation

Well-done inheritance usually involves inheritance of interface, and sometimes small changes of implementation, so you don't have to know the class internals. You can work with a class made by someone else using only a terse interface description (and, with DBC, some predicates describing interactions between components of the class). If you need to play too much with the internal class representation, you are either misusing inheritance or inheriting from a base class with a very bad design (i.e., one not exporting an abstract way to access it).

my understanding (3.00 / 1) (#52)
by speek on Sun Jul 14, 2002 at 10:26:23 PM EST

I always thought it was generally poor design to call methods of the class from its constructor because of the problems it can cause for those trying to inherit from the class.

Of course, that doesn't stop me from doing it :-)

what would be cool, is if there was like a bat signal for tombuck -

Nonsense (4.60 / 5) (#53)
by MSBob on Sun Jul 14, 2002 at 10:38:22 PM EST

You pointed out a single peculiarity of two implementations of inheritance and concluded that inheritance is evil because you get inconsistent behaviour across different programming languages.

Inheritance is not used for code reuse. It is used to express is-a relationships. Consider this example:

There is a base class Manager in a company's payroll system, which has fields such as name (probably itself inherited, btw) and others such as division and salary. There are ProjectManagers in the company, ProductManagers and a bunch of LineManagers. What is common, though, is the way their bonus is evaluated, based on criteria like seniority level, etc. Now if we have our hierarchy right we can have one function such as

static double Accounting::evaluateBonus(const Manager* mgr);

which can act polymorphically to accept any kind of manager as input and evaluate the bonus to award. If you were to design our Manager class with a has-a relationship you would have to equip it with a vector of void* or something equally ugly to design a class that can be 'extended' to accommodate new types of managers at our fictional company.
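The same sketch translates directly to Java. In this rendering, the seniority field and the bonus formula are my own illustrative assumptions, not part of the original example:

```java
// A sketch of the Manager "is-a" hierarchy described above.
abstract class Manager {
    final String name;      // likely itself inherited in a real system
    final int seniority;

    Manager(String name, int seniority) {
        this.name = name;
        this.seniority = seniority;
    }
}

class ProjectManager extends Manager {
    ProjectManager(String name, int seniority) { super(name, seniority); }
}

class LineManager extends Manager {
    LineManager(String name, int seniority) { super(name, seniority); }
}

class Accounting {
    // Polymorphic: accepts any kind of manager, as in the C++ signature above.
    static double evaluateBonus(Manager mgr) {
        return 1000.0 * mgr.seniority; // made-up formula for illustration
    }
}

public class BonusDemo {
    public static void main(String[] args) {
        System.out.println(Accounting.evaluateBonus(new ProjectManager("Ann", 3))); // 3000.0
        System.out.println(Accounting.evaluateBonus(new LineManager("Bob", 1)));    // 1000.0
    }
}
```

One `evaluateBonus` serves every present and future kind of manager, with no void* vector in sight.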

There are cases where inheritance makes perfect sense and there are cases where composition makes more sense. The difference between a brilliant design and a mediocre one is in how good a job the architects did of assessing when to use one and not the other.

I don't mind paying taxes, they buy me civilization.

Inheritance is not polymorphism (none / 0) (#80)
by Carnage4Life on Mon Jul 15, 2002 at 02:19:36 PM EST

Your entire argument is based on the false premise that inheritance == polymorphism. However, COM interfaces, C++ templates and Java interfaces have shown that polymorphism can be obtained without implementation inheritance.

Your argument makes sense if you can argue the pros and cons of your C++ Manager class being a pure abstract class versus being a regular class. The point of my article is that a regular class that is inherited from has pitfalls that are avoided by using object composition, although I now realize I should also have talked specifically about polymorphism separately.
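A minimal sketch of polymorphism without implementation inheritance, in Java -- all names here are mine, purely illustrative. The interface supplies the polymorphism; any shared behavior is reused through composition:

```java
// Polymorphism via an interface; no implementation inheritance anywhere.
interface Drawable {
    String draw();
}

// Shared behavior reused by composition ("has-a"), not by subclassing.
class Pen {
    String stroke(String shape) { return "drawing " + shape; }
}

class Circle implements Drawable {
    private final Pen pen = new Pen(); // composed helper
    public String draw() { return pen.stroke("circle"); }
}

class Square implements Drawable {
    private final Pen pen = new Pen();
    public String draw() { return pen.stroke("square"); }
}

public class InterfaceDemo {
    public static void main(String[] args) {
        // The array is polymorphic even though no implementation is inherited.
        Drawable[] shapes = { new Circle(), new Square() };
        for (Drawable s : shapes) {
            System.out.println(s.draw());
        }
    }
}
```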

Thanks for helping me see this.

[ Parent ]
more articles like this, please. (2.33 / 3) (#55)
by SocratesGhost on Mon Jul 15, 2002 at 12:42:15 AM EST

thank you, Carnage. Good write up.

I drank what?

This sort of thing confuses me (4.54 / 11) (#56)
by ennui on Mon Jul 15, 2002 at 01:39:41 AM EST

Okay, we have two snippets of code that demonstrate that poorly-designed classes behave badly, in different ways, in two languages. I can't speak to C++ because it's not my bag, but in general in Java you shouldn't do anything in the constructor you don't need to do, and then you don't paint yourself into a corner with what you're describing with the C++-ism of "virtual calls."

The three disadvantages you cite from these "experienced practitioners of object oriented programming" (who?) are only disadvantages if you make them so. So, point by point:

Large Inheritance Hierarchy: This is not a bad thing, and it exists for almost everything that makes life easier for Java programmers (JFrame is a good example). If you screwed up in the first place, yes, you might have to change subclasses of whatever you screwed up. However, if you did things right in the first place, subclasses will often inherit improvements you make to superclasses. The memory requirement is not significantly greater than making One Big Class as you seem to advocate (remember the alternative involves a private instance of whatever class you would have been inheriting!) as the end result is One Big Object, but with a logical hierarchy behind it.

Fragile Superclasses: You're pretty much restating the previous item, so I'll restate that if you didn't screw it up in the first place you'd have a more robust superclass. You change the superclass in a positive way and the positive change is inherited by all the subclasses, instead of having to apply the change to all of your potentially One Big Classes that could have benefited from inheritance.

Breaks Encapsulation: I'm tempted to call this a big, fat, ugly, damnable lie. I've subclassed various collection classes, swing components, and classes in java.sql and javax.sql, and I'm sure as hell not the author of any of those classes, and never once have I had to look at the source code of any of the above. If anything, the way inheritance is implemented (at least in Java) enforces encapsulation. If a class exists, properly documented, that somebody else wrote, and I create a subclass that doesn't violate the contract(s) of the superclass yet doesn't work as advertised, I'm trying to extend a broken class. That doesn't mean inheritance is bad; it means it's of little value when you're starting out with junk.

Wrapping up with a suggestion to "favor object composition" when designing classes doesn't make sense and really isn't possible even if you wanted to. It's like saying "when trying to drive somewhere, favor right turns." To go back to Intro to OO Design, what you're saying is to "favor 'has a' solutions to problems, instead of 'is a' because it's easier to fix poorly designed classes." Things don't work that way, unfortunately. Inheritance (and encapsulation and polymorphism for that matter) gives you plenty of rope to hang yourself, but it does exactly what it's supposed to, that is, lets you reuse classes and not reinvent the wheel.

"You can get a lot more done with a kind word and a gun, than with a kind word alone." -- Al Capone

You've swung the other way now... (5.00 / 1) (#67)
by jmzero on Mon Jul 15, 2002 at 10:22:46 AM EST

Yes, if you do everything perfectly, you can avoid all the problems of inheritance.  I agree that the article was too hard on inheritance, but he makes a point I wish many programmers around me would get - sometimes it's OK to compose objects.  Or - get this - to just cut and paste the code you need out of an object into your object.

Designing objects well - as you suggest - takes time.  If you know you're going to be reusing an object for inheritance for time eternal, it's probably worth it to design it well for this use.  

If you're just trying to do a quick re-use of that component that makes that one script, perhaps it's best just to create it and call it once. Then you put a little note on the methods you call like "Hey, if you change this, make sure the client foobar still works - check with Dave".

As with everything, sometimes inheritance makes sense and sometimes it doesn't.
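When composition does make sense, it can be as lightweight as a forwarding wrapper. A sketch -- the class and its counting behavior are my own illustration, not from the thread:

```java
import java.util.ArrayList;
import java.util.List;

// A composition-based wrapper that forwards to a List instead of
// subclassing ArrayList. It adds behavior (counting insertions)
// without depending on ArrayList's internals.
class CountingList<E> {
    private final List<E> inner = new ArrayList<>(); // has-a, not is-a
    private int addCount = 0;

    public void add(E e) {
        addCount++;
        inner.add(e);
    }
    public int size() { return inner.size(); }
    public int getAddCount() { return addCount; }
}

public class CompositionDemo {
    public static void main(String[] args) {
        CountingList<String> log = new CountingList<>();
        log.add("first");
        log.add("second");
        System.out.println(log.getAddCount()); // 2
    }
}
```

Because the wrapper exposes only its own interface, later changes to the backing list class can't silently leak through to callers.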

"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

C&P is a horrible practice (5.00 / 1) (#74)
by ennui on Mon Jul 15, 2002 at 11:39:08 AM EST

Any project that requires maintenance, which is anything you're serious about, cannot allow C&P programming, with maybe the exception of an interface implementation or two, but there are better ways to pull that off. I'm amazed at how many Java shops are so committed to a "whatever works" model, so willing to throw away code, and so willing to violate OO principles on the whim of individual programmers.

Java and other OO languages are almost to the point where, if you know what you're doing, the code almost writes itself. The last Java course I took was very much that way. For every exercise, even the complex ones, everybody produced classes in one of two categories: source that was nearly identical to "the solution," and code that didn't work well or at all.

"You can get a lot more done with a kind word and a gun, than with a kind word alone." -- Al Capone
[ Parent ]

I'm curious... (4.50 / 2) (#75)
by jmzero on Mon Jul 15, 2002 at 12:24:23 PM EST

Do you have a job as a programmer? If so, how long have you been programming? Do you have deadlines?

Lots of times, I'll reuse code the C&P way. When I start a new project, I often find myself cutting and pasting in large chunks of another project. Why don't I make objects to handle this sort of thing? Because:

1. The code ends up being quite different for each system. Each system is simple and often follows a fairly similar skeleton, but if I had to word much of the logic to work for all systems, I'd be up the creek real soon.

2. It is absolutely essential that a change to one system has absolutely no effect on another. Clients do not want "improvements" from other systems to suddenly appear unwarned. I don't want to call 100 people (or even test 100 systems) every time I tweak something - and each system changes fairly regularly.

3. C&P works absolutely f'ing great.

Could I do all this in a more OO manner? Of course. But replication through C&P has a couple of advantages:

1. Faster. Yes, it really does make a difference for us. And yes, I suppose if I did it some magical way, it would still run as fast. But it's really easy to profile the performance of code that's just "right there".

2. Clearer. What happens when client Bob creates a receipt line? Does it A: call this object with these settings, or B: run these 5 lines of code that are in the same module? Which one of these is really easy to debug or tweak if you have never seen any of this code before (and you have to be done by noon)?

I use objects all the time when they make sense and reusing them that way will save time. Lots of times I'll use composition instead of inheritance. Sometimes I just cut some code in, and never think about it again.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]
And sometimes (none / 0) (#76)
by jmzero on Mon Jul 15, 2002 at 12:25:49 PM EST

I'm sure K5 remembers that I always use auto format, and sometimes it forgets.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]
C&P (none / 0) (#83)
by Skywise on Mon Jul 15, 2002 at 07:00:22 PM EST

Cut & Paste is a time-honored technique... In fact, I remember some study (IEEE Software maybe?) that looked at popular forms of code "reuse" and discovered that the majority of reuse was C&P.

The problem with C&P isn't with a few lines here and there; it's where you're taking huge swaths of functionality and duplicating them for your routines.
That adds to the code size and, more importantly, it duplicates errors that may be fixed in the original module but not in yours.

C&P is still viable, however, when you need to make quick and dirty forks in the decision logic, and when you need to add quick and dirty feature additions. The problem is that, if you do this enough, you'll end up with an unstable code base and will eventually have to clean everything up back into a centralized system (refactoring, as it were).

[ Parent ]

C&P (none / 0) (#93)
by jmzero on Tue Jul 16, 2002 at 10:30:09 AM EST

I seldom cut and paste code within the same project.  Like you say, it better be real short bits if you're doing that.  In the case of one project, usually a change to one instance of code will mean you'll want to change similar instances.

In the case of more than one project - which is the real low-hanging fruit of code reuse - C&P is your best friend precisely because you want separation.  Of course you'll also reuse other code in shared objects - but you'll do so based on a decision about what's going to work better, and not on a desire to be OO.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Yeah but... (none / 0) (#94)
by Skywise on Tue Jul 16, 2002 at 11:18:49 AM EST

If you're going to reuse all that code... say for instance, Windows boilerplate startup code...wouldn't it be better to encapsulate that all into a class rather than just grabbing the 100+ lines to setup a basic Windows engine?

I'm not saying OOP is ALWAYS the right thing to do... Sometimes it's far easier to just yank a class and go than architect a solution, and you have the benefit of having a completely separate code tree when doing shared development...

But my point is on that latter benefit.  Assuming the interface between the classes remains the same, you'll lose any benefits of a shared code library (fewer bugs, optimizations, experienced behavior, etc).

[ Parent ]

Fair enough (none / 0) (#95)
by jmzero on Tue Jul 16, 2002 at 11:33:53 AM EST

I think we agree that there's tradeoffs in any plan and that you have to choose the right plan for the job.  
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]
be realistic (5.00 / 2) (#71)
by TheLogician on Mon Jul 15, 2002 at 10:49:57 AM EST

If you screwed up in the first place, yes, you might have to change subclasses of whatever you screwed up. However, if you did things right in the first place, subclasses will often inherit improvements you make to superclasses
Software design is a continuous process. There is no such thing as "doing it right the first time." Requirements change and programmers make mistakes. Further, many errors occur due to confusion in communication between design teams and coders. Encapsulation was invented to ease the problem of changing code and is one of the best features of OO design, contrary to the "make it right the first time" motto. So be realistic. "If you did things right in the first place" is an asinine comment and hardly a rebuttal worth reading.

[ Parent ]
It's "asinine" if your head's not straight (none / 0) (#73)
by ennui on Mon Jul 15, 2002 at 11:29:33 AM EST

"Do it right the first time" means "don't do anything that will come back to haunt you," not "get it perfect your first go-round." Changing requirements and poor communication are totally outside the scope of doing things right the first time. Programmers making mistakes is within the scope, but a professional programmer should only be making "oops" mistakes, not cranking out poorly-designed classes.

"You can get a lot more done with a kind word and a gun, than with a kind word alone." -- Al Capone
[ Parent ]
Lessons on Inheritance from .Net Framework (3.00 / 4) (#57)
by keenan on Mon Jul 15, 2002 at 01:44:29 AM EST

Just because certain languages have deficiencies in their implementation of certain OOP concepts doesn't mean that those concepts are flawed and should be avoided. Take the .Net framework, for example: I have been working with it for over a year now and I have to say that it is probably the best designed [large] hierarchy of classes I have seen, all inheriting from a base Object class and following general rules for inheritance.

As an arbitrary example, take the System.Web.UI.WebControls namespace. All the web components defined here derive from the WebControl class, so each has Height, Width, CssClass and a myriad of other properties, ensuring a consistent interface for the properties and methods of every web control, thereby lessening confusion, shortening the time needed to learn the API, and facilitating its ease of use, extensibility and maintainability. Please tell me how this could be done better without the use of a large hierarchy.


deeper problems (5.00 / 1) (#114)
by Robb on Fri Jul 26, 2002 at 11:50:10 AM EST

Just because certain languages have deficiencies in the implementation of certain OOP concepts doesn't mean that those concepts are flawed and should be avoided.

I think the problem is actually much deeper.

The concepts implemented in one language are almost always subtly different from those found in other languages. Consequently, the best use of these concepts is also subtly different. So, even though everyone says "class" and "polymorphism", they are not really talking about precisely the same concepts.

Too many programmers learn the concepts from one language and then believe that they are universally applicable to other languages. If some language is found to violate this belief then the language in question is "broken".

[ Parent ]

implementation inheritance is secondary (4.50 / 4) (#58)
by klash on Mon Jul 15, 2002 at 02:12:53 AM EST

Add me to the list of people who don't think code reuse is the primary use of inheritance. I think the most compelling use of inheritance is the ability to have a list of things and not care exactly what they are, only that they are related and that you can manipulate as if they were all the same type.

It may just so happen that a small part of the interface will have a common implementation across all the types you intend to create. In this case some implementation inheritance is useful, but it is only secondary.

Are you referring to interface inheritance? (3.50 / 2) (#60)
by losang on Mon Jul 15, 2002 at 02:16:09 AM EST

by this...

I think the most compelling use of inheritance is the ability to have a list of things and not care exactly what they are, only that they are related and that you can manipulate as if they were all the same type.

[ Parent ]

yep (4.00 / 1) (#61)
by klash on Mon Jul 15, 2002 at 03:03:26 AM EST

The only reason I didn't mention interface inheritance by name is that one could make the association with a Java "interface" which disallows any implementation to be inherited along with it. I wasn't referring to the idea of disallowing implementation inheritance, but rather the "spirit" of using the inheritance tree primarily for the ability to manipulate similar objects identically.

[ Parent ]
Who decided....... (none / 0) (#119)
by ThreadSafe on Mon Aug 05, 2002 at 04:52:58 AM EST

what the primary use of inheritance is? The fact is that it suits itself well to both code reuse and polymorphism. Whichever way you use it "primarily" is up to you.

[ Parent ]
"IS-A" not the only criterion (4.00 / 2) (#59)
by losang on Mon Jul 15, 2002 at 02:13:53 AM EST

While the "IS-A" method is one way to determine the usefulness of using inheritance, it is not the only one.

The ability to provide an abstract interface for objects is really where inheritance comes into play. A good point was made here on the differences between interface inheritance and implementation inheritance.

The distinction is more semantic than anything else. That is, interface inheritance and implementation inheritance are only differentiated based on a decision of the programmer. The distinction lies in whether the base class is abstract or not.

It is not? (none / 0) (#66)
by bodrius on Mon Jul 15, 2002 at 10:03:23 AM EST

I'm having problems understanding the difference between an "IS-A" relationship and what you describe.

How is abstract inheritance not an "IS-A" relationship?

Aren't abstract concepts the meat of "IS-A" relationships, both in programming and in the real world?
Freedom is the freedom to say 2+2=4, everything else follows...
[ Parent ]

They are the same... (none / 0) (#69)
by losang on Mon Jul 15, 2002 at 10:41:10 AM EST

How is abstract inheritance not an "IS-A" relationship?

I cannot think of an example where a sub-class does not have an IS-A relationship with its parent. The point I was making is that, from the programmer's point of view, the motivation to use inheritance may be based more on the desire to provide an abstract interface than on simply defining an IS-A relationship.

For example, you could define an IS-A relationship between a base and sub classes without using polymorphism. But, if you use polymorphism you most likely have defined an IS-A relationship implicitly.

IS-A does not imply polymorphism but polymorphism implies IS-A.

[ Parent ]

Pretty interesting (3.00 / 3) (#62)
by ariux on Mon Jul 15, 2002 at 04:21:02 AM EST

Fails only in that it concludes by characterizing a specific technique as a panacea.

It's easy to ignore that part, though, and the rest of it is interesting and meaty. +1 Section

Humbug! (4.36 / 11) (#63)
by mahoney on Mon Jul 15, 2002 at 06:33:28 AM EST

I don't want to be rude (well, maybe a little) but this article is in error.

- Large Inheritance Hierarchy

The author assumes that any deep inheritance hierarchy is automatically flawed, but the only facts presented for such an assumption are rooted in poor design and language-specific issues. To be clear, the critique is only valid if you use a language with compile- or link-time-bound inheritance and method dispatch AND your inheritance hierarchy is poorly designed. The example of JFrame is confusing; the author states that the JFrame class is deep in the inheritance tree but fails to point out what the problem is with such a design in JFrame's particular case.

- Fragile Superclasses

This again is a language-specific issue and does not reflect on inheritance as a methodology but on C++ (and to a lesser extent Java). Many object oriented languages do not have this issue with changes in the super class propagating down the inheritance tree.

- Breaks Encapsulation

No, it does not. I believe the author is confused as to the meaning and intent of encapsulation in object oriented programming. The writer also seems to lack some fundamental knowledge of how a C++ object file is linked into a binary. Given a header file and a binary library, it's perfectly valid both to inherit classes and to override method implementations in C++ without ever seeing the implementation.

If this article contained fewer language-specific issues it could be considered a critique of the inheritance paradigm in object oriented programming. Unfortunately it reads more like a novice programmer trying to blame bad design and poor skills on the tools instead of the user.

Large hierarchy (5.00 / 2) (#68)
by bodrius on Mon Jul 15, 2002 at 10:39:05 AM EST

I think the author might have a valid point that is invalidated by poor exposition and a poor choice of examples.

Large hierarchies per se are not necessarily bad, but they are indications of danger. Since they represent more complexity, large hierarchies are more likely to go wrong and the programmer would be wise to be careful in their presence, perhaps try to avoid them as much as possible.

In that sense, it would not be unlike large procedures, or large classes with lots of functionalities. Maybe they are the best, most elegant and clear solutions to the problem, but it is very likely they are not and breaking them up in a particular way will improve the design. In the case of inheritance, it may be time to reanalyze class relationships.

Basically, large hierarchies are a good thing when they are actually needed: when lots of functionality requires a myriad of objects which happen to be related, most of which would be very convenient to reuse.

Frameworks would be the quintessential example. An OO framework that does not have a large hierarchy is likely to either provide very little functionality, or to use an inflexible, coarse-grained object model.

That's why JFrame is such a bad example. Not only does it belong to a framework, which is bound to have a large hierarchy, but said hierarchy is not particularly confusing, at least in my experience and that of anyone I know. Many a curse has been thrown in my presence (or by myself) at the Swing/AWT framework and its complexity, but I don't know anyone who had "issues" with its hierarchy's size.
Freedom is the freedom to say 2+2=4, everything else follows...
[ Parent ]

Smackdown time (3.75 / 4) (#78)
by trhurler on Mon Jul 15, 2002 at 01:04:30 PM EST

First, large inheritance hierarchies are inherently bad, not because of any particulars of one situation or another, but because modifying them to do things not originally intended is generally very difficult unless the things not originally intended happen to be logically possible by further inheritance. Real world programs don't always work that way.

Second, the fragile superclass problem is NOT a language-specific issue. It is tied to the semantics of inheritance, true, but in a general enough way that it catches all the languages I've ever looked at. Basically, the problem is that subclasses are free to do anything they want, and the superclass is now and forever obliged to support that the same way it originally did. Why? Well, here in the "Real World[tm]," where we don't write programs as one-man teams or semester projects to be thrown away afterward, we have these things called "companies" that write software. They have "teams," and the teams have "members," and the members, the teams, and even the companies change over time. The software is often "libraries," and the "libraries" often ship to "customers" who will then use inheritance to get what they want in THEIR "programs." No matter what language you're writing in, if the semantics of the superclasses change, you just broke software belonging to other members of your team, other teams, possibly other companies, AND your customers. Regardless of whether this is the right thing to do in academic dogma, it is the WRONG thing to do when millions of dollars (or more) are on the line. Interface inheritance can be put to good use. Implementation inheritance is very, very iffy, and should always be carefully thought out and limited in scope.

Third, it is not the author, but rather YOU, who is confused as to the meaning and intent of encapsulation and the issues with binaries. Yes, you can inherit from a binary. However, in order to do so(implementation inheritance,) you need to know more than just the interfaces of the superclass. You need to know what it does inside. Otherwise, you might do something wrong. This is a fundamental semantics issue with implementation inheritance. As for encapsulation, the author is dead on. The purpose of encapsulation is to hide implementation details inside a well defined unit with interfaces. Implementation inheritance often involves relying on knowledge of those very details, even if your code doesn't have to have access directly to the scope they're in.

Finally, a note about C4L, who is one of the few people I really respect who post here(but do note, I am not backing down from my "I hate you all" motto): For a "novice programmer," he's got some interesting credentials. Hired by Microsoft, for one. Have you ever interviewed there? They don't hire a lot of "novices." They don't even interview them, from what I can tell. What are YOUR credentials? Chief masturbator in an ivory tower circle jerk, most likely, or maybe "former CTO of a dot bomb"? :)

'God dammit, your posts make me hard.' --LilDebbie

[ Parent ]
smackbackdown (1.00 / 1) (#108)
by NFW on Thu Jul 18, 2002 at 05:07:09 PM EST

If large inheritance hierarchies are bad, then so are large composition hierarchies. You still have all the same dependencies, just expressed differently in the source code.

Maintainability has not changed. The fragility problem remains, only now it's because you have classes with functionality they depend on via composition, rather than via derivation.

1. Large Composition Hierarchy: Overuse of composition can lead to composition hierarchies that are several levels deep. Such large composition hierarchies typically become difficult to manage and maintain due to the fact that the enclosing classes are vulnerable to changes made in any of the contained classes which often leads to fragility.

2. Fragile Uberclasses: Classes that have been added as members to other classes cannot be altered at will in subsequent versions because this may negatively impact the classes that use them.

3. Breaks Encapsulation: Maybe it does, maybe it doesn't. That goes for both inheritance and composition. Reuse via composition and reuse via inheritance both require that the 'user' (the programmer using the existing class, not the end-user) have access to the declaration of the class being reused. Whether the reuse of the class definition (method implementations) is at the source level or the binary level is a completely separate question.

As for method invocation in constructors - that's just something you need to know. Like making destructors virtual to get base class cleanup done right. Like operator precedence. Like when to use * and when to use &. You make mistakes once or twice when you're new at the language, you learn, you move on.
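The constructor gotcha mentioned above is easy to demonstrate. In Java, a method call in a base-class constructor dispatches to the subclass override before the subclass's fields are initialized (C++ would instead run the base version during base construction). A minimal sketch with hypothetical names:

```java
class Base {
    Base() {
        describe();   // virtual dispatch: runs the subclass override
    }
    void describe() { /* base version */ }
}

class Derived extends Base {
    private String name = "derived";   // assigned only after Base() returns
    String observed;

    @Override void describe() {
        observed = "name = " + name;   // sees the default value: null
    }
}

public class CtorDispatchDemo {
    public static void main(String[] args) {
        Derived d = new Derived();
        System.out.println(d.observed);   // prints "name = null"
    }
}
```

As the parent post says, this is just something you need to know: the field initializer for name has not run yet when Base() calls describe().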

It's apples and oranges, really. There are different problems to be solved, and these are just different techniques for solving them.

  • If you want to use the functionality of an existing class, but you don't want to add it to your public interface, composition is the right tool for the job.
  • If you want the functionality of an existing class, and you want client code to see the associated interface, inheritance is the right tool for the job.
  • If you want client code to see a common interface, but you don't want any pre-existing implementation to go with it, inherit from an abstract base class (like Java's interfaces).

If you want the implementation, and the interface, go ahead and use containment anyway. It's the wrong tool for the job, but at least you'll look busy while you're writing that delegation code to forward calls to your interface down into the implementation in the contained object.
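The first and third bullets above can be sketched briefly in Java (all names here are hypothetical): composition keeps the reused class out of the public interface, while an abstract interface shares a contract with no inherited implementation:

```java
import java.util.*;

// Bullet 1: composition -- reuse Vector's storage without exposing it.
// Callers see only push/pop/empty; Vector is an implementation detail.
class SimpleStack {
    private final Vector<Object> v = new Vector<>();

    public void push(Object o) { v.add(o); }

    public Object pop() { return v.remove(v.size() - 1); }

    public boolean empty() { return v.isEmpty(); }
}

// Bullet 3: a pure interface -- a common contract, no implementation.
interface Shape {
    double area();
}

// Bullet 2 in miniature: implementing the interface makes the
// contract, and nothing else, visible to client code.
class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}
```

Each tool exposes exactly what its job requires: SimpleStack's clients cannot call insertElementAt(), and Shape's clients see only area().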

Got birds?

[ Parent ]

_Holy Crap_ (3.50 / 8) (#64)
by radeex on Mon Jul 15, 2002 at 08:35:08 AM EST

Please, learn some more languages. Python, Smalltalk, Lisp, _whatever_. But it's incredibly obvious that your view of programming is extremely limited (Ugh, these generalizations!) -- I was going to point out the flaws in your article, but I see that my k5 kin are doing an excellent job at that. (Good job, guys. ;-))
Oranges and Apples: Static and Dynamic Typing (2.00 / 5) (#65)
by Monag on Mon Jul 15, 2002 at 09:48:22 AM EST

C++ is a statically typed language. That means it checks type-correctness at compile-time. Java is a dynamically-typed language. That means it checks type-correctness at run-time.

You will find that this is a rather fundamental difference between the two languages. C++ is safer, Java is more gung-ho. Granted, I didn't know that construction was different in that way; thanks for pointing it out. But I don't think using that as an argument for inconsistency holds much water.

One reason Java's type hierarchies are so big is that Java does not allow multiple inheritance. IMHO, this is Java's biggest flaw. I'm no guru, but I have written a prog or two in Java. Using Java to demonstrate the flaws of OO programming is silly, as Java is half-heartedly OO.

I can see your argument, but I have found inheritance not just a good programming method but a good conceptual method.

What is a 'big hierarchy'? (4.33 / 3) (#72)
by bodrius on Mon Jul 15, 2002 at 10:52:19 AM EST

What would be considered a "larger hierarchy"?:

- An 8-level-deep hierarchy in Java
- A 3-level-deep hierarchy in C++ with abundant use of multiple inheritance

I would think the latter is more likely to have more IS-A relationships. But even if they have the same number of relationships, I would think the latter could be harder to understand, particularly when the lines start crossing each other all over the little diagram.

I understand "large hierarchy" as a level of complexity, but it's a very subjective judgement whenever we start comparing across languages.

Java hierarchies can only grow in complexity in certain directions. C++ has more directions for complexity to grow. Maybe there's some other model that allows even more directions.

How do we count these directions in terms of complexity? How do they compare to each other? Are even the same directions equally complex in both languages?
Freedom is the freedom to say 2+2=4, everything else follows...
[ Parent ]

C++ is safer than Java ... (none / 0) (#92)
by straightforward on Tue Jul 16, 2002 at 10:15:24 AM EST




java.lang.NullPointerException
        at java.util.Hashtable ...

...and the Java server is still running, logging, and notifying of errors, while the C++ server is nowhere to be found.


[ Parent ]

Inheritance in OOP as Code Reuse (3.75 / 4) (#79)
by Carnage4Life on Mon Jul 15, 2002 at 01:55:24 PM EST

A couple of people have posted responses chiding me for describing inheritance as a mechanism for code reuse. This is interesting to me considering that in practice I have seen many examples of inheritance where the only reason it exists is not polymorphism, as most people who have posted here have pointed out, but code reuse. I'm busy at work so I can't spend too much time looking, but here's an example from the Java class library from memory.


java.util.Stack is an example of inheritance for code reuse which could benefit more from object composition and be more logically consistent. The class inherits from java.util.Vector, which is a bad mistake because a Stack is not a Vector, but unfortunately this situation cannot easily be changed in later versions because of the fragile superclass problem. A change to java.util.Stack as drastic as swapping its parent class would do too much damage to classes already in general usage.

Below is an example of why java.util.Stack is such a poorly designed class: it uses inheritance for code reuse instead of to express an IS-A relationship.

import java.util.*;

public class StackTest{

    public static void vectorShow(Vector v){
        System.out.println("PRINTING VECTOR CONTENTS");
        for(int i = 0, length = v.size(); i < length; i++)
            System.out.println(v.elementAt(i));
    }

    public static void main(String[] args){
        Stack myStack = new Stack();

        //Treat as a Stack
        myStack.push("1");
        myStack.push("2");

        //Treat as a Vector
        myStack.insertElementAt("5", 1);
        vectorShow(myStack);
    }
}

In the above code the java.util.Stack class can be, and is, used as if it were a java.util.Vector, which is in conflict with the definition of a stack. Interestingly enough, a version of the java.util.Stack class which achieves code reuse via the java.util.Vector class without violating the definition of a stack as a LIFO data structure can be built using object composition.

import java.util.Vector;

public class Stack{

    private Vector v = new Vector();

    public boolean empty(){
        return (v.size() == 0);
    }

    public Object peek(){
        return v.lastElement();
    }

    public Object pop(){
        int lastindex = v.size() - 1;
        Object obj = v.get(lastindex);
        v.remove(lastindex);
        return obj;
    }

    public void push(Object obj){
        v.add(obj);
    }

    public int search(Object obj){
        int index = v.lastIndexOf(obj);

        if(index == -1)
            return index;
        return v.size() - index;
    }
}

Just as I finished writing the above example I remembered that java.rmi.server.UnicastRemoteObject is another, probably more widely known, example of inheritance as a mechanism for code reuse. All RMI server objects have to extend java.rmi.server.RemoteObject to expose themselves as accessible to remote invocation. The only reason for the existence of UnicastRemoteObject is so that people can inherit from it and reuse the remoting code, not polymorphism, which could similarly be obtained via interface implementation.

There's NOTHING wrong with the java implementation (3.00 / 1) (#82)
by Skywise on Mon Jul 15, 2002 at 06:11:32 PM EST

Idealistically, a Stack should allow push or pop's ONLY.  But that's a very, very limited use of a Stack... almost to the point of being non-usable. You realize that yourself because your "correct" design allows entire stack searching.

A Stack "is-a" Vector because, in machine language, Stacks can be manipulated in just such a matter as described by your "incorrect" usage.

In terms of processing speed, it is almost always many times faster to resize the stack (push the stack pointer out a few K) and memcpy the elements into the stack than to loop and push.  So long as the stack pointers are kept intact this is not an issue.

The Java implementation allows this.  Your design does not.

Is that bad or good?  It's a design trade-off from stopping programmers from shooting themselves in the foot, and giving the programmers the flexibility to optimize the program as they see fit.

[ Parent ]

Only if you are a poor programmer (none / 0) (#84)
by Carnage4Life on Mon Jul 15, 2002 at 08:05:30 PM EST

Idealistically, a Stack should allow push or pop's ONLY. But that's a very, very limited use of a Stack... almost to the point of being non-usable. You realize that yourself because your "correct" design allows entire stack searching.

My version allows Stack searching because that is in the Java version of the class.

Is that bad or good? It's a design trade-off from stopping programmers from shooting themselves in the foot, and giving the programmers the flexibility to optimize the program as they see fit.

A stack data structure is not a vector. For a class library's architect to make such a decision shows bad software design skills.

[ Parent ]
Enlighten me... please... (none / 0) (#86)
by Skywise on Tue Jul 16, 2002 at 12:30:19 AM EST

I realize that Java wasn't invented at Microsoft, so I can understand dissing a Sun product...  But to classify the Java library's architect as having "bad software design skills"?

Yes, a Stack is only a FIFO data structure in its purest form...  But in practice, stacks have been imaginatively used by many programmers and program language designers as a temporary memory array.  This is because, on most machines, the stack is ALWAYS there, has memory readily available for access (don't have to go through a heap call, possible OS allocation) and is thus FASTER to use for scratch memory...

Look at the assembly output of any C program when you pass a char array of 255 bytes around.  (Yeah, it's not practical to do that, but it's SYNTACTICALLY LEGAL).  The assembly code will NOT call push 255 times.  It'll set the stack pointer out to current position + 255, and then MEMCPY the  array to the stack.  On the receiving end of the function, the data is MEMCPY'd back out and the stack pointer is set to current position - 255.  Pop is NOT called, 255 times...

That's because 2 pointer assignments plus 2 memcpy's is usually faster than 512 pushes and pops.

So the stack can be, and has been used, like a quick array... and what kind of data structure best represents an array?  A VECTOR.

That kind of flexibility is why the Java stack was designed that way.  Because that's how stacks have been USED for over the last 20 years!

But, if you can show me where I'm wrong in that assumption, please... show me.

[ Parent ]

you are on crack (3.66 / 3) (#88)
by klash on Tue Jul 16, 2002 at 01:57:09 AM EST

Where to begin? Perhaps with the assertion that since a C compiler might use the system stack to optimize passing large arrays of data around, stack objects should double as arrays. This is such a bizarre claim that I don't really know what to say about it. Maybe something about understanding the difference between a low-level optimization and a high-level abstraction?

If programmers are using java.util.Stack as a chunk of fast scratch memory since they are used to doing this with the system stack in a lower-level language:

  • they have absolutely no clue what they are doing.
  • they don't know a thing about abstraction
  • they don't know a thing about object-oriented design.
It makes absolutely no sense to coerce a stack into being a vector if what you really want is a vector. If you want a vector, you could perhaps <gasp> instantiate a vector!

[ Parent ]
>sigh< (none / 0) (#97)
by Skywise on Tue Jul 16, 2002 at 12:39:21 PM EST

It's not bizarre it's COMMON PRACTICE.

I don't understand the complete anality here... "My GOD... YOU'RE USING A STACK LIKE AN ARRAY... WHAT KIND OF MORON ARE YOU."

Once again a stack is a FIFO buffer.


FIFO (push/pop)

BUFFER (a contiguous array of bytes)

Let's hit your points backwards:

-It makes no sense to coerce a stack into being a vector.

You're not "coercing" a stack into being a vector.  The java stack, like a REAL CPU STACK, is a memory buffer that has contiguous bytes, and a pointer tracking system to the current top of stack.  A stack "is a" vector.

-they don't know a thing about OOD.
A stack "is a" vector.  "is a" relationships denote inheritance.  Stacks could therefore derive from vectors.

-they don't know a thing about abstraction.
I don't think that means what you think it means.  Abstraction is the ability to USE an object through a base interface.  So long as you pass the object around by its stack handle, abstraction is NOT affected.  Now, if you're stupid enough to blindly send a stack into a method that needs a vector, you're asking for trouble.  (Because sometimes it's prudent to preload the stack as a vector.)  I'm not denying the "shooting your foot" principle.  But I *am* saying that THAT'S what we programmers get paid for: to make those types of decisions.  If you need a language to protect you from yourself, go use Logo or something...  But this doesn't affect "Abstraction" at all.

-They have absolutely no clue what they are doing.
Yes, Virginia... they do.

[ Parent ]

you've really gone off the deep end here (5.00 / 2) (#100)
by klash on Tue Jul 16, 2002 at 02:33:54 PM EST

1. A stack is not FIFO. That is a queue. A stack is LIFO. But that is a minor nit.

2. A stack is an abstract idea. It is not inherently a buffer or a vector. A stack is an abstract data structure that allows you to push data onto it, and retrieve data in the opposite order that you pushed it. That is all.

3. This abstract idea has two natural implementations on computers as we know them. One uses an array, another uses a linked list. The main tradeoff is speed and simplicity (array) vs. flexibility (linked list). Neither a linked list nor an array is inherently a stack, but either can be used to implement the abstract idea of a stack.

4. It so happens that all modern processors have a system stack, since it is a simple and efficient way to pass parameters and allocate local variables in a way that will correctly handle recursive function calls. System stacks are implemented by using a sort of "infinitely growable array," meaning you can push the stack pointer back as far as you need.

5. Object-oriented languages encapsulate implementations of common data structures into classes, so that programmers can use whatever data structure best suits any particular problem. They provide the most efficient implementation possible for the data structure's interface. The interface is written to match the capabilities of the abstract idea of what the data structure can do.

6. Vectors, stacks, queues, priority queues, hash tables, and sets are all orthogonal data structures that are commonly implemented as objects. A stack is not a vector any more than a queue is a hash table. There is no sort of inheritance relationship between these; they are just a group of the fundamental data structures of computer science.

7. Stacks so implemented have no relation to the system stack mentioned in (4). System stacks are infinitely growable arrays implemented in hardware, stack objects are software constructs that provide a stack interface while using some other language primitive to actually provide the storage.

8. The abstraction in this case is treating these data structures as black boxes, not caring how they are implemented. The ability to use a derived object through a base interface is called polymorphism.

9. When you allocate a Stack object in Java, you are not getting the Java system stack, you are getting a dynamically allocated object on the Java heap.

10. Given this set of data structure classes, good design is using each where it is appropriate. If you want a vector, use a vector. If you want a stack, use a stack. Using a stack as a vector because you think of the system stack which is implemented in a way that resembles a vector displays a gross misunderstanding of 1-9 above.

[ Parent ]

The problem is perception... (none / 0) (#102)
by Skywise on Tue Jul 16, 2002 at 03:08:15 PM EST

LIFO point conceded, but that's what you get when you write posts at 3 am...

Data structures are symbolic representations of whatever abstract concept we want to convey.

Stacks can be vectors.  Stacks can also be lists, and they can also be hash tables (if you're masochistic enough).  I've never said that a pure Stack implementation with only push/pop methods is bad.  In fact, it's necessary if you want your Stack object implementation to be swappable between the various incarnations (list, vector, etc).

What I AM arguing is that a Stack hard-implemented as a derivation of Vector is not bad design, has real-world needs, and has rational decisions behind it.  You might be happier if it was called VectorStack.

Point 5 is well taken.  The interface is written to match the capabilities of the abstract idea of what the data structure can DO.  Well, if your concept of a Stack is a contiguous array of memory with LIFO capabilities, then the Java implementation is a fairly strong interface.  If your concept is more ideological, and Stacks can only have pushes and pops, then it isn't.

But how the code is going to be used is far more important than ideological purity.  And that cuts both ways... If lists are needed to accelerate additions to the stack, then my implementation is screwed.

[ Parent ]

A Stack is not a Vector (none / 0) (#96)
by Carnage4Life on Tue Jul 16, 2002 at 12:29:03 PM EST

Yes, a Stack is only a FIFO data structure in its purest form... But in practice, stacks have been imaginatively used by many programmers and program language designers as a temporary memory array.

If you want a temporary memory array, the Java class library provides both the thread-safe java.util.Vector and the thread-unsafe java.util.ArrayList, both of which can be used in a LIFO manner just like a stack. Implementing a stack that is the same as a vector with some shortcut methods is not only redundant but prevents people who need only a stack from using that class; they instead have to write their own.

That kind of flexibility is why the Java stack was designed that way. Because that's how stacks have been USED for over the last 20 years!

I didn't know you worked at Sun as the architect of the java.util package. No wonder you are so defensive of what is obviously a poor design. Seriously though, that class is a mistake worthy of a freshman computer science major and is probably the work of some intern, so you shouldn't take it as a smear on the Java software architects as you have.

[ Parent ]
Now THAT'S freshman design... (none / 0) (#98)
by Skywise on Tue Jul 16, 2002 at 12:45:11 PM EST

"If you want a temporary memory array, the Java class library provides both the thread-safe java.util.Vector and the thread-unsafe java.util.ArrayList, both of which can be used in a LIFO manner just like a stack. Implementing a stack that is the same as a vector with some shortcut methods is not only redundant but prevents people who need only a stack from using that class; they instead have to write their own."

Neither Vector, nor ArrayList implement FIFO handling (Push/Pop).  So you've just stated that the best way to deal with that is to...implement your own stack.

I've not worked for Sun.  It's not poor design, and all your name calling won't change that.

[ Parent ]

Are you trolling or just plain clueless? (1.00 / 1) (#99)
by Carnage4Life on Tue Jul 16, 2002 at 01:42:29 PM EST

Neither Vector, nor ArrayList implement FIFO handling (Push/Pop).

Vector and ArrayList don't have methods called push() and pop(), but only a blatant troll or a complete imbecile would say they can't be used as a stack, especially in a thread that begins with a post that shows how to wrap a stack interface over a Vector.

[ Parent ]
I could ask the same of you. (5.00 / 1) (#101)
by Skywise on Tue Jul 16, 2002 at 02:38:15 PM EST

Do you realize that you said that the stack design is poor because they built the stack (added push and pop methods) on top of a vector, and then turned around and said that ArrayList and Vector can be perfectly used as stack buffers?

And how would you do that?  Inherit the class and add the push and pop methods... ta da... the stack class you just said was poor design.

Wrapping the vector inside of a class will isolate the users from the vector/array and defeat the purpose of using the buffer in a FIFO manner, so you'll have to expose the vector interface ANYWAY.  So either you override every friggin method to point back to your internal array storage, or just inherit its functionality.

[ Parent ]

you're the one who made too broad a claim (5.00 / 1) (#85)
by klash on Tue Jul 16, 2002 at 12:13:46 AM EST

You made the general statement that object composition is better than inheritance, giving several reasons why inheritance is bad.

Then a bunch of people said "no, inheritance is good if you use it for X."

And your retort is "but look, it's bad if you use it for Y."

Fine. If you had said from the beginning, "it is bad to use inheritance for Y," far fewer people would have felt compelled to "defend" inheritance.

(X=polymorphism/interface inheritance, Y=code reuse/implementation inheritance)

One interesting point is that there is no need to use language-supported interface inheritance in dynamically typed languages like python, since a variable can always reference an object of any type. Polymorphism doesn't require language support. So does this make inheritance intrinsically worse in languages like this, since all inheritance is essentially implementation inheritance?

[ Parent ]

Missing the point (5.00 / 2) (#103)
by bodrius on Tue Jul 16, 2002 at 07:13:26 PM EST

I think you're missing the point of the criticisms.

Of course inheritance has been used many times for "code reuse". But that's a secondary advantage, and if it's the main reason to use inheritance then it is a bad idea, period.

Of course inheritance is, in practice, used for the wrong reason, in the wrong places, all over the world.

Arguing that one should favor object composition over inheritance based on that is like saying that Procedural Programming is better than OO because so many people make bad OO designs.

Or perhaps more clearly, "most programmers misuse pointers and object references" would be correct, "passing objects by value is safer" would be correct, but that does not mean "always favor passing objects by value" is a good idea. What it means is "most programmers should be more careful about their use of pointers/object refs".

Programmers all over the world are doing the wrong things, constantly, and paying for it. We're still closely coupling our classes, forgetting to clean up our references, using nested linear searches through sorted tables of sorted arrays, etc. The problem is, we're bad programmers.

The problem with the article is that you presented inheritance as a bad programming practice because it fails to do properly something that was never its main purpose, something that is essentially a "side-effect".

I agree with you that there is a common misunderstanding of when to use inheritance, but your article does not explain the correct problem (or if it does, it's not well presented), because it seems to share and perpetuate that misunderstanding.

The problem, it seems to me, is:

- Inheritance IS overrated in certain programming circles.
- Object composition IS underrated and underused in certain programming circles.
- Computer Science training typically makes a big deal of inheritance, without always clarifying exactly why. This gives the impression that "INHERITANCE IS GOOD", instead of "INHERITANCE IS GOOD FOR X,Y,Z".
- Computer science training typically presents "code reuse" along with "working with the language of the problem" as an ideal, and then polymorphism, inheritance, and encapsulation as practical techniques.
   Students get the impression that all techniques work, without contradictions or priorities, for all goals, and since most of them will not see a software engineering class for some time, if ever, "code reuse" is the only thing that affects them in the class. Typing is typically not a big deal at the level of instruction where they are taught inheritance.
   Students then confuse the respective goals of inheritance, polymorphism, and encapsulation. They will only be cured of misperceptions by experience (or if they're very fortunate, a decent teacher).
- Computer Science training should extensively cover object composition. I think most professors do not because they think it is too obvious (too similar to reusing a procedure), but covering the proper (and improper) mixed use of inheritance and composition would cure many maladies.
Freedom is the freedom to say 2+2=4, everything else follows...
[ Parent ]

Shooting fish in a barrel (5.00 / 1) (#104)
by bodrius on Tue Jul 16, 2002 at 08:04:40 PM EST

Looking for bad design in the Java 1.0 data structures is like shooting fish in a barrel.

Yes, Stack is a badly designed class. It should not have been a class in the first place; it should have been an interface. I'm disappointed it has not been deprecated and replaced.

I have yet to meet someone using that class. The fact that you suggest UnicastRemoteObject, a class specific to distributed application development, as a "more widely known example" than a basic data structure says a lot about it.

But that's why the data structures framework was completely redesigned in Java2, to correct many of those mistakes.

There is no Java2 equivalent for the Stack, but we could figure how one would look from the general pattern of the other Java2 Collections: List and Set.

public interface Stack extends Collection {
  public Object pop();
  public Object push(Object item);
  public Object peek();
}

public abstract class AbstractStack extends AbstractCollection implements Stack { /* ... */ }

public class ArrayStack extends AbstractStack {
  private ArrayList array = new ArrayList();
  /* ... */
}

//If you really want to use Vector for thread safety
public class VectorStack extends AbstractStack {
  private Vector array = new Vector();
  /* ... */
}

public class LinkedListStack extends AbstractStack {
  private LinkedList array = new LinkedList();
  /* ... */
}


There it goes: inheritance used for IS-A (with code reuse when it makes sense and is part of an IS-A relationship), object composition for code reuse in the implementation.

Now, with respect to the UnicastRemoteObject, from the Javadocs:

The UnicastRemoteObject class defines a non-replicated remote object whose references are valid only while the server process is alive. The UnicastRemoteObject class provides support for point-to-point active object references (invocations, parameters, and results) using TCP streams.

Objects that require remote behavior should extend RemoteObject, typically via UnicastRemoteObject. If UnicastRemoteObject is not extended, the implementation class must then assume the responsibility for the correct semantics of the hashCode, equals, and toString methods inherited from the Object class, so that they behave appropriately for remote objects.

I'm not a distributed computing guru, but it would seem to me there are some conceptual restrictions to this UnicastRemoteObject: when are the references valid, how does it communicate (only TCP), and the fact that it's a Server.

These are limitations you don't want in your basic abstract type. Implementation issues that should not percolate to every RemoteObject in the system, especially not to Client objects (that's why RemoteServer is there).

It exists not just for programmers to extend it, but to provide a type in the hierarchy at the correct abstraction level: it IS-A RemoteServer, and therefore IS-A RemoteObject, but it is a specialization of these types for TCP communication. It exists to remove communication-specific details (TCP) and other implementation issues from the abstract data type where they do not belong, but which may be a common need for a lot of subclasses... not just the code, the functionality per se (the interface).

This is a default implementation, of course. But as a specialization of the RemoteObject and RemoteServer types, with the additional methods and contract assumptions, it deserves its own place as a type.

I think a better example, and perhaps more widely known, of a purely pragmatic use of inheritance would be javax.servlet.GenericServlet.

That class seems to have no reason to exist except to provide implementation for already defined methods (I don't see why its extra methods could not have been in the original Servlet interface).

But it's a safe use of inheritance for code reuse because it is not forcing the "IS-A" relationship. The GenericServlet "IS-A" Servlet still.

Code reuse is not BAD, nor is inheritance independent of it. The point is that code reuse is a SIDE EFFECT of inheritance. Sometimes you need the side effect, sometimes you depend on it, and sometimes it's the only thing you want.

But if it's the only thing you want, there are better ways of achieving it, and you don't need inheritance. Just like in normal coding, programming based on side-effects when you have direct, better means of achieving what you want can be seen as either stupid or a clever way of getting in trouble.

Freedom is the freedom to say 2+2=4, everything else follows...
[ Parent ]

Stack as Interface would've been better... (none / 0) (#105)
by Skywise on Wed Jul 17, 2002 at 12:23:48 AM EST

C++ STL Stack seems to strike a happy balance between pure OOP theorists and practical guys (like me).

It turns the Stack into an adaptor that has to be loaded with a container type providing members like size(), back(), push_back(), and pop_back() (deque, list, vector, etc.).

The container is protected so users of the class can't touch it... but inheritors of the stack CAN.

So classes like VectorStack become possible.

I'll concede that the stack interface should be the lowest base class in the chain.  But I'll still argue that the Java 1 implementation is not "poor coding design".

[ Parent ]

Inheritance for Code Reuse (5.00 / 1) (#112)
by Robb on Fri Jul 26, 2002 at 11:04:27 AM EST

I have reengineered lots of code written in C++, Ada and Java (by developers other than myself), and my personal experience is that about 80% of the time inheritance is used to provide code reuse, and that polymorphism is rarely required, or if it is, usually only in some trivial way. Note I am not saying this is how inheritance should be used; it is how inheritance is being used, in my experience.

My opinion is that most of the problems with OO languages and inheritance come from people who do not appreciate the problems they are creating when they use inheritance for code reuse. If the implementation hierarchy corresponds to the type hierarchy then the potential problems are reduced, but even then you can still run into trouble.

My personal guidelines are

    All superclasses are "abstract", i.e. they have no direct instances. In other words all instances come only from classes at the leaf level of a class hierarchy.
    "public" inheritance always expresses the type hierarchy. If I need code reuse that doesn't correspond to the type hierarchy then any modern language provides several other solutions to factor it out.

[ Parent ]
Inheritence, Code reuse, Polymorphism, Composition (4.66 / 6) (#87)
by Skywise on Tue Jul 16, 2002 at 01:22:56 AM EST

You hit on about 15 gazillion problems that programmers face with OOP and lay them at the feet of inheritance and then claim that "Composition" will save the day.

First off, "Composition" is nothing more than a purty word to describe what we do...plain old CODING.

There's no difference between:

void FredFunction() {
    AClass a;
    BClass b;
}

and:

class CClass {
    AClass a;
    BClass b;
};

Except for some variable lifetime issues.

Fact: merely altering one line of code has been shown to introduce errors into programs.

Fact: errors are the most expensive cost of software development.

Ideally, if you can cut down the errors in a program, its development cost will drop.

So, logically, what you want to do is write the code once, test it, work out the bugs... and then never touch the code again.

This is where the idea of "Code Reuse" comes in. Not to cut down on your typing time... but to develop well-used code that has been PROVEN to be bug free and works reliably. You don't REDESIGN, REIMPLEMENT, or even TOUCH the base modules unless you absolutely have to. You design your NEW code to conform to the existing modules. If you can't conform the new design to the old, then your existing code is not malleable enough (or too brittle) to take the changes, and you're going to have to modify the EXISTING DESIGN.

Here's where OOP kicks in. Because OOP forces you to consolidate your code into chunks, rather than long strings of functionality woven throughout the program, you can confine the ripple effect of a change to the immediate code area instead of letting it spread through the entire program and unrelated modules, essentially cutting down the cost of the change. (If your change is drastic enough to ripple through all of the modules, then you're really looking at a complete rewrite ANYWAY.) How bad this change is will depend upon your object "granularity" (my term). If you have large honkin' objects you're going to have to rewrite a lot more than if you had lots of smaller objects. On the flip side, lots of smaller objects tend to be harder to keep track of and quickly complicate the code.

What's this have to do with inheritance?  Glad you asked...

Inheritance *IS* a primary method of code reuse, and a key feature of polymorphism.  But not code reuse in the top down approach (I'll just make my new class by deriving it from this one that has pretty close functionality...)  Inheritance gets its code reuse ability from the class that USES the BASE CLASS.

Take the std::istream class in the C++ standard library, for example. By itself, this class is useless and needs to be inherited. And the Martha Stewarts of programming have obliged by making fstreams, and stringstreams, and socketstreams, and printerstreams (from ostreams), etc.

But all of the functionality for those streams is in the DERIVED CLASSES. Not in the lowly std::istream at all. Where's the code reuse there? Answer? Zero. There isn't any.

But now make an XML reader that takes a std::istream pointer (I did, from Expat on SourceForge)... INSTANTLY, you have an XML reader that will read from file I/O, string buffers, and sockets from networks or even from other applications... without ever touching a line of code in the XML reader!

Composition will not give you that, because it's the power of the lowly base class that makes it possible.  That's the power of inheritance and its abilities for reuse.  But you've got to be able to look at your code in 2 dimensions... See the class for itself, on its own level, but also to be able to see how the class will react from its fully implemented object scope (the top) and how it will react and be used from its base class (the bottom).

Now, you're correct in that inheritance should never be used as a replacement for composition.  That's why Java dropped multiple inheritance.  Because multiple inheritance is very rarely necessary, almost always used in gross manners (I myself am guilty of this...), and the few times it is needed, that functionality can generally be replaced by composing with interfaces and single inheritance (to avoid the diamond nightmare...)

You're talking about polymorphism... (5.00 / 1) (#120)
by Alhazred on Mon Aug 12, 2002 at 01:47:22 PM EST

Essentially what your discussion does is conflate inheritance and polymorphism. In other words, the various iostreams are polymorphic: they implement different functionality behind a single virtual class interface.

There are technically OTHER ways to achieve this sort of polymorphism. One is simply loose typing. For instance, in Perl you would not need the iostream base class because the language does not enforce any notion of a reference referring to a particular type. This cuts down a LOT on the use of base classes (though some might argue other problems arise; that's another discussion).
That is not dead which may eternal lie And with strange aeons death itself may die.
[ Parent ]

I think you're restating Scott Meyers Items 35/40 (4.00 / 1) (#109)
by ClasDee on Tue Jul 23, 2002 at 06:01:45 PM EST

The general concerns - while important - are a little too general to use in an actual design situation, IMHO. None of the concerns is necessarily true.

Large class hierarchies: Well, if the hierarchy is sophisticated but does not mix a large number of different intentions for inheritance, this can work just fine. When you're mixing different intentions, e.g. subclassing GUI widgets both to change the feature set and to adapt to different platforms, you get all the contorted designs that are addressed and discarded in the GoF book (Design Patterns).

Fragile superclass: Not if it is well written. Any behaviour defined in the superclass and implemented in the superclass should rely only, and explicitly so, on design contracts defined in the superclass. This does not make inheritance useless in any way; it just means that inheritance is not a catch-all.

Breaks encapsulation: Absolutely not. You'll have to violate other design principles to do that ;-). If you expose private member data of a superclass to a subclass then you are breaking encapsulation, but don't blame it on inheritance. This is a bad design decision in itself.

The example you give only presents a partial argument for your conclusion. I think it is a bad design, by the way. The contract for the member functions of a class should have the privilege of assuming a fully constructed object, so any functionality used in a constructor should be implemented outside the class hierarchy or in a base class (of the superclass here).

But disregarding that, the argument is still only partial, and I think you're partially restating the good advice in Items 35 and 40 from Scott Meyers' Effective C++: you should only use inheritance when you really mean that the superclass and derived class are in an 'isa' relationship (that's Item 35). To get into trouble with code like your example, you would have to change the implementation of DoStuff in the base class, making the class relationship a case of 'is-implemented-in-terms-of', which is better dealt with through layering/composition (that's Item 40).

I meant to say... (none / 0) (#110)
by ClasDee on Wed Jul 24, 2002 at 03:27:14 PM EST

"To get into trouble you would have to change implementation of DoStuff in the derived class."

[ Parent ]
OO mischaracterizations (4.00 / 2) (#111)
by esap on Thu Jul 25, 2002 at 10:21:09 PM EST

Ok, let me suggest that the article and its conclusions are all based on a (common) misunderstanding of OOP. First, the characterization of OOP as "encapsulation", "inheritance" and "polymorphism" is a common simplification for describing properties of OOP. Now I think there is not a single system built using OO that uses just these mechanisms, even if these have been chosen as "definitions" of OOP. If I wrote such a list for OOP, it would also have these:
  • composition
  • abstraction
  • traversal
  • indirection
  • lifetime
  • object (=identity, state, behaviour)
  • dynamic (late) binding
  • self-reference
Now, in light of this, I'll consider the main thesis of the article: that inheritance should be avoided. This is a very C++-centric view of the world. In reality, there are no problems with inheritance; it solves very neatly the problem it was intended to solve: intrusive extensions to implementations. Nothing else.

What about the apparently unavoidable problems "Large inheritance hierarchy", "Fragile base classes" or "Breaks encapsulation"? If inheritance was not confused with subtyping or composition, there would be no large inheritance hierarchies (because it's really hard to build a "mix-in" implementation hierarchy with more than two levels!). The fragile base class problem is a dependency problem that can be solved by separating method implementations from object representations [see below]. And the supposed problem with inheritance breaking encapsulation will be solved by the same mechanism (although I consider that problem a non-issue, because inheritance by its nature is an extension to a representation of an object. And representations are only incidentally connected with method implementations or interfaces).

So why are there then many apparent problems with inheritance in C++ (and Java)? I think it is caused by fundamental conceptual confusions both in the design of the language(s) and the subsequent misunderstanding of inheritance that this causes to the development community at large. In particular:

  1. Constructors should be separated from classes and interfaces. Similarly, method implementations and object representations should be separated. This problem causes most of the "construction problems" associated with inheritance. Each constructor should correspond to a single object representation (constructor arguments could be used as the actual representation of the object!). Each abstraction represented by an interface provides a mechanism that clients can use to access objects. The implementations of methods should link the representations of objects to interfaces that the objects implement [note that "state changes" caused by methods could also change the representation of the object; This would arise naturally, if object representations were decoupled from method implementations!]
  2. There is a difference between inheritance, implements-an-interface, subtyping and "mixin-style-composition" relations. Inheritance is not useful for purposes where the other alternatives can or should be used. These differences are not made explicit by the language, causing developers to mistakenly conflate these concepts.
  3. type compatibility, polymorphic references, slicing and "inheritance hierarchy conversions" are all conceptually very different. C++ treats all as kinds of "type conversions", which is very confusing.
  4. Encapsulation. Public/private/protected access specifiers are sometimes claimed to provide encapsulation. In fact, they do not do that, because they do not reduce dependencies to implementation details. There are standard idioms in C++ for dealing with this defect (such as using pointer to another class only defined in the implementation file). Real encapsulation is really closely related to polymorphism, and they have very similar characteristics. Access control is not similar (and I'm not sure it's even desirable).
  5. Unions (algebraic data types). Everyone should read about Haskell algebraic data types. This is a major source of bad designs in C++ and Java, since these languages provide NO support for expressing a common design that would be best modelled by alternatives. Since there is no good way of describing data structures with alternative components, OO is often mistakenly used to simulate these kinds of features (e.g. the State pattern). In fact, OO cannot simulate these accurately, and without native language support, unions cannot be easily simulated. Unions are, however, symmetric to objects, even to the extent that you can define inheritance (in that context, often called 'extending a union'), supertyping and late binding for unions.
And I haven't even started with memory management or error handling.

In Delphi/Kylix (none / 0) (#115)
by Zer0 on Mon Jul 29, 2002 at 01:40:13 AM EST

The example in Delphi/Kylix, for those who care :).

type
  TBaseClass = class(TObject)
    constructor Create(AOwner: TComponent);
    procedure DoStuff; virtual;
    destructor Destroy; override;
  end;

  TDerivedClass = class(TBaseClass)
    constructor Create(AOwner: TComponent);
    procedure DoStuff; override;
    destructor Destroy; override;
  end;

constructor TBaseClass.Create(AOwner: TComponent);
begin
  inherited Create;
  ShowMessage('BaseClass Create Called');
end;

procedure TBaseClass.DoStuff;
begin
  ShowMessage('BaseClass DoStuff Called');
end;

destructor TBaseClass.Destroy;
begin
  inherited Destroy;
end;

constructor TDerivedClass.Create(AOwner: TComponent);
begin
  inherited Create(AOwner); // Calls base class constructor
  ShowMessage('DerivedClass Create Called');
end;

procedure TDerivedClass.DoStuff;
begin
  inherited DoStuff;
  ShowMessage('DerivedClass DoStuff Called');
end;

destructor TDerivedClass.Destroy;
begin
  // Free your objects etc., and then...
  inherited Destroy;
end;
Delphi code just seems "pretty" to me heh.

Typos.. (none / 0) (#116)
by Zer0 on Mon Jul 29, 2002 at 01:51:08 AM EST

And twice, since I copy/pasted my own typos! ><

[ Parent ]
You're absolutely right... (none / 0) (#117)
by ThreadSafe on Mon Aug 05, 2002 at 04:34:57 AM EST

Delphi is a pretty language. It encourages much more structured code than C++, and it's easier to read as well; C++'s syntax is unnecessarily cryptic.

Make a clone of me. And fucking listen to it! - Faik
[ Parent ]

In ruling out inheritance... (none / 0) (#118)
by ThreadSafe on Mon Aug 05, 2002 at 04:36:44 AM EST

you're also ruling out polymorphism. One cannot exist in any useful manner without the other.

Make a clone of me. And fucking listen to it! - Faik

Your view is very narrow... (none / 0) (#121)
by Alhazred on Mon Aug 12, 2002 at 02:09:20 PM EST

You blast the inefficiency of deep hierarchies (i.e. of inheritance in general), yet you never consider the alternative costs of composition.

Suppose I want a bit of functionality in my class and I can choose to inherit or to compose. In most OO implementations, a call to a superclass method requires either exactly the same overhead as a call to a locally defined method, or one level of extra indirection. Calling from a local method to a method in a composed object is guaranteed to cost one extra function call! Now, in a few languages a hierarchy two or three or more levels deep might be less efficient than composition (perhaps in OO Perl, for instance), but those are pretty weak examples...

Encapsulation you treat as some sort of holy mantra. Yes, it's a good thing, but you know what they say about too much of a good thing... Having done a LOT of OO Perl development (which eschews formal encapsulation entirely), I can tell you that nothing galls me more than going to do some Java development, wanting to subclass a perfectly good existing class, and finding out that the method I need to override is marked FINAL. As if the guy who wrote it KNOWS what I might need to override? Bah, humbug. If you fear lack of encapsulation, then hire better programmers.

I think enough other people have already discussed the differences between interface and implementation polymorphism... Though one point seems lacking. You can achieve implementation polymorphism via composition, but not interface polymorphism (which is really the more important type since it allows you to write generalized utility functions and do different things with them later on, which is good code reuse).

Remember also that when you use composition to get implementation polymorphism, you are paying a cost in code complexity, because later on, if you want to compose in a different class to change your implementation, you need to either SUBCLASS the enclosing class (so you might as well not have bothered) or add some sort of SWITCH in the constructor or the constructing code. This is usually an example of breaking the principle of "make decisions at compile time if you can".

More people should read Leo Brodie... :o).

That is not dead which may eternal lie And with strange aeons death itself may die.

Another Argument For Choosing Composition Over Inheritance In Object Oriented Programming | 121 comments (94 topical, 27 editorial, 1 hidden)