If I had been able to find some, I'd tell you about them. There was a 1-page article in Communications of the ACM a couple of months ago ("Hello World Considered Harmful") that was right on. There is an exceedingly brief section in the Cocoa Objective-C description, and some of the Java API documentation hints about it. Most books on Smalltalk have it pretty good, but all of these tend to be about the language, which is the least important thing. The philosophy is the most important thing, and although it is easier to apply to some languages, it can be applied to any language, even vanilla procedural and functional languages, as long as it's powerful enough. At least the first two projects I mentioned were in plain, vanilla C but were O-O to the very bone, including constant storage mark-and-sweep garbage collection. Also, each was about 250,000 lines of code but would probably have been a million or more if we had not designed it properly.
I can suggest some of the books to avoid. Any book that uses the word "classes" when "objects" would do as well is to be avoided. Any book that uses the phrase "business classes" is bad. That is because the scalable philosophy causes one to focus on objects, and a class is just some syntactic sugar to get objects made. People who have the right philosophy, therefore, tend to think "object." Any book that starts off with an example of a bank account as an object with a "withdrawal" method is to be avoided. Any book that builds around UML or a similar notation is to be avoided.
I can also describe as well as I can the philosophy that works, but obviously I can't write a whole book here. Objects should be thought of as "smart data," not just a trick for "encapsulation." The program is primarily "in" the interaction between objects, not "in" the implementation of the methods. The decisions about how to set up inheritance and which object does what are extremely important, and at least at first, you will have to refactor almost on a daily basis.
I can also try to give you an example of how much fun it is when it works right, based on the first one of these that I worked on, a scientific visualization package called SciAn (now, sadly, history unless I win the lottery and can sit down and rewrite it as Open Source). So, imagine this. There's a thunderstorm on the screen. It's generated dynamically using GL. It's based on several visualization objects, maybe a deformed sheet for the terrain, some isosurfaces, volume visualization, whatever. The image depends on lighting, surface properties, transparency, and also on the data. The data is cooked umpteen different ways, and in this case may even be under control of a different machine (as can anything). There are also some color tables that are fed in. Each of these has umpteen different panels with zillions of little controls. Change the transparency, exaggerate the terrain, put in grid lines and shadows, etc. If any of these changes, the system has to change the image, doing as little work as possible. Question: if you want to add another control to some fiddly bit in the system to change something, how much extra work do you have to do to make sure it comes out in the wash? Answer: essentially none. That's because essentially all of the design is in the current spacetime linkages between objects. It is far too complicated for me to understand at all, let alone sit down and write some dumb UML drawing, but fortunately I don't have to, because it's there and it always works right.
When we decided to do some network distribution of the objects, most of this had already been written. It wasn't "we're going to make a distributed visualization," but rather, "hey, cool, we could make this distributed. Let's do it and get a demo." The same things that made it easy to add a new control made it easy to put an object somewhere else. Most of the work involved doing the IP layer, and we had to kludge around some things so I learned how to do it better next time, and we had to come up with a clever way of synchronizing the garbage collectors (which turned out already to have been thought of for incremental collectors), but the damn thing worked.
One day, when those stupid "this is your brain on drugs" commercials were very popular, John and I sat down to make a little geeky joke. We took some EEG data that we had and put it in the middle of a thunderstorm, deforming it by some of the thunderstorm data and coloring it by others. Then we put "This is Your Brain on SciAn" as a title. It took us about 10 minutes. Try that with your Adler-inspired UML Real Enterprise Professional Design Process.
Part of the trick of using objects properly is to do the opposite of what you are supposed to do: trust. It's a bit like learning how to do recursion. Remember the first time you learned the recursive solution to the Towers of Hanoi and grokked that it really would work even if you didn't trace out what was happening to the stack every millisecond? It's like that. You have to have trust that as long as you make the pairwise or tripletwise interactions between objects even more solid than the rock of Gibraltar, the thing is going to scale. If you aren't sure about the complexity of a certain topology, you can sit down and whip out the graph theory book and prove it for sure, but you also have to feel it in your bones. It seems mystical, but it's true. The magic is in the connections between the objects, and the magic really pays off when there are far too many to draw on a piece of paper. It's also a bit like the game of life; you can write the rules down on a 3 by 5 card, but a glider gun feels magical.
Everything else people talk about in objects seems to be a way to pretend that O-O design is going to make incompetent developers competent. Some of this does have some importance, such as encapsulation and inheritance. Some of it is a complete joke, such as get and set methods and arguments about what language is better. But all of it pales in comparison to the true magic.
One more thing: Don't assume that following the object syntax of a language will get you there. If you're using C++, it definitely won't. You can use C++ (or even C, for that matter), but you have to think around the syntax. The map is not the terrain. You're better off developing the skills with Java, Objective C, or Smalltalk and only then applying them to C++ or C or Visual Bloody Basic or whatever else you need to use. Also, a scalable piece of code doesn't look like a piece of school assignment C or C++. It may use C or C++, but only as a means of expressing high-level abstractions. You will be thinking at a level of abstraction much higher than the normal program; you will just encode it. At least 90% of your code is going to be almost the same no matter what language it is in; this is good, no matter what the books tell you.
Thank you for giving me the opportunity to engage in some useless but pleasant nostalgia.
The truth may be out there, but lies are inside your head.--Terry Pratchett