
What I Learned About Project Development

By Arkaein in Op-Ed
Fri Mar 24, 2006 at 03:40:37 AM EST
Tags: Software

As a sophomore in college I got my first real programming job developing software for a fairly large research project at my university, which at the time was being written by a pair of talented PhD candidates. However, less than a year after I joined the team these students (who happened to be brothers) left school to start their own software development company, and the job of developing the software for the project was left to me and two other undergrads hired at about the same time as me. Other grad and undergrad students would join the team over the next five years, but the few of us who had been there from near the beginning led the way.

The following is some of what I learned from my on-the-job education in (co-)leading a medium-sized software development project.


Background

First, a brief overview of what this project was all about. Its name was the VDP, for Virtual Dental Patient. The idea is simple: use a high resolution 3D scanner to digitize stone replicas of a patient's teeth (these replicas are made from those bite impressions you may have had made at the dentist's office at some point). Dentists make these replicas to allow them to make reliable comparisons of a person's teeth and gums as they might change over periods of many years. This analysis is highly qualitative, and is concerned with things like "is this tooth wearing" or "are these teeth moving". However, by using 3D models of scanned replicas the analysis can be done quantitatively: we can measure how much volume a tooth has lost due to wear, or how far a specific tooth has moved. Sophisticated visualization was also available. I can't say a lot more about what a dentist can do with such tools in terms of specific diagnosis or treatment planning because I'm no dentist myself, but suffice it to say that such tools would definitely be useful. The VDP project in a nutshell was concerned with developing these tools and, equally important, proving their reliability and accuracy in making measurements.

The project was developed by a small team ranging from about three to eight people at a time, mostly undergrad and graduate students, but also a couple of non-student developers (including myself after graduation, hence the five years working on the project). Of the two other undergrads who worked long term on the project, one stayed on for about four years total, getting his Masters degree after graduation; the other, who handled more of the hardware and networking for the department, was there for about three years I believe (I think he had a double major) but didn't participate as much in direct VDP software development or project design. We developed on Windows using Visual C++/Visual Studio 6 and the Microsoft Foundation Classes (MFC) for the interface. 3D graphics used OpenGL, though the Windows GDI (Graphics Device Interface) was used for limited 2D graphics capabilities. The code we wrote was mostly spread across two major applications and totaled at least 100,000 lines (probably not more than twice that), which I would guess is medium sized by industrial standards. The two major applications were Stratus, which integrated multiple 3D scans of an object into a single model, and Cumulus, which was used to build and analyze virtual dental patients by assembling multiple models representing upper and lower jaws, as well as various bite records.

One of the biggest problems from the get-go was that the original grad students developing the project were skilled coders but didn't seem to believe in comments, and there were no design docs to aid us. This eventually led to much code duplication in Stratus because we didn't know how to make use of most of what was in there. Cumulus hadn't been started yet, so we made bad design decisions from scratch there. Eventually a complete replacement for Stratus was developed using a custom-built VDP class library. An upgrade for Cumulus (which was functional, but much larger than Stratus and ugly in terms of architecture and design) was started in the hopes of improving future research and development, but never really got very far. In the end our class library, which was built from the ground up using lessons learned from early mistakes in Cumulus, was only used to replace the less significant application of the two. By the time the new Stratus was complete the development team was a shell of its former self, I would soon be let go as the grant funding my position expired, and so the rewrite ended as a definite net loss of productivity.

Though the project in the end did not really fulfill all of its research goals, the software that was built did fulfill most of the original goals purely in terms of capabilities. The problem is that it took the better part of five years, and I freely admit that much of this was not due to lack of programming ability but to project design decisions for which I was in large part responsible. This process has made me a better developer in the end, and I'd like to share a few nuggets of wisdom that I've learned to help present and future young developers who get put in charge of development projects for which they lack the experience to guide effectively.

Don'ts

Don't reinvent (big) wheels. If you have a need for some general utility package (math, GUI, etc.) someone out there has already written what you need, except theirs is mature, complete, and optimized while yours will be buggy and slow (though maybe a bit leaner). There's probably an open source wheel too, and under a license favorable to even proprietary projects.

In the VDP we had two clear instances of this behavior that I can think of, one that was sort of bad, and another that was really bad. The sort of bad example was our math and geometric primitives library. In any 3D graphics application there are going to be a lot of vectors and matrices tossed around, and significant use of lines, line segments, rays, planes, and other primitives. Probably 80% of the operations used (remember the 80/20 rule?) are just vector arithmetic, dot products, cross products, rotations, translations, and scaling, but the operations in the other 20% can keep you busy for a long time in planning, coding, and debugging. Moreover, these operations are so fundamental to 3D applications and might be repeated so many times that performance is essential.

Such libraries are fairly ubiquitous as they are essential components of games, simulations, and visualization systems, but for the VDP we wrote our own. The process was made more tedious by the fact that we decided template-based primitives were important even though all data storage and the majority of the calculations were done using single precision floats. In a few cases double precision may have been warranted (mainly transformation matrices), but in the vast majority of cases single precision was completely sufficient given the precision of our input data.
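
To make the 80% concrete, here is a minimal sketch of the kind of single-precision vector type that covers the common operations. The Vec3 name and interface are just illustrative; a mature third-party library would supply all of this (plus matrices, transforms, and the awkward 20%) already debugged and optimized.

// Minimal single-precision 3D vector covering the common operations.
// Illustrative only; not the actual VDP code.
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3(float x_ = 0.0f, float y_ = 0.0f, float z_ = 0.0f) : x(x_), y(y_), z(z_) {}

    Vec3 operator+(const Vec3 &v) const { return Vec3(x + v.x, y + v.y, z + v.z); }
    Vec3 operator-(const Vec3 &v) const { return Vec3(x - v.x, y - v.y, z - v.z); }
    Vec3 operator*(float s) const { return Vec3(x * s, y * s, z * s); }

    float dot(const Vec3 &v) const { return x * v.x + y * v.y + z * v.z; }
    Vec3 cross(const Vec3 &v) const {
        return Vec3(y * v.z - z * v.y, z * v.x - x * v.z, x * v.y - y * v.x);
    }
    float length() const { return std::sqrt(dot(*this)); }
};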

The really bad instance of wheel reinvention was creating our own string type. This might have been justified for a word processing application, but remember this is a 3D modeling and visualization app. The entire justification for this approach was the primitive string handling in the standard C library, which we felt was insufficient for our relatively modest parsing needs, and the desire to make the software portable (e.g. not rely on Win32 or MFC string types). I'm not sure what string libraries are out there, but surely the number dwarfs the number of geometry libraries. In any case I'm fairly certain at this point that a few customized parsing commands to augment the basics in the C/C++ libraries would have been sufficient for our needs. Either a 3rd party library or a small set of custom commands (or even a very simple custom string class) would have been a much better choice than what we actually did, which was to implement in C++ the unholy child of the union of the Java String and StringBuffer classes. No, I did not participate much in the coding of this beast; yes, at some point I actually thought it was at least an okay idea.
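
For comparison, here is a hedged sketch of the kind of small helpers on top of std::string that would likely have covered our modest parsing needs. The function names are hypothetical, not what we actually wrote.

// A couple of parsing helpers built on the standard library.
#include <sstream>
#include <string>
#include <vector>

// Split a line into whitespace-separated tokens.
std::vector<std::string> tokenize(const std::string &line) {
    std::istringstream in(line);
    std::vector<std::string> tokens;
    std::string tok;
    while (in >> tok)
        tokens.push_back(tok);
    return tokens;
}

// Trim leading and trailing whitespace.
std::string trim(const std::string &s) {
    const std::string ws = " \t\r\n";
    std::string::size_type b = s.find_first_not_of(ws);
    if (b == std::string::npos) return "";
    std::string::size_type e = s.find_last_not_of(ws);
    return s.substr(b, e - b + 1);
}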

Runner-up wheel reinvention: implementing clones of Java-style nestable stream classes. Yes, we had the ability to write a 3D model object to a binary output stream, which passed it to a text writer stream, which in turn could encode this in UTF-8. That is, if anyone had ever wanted to do such a thing, which I'm pretty sure no one ever did.

Don't write your own scripting language. Whatever you make, it will take a long time and be buggy and primitive compared to anything else you can use. The headaches of getting a scripting engine inserted into your code only seem big; they will be small compared to rolling your own.

For the original Cumulus we wrote our own language (we called it a macro language instead of a scripting language, but same idea). It was a simple object oriented language which reflected the types of objects available in the software: 3D surfaces and transformation matrices were the most important components, but other objects were also available. The idea was nice enough, but it took a fair amount of time and the syntax was primitive (designed for parseability to ease implementation, instead of writeability). Because it wasn't very effective it wasn't used enough to justify the work that went into creating it. The problem was compounded by the fact that it was bolted on a little late in the game, so many aspects of the application were not available to its interface. Many algorithm options were available only in the dialogs designed as the original interface to those algorithms. On the plus side, the realization that this was a problem did lead to much better design in the new VDP framework to ensure that all elements were scriptable, and that interface was kept separate from core logic.

In the rewrite we switched to Tcl as our scripting language. This might have been a success had we ever rewritten Cumulus to use the new libraries, since Cumulus is really where scripting might have shone, but since only Stratus (which had little need for scripting) was rewritten there was never much incentive to make any use of scripting features. I guess the lesson here is to build an application in a scripting-friendly way, but don't actually add a scripting engine until you have a need that justifies the headaches involved with incorporating it into your software.
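
For anyone wondering what "incorporating it" actually looks like, here is a rough sketch of embedding Tcl through its C API. The alignScans command is a hypothetical example of exposing one piece of application logic, not something from the actual VDP code.

#include <tcl.h>    // Tcl C API; link against the Tcl library
#include <cstdio>

// A hypothetical script command wrapping one piece of application logic.
static int AlignScansCmd(ClientData clientData, Tcl_Interp *interp,
                         int objc, Tcl_Obj *const objv[]) {
    if (objc != 3) {
        Tcl_WrongNumArgs(interp, 1, objv, "scanA scanB");
        return TCL_ERROR;
    }
    // ...look up the scans named by Tcl_GetString(objv[1]) and objv[2],
    // run the alignment, and set the interpreter result...
    return TCL_OK;
}

int RunScript(const char *script) {
    Tcl_Interp *interp = Tcl_CreateInterp();
    Tcl_CreateObjCommand(interp, "alignScans", AlignScansCmd, NULL, NULL);
    int code = Tcl_Eval(interp, script);
    if (code != TCL_OK)
        std::fprintf(stderr, "script error: %s\n", Tcl_GetStringResult(interp));
    Tcl_DeleteInterp(interp);
    return code;
}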

Don't create large platform/toolkit independent abstraction layers. Yours will be limited, buggy and slow. See the Dos for better alternatives.

To future proof our software against new platforms (which the three of us software development leaders all secretly wished for, because none of us had any great love for Windows) we decided to ensure that all platform dependent code was safely isolated behind platform independent abstraction layers. The main culprits we determined were GUI, 2D drawing, threads and locking, and application initialization. Fortunately we came to our senses before implementing our own cross-platform GUI toolkit, though this was a serious consideration for a while. On that front we decided that only non-GUI portions of our code had to be truly platform independent (well, sort of, we ended up with platform independent code for setting up menus and toolbars) and that the rest could be wrapped fairly easily.

This actually didn't turn out too badly, but we really would have saved a lot of time if we had just picked out a platform independent toolkit and been done with it. If we had used Qt all these issues would have been taken care of, with much greater flexibility and completeness than the very simplistic, though functional, wrapper code we came up with. Plus, we would have almost certainly had large boosts in productivity using a modern GUI toolkit like Qt over the awful MFC (itself a crummy object oriented wrapper for the decidedly not object oriented Win32 GUI libraries). Alas, our projects were not open source, and it's difficult convincing your boss to pony up a few grand a year for Qt developer licenses.

Don't create your own handles for objects unless you have an actual, absolute need for object memory relocation, and then only use it for objects that actually need it. Big, professional toolkits might have reasons for doing these things universally, you almost certainly do not.

MFC and Win32 use handles to reference pretty much all objects. We didn't really understand how object relocation was used in Windows (heck, I still only have a vague notion) but that didn't stop us from implementing our own handle table to track every persistent object. We never moved these objects around in memory, so these handles created an absolutely worthless layer of indirection that infested a large amount of our code, mostly in the nebulous area where interfaces and algorithms came together. If we had just mapped object name strings (which facilitated scripting capabilities) to object pointers we would have had all the benefits of our actual implementation, minus a lot of boilerplate code needed to actually penetrate the abstractions and reach an object.
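
Here is a minimal sketch of that alternative, assuming some persistent SceneObject base class (the name is hypothetical): a plain map from object name to pointer, with no handle table in between.

#include <cstddef>
#include <map>
#include <string>

class SceneObject;  // hypothetical base class for persistent objects

// Name-to-pointer lookup gives scripting-friendly access without the
// extra handle indirection. Error handling omitted for brevity.
class ObjectRegistry {
public:
    void add(const std::string &name, SceneObject *obj) { objects_[name] = obj; }
    void remove(const std::string &name) { objects_.erase(name); }
    SceneObject *find(const std::string &name) const {
        std::map<std::string, SceneObject*>::const_iterator it = objects_.find(name);
        return it == objects_.end() ? NULL : it->second;
    }
private:
    std::map<std::string, SceneObject*> objects_;
};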

Don't mix 2D and 3D graphics toolkits. If you need mixed 2D and 3D graphics use OpenGL and try to do the 2D stuff in OpenGL. If you absolutely need to overlay toolkit specific 2D graphics over 3D (e.g., OpenGL does not support fonts in a nice way, and you need good fonts to overlay 3D graphics), find a toolkit that is meant to work within your 3D system, or failing that use a cross platform GUI toolkit with good integration with OpenGL. I suppose a lot of this holds for Direct3D as well as OpenGL, but with OpenGL you get a library that is supported on every major platform as opposed to only one platform.

This one is pretty application specific, but there are probably similar examples from other problem domains as well. If you need to mix libraries/toolkits with related functionality, first make sure that neither library is actually capable of doing everything, and failing that try to find libraries that are designed to work together. We used Win32 GDI commands to overlay 3D rendering with 2D lines and text, and compounded the matter by actually using our own custom wrapper for 2D graphics in the name of portability. Since OpenGL was already the most portable part of our software (other than the C and C++ libraries themselves) doing the 2D stuff in OpenGL would have simplified things greatly, eliminating both abstract platform independent wrapper classes and the derived, platform dependent implementation classes. For our simple 2D drawing needs OpenGL was certainly up to the task, and I'm sure we could have found a 3rd party font library for the few cases where we actually wanted to use text in a 3D window.
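
For illustration, drawing 2D overlays with OpenGL only requires temporarily swapping in a pixel-aligned orthographic projection. A minimal sketch using the fixed-function pipeline we had at the time (on Windows, <windows.h> must be included before the GL header):

#include <GL/gl.h>

// Draw simple 2D overlay geometry on top of the 3D scene using the same
// OpenGL context. Sketch only; text would need a separate font library.
void drawOverlay(int width, int height) {
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0, width, 0.0, height, -1.0, 1.0);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);       // overlay always draws on top
    glColor3f(1.0f, 1.0f, 0.0f);
    glBegin(GL_LINES);              // e.g. a measurement marker
    glVertex2f(20.0f, 20.0f);
    glVertex2f(120.0f, 20.0f);
    glEnd();
    glEnable(GL_DEPTH_TEST);

    glPopMatrix();                  // restore modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();                  // restore projection
    glMatrixMode(GL_MODELVIEW);
}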

Don't create interesting but ultimately irrelevant side projects that lead your team astray from the true goals of the project.

I think that this may be a particularly large danger in a University research setting, where projects have the dual goals of educating students as well as developing worthwhile software. In our case the side project was a small cluster intended to quickly perform processor intensive algorithms on 1999's hardware that would be more reasonably executed on a high end desktop 3-5 years later. I didn't have much to do with this particular endeavor, and I'm not really sure whose idea it was, but in hindsight it is obvious to me that even if the cluster had ended up working perfectly it never would have been worth the effort that was put into it. Most of our processor intensive algorithms ran just fine on a desktop machine, though some experimental algorithms might take an hour or so (a few brain dead algorithms that one or two developers cooked up took over a day to run, but that was because the algorithms were naively implemented, not because the problem was inherently that difficult). Even if we really did have a large number of algorithms that required days to execute, the sensible solution would have been to set aside a few powerful desktops for this task, and if necessary modify the algorithms to work on small chunks of the data one at a time (i.e. by swapping the rest out to disk). This would have been much simpler than building a cluster, developing software to communicate with the cluster, and developing fully parallel versions of algorithms.

Suffice it to say the most useful thing the cluster did for us was teach our hardware/networking guy about building clusters and parallel programming. And it meant that when we hired more developers we had some dual CPU workstations ready to go after a simple video card upgrade.

Dos

If the program will definitely (or at least very likely) be cross platform start with a good cross platform GUI toolkit.

I've already mentioned Qt, though there are plenty of others. This isn't an easy decision, as feature set, maturity, support, stability, cost, and licensing must all be taken into consideration. But considering that huge parts of your application will depend on this decision, parts that might take months to code, a few weeks spent carefully evaluating the alternatives and doing some cost/benefit analysis is worth it.
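
For a sense of scale, the entire platform-specific entry point of a Qt application is only a few lines; menus, toolbars, dialogs, and OpenGL widgets all come from the same portable toolkit. (This sketch uses current Qt class names, which differ slightly from the Qt of that era.)

#include <QApplication>
#include <QMainWindow>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);   // one object owns all platform setup
    QMainWindow window;
    window.setWindowTitle("Stratus");
    window.show();
    return app.exec();              // toolkit runs the native event loop
}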

Whether or not the program will be made cross platform in the future, isolate core logic from interface and other platform dependent code sections, i.e., use a Model-View-Controller (MVC) architecture. Deal with the porting obstacle when (and if) you come to it.

An oldie, but a goodie. When you know exactly what the finished product will look like any design will probably do as long as all the details are thought out, but for the other 99.9% of projects that can change direction several times during development, modular design is crucial. Using an MVC architecture is the fundamental step, but further isolation of components is a good idea if the project is big enough.
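
A minimal sketch of what that isolation can look like in C++: the model knows nothing about any GUI toolkit, and views register through a small listener interface. The class names here are hypothetical, not from the VDP.

#include <cstddef>
#include <vector>

// The "V" side implements this interface; the model never includes a GUI header.
class ModelListener {
public:
    virtual ~ModelListener() {}
    virtual void modelChanged() = 0;
};

// The "M" side: pure application logic plus change notification.
class DentalModel {
public:
    DentalModel() : wearVolume_(0.0) {}
    void addListener(ModelListener *l) { listeners_.push_back(l); }
    void setWearVolume(double mm3) {
        wearVolume_ = mm3;
        for (std::size_t i = 0; i < listeners_.size(); ++i)
            listeners_[i]->modelChanged();
    }
    double wearVolume() const { return wearVolume_; }
private:
    std::vector<ModelListener*> listeners_;
    double wearVolume_;
};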

Use standard GUI elements. This will help future proof your application against difficult porting issues and prevent developers spending a lot of time coming up with "cute" solutions to simple problems.

This is something we did well for the most part. Standard menus, toolbars and dialogs are generally easy to implement in any toolkit. If your application does get ported to multiple platforms this means it will probably have a different feel on each platform, but it will be that platform's native feel, which many users will appreciate. Some people would prefer that the app is consistent across all platforms (a.k.a. the Mozilla approach), but there is no consensus as to which approach is better, and using native interfaces will be less work except in the largest projects. As long as you keep the distinction between interface and logic clear, porting the interface should be relatively painless.

Maybes

Create a string mapping table to organize persistent objects. This gives a lot of freedom to change program structure in the future, and allows global access to objects without global namespace pollution.

We did this, and other than using handles as an unnecessary layer of indirection between the string identifier and the object it worked out well. By encoding the object hierarchy directly into the string identifiers it was easy to assemble important objects, belonging to classes spanning multiple VDP applications, into organizations customized to each application without incurring additional class overhead. This also made it easy to reorganize the object hierarchy without much code modification. It should be noted that the benefits of this method of organization would have come virtually for free if this high level application structuring had been done in a language with dynamic binding and built in hash table/dictionary types rather than in C++.
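
As a sketch of the idea, hierarchical names stored in an ordered map also make "everything under this node" queries cheap. The naming scheme shown is hypothetical, not the actual VDP convention.

#include <map>
#include <string>
#include <vector>

class SceneObject;  // hypothetical persistent object type

typedef std::map<std::string, SceneObject*> ObjectMap;

// Collect every object whose name starts with a prefix such as "patient1/upperJaw/".
void collectChildren(const ObjectMap &objects, const std::string &prefix,
                     std::vector<SceneObject*> &out) {
    for (ObjectMap::const_iterator it = objects.lower_bound(prefix);
         it != objects.end() && it->first.compare(0, prefix.size(), prefix) == 0;
         ++it)
        out.push_back(it->second);
}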

Create platform/toolkit abstractions for small, essential components that must necessarily interact with core logic. Threads which must support any level of communication are a prime example, as thread interaction is tightly coupled with algorithm implementation (the C in MVC).

Our thread wrapper class was nice because it was a fairly simple class which allowed a clean way for 100% portable algorithmic code to interface with platform dependent threads. If we ever did port the code to another platform it would have been easy to get basic thread support working, and even for a single platform it kept the messy parts of threading out of the algorithmic code.
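
A rough sketch of the pattern, assuming Win32 underneath (illustrative, not our actual class): algorithm code subclasses Thread and overrides run(), and only the wrapper ever touches the platform API.

#include <windows.h>

class Thread {
public:
    Thread() : handle_(NULL) {}
    virtual ~Thread() { if (handle_) CloseHandle(handle_); }

    bool start() {
        handle_ = CreateThread(NULL, 0, &Thread::entry, this, 0, NULL);
        return handle_ != NULL;
    }
    void join() { if (handle_) WaitForSingleObject(handle_, INFINITE); }

protected:
    virtual void run() = 0;    // 100% portable algorithm code goes here

private:
    static DWORD WINAPI entry(LPVOID arg) {
        static_cast<Thread*>(arg)->run();
        return 0;
    }
    HANDLE handle_;
};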

Conclusion

I want to repeat that in spite of all these issues we did develop some pretty good software. The problem was not with quality but with development time, and I believe that if we knew then what we know now the project could have been done just as well in half the time, maybe less. While the anecdotes I've provided touch on some fairly specific issues I believe they can be distilled into a few key principles: 1) know the critical goals for the project, and 2) for each decision made ask "is this the most effective way of achieving the project's goals?" If we had asked ourselves this question early and often (and been honest with ourselves in our answers) most of our big, time consuming mistakes could have been avoided.

Poll
How does this compare to projects you've worked on?
o Better than my typical death march 4%
o Sounds normal to me 31%
o Yuck! Keep your custom frameworks away from me! 18%
o This poll is prejudiced against non-programmers 45%

Votes: 22

What I Learned About Project Development | 62 comments (56 topical, 6 editorial, 0 hidden)
it sounds to me like (3.00 / 2) (#1)
by creativedissonance on Wed Mar 22, 2006 at 06:23:43 PM EST

you had no technical management on this project whatsoever.

in which case, whoever was paying your salary was a moron and deserved what they got.


ay yo i run linux and word on the street
is that this is where i need to be to get my butt stuffed like a turkey - br14n

Lack of technical management (3.00 / 2) (#2)
by Arkaein on Wed Mar 22, 2006 at 06:47:35 PM EST

You're going much too far saying the person running the project is a moron. The Professor in question was a very smart scientist with knowledge of both dentistry and physics, and had developed precursor software to the VDP in the 80's. However, you are correct in your main observation that we didn't have much in the way of technical management.

A lot of the blame can be pinned on the original PhD students who left the project. Remember, they left school early (and without a lot of notice) and so the professor in charge didn't have a lot of options open. Software development positions are generally allocated to graduate and undergraduate students, and I doubt hiring a full time technical project manager was as straightforward an option as it would have been in a commercial setting.

I'd say that the single biggest mistake made from a management perspective was allowing us developers to attempt a complete re-architecting of the software (which didn't start until Cumulus was fairly developed and functional). If we had been told to refactor the software instead, improving components piecemeal, a lot of the waste towards the end of the project would have been avoided.

Even the rewrite might have turned out decently if the project had had a 10 year development/maintenance time rather than closer to a 5 year span. The new VDP libraries were much nicer to code with, it's just that the project dwindled too far in the time it took to code them. With a few more years of project development and further research, we might not have come out more productive than if we had simply avoided the rewrite, but the losses would have been cut.

----
The ultimate plays for Madden 2006
[ Parent ]

you just proved my point for me. (3.00 / 2) (#17)
by creativedissonance on Thu Mar 23, 2006 at 11:13:44 AM EST

"I'd say that the single biggest mistake made from a management perspective was allowing us developers to attempt a complete re-architecting of the software"

this is a surefire example of a lack of technical management leadership.


ay yo i run linux and word on the street
is that this is where i need to be to get my butt stuffed like a turkey - br14n
[ Parent ]

Wordier than skykn-1ght (3.00 / 2) (#3)
by debacle on Wed Mar 22, 2006 at 07:27:13 PM EST

And a blind man could have formatted it better.

It tastes sweet.
Good lord (1.50 / 2) (#5)
by Gibidumb on Wed Mar 22, 2006 at 08:10:12 PM EST

You don't know the difference between qualitative and quantitative???

Yes, I do (3.00 / 3) (#6)
by Arkaein on Wed Mar 22, 2006 at 08:27:25 PM EST

It's called a typo, and now it's fixed. Perhaps now you understand the purpose of a story being in editing (though likely not, as you don't appear to understand how to use editorial comments).

----
The ultimate plays for Madden 2006
[ Parent ]

Platform choice (3.00 / 3) (#7)
by smallstepforman on Wed Mar 22, 2006 at 08:27:29 PM EST

After reading the article, my first thought was "He he, they ran into the same issue everyone else developing under Windows runs into (hacked API, hacked scripting, hacked libraries), which results in a lot of time wasted reinventing the wheel."

We've all run into this issue, and the sad thing is, we cannot get out of this quick sand we're in. Our customers are running Windows, so that's what we develop software for, yet the main reason why they're running Windows is because that's where the software is. A classic example of lockin.

Not wishing to start a platform flame war, but most of these issues you've run into are due to a poor platform choice (technologically poor, but financially sound). Irony at its best.

I feel your pain, brother. For I have walked down the same path, and am stuck just like you (and like millions of our brethren).

Agreed, (none / 1) (#8)
by Arkaein on Wed Mar 22, 2006 at 09:01:28 PM EST

but don't feel too badly for me as I haven't worked on this project since Spring of 2004 (I maybe should have pointed this out in the story, oh well). Now I'm working on my Masters and working part time for a small video game studio. Unfortunately that means I'm still developing on Windows, but on the plus side I don't have to deal with Win32 or MFC.

I actually think Windows was an okay choice of platform for the time, as this project was started in 1997, and predecessor projects which dealt with teeth one or two at a time were developed a few years earlier, also on Windows. Mac OS X wasn't around yet, and Linux was gaining momentum but not nearly as strong a platform as it is now. The big problem was using MFC as our GUI toolkit. Anything would have been better than that piece of crap.

Even the STL wasn't quite ready for prime time (at least on Windows, which I know was a bit late to that party, but maybe on other platforms too), which was really unfortunate because I spent a fair amount of time dealing with resizable arrays and hash tables.

----
The ultimate plays for Madden 2006
[ Parent ]

MFC is still around; I can't frikkin believe it (none / 0) (#25)
by padda on Fri Mar 24, 2006 at 02:44:49 AM EST

It's still version 4.2 as far as I can tell, too. Perhaps ATL is meant to replace it (I haven't used it enough to know), but I doubt that somehow; you'd think there'd still be a need for a basic Qt-like class library.

[ Parent ]
MFC isn't going to see a major refresh... (none / 0) (#58)
by ckaminski on Thu Apr 06, 2006 at 09:46:10 AM EST

What you'll end up seeing is linkage of Windows.Forms to Native C++.  I think even MS realizes how bad developing in MFC has gotten compared to writing Windows apps in Java (even I, a really bad Java developer, can pump out a decent Windows GUI frontend in less than a day).

Like MAPI, kiss MFC goodbye over the long-term.

[ Parent ]

You missed the biggest 'do' (none / 1) (#13)
by wiredog on Thu Mar 23, 2006 at 07:46:19 AM EST

Which you alluded to when you said "didn't seem to believe in comments".

Do comment your code, even the stuff that seems obvious. It may not be that obvious 6 months later.

The first program I worked on out of college (which I took over from another guy), ran 100k lines, about 400kb executable (on a 640k DOS box), with around 100 kb to 150 kb of data. Lots of ugly hacks to squeeze max performance and memory efficiency out of it. And, right before some particularly non-obvious code, the following:
/*Why did I do this?*/

Ever since then I comment everything. As in "int i,j; // loop counters" levels of commenting.

Wilford Brimley scares my chickens.
Phil the Canuck

commenting story (3.00 / 2) (#14)
by Arkaein on Thu Mar 23, 2006 at 09:33:46 AM EST

I guess I skipped it because it was obvious, and I wanted to stick to more interesting anecdotes.

On our team I was definitely the most comment-conscientious developer (though not quite as pedantic as you seem to be), although most of the others did a decent job. I even went so far as to define a specific commenting style for our project and write what I called an "auto-documenter" in Perl which parsed our header files and generated HTML documentation. It was actually pretty decent at what it did; however, this was another case of failing to use an existing and surely superior tool that was already out there.

Plus, quite a few of the other developers didn't use the documentation that was generated. I'm fairly sure that one of those bone-headed algorithms that took a day to run could have been avoided if the developer in question had taken advantage of a class already available (which would have reduced his algorithm from O(n^2) or O(n^3) to something like O(n log n) or O(n log^2 n)).

----
The ultimate plays for Madden 2006
[ Parent ]

I skipped it because it was obvious (3.00 / 4) (#16)
by wiredog on Thu Mar 23, 2006 at 10:52:29 AM EST

My point exactly...

Wilford Brimley scares my chickens.
Phil the Canuck

[ Parent ]
That is just plain silly. (none / 1) (#41)
by skyknight on Sat Mar 25, 2006 at 08:16:04 AM EST

That level of commenting tangibly harms code readability. As someone perusing the code, it is as if you have two minstrels standing side-by-side, one yammering on about a story in a form that a computer is supposed to understand, and another telling what is supposedly the same story but in a different language that humans are supposed to understand. The awful truth, of course, is that you, the human, have to understand both stories, to the point that you can re-tell parts of the story in the language of the former minstrel, operating presumably under the assumption that the latter minstrel wasn't outright lying to you, and worse still you have to translate these modifications into the story being told by the latter minstrel. This is insane, and that's precisely why we humans keep on inventing higher and higher level languages and teaching computers to speak them instead of writing binary code and English in parallel. In software development, duplication is the ultimate evil, and there is no more tangible example of duplication than the comment style that you describe.

To that end, I propose an alternate "do":

Always architect your systems elegantly and craft the code legibly, even if you think you won't ever have to look at it again, because most software systems grow organically and you never know when circumstances will force you to re-visit a piece of code and modify its functionality.



It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
I dunno (none / 0) (#49)
by wiredog on Sat Mar 25, 2006 at 11:08:38 PM EST

Sure, you and I may know that i and j are always loop variables, as x and y are always graphics co-ordinates. Well, unless you're in motion control. Then x and y are axes of movement. Motion control with graphics you may have 2 x's and 2 y's. Fun.

But please put comments at the top of every function explaining what it takes, returns, and does. Likewise above code blocks within a function.

And, no, "/*Why did I do this*/" is not helpful. Especially not above something like:
While(something){For(i=0;i<j;i++){If(pow(i,2.0)){do_stuff}}}

Turned out he was checking to see if bits in an int were set. Made it interesting when we went from 16 to 32 bit...

Wilford Brimley scares my chickens.
Phil the Canuck

[ Parent ]

-1, No one wants to hear about computers. (3.00 / 4) (#15)
by Egil Skallagrimson on Thu Mar 23, 2006 at 09:58:36 AM EST

Please write some fiction.

----------------

Enterobacteria phage T2 is a virulent bacteriophage of the T4-like viruses genus, in the family Myoviridae. It infects E. coli and is the best known of the T-even phages. Its virion contains linear double-stranded DNA, terminally redundant and circularly permuted.

your experiences are non-typical (3.00 / 1) (#19)
by creativedissonance on Thu Mar 23, 2006 at 12:32:00 PM EST

specifically because you didn't have any formal management of the project.

any technical manager worth his salary would have stopped the bullshit you and yours did.

please rewrite when you have actual PROFESSIONAL experience.


ay yo i run linux and word on the street
is that this is where i need to be to get my butt stuffed like a turkey - br14n

You haven't spent much time in the Real World (3.00 / 4) (#20)
by wiredog on Thu Mar 23, 2006 at 02:18:36 PM EST

have you? Or, at least, the Real World of Small Companies.

Wilford Brimley scares my chickens.
Phil the Canuck

[ Parent ]
I gotta second that notion (2.00 / 3) (#21)
by army of phred on Thu Mar 23, 2006 at 04:25:24 PM EST

heh this job I had for a few years, I discussed during the job interview that one of my distinct weaknesses was my poor project management skills.

What does this mean? Oh yeah, when the enterprise app needed replaced, I was assigned project management duties. When I complained that I was specifically unqualified, I was told I had no choice in the matter.

It was really no biggie tho, after I crashed and burned there I found work at a convenience store some months later so it all works out in the end.

"Republicans are evil." lildebbie
"I have no fucking clue what I'm talking about." motormachinemercenary
"my wife is getting a blowjob" ghostoft1ber
[ Parent ]

Regarding code comments... (2.66 / 6) (#22)
by skyknight on Thu Mar 23, 2006 at 09:00:29 PM EST

I am inclined to think that 99% of the time that people write comments that it would have been better to make the actual source code clearer. Comments are inherently duplicative and in danger of drifting out of sync with the code. Proper modularization as well as artful naming of modules, classes, functions and variables takes great skill and a significant investment of time, but the result is worth the trouble. Well designed software reads like a story book and has little need of code-level comments. Mind you, this is a distinct issue from documenting module level interfaces, something that does warrant your attention since people should not have to read your source code to use your library. To me, most heavily commented code looks like a poorly written novel with a ton of author notes scribbled in the margins. It would be better to have it competently written instead of suffering incessant foot-notes of the form "in case you didn't get the point of this chapter because I am a terrible writer..."

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
Commenting vs Refactoring (3.00 / 3) (#23)
by bunk on Fri Mar 24, 2006 at 01:05:49 AM EST

I agree that clean code and a good modular design are far more important than making sure detailed function level comments exist and are synchronized with the code.

In line with this thinking, I tend to spend a lot more time refactoring than creating and maintaining comments.


hunger strike + bong hits = super munchies -- horny smurf
[ Parent ]

Of course, it's easier said than done... (none / 0) (#46)
by skyknight on Sat Mar 25, 2006 at 02:21:02 PM EST

Being able to refactor aggressively implies extreme diligence when it comes to writing automated tests. Without such regression testing capabilities it is too fraught with peril to modify large swaths of your code.

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
one of the strengths of extreme programming (none / 0) (#50)
by bunk on Sun Mar 26, 2006 at 07:19:03 PM EST

test driven development (tests written first and used to define the functionality that needs implementing) + emphasis on refactoring


hunger strike + bong hits = super munchies -- horny smurf
[ Parent ]
ive been studying static analysis (none / 0) (#56)
by cunt minded on Wed Mar 29, 2006 at 12:42:44 AM EST

of programs and one of the questions ive been hoping to answer is: is there some way of doing "impact analysis"? basically you would start where the code was changed and calculate how it changes the result of that bit of code, then you would go up to code that uses the result of that bit of code and calculate how they were changed and so on.

i think there may be a way of using this in conjunction with a spec to effect changes in software safely. this may be a way to not have to worry about developing tests.

but of course theres undecidability and complexity. it seems that wherever theres something that could make everything simpler and easier its undecidable or too expensive to be practical. so even though i think something like what im describing is possible, i think it will prolly be less effective than im hoping

so i was wondering what you thought of that.

[ Parent ]

Yeah - frequently I'll feel the need to explain (none / 0) (#24)
by padda on Fri Mar 24, 2006 at 02:26:52 AM EST

what a var means with a comment, and in the process, come up with a simpler way to say it. When I do that I just rename the variable and forget the comment. As you say, comments get out-of-synch so fast.

[ Parent ]
That's a good habit to have... (none / 0) (#45)
by skyknight on Sat Mar 25, 2006 at 02:15:51 PM EST

A proper fix beats kruft any day of the week.

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
No no no! (3.00 / 3) (#28)
by ttfkam on Fri Mar 24, 2006 at 10:40:44 AM EST

Good comments and good code are not duplicative. They serve two distinct purposes that should never be interchanged.

Code describes implementation.

Comments describe intention.

Two very different items, but both -- done correctly -- are important. If your code or your comments fail in their primary purpose, the other should provide some assistance in sorting it out. Just because many people write bad comments (or no comments) doesn't mean that comments themselves should be considered optional, expendable, or a waste of time.

Bad comment: for (int i=0; i<10; ++i) { ... } // loop ten times

Good comment: for (int i=0; i<10; ++i) { ... } // clear the cache

Comments that focus on intention rather than implementation aren't duplicative and are more likely to remain in sync with the code.

And remember, everyone is learning (assuming they are interested in improving). Not everyone will be the coder that you are. Comments on intention can help the coder(s) that come after you understand your idioms, aiding the learning process, and reducing the likelihood of unnecessary code rewrites. They also help you sort out what came before you, reducing the time needed to get up to speed. You may learn a new technique or more easily target those areas that require maintenance.
If I'm made in God's image then God needs to lay off the corn chips and onion dip. Get some exercise, God! - Tatarigami
[ Parent ]

*sigh* (none / 0) (#29)
by ttfkam on Fri Mar 24, 2006 at 10:45:53 AM EST

Sometimes <p> is a paragraph tag, and sometimes it's written out? What a pain.

Bad comment: for (int i=0; i<10; ++i) { ... } // loop ten times

Good comment: for (int i=0; i<10; ++i) { ... } // clear the cache

The point being that the first easily falls out of sync. The second is an intention -- a concept. Whether the cache is cleared by looping, using an STL algorithm, a function call in an external library, etc., it still remains relevant.

If I'm made in God's image then God needs to lay off the corn chips and onion dip. Get some exercise, God! - Tatarigami
[ Parent ]

I don't think so: (3.00 / 4) (#30)
by hummassa on Fri Mar 24, 2006 at 12:54:16 PM EST

in your example:
for(int i = 0; i < 10; ++i ) { } // loop 10 times

in my code I would do:
inline void Cache::clear() { for(int i; /*...*/ }

and
cache.clear()

my rule of thumb is: the place to describe intent is the method's name. one method, one function.

[ Parent ]

I disagree... (none / 0) (#39)
by skyknight on Fri Mar 24, 2006 at 10:33:48 PM EST

Your code should simultaneously be describing an algorithm to both a programmer and to a piece of hardware. If you think about it, what is an algorithm, if not a precise description of intent? I never said that accomplishing self documenting code was easy. In fact, it is one of the most demanding and creative facets of good software development. Most people probably never get any good at it. All the same, it is possible.

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
Some thoughts (none / 0) (#43)
by Arkaein on Sat Mar 25, 2006 at 12:24:28 PM EST

I like the main idea of what you're saying, but I feel that there are a number of situations where using purely self documenting code is simply not practical.

For example, you mention above that library documentation should be good so that people can use your library without reading your code, and that functions should be named appropriately so that their use and intent can be easily recognized. But what about when you need to use someone else's library, and their documentation is lacking, or their function names are not so transparent? Maybe a library has multiple functions which have similar names or overload the same name, but differ in some key aspect that may not be readily apparent.

I also feel that intent is hard to convey purely through code no matter how hard you might try. General purpose functions can be used to achieve a wide variety of ends, many of which will not be anticipated when they are written. This is especially true when code is added for diagnostic or debugging purposes. Here's an example from a visualization program I'm working on to illustrate this:

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

What this does is obvious (to anyone familiar with OpenGL anyways), it clears the offscreen rendering buffer. However, here is the actual code with a leading comment:

// NOTE: do not clear to see texture in background
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

By disabling this code the user can see a texture in the background that will eventually be used in a pixel shader, which is very helpful for debugging. Anyone who might want to modify the shader in the future (myself included) might find this useful in the debugging process. By inserting the "NOTE:" keyword I've provided a standardized target for text search that can bring a developer to bits of code of likely interest. Similarly, I mark code that generates purely diagnostic output with "// DEBUG:" to make it obvious that this code can be disabled without affecting the rest of the application. In most cases the code in question is just one or two lines, so moving the code into separate functions would increase the code size and require a developer to jump around in the code to read it as it would be executed. I'd be interested in alternative methods you would use or have used in these types of situations.

----
The ultimate plays for Madden 2006
[ Parent ]

Dig deeper... (none / 0) (#44)
by skyknight on Sat Mar 25, 2006 at 02:14:12 PM EST

If the library that you are using has such a horrendous API, then you should be considering two possible options, only one of which is guaranteed to be available. Firstly, you should consider if there are alternatives available that provide comparable functionality but with a cleaner API. Secondly, you should consider writing an adapter that wraps this library in an API that doesn't suck. The latter route has three-fold benefits. First, it gives you the ability to plaster over what may not only be a bad API but one that is bloated as compared to your requirements. Second, by writing an adapter class and having it implement an interface of your choosing, all of your code can program to that interface, so later down the road you can swap out one implementation for another, the only task being to write a new adaptor class instead of walking through your entire source tree, ripping out the coupling to the old library, and inserting coupling to the new library. Third, by adopting such an architecture you facilitate testing by creating the possibility to swap in mock objects that mimic the functionality of other libraries in a faked-up and controlled way. As a general principle, to the greatest extent possible you should avoid programming to concrete classes, preferring interfaces instead. On a related note, make sure that your architecture makes it easy to plug in different implementations.

Oh, and overloaded functions that have drastically different behaviors from one another are an abomination. They don't exist in dynamically typed languages at all, and should only be used in strongly typed languages to support similar operations on different types, e.g. take a double instead of an int for a C++ method and do a floating point operation of the same flavor. One of the most ridiculous pieces of code I ever saw was a C function that took eight arguments, some of which were of type void*, and the behavior of the function was wildly different depending on the value of various arguments, with the interpretation of the void* arguments being contingent on the values of other arguments. It was written by one of my favorite professors from graduate school who is perhaps God's gift to the network protocol field but a terrible software developer all the same. I guess you can't be everything...

The OpenGL code snippet that you have provided is indicative of a shortcoming in your architecture, though one that you could probably resolve easily enough. Specifically, you need to make your code more data driven and create faculties that make such configuration easier. As you have it now, you're essentially tangling policy and mechanism. Instead, you should have a configuration file or database that holds values that drive code behavior. Then instead of digging through source code, modifying source code, and recompiling, a developer could twiddle a single parameter in a configuration file. Furthermore, all of the configuration stuff that a developer might want to toggle for a given module could be centralized in a single driving file. You could have several different versions of this file for running under different environments, e.g. one for production operation that is lean and mean, another for testing that pushes specific diagnostic information to a log file, another for developers performing specific tasks, etc. You could put explanatory text in each of these files. Then you would no longer be in the business of explaining mechanism, but rather providing a rationale for policy in a given circumstance.

So, you don't need to have your code littered with "NOTE:"s. It would be better to have the ability to drive those interesting pieces of code with data in a configuration file. You shouldn't need "DEBUG:"s throughout your code. It is better to have such code execute conditionally, and dynamically specify the mode of operation of the system. Set the appropriate thresholds for execution for various regions of code, i.e. DEBUG, INFO, WARN, ERROR, and FATAL, and then configure the current mode of operation for your modules.

I hope that is helpful...



It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
Interesting response (none / 0) (#47)
by Arkaein on Sat Mar 25, 2006 at 02:51:47 PM EST

Thanks for taking the time to go into detail. In general what you say sounds like very good advice, but I think there are valid exceptions. Specifically dealing with the example of debugging code, your method would save the trouble of recompiling code to toggle debugging states, which would be a nice benefit. It would also add a fair amount of complexity to the project. It's a tradeoff I think, and the sophisticated configuration options are likely to look better and better the larger the project gets. I think there is a tipping point though, below which the productivity gained by greater control over the execution context of the code cannot offset the extra work needed to set up that control. This may be especially true for debugging code which may be quite temporary. I'd rather just put in a one line output statement plus brief comment and be done with it than update the code and a separate config file.

On the topic of bad APIs, I was wondering if you were going to suggest a wrapper strategy. And again, while this has advantages in some situations it just doesn't seem like the most efficient method of addressing the situation. If you want to wrap a single function call with another purely for the sake of clarity you might as well just use a comment to explain it and be done. Less code, less indirection. A wrapper could also harm clarity in some cases. If you're dealing with a bad API, but a bad API that a lot of fellow developers are familiar with, then wrapping the API's ugliness may actually hinder clarity by forcing developers to learn what is essentially an additional API. This is another tradeoff that might be worth it in some circumstances, but it depends on many factors such as developer familiarity with the API, the API's quality, and how much it is used in the project.

What's interesting about the whole API thing is that in our code rewrite we did create wrappers for many parts of the Windows API, and these wrappers provided simpler interfaces that were better customized to our specific needs. This also took a huge amount of time and resulted in little tangible benefit. We spent too much time trying to build a perfect architecture, and too little time just trying to solve the problems that the software was intended to solve (which were in the field of dentistry, not software engineering).

----
The ultimate plays for Madden 2006
[ Parent ]

Comments are not the point (3.00 / 2) (#32)
by gidds on Fri Mar 24, 2006 at 04:05:44 PM EST

Good post. I think most people look at this issue the wrong way around. You shouldn't be asking "What should I put in my comments?" (or even "Should I put in comments?").

You should be asking "How can I make this code as clear as possible?"

That involves such things as:

  • Clear, simple ideas
  • Clear, simple algorithms and designs
  • Clear, simple, meaningful layout and formatting
  • Clear, simple, meaningful identifiers
(Begin to spot the pattern here?)

Yes, it's also likely to involve comments. But comments are the last choice: it's far better to make the code clear, than write confusing code and hope that comments will sanitise it.

Things that are obvious from the code don't need to be in comments: at best, they just make the code longer, and at worst they can make it more confusing, especially when (not if) they get out of date.

So it's only things that the code doesn't make clear which belong in comments. That will usually involve:

  • Big-picture stuff: why this file is here, its ultimate purpose in life, how it relates to the rest of the system, who should be using it, and why
  • Information for the caller: put yourself in the mind of a coder working on another bit of the system, who wants to use this class/method/function/member. What do they need to know? What parameters and preconditions should they set up, and what can they expect out of it?
  • Any important design decisions
  • Any tricky bits of implementation that need to be explained
Most of those naturally belong at the top of the file/method/function; treat lots of comments deep inside the code with suspicion.

But anyway, as I said the goal should always be to make the code clear, and comments are just one of the tools for doing that. It should be the last one you reach for, not the first.


Andy/
[ Parent ]

Personally, (none / 0) (#57)
by daani on Thu Mar 30, 2006 at 02:59:59 AM EST

I like to write comments that make me sound cool. Like it's so ridiculously easy to follow that all anyone needs is a breezy and humorous commentary from me. My assumption being that anyone who can't follow the oneshot garbage my comments surround will be too intimidated by my obvious genius to complain to any bosses.

I also often include some amusing observations regarding the design (and by extension the wisdom of the designer) of any APIs my code uses. My office is loaded with people much dumber than me, so I have to do that a lot.

I mean, that's what everyone here does, right?


[ Parent ]

This was a research project ... (none / 0) (#26)
by walwyn on Fri Mar 24, 2006 at 04:33:39 AM EST

... shouldn't they have researched it first?

----
Professor Moriarty - Bugs, Sculpture, Tombs, and Stained Glass
Timeline (none / 1) (#27)
by Arkaein on Fri Mar 24, 2006 at 09:01:10 AM EST

I probably should have been clearer about when this project took place. I said I worked on it for 5 years, but that was from January of 1998 to May of 2003 (I think I said 2004 in an earlier comment, that was a mistake).

At the time our project was cutting edge. We certainly weren't the only ones to be carrying out this type of project, but we were at the forefront. The driver was that scanning technology, along with general desktop computing and graphics technology were just getting to the level where this kind of stuff became feasible.

Under the link you provided, and looking at the first case study, it appears that Delcam started working on their dental software after purchasing a Renishaw Cyclone scanner (which has fairly similar capabilities to the Steinbichler COMET 100 scanner we used, though somewhat different technologies) in 1999, at which point they started working on custom software to augment the process. This is fairly consistent with our project, though our scanner was acquired in late 1997, a few months before I started working on the project.

----
The ultimate plays for Madden 2006
[ Parent ]

The custom software ... (none / 0) (#31)
by walwyn on Fri Mar 24, 2006 at 04:04:32 PM EST

... is a process. It just ties together general purpose inspection, cad, and cam software, with minimal user input.

----
Professor Moriarty - Bugs, Sculpture, Tombs, and Stained Glass
[ Parent ]
Well, that's fine (none / 0) (#33)
by Arkaein on Fri Mar 24, 2006 at 04:21:30 PM EST

but it doesn't change the fact that it didn't exist until at least part way through the development of the VDP. Even then, there were several different dental problems that the VDP was being used for (again, I'm no dentist myself so I can't go into a lot of detail here), but I would guess that there was not 100% overlap between the goals of the VDP and Dental Cadcam, so its mere existence probably did not invalidate the purpose of the project.

----
The ultimate plays for Madden 2006
[ Parent ]

But it did exist ... (none / 0) (#35)
by walwyn on Fri Mar 24, 2006 at 04:52:26 PM EST

... the point being that in 1997 there was already software that could take point cloud scanned data and convert it into cad surfaces, there was already software that could perform boolean algebra on cad models, etc, etc.

The dental cadcam is just an application of existing software. In fact it's no different from reverse engineering a car bumper or this.

----
Professor Moriarty - Bugs, Sculpture, Tombs, and Stained Glass
[ Parent ]

CAD/CAM is not dentistry (none / 0) (#38)
by Arkaein on Fri Mar 24, 2006 at 07:22:37 PM EST

You can't just take a replica of teeth, scan it, get a computer model and do relevant dental analysis of it with CAD software. That's the first half of the process, and an essential component, but that alone was not the primary purpose of the software. The primary purpose was to build and explore the possibilities and limitations of software for doing dental diagnosis and treatment planning. There's a lot more to that than just tying together existing software.

I also think you are overestimating the capabilities of point cloud to CAD translation. It's not a trivial task to do even now, and scanner technology has developed a fair amount in the past decade. After my time working on the VDP I did a stint at a dimensional inspection and reverse engineering company running a larger version of the same scanner (the Steinbichler COMET 400), mainly for reverse engineering purposes, so I do have an idea what I'm talking about here. My CAD experience is quite limited, but I can tell you that organic shapes like teeth are not easily plugged into CAD software; rather, it is a fairly tedious process that involves cleaning up the data, fitting a NURBS surface to it (far from automatic for complex surface topologies), and then exporting the resulting solid. Even after export, traditional CAD software doesn't handle NURBS surfaces as easily as it does directly parameterized objects, because they don't share the same geometric characteristics.
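To give a sense of the machinery involved (just an illustrative sketch I'm adding here, not code from the VDP or from any CAD package), here is roughly what evaluating a single point on a NURBS patch entails: a knot vector in each direction, a grid of control points, and per-control-point weights, none of which exist for a directly parameterized primitive like a cylinder:

#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

// Cox-de Boor recursion for the B-spline basis function N_{i,p}(u).
// 'knots' is a non-decreasing knot vector.
double basis(int i, int p, double u, const std::vector<double>& knots)
{
    if (p == 0)
        return (u >= knots[i] && u < knots[i + 1]) ? 1.0 : 0.0;

    double left = 0.0, right = 0.0;
    double d1 = knots[i + p] - knots[i];
    if (d1 > 0.0)
        left = (u - knots[i]) / d1 * basis(i, p - 1, u, knots);
    double d2 = knots[i + p + 1] - knots[i + 1];
    if (d2 > 0.0)
        right = (knots[i + p + 1] - u) / d2 * basis(i + 1, p - 1, u, knots);
    return left + right;
}

// Evaluate one point S(u,v) on a rational (NURBS) surface defined by a grid
// of control points 'ctrl' with per-point weights, degrees p and q, and two
// knot vectors.
Point3 surfacePoint(double u, double v, int p, int q,
                    const std::vector<double>& knotsU,
                    const std::vector<double>& knotsV,
                    const std::vector<std::vector<Point3> >& ctrl,
                    const std::vector<std::vector<double> >& weight)
{
    Point3 s = { 0.0, 0.0, 0.0 };
    double denom = 0.0;
    for (std::size_t i = 0; i < ctrl.size(); ++i) {
        double Nu = basis((int)i, p, u, knotsU);
        if (Nu == 0.0) continue;
        for (std::size_t j = 0; j < ctrl[i].size(); ++j) {
            double c = Nu * basis((int)j, q, v, knotsV) * weight[i][j];
            s.x += c * ctrl[i][j].x;
            s.y += c * ctrl[i][j].y;
            s.z += c * ctrl[i][j].z;
            denom += c;
        }
    }
    if (denom > 0.0) { s.x /= denom; s.y /= denom; s.z /= denom; }
    return s;
}

And that's just evaluation. Fitting is the inverse problem: choosing the control grid, knots, and weights so the surface hugs a noisy, organically shaped scan, which is the part that was so labor intensive.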

In short, the differences are present, and they are huge.

----
The ultimate plays for Madden 2006
[ Parent ]

Hmmm ... (none / 0) (#51)
by walwyn on Mon Mar 27, 2006 at 05:41:52 AM EST

... I put the link in because that is what we do. We've done point cloud to CAD conversion for 15+ years, and inspection for 10+ years, using laser scanners as the inspection device since they were introduced.

BTW, you're wrong about CAD software and NURBS surfaces, by about 20 years.

----
Professor Moriarty - Bugs, Sculpture, Tombs, and Stained Glass
[ Parent ]

Okay (none / 0) (#52)
by Arkaein on Mon Mar 27, 2006 at 10:11:19 AM EST

I'm sure you know a lot more about CAD than I do; I only have brief experience with it compared to my greater experience developing software for graphics and visualization.

However, I'm not sure what you mean when you say I'm wrong about CAD and NURBS surfaces. I don't doubt that there have been tools for translating NURBS surfaces into CAD models for a long time. What I experienced (and I admit my experience in this particular area is limited) is that the software we used was not capable of automatically fitting a decent NURBS mesh onto anything but a trivial surface, and that defining a good NURBS mesh was one of the most labor-intensive parts of converting scanned data into a CAD-ready NURBS surface.

The tools we used were the Steinbichler COMET 400 for scanning, Polyworks for scan alignment and merging, and Geomagic for surface cleanup, hole filling, and NURBS surface creation. Geomagic had powerful NURBS surface creation features, but it was a lot of work to define the grids so that all parts of the model were captured with sufficient detail. I probably only created a half dozen NURBS surfaces, but I got to be reasonably proficient with the software. The hardest surface I ever created was from a scan of a car dashboard insert. There were several holes where dials and the stereo would go, and the back was full of thin ridges that provided structural support and small posts where the piece plugged into the console. Geomagic was not even close to being able to create a NURBS surface for that automatically. If there's significantly better software out there I'd like to hear about it.

----
The ultimate plays for Madden 2006
[ Parent ]

Actually ... (none / 0) (#53)
by walwyn on Tue Mar 28, 2006 at 05:56:07 AM EST

... I'm intrigued as to why you wanted to create NURBS when the goal was to compare changes over time. Why not just compare the point cloud data?

Actually, you need to ask yourself "When do I need surfaces?" In most cases you don't, and either the point cloud or an STL file will suffice for dental applications.
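Just to be concrete, here's a minimal sketch (a hypothetical example of mine, not production code) of what comparing the clouds directly amounts to, assuming the two scans have already been registered into a common coordinate frame. Real packages use a spatial index and signed point-to-surface distances, but the principle is the same:

#include <cmath>
#include <cstddef>
#include <vector>

struct Point3 { double x, y, z; };

// For each point in 'test', find the distance to its nearest neighbour in
// 'reference'.  Brute force O(n*m); a k-d tree or voxel grid would be used
// for real scans, but the idea is identical.
std::vector<double> nearestDistances(const std::vector<Point3>& test,
                                     const std::vector<Point3>& reference)
{
    std::vector<double> result;
    result.reserve(test.size());
    for (std::size_t i = 0; i < test.size(); ++i) {
        double best = 1e300;
        for (std::size_t j = 0; j < reference.size(); ++j) {
            double dx = test[i].x - reference[j].x;
            double dy = test[i].y - reference[j].y;
            double dz = test[i].z - reference[j].z;
            double d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < best) best = d2;
        }
        result.push_back(std::sqrt(best));
    }
    return result;
}

// Average deviation between a newer and an older scan of the same object.
double meanDeviation(const std::vector<Point3>& newer,
                     const std::vector<Point3>& older)
{
    std::vector<double> d = nearestDistances(newer, older);
    double sum = 0.0;
    for (std::size_t i = 0; i < d.size(); ++i) sum += d[i];
    return d.empty() ? 0.0 : sum / d.size();
}

The deviations that fall out of that are your wear or movement numbers; no surfacing required.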

BTW, we've written bespoke applications for the dental industry for over 10 years.

Yes, NURBS fitting can be tricky, but surfacing is nowhere near as hard as it once was.

----
Professor Moriarty - Bugs, Sculpture, Tombs, and Stained Glass
[ Parent ]

More details (none / 0) (#54)
by Arkaein on Tue Mar 28, 2006 at 09:16:45 AM EST

We didn't actually do anything with NURBS surfaces in the dentistry applications; I only mentioned NURBS when I went a bit off topic in discussing point cloud to CAD translation. This was something I did in my stint at the inspection and reverse engineering outfit after I had finished working on the dental project.

When we did comparisons they were done using point (and triangle) data: not STL, but a similar custom format I came up with which stored surface normals internally and included triangle strips for fast rendering.
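To give a concrete picture (a rough sketch with made-up names, not the actual format or code we used), the structure and its draw path looked something like this: the normals travel with the vertices, and each strip goes to OpenGL in a single call:

#include <cstddef>
#include <vector>
// On Windows, <windows.h> must be included before the GL header.
#include <GL/gl.h>

// Hypothetical shape of such a mesh record: per-vertex positions and normals
// stored together, plus one index list per triangle strip.
struct StripMesh {
    std::vector<float> positions;                    // x, y, z per vertex
    std::vector<float> normals;                      // nx, ny, nz per vertex
    std::vector<std::vector<unsigned int> > strips;  // indices, one list per strip
};

// Draw the mesh with OpenGL 1.1 vertex arrays.  Assumes a current GL context
// and that lighting/material state has already been set by the caller.
void drawStripMesh(const StripMesh& mesh)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, &mesh.positions[0]);
    glNormalPointer(GL_FLOAT, 0, &mesh.normals[0]);

    // N indices in a strip yield N - 2 triangles, which is where the
    // rendering speedup over individual triangles comes from.
    for (std::size_t s = 0; s < mesh.strips.size(); ++s) {
        const std::vector<unsigned int>& strip = mesh.strips[s];
        if (strip.size() >= 3)
            glDrawElements(GL_TRIANGLE_STRIP, (GLsizei)strip.size(),
                           GL_UNSIGNED_INT, &strip[0]);
    }

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

With vertex arrays the whole model goes to the card in a handful of calls per frame, which mattered a lot on the graphics hardware of that era.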

As far as the dentistry applications go, the group I worked with had been doing some of this work for many years before my particular project. The first version of quantitative dentistry software from the department was developed (I think) in the mid-to-late 1980s by my professor and worked with one tooth scanned at a time. I'm not sure what hardware was used for this. The second-generation software, called AnSur3D, was written during the late 90s by the grad students I mentioned at the beginning of the article. AnSur used a custom-built contact profiler and could scan larger sets of teeth, up to four I think, as long as they were in a fairly straight line. Stratus and Cumulus were essentially the third-generation software; this was the point where the scanning hardware had gotten good enough and cheap enough to allow quick scanning of an entire jaw or bite record.

The previous software was essentially inspection software for teeth. Cumulus was meant to do this, but also quite a bit more. One of the larger projects (though one that I don't think was ever quite successful) was to accurately model the opening and closing of the jaw and how the teeth came into contact, using a series of bite records taken from a patient and knowledge of the skull's anatomy. This is the type of thing I meant when I said that the project included things that simply couldn't be done by tying together existing CAD and inspection software. This was definitely new, though we probably weren't the only ones working on the problem.

Now, to go off the main topic once more: you say that NURBS fitting isn't as hard as it used to be, so I ask, using what software? My experience is only from a few years ago, so was Geomagic horribly behind the state of the art, or have things changed dramatically in the last few years? I know that Polyworks had NURBS capabilities that were worse than Geomagic's.

----
The ultimate plays for Madden 2006
[ Parent ]

I've not seen Geomagic ... (none / 0) (#55)
by walwyn on Tue Mar 28, 2006 at 10:41:40 AM EST

... so I can't comment on its NURBS creation; perhaps I can find someone who can.

Meanwhile, a few years ago we scanned, surfaced, scaled up, and divided this statue. I can't recall offhand how long it took; a few months or so, I think. Much of the problem involved the scaling: in essence, any blemish in the original scan would be huge when scaled. A curl of the hair is almost the size of an adult, as you can see from one of the diagrams.

----
Professor Moriarty - Bugs, Sculpture, Tombs, and Stained Glass
[ Parent ]

yawn (none / 0) (#34)
by tkatchevzz on Fri Mar 24, 2006 at 04:51:34 PM EST



As a sophomore in college... (1.33 / 3) (#36)
by Journeyman on Fri Mar 24, 2006 at 06:16:14 PM EST

... what qualified you to judge others "talented"?

It's not that hard (none / 0) (#37)
by Arkaein on Fri Mar 24, 2006 at 07:11:00 PM EST

When you see software that is fairly sophisticated, was developed reasonably quickly, and had only two people responsible for it, it's a fairly logical conclusion. I was also able to compare their skills against my own, as well as against other developers I've worked with during the five years on that project and since. Also, I'm not just retelling my judgement of them at the time, but making it in the context of the eight years of experience I've gained since I started on the project. Finally, they had written the immediate predecessor to the software I worked on, and that software was quite successful in its use in research. I also saw their code, and even as a sophomore I had been coding for years.

This also wasn't just my opinion: no one who worked on the project doubted their skills, and they were good enough to start their own company (which I believe at least got off to a successful start, though I haven't followed them for years). I did pick the word "talented" deliberately, though, instead of something like "great", because despite their skills the way they developed the code left the rest of us in quite a bind.

----
The ultimate plays for Madden 2006
[ Parent ]

Not hard to know talent??? (none / 1) (#40)
by Journeyman on Sat Mar 25, 2006 at 03:43:11 AM EST

I did pick the word "talented" deliberately, though, instead of something like "great", because despite their skills the way they developed the code left the rest of us in quite a bind.

That's my point. You pervert the word "talented" by using it pejoratively. That alone stopped me reading the rest of your article.

Judge not, lest thee be judged. Drop adjectives you're not qualified to use. They were simply "programmers". Like you.

In future, consider the adjective "competent". It is sufficient both to praise and denounce a programmer.

[ Parent ]

That's funny (none / 0) (#42)
by Arkaein on Sat Mar 25, 2006 at 09:45:47 AM EST

I'm not capable of making a basic assessment of talent of two individuals I worked with, but you are capable of such an assessment by reading my writing, despite claiming my writing is poor. Even more ludicrous is that you make either claim after stating you only read the first sentence!

I'm not even sure where to start pointing out the holes in such tattered logic, and I won't even bother because I'm guessing it will fall on deaf ears.

----
The ultimate plays for Madden 2006
[ Parent ]

I'm all ears! (1.33 / 3) (#48)
by Journeyman on Sat Mar 25, 2006 at 06:33:28 PM EST

"I'm not capable of making a basic assessment of talent of two individuals I worked with"

You said it. I'm calling you on making a value judgement that isn't supported by your evidence.

I don't think a "college sophomore" may assess whether another student has talent. I say that is an assessment reserved for an instructor.

Did your instructor, or some other instructor, describe the PhD candidates in question as "talented"?

[ Parent ]

It's a retrospective! (none / 0) (#60)
by webmonkey on Fri Apr 07, 2006 at 10:19:37 AM EST

I don't think a "college sophomore" may assess whether another student has talent.

Did you not get the point that this is a retrospective?

The assessment of the grad students as "talented" is in retrospect and written with eight years of experience as a software developer. I would hope that after eight years, someone could look back at a colleague and say "Hey, those guys were talented - I was impressed then but, based upon what I now know, I realize how [un]talented they really were."

[ Parent ]

What is talent? (none / 0) (#59)
by lukme on Thu Apr 06, 2006 at 12:58:26 PM EST

To me talent is something that is rather innate. As far as I can tell, if someone is talented, it will show up quickly - think of music (I consider programming mostly art/design).

Competence, as far as I can tell, can be obtained either through talent (the easy way) or through hard work.

Greatness is both talent and hard work.

I think it is easy to recognize talent, harder to recognize competence (especially for incompetent people), and even harder to recognize greatness (without the perspective of time).


-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
[ Parent ]
The professor is talented... (none / 0) (#61)
by thoglette on Mon Apr 10, 2006 at 09:52:24 AM EST

...but like most research people he likely has zero interest or experience in getting his product on the shelf, in a box, with a manual, in time for the Christmas sales.

If you can eventually get a proof-of-concept out of a uni lab, which sort of works, some of the time, then you're doing well.

Don't get me wrong, a researcher is supposed to be like that: that's how the important stuff gets discovered.

But don't expect to find cutting edge project management skills. Heck, they're rare enough in industry!

(And yes, Virginia, I have got just a little product development experience)

insightful, thanks (none / 0) (#63)
by 10west on Wed Dec 27, 2006 at 04:21:08 AM EST

The middle section, about doing stuff that's actually fun, may actually become useful. I always did my own stuff, and finished theirs at home, because work at work led to boredom, while play at work added new features to future projects, and work at play was more productive; which leads me to more industrial diagnostics of the entire setting needing change, which is why Bill works at home, for?
