The processes you've outlined are one way of viewing a project. There are a few trillion others, which all have their merits and pitfalls.
The way I was taught to handle large-scale projects was to follow the Software Engineering software lifecycle, which was (roughly) described as:
Specification -> Design -> Reification -> Initial Implementation -> Testing (nth generation) -> Validation (nth generation) -> Bug Fixing (nth generation) -> cycle round to testing again
How to do any of the steps you outlined, without "doing the homework for you"? That's a tough challenge, but I'll give it a go. :)
First, we come to planning. Planning will involve establishing what the problem is that you are trying to solve. If you're trying to develop an "expert system", or automate some manual process, you MUST be careful to avoid a common pitfall of assuming either you OR the other person knows what they're doing. Often, manual processes are instinctual or so automatic that the person is not going to be 100% aware of what they're actually doing. Your job would include deducing some of the requirements.
Requirements Definition - Once you know what the problem =REALLY= is (not simply what people =think= it is), you're in a position to say what the requirements are. In fact, it should follow. Most of this stuff does. Computing is not hard. But it's easy to =make= it hard.
Detailed Design - Depends on the problem and how critical it is to get the solution "right". A program to check if cheese is ripe is not as critical as an embedded controller in a nuclear reactor. The chief problem is this: Good designs will produce bug-free code. Simple designs will produce efficient code. Good designs are NEVER simple designs.
My recommendation here is to look at the nature of the problem, and place it in one of a number of categories, solving each category with the appropriate method:
- Mission-Critical (a bug could cause significant damage, loss of life or some other significant loss): Use a formal specification language, such as Z. For something like this, you'd also be wise to go with the ISO 9000 standard.
- Sensitive (a bug could seriously impair what you're trying to do, but failures aren't catastrophic): Again, you'd be wise to use Z, but don't bother with ISO 9000 compliance. The overhead isn't worth it, here.
- Major (a bug will inconvenience users to the point where it's going to damage the company's image and may affect sales, but it's not going to be a disaster, even then): Here, a formal specification language is going to eat up time and resources for no perceptible gain. Getting it to work usably is more important than guaranteeing perfection. Structure diagrams are quick, easy to draw, and easy to code. Something like a Jackson Structure Diagram (JSD) would be great. Once you have a JSD, you can "expand" it into a Flow Chart, if you prefer working with low-level diagrams.
- Minor (a bug will irritate, but nothing coffee won't solve. Nobody's going to lose any data, and in the scheme of things, nobody is even going to really care): Here, "detailed design" is actually redundant. You're actually looking for some quick sketches of the different sections and what data from your planning & requirements is applicable to what.
Iterative development and testing are essentially part and parcel of the same thing - the life-cycle of the designed software. Each generation is tested - often in a test harness, though also in the "Real World"(tm), and the results fed back to the developers to perform the next round of development.
Test Harnesses are one of the least-used, but most vital of ALL the utilities you will EVER use in software development. "Random" testing (ie: letting the user have a go) won't stress the product enough, in 99.9% of all cases. Users are unpredictable, and what they do one week, month or year, will often be totally different from what they do the next day.
Test Harnesses allow you to stress-test segments of the code, by force-feeding known inputs, and comparing the results with known outputs. This allows you to seriously put the code through its paces, testing every imaginable combination of inputs and outputs, whether they make sense or not, or even whether they're allowable or not.
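To make that concrete, here's a minimal sketch of such a harness in Python. The function under test (`cheese_is_ripe`, borrowing the cheese example from earlier) is entirely hypothetical; the point is the shape: a table of known inputs and expected outputs, including inputs that aren't allowable, fed through mechanically.

```python
# A minimal test harness: force-feed known inputs, compare against known
# outputs. The function under test (cheese_is_ripe) is a made-up example.

def cheese_is_ripe(age_days, humidity_pct):
    """Toy function under test: 'ripe' after 60 days, provided the
    humidity stayed within a sane range."""
    if not 0 <= humidity_pct <= 100:
        raise ValueError("humidity out of range")
    return age_days >= 60

# Each case: (inputs, expected result or expected exception type).
CASES = [
    ((59, 80), False),          # one day short - not ripe
    ((60, 80), True),           # boundary case - ripe
    ((1000, 80), True),         # absurd but allowable input
    ((60, 150), ValueError),    # disallowed input - must be rejected
]

def run_harness():
    failures = 0
    for args, expected in CASES:
        try:
            result = cheese_is_ripe(*args)
        except Exception as exc:
            result = type(exc)      # an exception is a result, too
        if result != expected:
            failures += 1
            print("FAIL: %r -> %r, expected %r" % (args, result, expected))
    print("%d/%d cases passed" % (len(CASES) - failures, len(CASES)))
    return failures

run_harness()
```

Note that the disallowed input (humidity of 150) is deliberately in the table: the harness checks that the code rejects it, not merely that the code survives it.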
If your code survives a well-built test harness, THEN it's time to finish documentation. Finish? Yes! You SHOULD start documenting the program, the moment you start planning. Ideally, you'd start several days earlier. Documentation SHOULD contain a sketch of every stage of development, every major decision, every bug found & how fixed, and what you ate for breakfast. In short, each section needs to be brief, but you need a lot of them to be complete.
Deployment: Don't even bother, until the code has been stress-tested in a test harness which checks for valid, invalid and erroneous data, and valid, invalid and erroneous combinations of data. Once it's passed those checks - and the strictness will depend on the type of code, as outlined earlier - THEN AND ONLY THEN do you even consider deployment.
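One cheap way to get those combinations is to list valid, invalid and erroneous samples per field, then take the cross-product, so every pairing gets exercised. A sketch, again with a hypothetical `validate()` routine standing in for whatever your code's input check is:

```python
# Sketch: per-field samples tagged valid / invalid / erroneous, then test
# every combination of them. The validate() routine is hypothetical.
import itertools

def validate(age_days, humidity_pct):
    """Hypothetical input validator: accept only sane values."""
    return (isinstance(age_days, int) and age_days >= 0
            and isinstance(humidity_pct, (int, float))
            and 0 <= humidity_pct <= 100)

# Per-field samples: (value, is_acceptable).
AGE_SAMPLES = [(30, True), (-5, False), ("soon", False)]        # valid / invalid / erroneous
HUMIDITY_SAMPLES = [(80, True), (150, False), (None, False)]    # valid / invalid / erroneous

failures = 0
for (age, age_ok), (hum, hum_ok) in itertools.product(AGE_SAMPLES, HUMIDITY_SAMPLES):
    expected = age_ok and hum_ok    # a combination is valid only if every field is
    if validate(age, hum) != expected:
        failures += 1
        print("FAIL: validate(%r, %r) should be %r" % (age, hum, expected))
print("%d combination failures" % failures)
```

Nine combinations from six samples - the cross-product does the tedious part for you, and it scales the same way for three or four fields.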
Once you deploy, that's it. You've now shown your cards to the users. If you skimped on the collection of requirements, mucked up on the specification, or put blind faith in a function as unnecessary to test, this is when you need to find religion. I'd also check prices at the local church. On the other hand, if you've been thorough and intelligent, so far, this section is plain sailing. Hand the code over, and/or install it on their systems, and you'll be fine.
Completion: No such animal. No matter =HOW= careful you've been, completion does not exist. It's a mirage. (No, not the French fighter, either.) Once you've handed over to the end user, you need to start looking for bug reports, support calls, users begging for extra bells & whistles, etc. These need to be fed back into the Detailed Design stage, so that you can check them against the constraints the design imposes, and those that can be implemented (within a reasonable cost) get specified, designed and implemented as above.
NOTE: There's an important word here -- constraint. Memorise it. This word is going to be your best friend, or your worst enemy. It's your choice, and it all depends on how you treat it. Constraints allow you to define what's allowable, where and when. This can be used in the test harness as part of the validation.
On the other hand, constraints ALSO confine you to what's allowable. Once a constraint is defined, it's going to be VERY hard to change, as it impacts every possible section of code the user can reach, and every possible path the user can follow. That's a lot of impact, for one piece of data. A bad choice can ruin your whole day.
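Because a constraint touches every path the user can reach, it pays to write each one down exactly once, declaratively, and reuse that single definition both at run time and inside the test harness. A rough sketch of the idea - all names and limits here are illustrative, not prescriptive:

```python
# Sketch: express each constraint once, declaratively, and reuse the same
# table for run-time validation AND for harness validation. All field
# names and limits below are invented for illustration.

# field -> (predicate, human-readable description)
CONSTRAINTS = {
    "age_days":     (lambda v: isinstance(v, int) and 0 <= v <= 3650,
                     "age in whole days, at most ten years"),
    "humidity_pct": (lambda v: isinstance(v, (int, float)) and 0 <= v <= 100,
                     "relative humidity percentage, 0-100"),
}

def check(record):
    """Return the list of constraint violations for one input record."""
    return [desc for field, (pred, desc) in CONSTRAINTS.items()
            if not pred(record.get(field))]

# The same table serves the live code and the test harness:
print(check({"age_days": 60, "humidity_pct": 80}))    # [] - no violations
print(check({"age_days": -1, "humidity_pct": 200}))   # both constraints violated
```

The benefit is exactly the double-edged property described above: change the table and both the live validation and the harness change in lock-step - which is what you want, but also why you should choose the constraints carefully up front.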
Framework - Just about any book written on Software Engineering (the treatment of programming as a facet of the science of engineering) will cover this in depth. However, since each step IS a natural progression from the last, a formal treatment is often excessive. All you need is a book on formal (and informal) specifications, BNF grammars, and/or Jackson Structured Design. Again, comprehensive books aren't necessary. A pamphlet which gives you the complete syntax for all three, and some examples, would be quite sufficient for anything short of mission-critical, and even there, you could probably get along quite fine with that & a summary of ISO 9000.
Interesting subject, one that most companies ignore (which is why nobody offers any warranties on software!) and one in desperate need of discussion, IMHO.
(P.S. If companies, and Open Source coders, were to use even the most basic of test harnesses, on a regular basis, over 90% of all common bugs and 99% of all common security holes would not exist. Companies & OS coders avoid formal methods, largely because it's too much like class work, but also because it's cheaper to convince users that bugs are inevitable. Cheap psychology is often more profitable than quality technology. I'm just waiting for someone to sell the Emperor's New Clothes. I'm sure they'll make a fortune. People will notice and complain, but they'll buy in, just the same.)