Crossing the Rubicon

By mirleid in Op-Ed
Sat Aug 06, 2005 at 11:57:59 PM EST
Tags: Software

How often have you lovingly designed a system, using proven guidelines and patterns and making sure that its interfaces are consistent and coherent, only to have it butchered later in development by others more concerned with immediate goals, like processing X transactions per second, than with longer-term ones like maintainability and scalability?

This has happened to me more than once in projects that I worked on. The end result was always a system that performed according to spec, but that was not viable in the longer term.

So, I ask you: What is more desirable?
  • A system that is consistently designed along coherent guidelines, using well-understood design principles, even though those principles might make it perform less efficiently than it would if some of them were relaxed or dropped altogether
  • A system that is designed around hitting the end performance targets, but that suffers from design inconsistencies (i.e., different components designed using different approaches) because performance concerns were addressed in ad hoc ways


The questions above become intensely relevant when the task at hand is building a mission-critical enterprise system of any meaningful size, with a projected lifespan on the order of 8-12 years. In fact, the very nature of the system and its life expectancy make issues like technology choices, maintenance and scalability of the solution increasingly important. And, I would argue, the system's expected performance normally stands in the way of making the right choices at design time. "What would be the point of creating a beautifully designed system that does not meet performance requirements, and is thus not fit for purpose?" you ask. If you are interested in my take on it, please read on.

Technology choices

When initiating a programme to build a system such as the one described above, one of the first things that needs to be decided is what technologies should be used to support it. By this I mean making choices like:
  • Should we go WinTel or Unix? (This oversimplifies the issue, since we can have Linux running on Intel machines, but it serves to illustrate the point)
  • Which type of programming language should we use (Java, .NET, C++)?
  • Do we want a relational database or an object database?
Some people argue that targeting the system at specific technologies should only happen later in its development cycle, but I would argue that these choices need to be made up front. You might not get down to the level of selecting specific vendors, but you need to have a clear idea of what you are going to have available at product level when designing the system. For instance, if the system is required to support a 24/7 mode of operation, and the regulator for the specific market the company operates in requires you to have two live-live data centers, with a DR site in a different country, this might influence you to choose a specific database vendor and product, due to the parallel operation and data replication requirements that this implies.

Maintenance

This is where things start getting hairy. Let's assume that, after examining your requirements, you decided that what you need is a system running on Solaris targeted at a J2EE-based execution platform. Well, writing EJBs (and doing it properly) is not something that just anybody can do; it is a skill that is hard to find (contrary to what most people posting CVs on the job sites would have you believe; and, yes, there's a bit more to J2EE than writing JSPs). So, you decide that you need to create some infrastructure code that will be used by the developers employed to write the system: this infrastructure code will materialize a number of patterns and coding guidelines aimed at "dumbing down" J2EE, thus making it possible to employ people who only know J2SE. Furthermore, creating this piece of infrastructure will ensure that there is a consistent "theme" to the code produced, making it simpler to code-review and debug (or so you hope). It should also have a beneficial effect on the maintenance of the system, since the code that comprises it will fit a particular pattern that should be well documented (if not simple to understand).
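
To make that a little more concrete, here is a minimal sketch of what one piece of such an infrastructure might look like: an abstract base class that owns the plumbing (argument checking, error translation, timing) so that application developers only ever fill in plain-Java business logic. The names (BusinessOperation, doExecute) are invented for illustration, not lifted from any real framework, and a real version would also handle transactions and resource lookups.

    // Hypothetical infrastructure class: application developers subclass it and
    // implement doExecute() only; everything else is owned by the framework team.
    public abstract class BusinessOperation {

        // The only method a (J2SE-level) developer is asked to write.
        protected abstract Object doExecute(Object input) throws Exception;

        // Infrastructure entry point: enforces the project-wide conventions.
        public final Object execute(Object input) {
            if (input == null) {
                throw new IllegalArgumentException("input must not be null");
            }
            long start = System.currentTimeMillis();
            try {
                // A real implementation would begin a transaction and look up
                // resources here before delegating to the business logic.
                return doExecute(input);
            } catch (Exception e) {
                // One well-known failure type, whatever went wrong underneath.
                throw new RuntimeException("operation failed in " + getClass().getName(), e);
            } finally {
                System.out.println(getClass().getSimpleName() + " took "
                        + (System.currentTimeMillis() - start) + " ms");
            }
        }
    }

A developer then writes a subclass containing nothing but doExecute(), and every operation in the system ends up validated, logged and error-handled the same way.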

Obviously, there is a flip side:
  • Developers will start complaining almost instantly: this piece of infrastructure will necessarily restrict what they can do, and developers do not like to be restricted. If you hired consultants from the Big 5, their expectation is that, by being staffed to your project, they will acquire J2EE coding skills, which your infrastructure is preventing them from doing.
  • By its very nature, your infrastructure piece will introduce overhead. This overhead means that there's less time for business logic to execute if SLAs are to be achieved, which leads to more complaining from the developers, because infrastructure is preventing them from delivering to spec. Overall, you are effectively slowing the system down by adding infrastructure.

Scalability

Given that the system must live for quite a long time, it needs to be able to cope with (hopefully) expanding business volumes. In other words, it needs to be able to scale. In order to make sure that it scales, your architecture is based on asynchronous communication, so that you detach the producers from the consumers, enabling you to tune your system and allocate resources where they are most needed.
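
As a sketch of what that looks like in code (JMS-style; the JNDI names jms/ConnectionFactory and jms/OrderQueue are assumptions standing in for whatever your application server actually provides):

    import javax.jms.*;
    import javax.naming.InitialContext;

    // Producer side: fire the message and move on. How many consumers drain the
    // queue, and on which boxes, is a deployment decision, not a code change.
    public class OrderProducer {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory factory =
                    (QueueConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

            QueueConnection connection = factory.createQueueConnection();
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);

            sender.send(session.createTextMessage("order 42, quantity 3"));

            connection.close();
        }
    }

    // Consumer side: deploy as many of these as the load requires.
    class OrderConsumer implements MessageListener {
        public void onMessage(Message message) {
            try {
                String payload = ((TextMessage) message).getText();
                // ... hand the payload to the business logic ...
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }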

On the flip side, you have also decided that messages should be passed not in binary format, but in XML, because you communicate with a number of external systems (a number which is expected to grow) and you do not want to have to keep updating translation code throughout the life of the system. From a scalability perspective, this all makes perfect sense: asynchronous communication ensures that you can deploy more consumers if you need to process messages faster (even while the system is live and running), and XML ensures that you have decoupled your internal data formats from what external systems expect to see. The problem is that all this has introduced more overhead. And the system has, as a result, become less performant.
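
The decoupling itself is cheap to illustrate: a single translator class owns the knowledge of the external format, and the rest of the system only ever sees the internal object. This is a minimal sketch using the standard JAXP DOM parser; the element and attribute names are invented for the example.

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    // Only this class knows what the partners' XML looks like; if a partner
    // changes its format, the internal Order class (and everything built on it)
    // stays untouched.
    public class OrderTranslator {

        public static class Order {
            public final String id;
            public final int quantity;
            public Order(String id, int quantity) { this.id = id; this.quantity = quantity; }
        }

        public Order fromXml(String xml) throws Exception {
            DocumentBuilder builder =
                    DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            Element root = doc.getDocumentElement();   // e.g. <order id="42" quantity="3"/>
            return new Order(root.getAttribute("id"),
                             Integer.parseInt(root.getAttribute("quantity")));
        }
    }

The cost is equally easy to see: every message now pays for parsing that a binary format would not need, which is exactly the overhead referred to above.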


So, what do you do?

You are now between a rock and a hard place. Your design achieves all the desired targets except for the one that you'll primarily be measured upon: the system being able to process those business transactions as fast as the business customers have (more often than not, arbitrarily) decided they should be processed.

The big question is: do you start compromising, "cutting corners" in the architectural design so as to accommodate specific performance requirements (e.g., allowing some components to invoke others synchronously), or do you just tell your client to buy (rent/lease/whatever) bigger boxes?

I would argue that the right option is the "bigger boxes" one. Obviously, this is not to say that you should not perform optimisations on your system, nor is it to say that you can get away with producing crap code that runs like a dog. What I am saying is that, in the long run, and at the rate that technology is evolving, you can always throw a bigger box at the problem, assuming your system scales (which is something you might not even be able to achieve with the "optimised" approach, given that you would have created a system that uses different techniques and mechanisms in different parts, and those parts respond differently to hardware-based scaling). What you cannot do is reverse this state of affairs once you have gone down the "optimised" route.


Bottom line

Prepare to be confronted with the "what-are-you-talking-about-16-cpus-the-current-system-works-fine-on-1-and-runs-pretty-fast" syndrome. People who say this neglect to mention that the current system is written in assembler and nobody can maintain it, except for those two contractors who have been around for 10 years earning more money than the CEO. And that there is a reason why it is being replaced.

What they also neglect to say (or put a dollar/pound/euro value on) is that the system cannot be extended because it simply wasn't designed for it: everything that was done to it over the last 10 years was to add yet another patch to something that now looks like prime hippy handicraft. So, try to argue your point, and argue it strongly. When you finally lose (which you will 9 out of 10 times), remember to get in writing that it was their option. Otherwise, may God have mercy on your soul.

Crossing the Rubicon | 57 comments (37 topical, 20 editorial, 0 hidden)
Context dependent (3.00 / 2) (#6)
by GileadGreene on Thu Aug 04, 2005 at 03:12:04 PM EST

This problem is entirely context-dependent. Why does the customer need the performance they claim they need? If it is truly arbitrary, then it is a tradeable design parameter - and where you draw the line depends on how much the customer values short-term performance versus long-term scalability. If the performance figure is not arbitrary, then you must meet that requirement. How you meet it is another question, and basically becomes a short-term cost (bigger iron) versus long-term cost (maintainability and scalability) issue. Again, that will depend on what the customer needs/wants. There is no "one size fits all" answer to the question you pose.

My professional opinion (2.50 / 6) (#8)
by localroger on Thu Aug 04, 2005 at 08:42:02 PM EST

Speaking as the guy who probably designed that current system that is written in assembler and it's true, nobody can maintain it except me because they don't fucking teach the techniques any more, but you neglect to mention that it also never crashes...

I predict this upgrade will be a complete disaster no matter what path you take. I've seen it at least a dozen times and the result never varies. The new system will indeed need 16 Pentium CPUs instead of one 486-based DOS box to do less, less reliably, and you know what? Nobody will be able to maintain it either, because there will be so many hidden and undocumented dependencies and throughput bottlenecks and un-sanity-checked inputs that will bring the system to its knees every time someone enters a time and date that isn't in a valid format into some HMI box on the shop floor.

But the people responsible for chucking the system that worked will blame all the problems on the operators, and higher management will believe them instead of the operators who have kept their business running since 1975.

(Incidentally, while I do our stuff for customers I don't do our in-house accounting stuff, and this even happened to MY company in 1999 when they chucked our much reviled but rock-solid IBM System 36 for a Wintel-network based system that has never, ever worked very well and whose authors can't seem to fix it no matter how meticulously we detail its myriad bugs, race conditions, and crashes.)

I am become Death, Destroyer of Worlds -- J. Robert Oppenheimer

EOL (none / 0) (#14)
by mirleid on Fri Aug 05, 2005 at 04:29:45 AM EST

I am not trying to build a case for dumping systems that work and fulfill their stated goals. In particular, I am a firm believer in the if-it-works-dont-fix-it principle. I am starting from the point where somebody makes the decision that the current system no longer supports the business, and that the business plans that are in place for the company go in a direction that makes it clear that updating the current system is not a viable choice.

I have seen all the flaws that you mention in different companies that I have executed projects for, and actually, most of them pop up in old systems where the implementers did not think far enough ahead. And I do disagree with your comment on "undocumented dependencies and throughput bottlenecks": if you fuck up your design and fail to conduct proper testing, obviously such flaws will appear, but it is not inevitable, nor is it a result of using technologies other than assembler. It is indeed a result of shoddy workmanship and generally not giving a fuck about what's going to happen three months after you have delivered and your company has pulled out of the site.

I think that it is just a fact of life that systems reach their End Of Life, and when that happens, they need to be retired. Having a succession plan in place is key when that time approaches, but most companies do not think that far ahead, and then resort to placing their existing systems on "life support", even though they are no longer viable.

Chickens don't give milk
[ Parent ]
There's no compelling reason to use ASM today. (none / 1) (#42)
by skyknight on Sun Aug 07, 2005 at 04:46:16 PM EST

You act as if the reason that most software systems suck is that people aren't bothering to write them in assembly. I find that a bit bizarre...

In general, software systems suck because the people developing them suck at engineering. They are thoughtless, undisciplined, talentless hacks, running with the modus operandi that mediocrity takes a lot less time and people probably won't notice until it is too late. They are not far-sighted architects with a mind toward the problems of tomorrow, but rather are just trying to kludge together whatever meets the minimal specs and takes the least work. They are not writing vigorous automated testing frameworks, but rather inhabiting the it-compiles-ship-it camp. They think that source code was intended only to be executed by a computer, as opposed to being read by people as the story of a system, only incidentally capable of executing instructions on a machine.

Language choice is largely a red herring. To the extent that it is not, assembly is most assuredly not the answer. We need higher level languages, not lower. We need programming constructs that support such things as design by contract, not completely structureless stuff that maps directly to op-codes. We need people who are visionary architects.
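
To make the design-by-contract point concrete: Java has no native contract support, so the pre- and postconditions have to be spelled out by hand. The Account class below is invented purely for illustration.

    public class Account {
        private long balanceInCents;

        public void withdraw(long amountInCents) {
            // Preconditions: what the caller must guarantee.
            if (amountInCents <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            if (amountInCents > balanceInCents) {
                throw new IllegalStateException("insufficient funds");
            }
            long before = balanceInCents;
            balanceInCents -= amountInCents;
            // Postcondition: what this method guarantees in return
            // (only checked when assertions are enabled with the -ea JVM switch).
            assert balanceInCents == before - amountInCents && balanceInCents >= 0;
        }
    }

A language with real contract support would let the compiler and runtime enforce this; in Java it lives or dies on developer discipline, which is rather my point.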

I sincerely hope that this is an ongoing troll on your part.



It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
You put all of the blame on the software people (none / 0) (#49)
by lukme on Mon Aug 08, 2005 at 07:45:02 AM EST

At least half of the blame for software sucking is with management. In most companies the software developers never sit down with end users, nor do they know what end users actually do with their software - even if there are end users in the company. Requirements for software are passed from end user up to management and then down from management to the software engineer, who is told to just make it work.




-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
[ Parent ]
Well, read what I said again... (3.00 / 2) (#53)
by skyknight on Mon Aug 08, 2005 at 06:50:20 PM EST

I said "people developing [systems]", a sufficiently general phrase as to include management. Software projects being managed by people who are not qualified to make technical decisions underpin the majority of software problems, but they are only partly to blame. The passivity of developers is also culpable. Many self-styled software engineers expect be spoon fed specs for how systems are to work. This is absurd. Management should not be providing them with specs. Rather, management should be providing them with resources, and the developers should be gathering requirements, writing specs, verifying that the specs reflect the desires of users, developing prototypes to gather further verification, and only then belting out large volumes of code. Furthermore, when they are grinding out code, then they should be writing robust automated test suites, something that hardly anyone seems to take seriously. Hardly any self-proclaimed "software engineer" deserves the "engineer" moniker.

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
I watch six years ago (none / 1) (#9)
by IceTitan on Fri Aug 05, 2005 at 03:09:31 AM EST

as IBM implemented SAP R/3 worldwide. It crashed. They threw money at it. It got fixed. It crashed again. They threw more money at it. They then needed different functionality in it, so again, more money. The current system was heavily modified from the original base. Some in the know claim the two aren't remotely the same anymore. It's been patched, repaired, added on to, sliced, diced, julienned. IBM just keeps throwing money at it to make it work.

My current employer does work for IBM. We use their system. We wanted to expand, so we got different contracts and a different system. Guess what? My company didn't have the money to throw at the problem the way IBM did. What we are currently left with is a system with lots and lots of manual workarounds. This system has been active for about two years. We were even testing modules that we were later informed the company had not purchased. The module they did purchase won't even work the way we set up the business. So I'm resigned to a spreadsheet instead of a paperless automated system.

The way I see it, when you bring in an already designed enterprise level system, it was already built around a certain model. Often, as in my case, the model conflicts with the current or previous way business is done. You can either throw money at it and have it your way. Or you can conform to the new preset model. Or you can half ass it the way my company did and continues to do.

The primary problem I see with mine and probably most system implementations is incompetent management. But that is another rant.
Nuke 'em from orbit. It's the only way to be sure.

Evolution (none / 0) (#39)
by cdguru on Sun Aug 07, 2005 at 12:28:29 PM EST

30-40 years ago it was obvious - spend lots and lots of money either building or tailoring some package for the way you want your business to operate. Many software companies in the 1970s had either a staff of consultants for such tailoring projects, or entire divisions of IT were dedicated to performing this tailoring - sometimes they stayed on because the project was never really done.

The beginning of the PC era made it clear that smaller businesses could not pay someone to customize software for them. Oh sure, they would pay someone to do this and the result was a disaster. The original person would have moved on, and the new guy looked at it and said "Nope, it can't work this way. I'll just rework it all so it is correct." Many dollars later ... let's just say the continual rework/revisions with different people ended up costing more than just doing it by hand.

A much closer model in the 1990s (and, I feel, beyond) is that you buy a package that does what you need (or what you think you need) and adapt to it. Tailoring it, or trying to make it work the way you think it should, will result in nothing good and a lot of money being spent. Businesses, especially large ones, that try to go the tailoring route invariably end up regretting it. And it shows they have lots of 1970s-trained staff making decisions.

[ Parent ]

Relational/Object Databases (3.00 / 5) (#17)
by alby on Fri Aug 05, 2005 at 06:14:45 AM EST

Do we want a relational database or an object database?
Neither.

I'll be using FLAT FILES!1!

--
Alby

Relational or object databases (none / 0) (#21)
by chase the dragon on Fri Aug 05, 2005 at 11:46:10 AM EST

It was my impression that object databases are fairly rare in industry. I've never understood why, except that many IT professionals have invested a lot of time and expertise in relational databases. J2EE projects often use object relational mapping that makes the relational aspects fairly redundant. In my opinion, it's only a matter of time before the relational part is abandoned entirely.

I actually know of a couple of sites... (none / 0) (#23)
by mirleid on Fri Aug 05, 2005 at 12:40:54 PM EST

...one of which I worked on that use Versant. It is pretty cool stuff, all the more so if you are using Solarmetric's JDO implementation to do persistence...

Chickens don't give milk
[ Parent ]
Relational DB's are here to stay. (none / 0) (#43)
by Iota on Sun Aug 07, 2005 at 05:00:30 PM EST

Relational databases will never be replaced by object databases for the vast majority of current usages. The very thing that makes objects so desirable in programming is the exact reason they are so undesirable in database technologies: encapsulation.

With a current relational database you can very effectively tune your queries based on the exact data you have: you know what tables are involved, what the column types are, the average datasets, the spread over the disk, the index depths. You know everything, because it's all open and accessible to you.

With an object approach, however, the encapsulation kicks you in the arse. All that is visible to you is the object methods; the table structure is entirely hidden from you, so you can't tune your queries at all. If you decide to delve into the structure beneath the methods, then you break encapsulation and your application loses all of the original benefits of choosing an object database in the first place.

Object databases have their place, but the areas are very specialised: the data has to have very complicated relationships, and the data tends to be very fluid, such as CAD-based design projects where one project may consist of a thousand parts, but none of those parts can be effectively grouped and placed in a single static (as in columns) table.

For applications where the data relationships are simple, such as this website, moving to an object database would horrifically reduce performance.

Example: doing simple sorts of data using an object approach means each object needs to ask the next one "am I bigger or smaller than you?" before you can find out the sort order. Compare this to a relational approach, where the data is openly visible to the query engine and the performance is an order of magnitude better (especially when you use indexes and materialized views, which you simply can't do with objects without breaking encapsulation, because you need to know the internal structure).

Other points of weakness: storage costs go up, and you can't normalise your data, so you have to replicate it (each object needs to be a standalone entity). Therefore, when an object is updated it has to pass a heavy amount of messages around the DBMS to propagate the changes, and of course this increases the risk of data integrity and consistency failing.

It's possible to map objects from a J2EE application to relational tables easily, and there are successful programs out there (hibernate.org) that achieve this. This gives you the benefit of having persistent objects for your application, while still having full control of the data on the administration side.
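
As a minimal illustration of that kind of mapping (written here with JPA-style annotations, the standardised descendant of the XML mapping files Hibernate used at the time; the CustomerOrder class and CUSTOMER_ORDER table are invented for the example):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // A plain persistent object mapped onto an ordinary relational table: the
    // application works with objects, while the DBA still sees (and can index,
    // tune and query) CUSTOMER_ORDER like any other table.
    @Entity
    @Table(name = "CUSTOMER_ORDER")
    public class CustomerOrder {

        @Id
        @GeneratedValue
        @Column(name = "ORDER_ID")
        private Long id;

        @Column(name = "STATUS")
        private String status;

        public Long getId() { return id; }
        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }
    }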

Anyway, long story short, relational databases aren't going anywhere. As with any technology, you pick the right tool for the job, and for the vast majority of cases that will be relational.

Oh and if anyone ever tells you that XML databases are the future, poke them in the eye.



[ Parent ]
+1, non-obscure programming article (none / 0) (#24)
by More Whine on Fri Aug 05, 2005 at 12:48:40 PM EST

Thank you for not using dozens of acronyms or obscure language that only programmers would understand.  This is an interesting article and deserves to be voted up (and I normally hate anything that has to do with programming because I find the general concept of programming to be so tedious and life-consuming that I don't see how programmers don't go insane).

Actually, we ARE insane... (none / 0) (#31)
by elver on Sat Aug 06, 2005 at 01:59:00 PM EST

And if you piss us off, you're gonna wake up with the head of a horse in your bed.

[ Parent ]
+1 FP (3.00 / 3) (#25)
by Veritech on Fri Aug 05, 2005 at 01:24:21 PM EST

Has words like "it", "be", and "the" in it.

I sympathise with you (none / 0) (#26)
by emmanuel.charpentier on Fri Aug 05, 2005 at 06:54:21 PM EST

It's been my experience that management wants and demands silly things, and that we must live with historic systems that seem like prehistoric houses in the middle of Manhattan.

Currently I'm trying to recover from my past projects. The last one was designed for education, and for all of France. It was for one million concurrent users. Yes, you read me right: management was ambitious, to say the least.

So I designed something, and fought and fought and fought. In the end they understood that file synchronisation was viable and, most of all, easy and straightforward. What a pain for such a simple and obvious result!

Of course the company tanked, and we had not one user :-(

Nowadays I think in simpler terms. Start with a caban, but one that can evolve into a house, and if necessary, a building. Malleability is the key. And only write what is necessary today and tomorrow. I guess it looks like extreme programming, at least some part of it. And I love the idea that documentation is not always required; good, simple and obvious code is what is necessary!

But well, management controls money, which in the end controls our work, our time... I'm going to learn hypnosis :-)

Extensible *where*? (none / 0) (#34)
by gidds on Sun Aug 07, 2005 at 03:40:16 AM EST

Start with a caban, but one that can evolve into a house, and if necessary, a building. Malleability is the key.

(I assume you mean 'cabin'?)

Well, yes, you should always aim to make your code extensible in future; the problem is, though, that it's often very hard to guess just where they might want to extend it. If you start with a cabin, it might need to evolve into a tower block; or it might need to evolve into an aircraft, an elevator, a cruise ship, an underground command facility, a particle accelerator, a matchbox, or even a Jefferies tube...

Design is about choosing abstractions, and which abstractions you pick will affect which directions you can evolve in. That's why it's such an art. (And why maintaining and extending systems is often so hard...)

Andy/
[ Parent ]

Wintel/Unix - languages (none / 1) (#27)
by lukme on Fri Aug 05, 2005 at 07:04:59 PM EST

Both of these issues are somewhat religious.

As far as Wintel/Unix goes, it depends on your application. If you need to optimize for disk access, then the higher-end Unix machines will beat Wintel every time - they are a more balanced system. The Wintel boxes are processor-heavy and will beat the Unix machines if the entire process can fit into the processor cache.

As for languages, it really doesn't matter which one you choose. Choose one that lets you write easy-to-read code for the parts that don't need to be optimized (and for the first iteration of the code), and then only optimize what the profiler says needs to be optimized for your application. Quite frankly, it is better to have slow code in hand than vapor.
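
A crude illustration of "only optimize what the profiler says" (a real profiler gives far better data, but the spirit is the same; parseRecords is a made-up stand-in for a suspected hot spot):

    public class HotSpotCheck {
        public static void main(String[] args) {
            long start = System.nanoTime();
            for (int i = 0; i < 10000; i++) {
                parseRecords("id=1;qty=3;price=100");   // the suspected hot spot
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            System.out.println("10,000 iterations took " + elapsedMs + " ms");
        }

        // Deliberately naive: only rewrite it if the numbers above say it matters.
        static String[] parseRecords(String line) {
            return line.split(";");
        }
    }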




-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
Finally! (none / 0) (#38)
by elgardo on Sun Aug 07, 2005 at 12:14:43 PM EST

Finally someone who agrees with me when I try to argue that I can write my next heart-and-lung-monitoring-and-medicine-dosage-machine-software in Atari Logo! :)

[ Parent ]
this I've got to see - go for it. (none / 0) (#48)
by lukme on Mon Aug 08, 2005 at 07:16:24 AM EST

While you're at it, why not do a SNOBOL-to-Logo translator - since all of the SNOBOL people need to be using something more graphical.


-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
[ Parent ]
-1, I'm sorry. (1.00 / 8) (#32)
by What Good Is A 150K Salary When Living In NYC on Sat Aug 06, 2005 at 05:06:09 PM EST

Look, I really wanted to vote this piece to the front page and all but I just cannot see what at all this has to do with negroes. Therefore I cannot in good conscience cast my vote in the realm of positive or neutral.


Skulls, Bullets, and Gold
architecture astronauts (none / 0) (#35)
by nml on Sun Aug 07, 2005 at 03:48:50 AM EST

Your design achieves all the desired targets except for the one that you'll primarily be measured upon

sorry, but if your design doesn't meet one of the primary requirements (i.e. speed), then it's not a good design. I'm presuming you knew well in advance that you'd have to meet this target, so why did you wait until the end of the project to deal with it? You sound like you've been in this situation before - why didn't you anticipate it?

"what-are-you-talking-about-16-cpus-the-current-system-works-fine-on-1-and-runs- pretty-fast"

why is it unreasonable to expect that your new system will perform within an order of magnitude of the old? If it's hand-coded in assembly then newer code will be slower, but why such a huge gap? Besides, if your new system is so wonderfully maintainable, why can't you just optimise it? After all, a maintainable system should be able to be adapted to meet new needs, and you've now got a need for speed.

What they also neglect to say (or put a dollar/pound/euro value on) is that the system cannot be extended

so why didn't you deliver the system that they asked for? If they didn't pay for it, they can't complain about not getting it. Because it sounds like you ignored their requirements and built a system with a lot of infrastructure to satisfy your desire for 'maintainability' and a lot of features, instead of building what they asked for.

Given that the system must live for quite a long time, it needs to be able to cope with (hopefully) expanding business volumes. In other words, it needs to be able to scale.

of course, the problem with this assumption is that your system that 'scales' can't even cope with the existing business volume, thanks to all the layers of buzzword-laden crap you've designed into it. The whole point of design is to make intelligent tradeoffs, not to insist that 'scalability' 20 years down the track requires XML-based asynchronous message passing and huge overheads. Do what you were paid to do - design a system that works now. Implement it in a clean, consistent way. Optimise as necessary. You can't possibly anticipate all future needs, so don't try. Remember, you aren't going to need it



Maybe the article wasn't clear enough... (none / 0) (#36)
by mirleid on Sun Aug 07, 2005 at 05:56:59 AM EST

I never said that the system would not be able to meet the requirements if you threw enough CPU at it. What I am trying to communicate in there is that there are a number of parameters that you'll be measured on, performance normally being part of that set; that normally, satisfying the vast majority of those parameters will mean not being able to satisfy the performance one, at which point you'll need to compromise. And then I proceed to try to argue that you shouldn't compromise on your design, that you should advise your client to buy a bigger box (and that there's long term economic sense in doing that).

Chickens don't give milk
[ Parent ]
clear, but i still disagree (none / 0) (#45)
by nml on Sun Aug 07, 2005 at 11:00:18 PM EST

I never said that the system would not be able to meet the requirements if you threw enough CPU at it.

yes, but that's the same as not meeting the performance requirement. The whole point of requiring a certain performance is to avoid having to buy lots of expensive hardware. You can solve almost any performance problem by throwing enough money at it.

that normally, satisfying the vast majority of those parameters will mean not being able to satisfy the performance one, at which point you'll need to compromise

sorry, but if you can't meet all of the requirements for a bit of software, you shouldn't agree to them. If you agree to produce the impossible, of course you're going to have to compromise

And then I proceed to try to argue that you shouldn't compromise on your design, that you should advise your client to buy a bigger box (and that there's long term economic sense in doing that).

of course there's economic sense in it from your point of view - you're spending their money. It may indeed make economic sense for them as well, but i'm pointing out that the client has a right to be pissed off, because you haven't delivered the requirements that you agreed to. You're taking the cost of making that system run at the agreed performance level and externalising it. I'm also going to suggest that your design should have included the ability to optimise it without ruining the architecture, since performance was one of the original requirements.



[ Parent ]
The thing is... (none / 0) (#46)
by mirleid on Mon Aug 08, 2005 at 05:47:18 AM EST

...you are normally not in charge of choosing the hardware platform as a system designer. At best, you are asked for estimates/sizing of what should be required to run it at the required performance levels. And once you hand your recommendation in, it is duly ignored, and the client proceeds to buy whatever it is that they think is adequate (which normally isn't).

Obviously, the discussion that then ensues should be backed by estimates of how much it will cost to debug/fix/upgrade the system in whatever state it will be left in as you "cut corners", versus the cost (including opportunity costs) of buying the appropriate hardware solution in the first place, but this is seldom done.

On your point about "[...]the ability to optimise it without ruining the architecture[...]": I did mention that the system is designed to scale (which is an appropriate response to increasing volumes and, potentially, to performance issues), and that you need testing phases (preferably done regularly throughout the lifetime of the project) to determine and address potential performance bottlenecks. The big question is: when you are up against the wall because the hardware assigned just does not cut the mustard, and you are out of "in-architecture" optimisation options, what do you do: recommend bigger boxes, or start "cutting corners"?

Chickens don't give milk
[ Parent ]
results oriented vs process focused (2.50 / 2) (#37)
by ColloSus on Sun Aug 07, 2005 at 09:43:04 AM EST

The choice you are really talking about is whether to be results oriented or process focused. It really is a choice for the young graduate who gets hired and has to adapt to the realities of the enterprise and forget the habits acquired in University, where sometimes it's more important to learn how to go about solving a problem than to actually solve it. But once you spend some time in the real world, you realize that there is no place left for process focused individuals and they don't play well with others, unless they change their ways. Being process focused betrays a certain inability to deal with changes and adapt to a rapidly evolving environment. It is also a prelude to obsessive compulsiveness.

Cheers!
"Democracy is the art and science of running the circus from the monkey-cage." Mencken
Absolutely... (none / 0) (#40)
by cdguru on Sun Aug 07, 2005 at 12:33:14 PM EST

Furthermore, the emphasis on process often conflicts with management in a "show me the bottom line" results-oriented way. The manager wants results and it is unimportant how they are achieved. Trying to convince the manager that this new way with poorer performance is indeed better because it is following the right process is a losing battle.

[ Parent ]
The art of requirements (none / 1) (#54)
by pyro9 on Tue Aug 09, 2005 at 03:21:50 PM EST

The trick is to inject a few natural elements of the right way into the project requirements. The art is to translate simple questions used in developing requirements, such as "Should it have a well-defined, versatile inter-module protocol or should it be a steaming pile?", into business speak.


The future isn't what it used to be
[ Parent ]
but (none / 0) (#41)
by speek on Sun Aug 07, 2005 at 02:08:52 PM EST

Usually the "result-oriented" crowd is only selectively results-oriented. Things often not taken into account is how much time and money is spent patching the system, fixing bugs, and developing manual workarounds. Or how, 5 years down the road, the rate of progress on the system has slowed to a crawl. When all you look at is the speed of the initial delivery, then the best solution always appears to be to hire a cowboy who develops the system in his basement in a two-week fit of caffeine-supported work frenzy.

I'm not saying be process-oriented instead of results-oriented, I'm saying be radically results-oriented. Consider ALL the results - i.e., how many bugs there are, how long it takes to fix them, how long it takes to add new features to the system, how many developers understand the system, what happens when that primary developer goes on vacation, etc.

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

but+ (none / 0) (#44)
by ColloSus on Sun Aug 07, 2005 at 07:29:14 PM EST

Good point. I guess this is what PHBs call "TCO". But then, if you start taking design decisions that are in long-term conflict with your manager's requirements, you probably won't keep your job long enough to see the fruits of your design. The best bet, in my view, is to make it simple for them: either you let me do this my way, but it will take 2X the time, or we do it the fast way, but you will need to make time for me later to fix the code. In my experience, the PHB will tell you to do it your way but by his deadline, which essentially means "do it my way, but it's all your fault". :)

Cheers!
"Democracy is the art and science of running the circus from the monkey-cage." Mencken
[ Parent ]
long-term design decisions (none / 0) (#50)
by speek on Mon Aug 08, 2005 at 07:52:48 AM EST

I think the only necessary long-term design decision is the DRY rule (don't repeat yourself). So long as you don't have a lot of duplicate code in your project, your design is probably fine and relatively easy to extend. Design patterns are often just ways to avoid writing the same code multiple times.
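
A small example of what I mean (class and method names invented for the illustration): two report methods that would otherwise repeat the same open-write-close boilerplate share it through one private method, so the cleanup logic exists in exactly one place.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;

    public class ReportWriter {

        // The shared skeleton lives here once, instead of being copied into
        // every report method.
        private void writeReport(String fileName, String body) throws IOException {
            Writer out = new FileWriter(fileName);
            try {
                out.write(body);
                out.flush();
            } finally {
                out.close();   // the one and only place cleanup happens
            }
        }

        public void writeSalesReport(String totals) throws IOException {
            writeReport("sales.txt", "SALES\n" + totals);
        }

        public void writeStockReport(String levels) throws IOException {
            writeReport("stock.txt", "STOCK\n" + levels);
        }
    }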

--
al queda is kicking themsleves for not knowing about the levees
[ Parent ]

Don't think process plays a part in this... (none / 0) (#47)
by mirleid on Mon Aug 08, 2005 at 05:53:29 AM EST

...the process of building and designing is not really the crux of the matter. You do have to follow some sort of methodology to do your designing (whatever that methodology may be; I am not religious on that point), and in that sense, there is a level of process that must be followed through, otherwise you end up with several different kinds of artifacts in different stages of maturity, which will hurt you as the coding stage unfolds. Those processes are supposed to be facilitators to achieve results, and if some piece of the process actually prevents you from achieving results, then the process is wrong and needs to be reformulated. I do not advocate being anal about process, but any large project (let us say, a design team of about 12-14 people, and a projected development team of about 40-60) requires a set of processes in place in order to be able to work. As much as I like the concept of constructive anarchy, I have never seen it actually work in this environment (and I have been through a few of these)...

Chickens don't give milk
[ Parent ]
Seems like the customer is confused (none / 0) (#51)
by keiff on Mon Aug 08, 2005 at 10:21:58 AM EST

As soon as any major consultancy gets involved, then they will have to reduce the technology level down to their common denominator.

Let's face it: their business model is many, many minions on site for as long as possible, learning as much as possible, to either

  1. Market their new skills to the next customer
  2. Learn everything of value, create a new business unit and sell those skills to the next customer
  3. Learn everything of value, create a project and sell it to the next customer
You see, a consultancy is only as good as the next big outsourcing deal. That only comes from showing you have the right skills, and how do you get those skills? Well, you train your staff on existing customer sites.

As soon as a consultancy gets involved, you know exactly what they are going to do: trash everything already there, state that nothing will ever work, and put in a case for a complete re-design, re-simplification and/or re-write - but only with 10 times their own staff, and a tasty bonus.



Sounds kind of... (none / 0) (#52)
by mirleid on Mon Aug 08, 2005 at 11:59:31 AM EST

...familiar...

Chickens don't give milk
[ Parent ]
offtopic, but maybe helpful (none / 0) (#55)
by ccdotnet on Wed Aug 10, 2005 at 12:09:18 AM EST

Everyone I know who started their own business was driven in large part by the need to do it "better" and on their own terms. They all saw shortcomings in the way they were forced to do things to meet time or budget constraints. They knew that (whatever the product or service was) it could be built or delivered better than it was. And what an opportunity, right? Do it better, make a killing.

The reality dawns shortly after they go out alone. Perfection has no place in the business world. Take your high expectation, and bring it down into the realm of what's affordable to the client. Yes it can be built better, yes it can be executed better. But when they say "the client is always right" I interpret it as "the client sets the quality bar, not you".

You need to lower your standards to what the market will bear. Your concerns about long-term benefit and "doing it right" belong in your hobbyist/side-project. In your day job, the result is what counts, not whether or not the solution is elegant.

A bitter pill (none / 0) (#57)
by Harvey Anderson on Mon Aug 22, 2005 at 09:49:54 AM EST

isn't always the right medication.

[ Parent ]
Veni Vidi Vici? (none / 0) (#56)
by treefrog on Fri Aug 12, 2005 at 01:05:44 PM EST

What has the content of this post got to do with its title?

I don't really see any connection between the poster's problem - business pressures vs architectural elegance - and Julius Caesar's decision to cross the Rubicon (the border of Italy) and seize power in Rome, overturning the constitution?

Someone please enlighten me?

Regards, treefrog
Twin fin swallowtail fish. You don't see many of those these days - rare as gold dust Customs officer to Treefrog
