Kuro5hin.org: technology and culture, from the trenches

Distributed Computing Technologies Explained: RMI vs. CORBA vs. DCOM

By Carnage4Life in Technology
Sat Feb 10, 2001 at 04:39:40 AM EST
Tags: Software (all tags)

To many people, RMI, CORBA, DCOM, RPC, COM+ and the like are meaningless buzzwords that smack more of hype than results. This article is aimed at demystifying these technologies, which have revolutionized distributed computing and are transforming businesses all over the world. The article is not strictly targeted at the layman, but deep programming knowledge is not needed.

My primary reason for writing this article is that I received an email a while ago from someone who found two papers I wrote on the subject, entitled Introduction To Distributed Computing and Distributed Object Technologies Compared respectively, and expressed interest in an article.

Why distributed computing?

Once computer networking began to take hold, it soon became necessary to share resources and data amongst machines in a cohesive manner. Today numerous computer applications use distributed computing in one form or another: from large scale ERP applications that allow monitoring of all the diverse aspects of the average corporation, to file servers, web application servers, groupware servers, database servers and even print servers that enable several machines to share a single printer.

A more recent application of distributed computing, becoming prevalent now that the processing power of personal computers is sufficiently advanced and their use has become widespread, is harnessing the power of many personal computers in tandem and using them to process data in much the same way that mainframes were used in days of yore.

In the beginning there was RPC

The first distributed computing technology to gain widespread use was the Remote Procedure Call (RFC 1831), commonly known as RPC. RPC is designed to be as similar to making local procedure calls as possible. The idea behind RPC is to make a function call to a procedure in another process and address space, either on the same processor or across the network on another processor, without having to deal with the concrete details of how this is done beyond making a procedure call.

Before an RPC call can be made, both the client and the server have to have stubs for the remote function, which are usually generated by an interface definition language (IDL). When an RPC call is made by a client, the arguments to the remote function are marshalled and sent across the network, and the client waits until a response is sent by the server. There are some difficulties with marshalling certain arguments such as pointers, since a memory address on a client is completely useless to the server. Various strategies for passing pointers are usually implemented, the two most popular being a.) disallowing pointer arguments and b.) copying what the pointer points at and sending that to the remote function.
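
To make the marshalling step concrete, here is a toy sketch in Java of what a generated stub does under the hood; the procedure name and wire format are invented for illustration, and a real RPC system would send the bytes over a socket rather than a method call:

```java
import java.io.*;

public class MarshalSketch {
    // Client side: flatten a hypothetical add(a, b) call into bytes.
    public static byte[] marshalAdd(int a, int b) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeUTF("add"); // procedure name
            out.writeInt(a);     // arguments, in a fixed agreed-upon order
            out.writeInt(b);
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen on an in-memory stream
        }
    }

    // Server side: reconstruct the call from the bytes and execute it.
    public static int dispatch(byte[] request) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(request));
            String proc = in.readUTF();
            if (proc.equals("add")) {
                return in.readInt() + in.readInt();
            }
            throw new IllegalArgumentException("unknown procedure: " + proc);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // In a real RPC system the bytes would cross the network;
        // here they just cross a method call.
        System.out.println(dispatch(marshalAdd(2, 3))); // prints 5
    }
}
```

Note how the server can only make sense of the byte stream because both sides agree on the layout in advance — which is exactly the contract the IDL-generated stubs encode.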

An RPC function locates the server in one of two ways:
  • Hard coding the address of the remote server, which is extremely inflexible and may require a recompile if that server goes down.
  • Using dynamic binding, where various servers export whatever interfaces/services they support and clients pick which server they want to use from those that support the needed service.
RPC programs, as well as other distributed systems, face a number of problems which are unique to their situation such as
  1. Network packets containing client requests being lost.
  2. Network packets containing server responses being lost.
  3. Client being unable to locate its server.
There are a variety of solutions to these problems, which can be gleaned from the various links provided in this article.
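
As a taste of one such solution: a client typically retransmits a request after a timeout, and the server remembers request IDs so that a retried call is not executed twice (so-called at-most-once semantics). A toy Java sketch of the server-side bookkeeping, with all names invented:

```java
import java.util.HashMap;
import java.util.Map;

public class AtMostOnceServer {
    // Replies already sent, keyed by request ID, so duplicates can be replayed.
    private final Map<Long, Integer> completed = new HashMap<>();
    private int executions = 0;

    // Handle a hypothetical increment(x) request identified by requestId.
    // A retransmitted duplicate gets the cached reply instead of re-running.
    public synchronized int handle(long requestId, int x) {
        Integer cached = completed.get(requestId);
        if (cached != null) {
            return cached;     // duplicate: replay the stored reply
        }
        executions++;          // the real work happens exactly once
        int reply = x + 1;
        completed.put(requestId, reply);
        return reply;
    }

    public synchronized int executionCount() { return executions; }

    public static void main(String[] args) {
        AtMostOnceServer server = new AtMostOnceServer();
        int first = server.handle(42L, 10);  // original request
        int retry = server.handle(42L, 10);  // client timed out and resent
        System.out.println(first + " " + retry + " " + server.executionCount());
        // prints "11 11 1": same reply both times, but the work ran only once
    }
}
```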

The need for distributed object and component systems

As distributed computing became more widespread, more flexibility and functionality was required than RPC could provide. RPC proved suitable for Two-Tier Client/Server Architectures where the application logic is either in the user application or within the actual database or file server. Unfortunately this was not enough; more and more people wanted a Three-Tier Client/Server Architecture where the application is split into a client application (usually a GUI or browser), application logic, and a data store (usually a database server). Soon people wanted to move to N-tier applications where there are several separate layers of application logic in between the client application and the database server.

The advantage of N-tier applications is that the application logic can be divided into reusable, modular components instead of one monolithic codebase. Distributed object systems solved many of the problems in RPC that made large scale system building difficult, in much the same way that Object Oriented paradigms swept aside Procedural programming and design paradigms. Distributed object systems make it possible to design and implement a distributed system as a group of reusable, modular and easily deployable components where complexity can be easily managed and hidden behind layers of abstraction.

Common Object Request Broker Architecture (CORBA)

A CORBA application usually consists of an Object Request Broker (ORB), a client and a server. An ORB is responsible for matching a requesting client to the server that will perform the request, using an object reference to locate the target object. When the ORB examines the object reference and discovers that the target object is remote, it marshals the arguments and routes the invocation out over the network to the remote object's ORB. The remote ORB then invokes the method locally and sends the results back to the client via the network. There are many optional features that ORBs can implement besides merely sending and receiving remote method invocations including looking up objects by name, maintaining persistent objects, and supporting transaction processing. A primary feature of CORBA is its interoperability between various platforms and programming languages.

The first step in creating a CORBA application is to define the interface for the remote object using the OMG's interface definition language (IDL). Compiling the IDL file will yield two forms of stub files: one that implements the client side of the application and another that implements the server. Stubs and skeletons serve as proxies for clients and servers, respectively. Because IDL defines interfaces so strictly, the stub on the client side has no trouble interacting with the skeleton on the server side, even if the two are compiled into different programming languages, use different ORBs and run on different operating systems.
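
As an illustration, a hypothetical bank account interface in OMG IDL might look like the following; feeding it to an IDL compiler would produce the client stub and server skeleton in whatever languages the two sides use:

```idl
// account.idl -- hypothetical interface, for illustration only
module Bank {
    interface Account {
        readonly attribute string owner;
        long balance();               // IDL "long" is a 32-bit integer
        void deposit(in long amount);
        void withdraw(in long amount);
    };
};
```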

Then, in order to invoke the remote object instance, the client first obtains its object reference via the ORB. To make the remote invocation, the client uses the same code that it would use in a local invocation, but uses an object reference to the remote object instead of an instance of a local object. When the ORB examines the object reference and discovers that the target object is remote, it marshals the arguments and routes the invocation out over the network to the remote object's ORB instead of to another process on the same computer.

CORBA also supports dynamically discovering information about remote objects at runtime. The IDL compiler generates type information for each method in an interface and stores it in the Interface Repository (IR). A client can thus query the IR to get run-time information about a particular interface and then use that information to create and invoke a method on a remote CORBA server object dynamically through the Dynamic Invocation Interface (DII). Similarly, on the server side, the Dynamic Skeleton Interface (DSI) allows a client to invoke an operation on a remote CORBA server object that has no compile time knowledge of the type of object it is implementing.

CORBA is often considered a superficial specification because it concerns itself more with syntax than with semantics. CORBA specifies a large number of services that can be provided, but only to the extent of describing what interfaces should be used by application developers. Unfortunately, the bare minimum that CORBA requires from service providers makes no mention of security, high availability, failure recovery, or guaranteed behavior of objects beyond the basic functionality provided; instead CORBA deems these features optional. The end result of this lowest common denominator approach is that ORBs vary so wildly from vendor to vendor that it is extremely difficult to write portable CORBA code, because important features such as transactional support and error recovery are inconsistent across ORBs. Fortunately a lot of this has changed with the development of the CORBA Component Model, which is a superset of Enterprise Java Beans.

DCOM

Distributed Component Object Model (DCOM) is the distributed version of Microsoft's COM technology, which allows the creation and use of binary objects/components from languages other than the one they were originally written in; it currently supports Java (J++), C++, Visual Basic, JScript, and VBScript. DCOM works over the network by using proxies and stubs. When the client instantiates a component whose registry entry suggests that it resides outside the process space, DCOM creates a wrapper for the component and hands the client a pointer to the wrapper. This wrapper, called a proxy, simply marshals method calls and routes them across the network. On the other end, DCOM creates another wrapper, called a stub, which unmarshals method calls and routes them to an instance of the component.

DCOM server objects can support multiple interfaces, each representing a different behavior of the object. A DCOM client calls into the exposed methods of a DCOM server by acquiring a pointer to one of the server object's interfaces. The client object can then invoke the server object's exposed methods through the acquired interface pointer as if the server object resided in the client's address space. All DCOM components and interfaces must inherit from IUnknown, the base DCOM interface. IUnknown consists of the methods AddRef(), Release() and QueryInterface(). AddRef() and Release() are used for reference counting and memory management. Essentially, when an object's reference count becomes zero, it must self-destruct.
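
The AddRef()/Release() contract can be mimicked in plain Java purely for illustration (real COM does this through vtables in C or C++, and the names below are invented):

```java
public class RefCountDemo {
    // A toy analogue of IUnknown's AddRef/Release contract.
    public interface Unknown {
        int addRef();
        int release();
    }

    public static class Component implements Unknown {
        private int refs = 1;        // creation hands out the first reference
        public boolean destroyed = false;

        public int addRef() { return ++refs; }

        public int release() {
            int remaining = --refs;
            if (remaining == 0) {
                destroyed = true;    // a COM object would free itself here
            }
            return remaining;
        }
    }

    public static void main(String[] args) {
        Component c = new Component();
        c.addRef();                  // a second client grabs the interface
        c.release();                 // first client done: 1 reference left
        c.release();                 // last client done: object self-destructs
        System.out.println(c.destroyed); // prints true
    }
}
```

The hard rule in COM is that every AddRef() must be balanced by exactly one Release(); forgetting one is the classic source of COM memory leaks.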

Java RMI

Remote Method Invocation (RMI) is a technology that allows the sharing of Java objects between Java Virtual Machines (JVMs) across a network. An RMI application consists of a server that creates remote objects that conform to a specified interface, which are available for method invocation to client applications that obtain a remote reference to the object. RMI treats a remote object differently from a local object when the object is passed from one virtual machine to another. Rather than making a copy of the implementation object in the receiving virtual machine, RMI passes a remote stub for a remote object. The stub acts as the local representative, or proxy, for the remote object and basically is, to the caller, the remote reference. The caller invokes a method on the local stub, which is responsible for carrying out the method call on the remote object. A stub for a remote object implements the same set of remote interfaces that the remote object implements. This allows a stub to be cast to any of the interfaces that the remote object implements. However, this also means that only those methods defined in a remote interface are available to be called in the receiving virtual machine.

RMI provides the unique ability to dynamically load classes via their byte codes from one JVM to the other even if the class is not defined on the receiver's JVM. This means that new object types can be added to an application simply by upgrading the classes on the server with no other work being done on the part of the receiver. This transparent loading of new classes via their byte codes is a unique feature of RMI that greatly simplifies modifying and updating a program.

The first step in creating an RMI application is creating a remote interface. A remote interface extends java.rmi.Remote, which indicates that its methods can be invoked across virtual machines. Any object that implements this interface becomes a remote object.
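
A minimal sketch of such an interface and its use follows; the Greeter name is invented, and the registry, server and client are collapsed into a single JVM purely so the example is self-contained (normally they are separate processes, often on separate machines):

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiRoundTrip {
    // Hypothetical remote interface. Every remotely callable method must
    // be declared here and must declare RemoteException.
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    // Server-side implementation of the remote interface.
    public static class GreeterImpl implements Greeter {
        public String greet(String name) { return "Hello, " + name; }
    }

    public static String demo() {
        try {
            Greeter servant = new GreeterImpl();
            // Export the servant; RMI hands back a stub implementing Greeter.
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(servant, 0);

            // Start a registry (port 0 = any free port) and register the stub.
            Registry registry = LocateRegistry.createRegistry(0);
            registry.rebind("greeter", stub);

            // "Client" side: look the stub up by name and call it as if local.
            Greeter remote = (Greeter) registry.lookup("greeter");
            String reply = remote.greet("world");

            // Tear down so the JVM can exit.
            UnicastRemoteObject.unexportObject(servant, true);
            UnicastRemoteObject.unexportObject(registry, true);
            return reply;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints "Hello, world"
    }
}
```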

To show dynamic class loading at work, an interface describing an object that can be serialized and passed from JVM to JVM shall also be created. The interface extends the java.io.Serializable interface. RMI uses the object serialization mechanism to transport objects by value between Java virtual machines. Implementing Serializable marks the class as being capable of conversion into a self-describing byte stream that can be used to reconstruct an exact copy of the serialized object when the object is read back from the stream. Any entity of any type can be passed to or from a remote method as long as the entity is an instance of a type that is a primitive data type, a remote object, or an object that implements the interface java.io.Serializable. Remote objects are essentially passed by reference. A remote object reference is a stub, which is a client-side proxy that implements the complete set of remote interfaces that the remote object implements. Local objects are passed by copy, using object serialization. By default all fields are copied, except those that are marked static or transient. Default serialization behavior can be overridden on a class-by-class basis.
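
The pass-by-copy behavior, including the skipping of transient fields, can be demonstrated in isolation by round-tripping an object through a byte stream the way RMI does internally (the Point class here is invented for illustration):

```java
import java.io.*;

public class SerializationDemo {
    // A hypothetical value type; transient fields are skipped during
    // serialization, exactly as the pass-by-copy rules describe.
    public static class Point implements Serializable {
        public int x, y;
        public transient int cachedHash;  // not copied across the "wire"
        public Point(int x, int y) { this.x = x; this.y = y; this.cachedHash = 999; }
    }

    // Round-trip an object through a byte stream, as RMI does internally.
    public static Point copyViaSerialization(Point p) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(p);
            out.flush();
            ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
            return (Point) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Point original = new Point(3, 4);
        Point copy = copyViaSerialization(original);
        // The copy has the same state but is a distinct object...
        System.out.println(copy.x + "," + copy.y + " " + (copy != original));
        // ...and the transient field came back as the default 0, not 999.
        System.out.println(copy.cachedHash);
    }
}
```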

Thus clients of the distributed application can dynamically load objects that implement the remote interface even if they are not defined in the local virtual machine. The next step is to implement the remote interface; the implementation must define a constructor for the remote object as well as define all the methods declared in the interface. Once the class is created, the server must be able to create and install remote objects. The process for initializing the server includes: creating and installing a security manager, creating one or more instances of a remote object, and registering at least one of the remote objects with the RMI remote object registry (or another naming service such as one that uses JNDI) for bootstrapping purposes. An RMI client behaves similarly to a server; after installing a security manager, the client constructs a name used to look up a remote object. The client uses the Naming.lookup method to look up the remote object by name in the remote host's registry. When doing the name lookup, the code creates a URL that specifies the host where the server is running.

Pick your poison

I'm a big fan of Java so I'm partial to both RMI and CORBA. Anyone who has had experience with any of the three aforementioned technologies is welcome to post below.

References and further reading

Component Engineering Cornucopia

Java RMI Tutorial

Introduction To CORBA (uses Java)

Dr. GUI's Gentle Guide To COM

Using CORBA and Java IDL




Which of these technologies have you used as a programmer?
o EJB 6%
o RMI 5%
o CORBA 8%
o COM 7%
o COM+/DCOM 5%
o RPC 6%
o More than one of the above technologies 24%
o None of the above. 34%

Votes: 105


Distributed Computing Technologies Explained: RMI vs. CORBA vs. DCOM | 86 comments (83 topical, 3 editorial, 0 hidden)
DCOM (4.14 / 7) (#1)
by aphrael on Fri Feb 09, 2001 at 09:55:35 PM EST

one of the important things about COM/DCOM is a related technology called automation which allows you to use binary objects without knowing anything about them other than the names of the methods.

Standard COM/DCOM requires you to have a header, pascal unit, java class, or whatever that describes the object in the syntax and semantics of the language you want to access the object from. By providing an interface that can be used to query the object for information about itself, Automation allows you to access an object knowing nothing other than the layout of that interface and the names of the methods (although not even necessarily the parameter list).

In addition, if the object was created from IDL, there is a separate binary which implements interfaces that describe the object, and which can be queried. Most tool vendors provide the ability to take this binary (known as a type library) and automagically create language-specific descriptions of the object that can be used directly.

Recommended reading:

COM is cool but... (3.25 / 4) (#6)
by Carnage4Life on Sat Feb 10, 2001 at 12:46:28 AM EST

I've always thought that the idea behind COM was great. Being able to reuse binary objects across languages and machines is great.

Unfortunately the underlying COM architecture seems to be a spectacular example of how short sighted design decisions and "clever" hacks can make for a terrible platform for development.

[ Parent ]
C and C++ (3.00 / 2) (#15)
by ucblockhead on Sat Feb 10, 2001 at 10:52:30 AM EST

There is a bizarre contrast in the Windows world between VB and VBScript on the one hand, where using COM controls is simple and straightforward, and C and C++ on the other, where using COM is painful and confusing.

But then, Microsoft itself is a living case-study of short sighted design decisions.

Perhaps C# will solve the problem by providing a systems language that is meant for COM, but somehow I suspect that they'll screw that up, too.

This is k5. We're all tools - duxup
[ Parent ]
C# (2.00 / 1) (#26)
by delmoi on Sat Feb 10, 2001 at 01:32:19 PM EST

heh, I've been thinking the same thing. The difference between VB and C++ in regards to COM is simply huge. A VB project that implements a COM object is about one page. The C++ implementation ended up having something like 50 autogenerated files.... If you ask me, they should have just modified the C++ compiler or linker that VC++ comes with to generate COM objects out of every C++ class.

Of course, VB as a programming language is fucked up itself.

The question with C# isn't if they'll screw it up, it's how they'll screw it up.
"'argumentation' is not a word, idiot." -- thelizman
[ Parent ]
C compatibility (3.50 / 2) (#30)
by ucblockhead on Sat Feb 10, 2001 at 03:43:52 PM EST

Part of the trouble is that they kept COM compatible with straight C, which was a braindead decision, if you ask me.
This is k5. We're all tools - duxup
[ Parent ]
C compatibility (4.33 / 3) (#35)
by eLuddite on Sat Feb 10, 2001 at 06:03:07 PM EST

Unlike a decision to shut out C programmers who conservatively outnumber C++ programmers 10:1, that would be a stroke of genius.

Can you offer a similarly compelling reason why its C compatibility isn't a burden in VB, which is also not C++?

God hates human rights.
[ Parent ]

C compatibility in COM was braindead (3.50 / 2) (#37)
by Carnage4Life on Sat Feb 10, 2001 at 06:33:52 PM EST

Can you offer a similarly compelling reason why its C compatibility isn't a burden in VB, which is also not C++?

You have obviously not done any COM programming in C++ or VB to make such a comment. To create a COM application in VB, VBScript or JScript is usually a simple task, with all the complexities and inner workings of COM hidden from the programmer, so all he has to do is worry about his application logic and nothing else.

For example, instantiating the C++ XML parser that ships with Internet Explorer in my JScript application involves writing a single line of code.

var xmlDoc = new ActiveXObject("microsoft.xmldom");

After which I can use xmlDoc like it's any other JScript object with exactly no difference in how it is used.

On the other hand, doing the same thing in C++ involves various additional and unnecessary complexities. Besides the fact that all sorts of stuff has to be done with namespaces and imports, and that so much code has to be autogenerated by wizards to avoid complexity (even COM gurus at MSFT don't fully understand it all), several aspects of writing COM objects made sense when the target language was C but are completely ridiculous in a C++ environment. Here are my top 3 pet peeves.
  1. Use of function return types (HRESULT) for reporting errors meaning that ALL functions use out parameters instead of using Exceptions for error handling.

  2. Using TRY/CATCH macros instead of C++ try/catch keywords for error handling in Win32. In fact I should spread this to overuse of macros and typedefs (or macros that are typedefs of a typedef) in most Win32 APIs.

  3. Using BSTR when std::string exists.

[ Parent ]
C compatibility in COM was braindead (4.00 / 3) (#39)
by eLuddite on Sat Feb 10, 2001 at 08:13:57 PM EST

Can you offer a similarly compelling reason why its C compatibility isn't a burden in VB, which is also not C++?
You have obviously not done any COM programming in C++ or VB to make such a comment. To create a COM application in VB, VBScript or JScript is usually a simple task, with all the complexities and inner workings of COM hidden from the programmer, so all he has to do is worry about his application logic and nothing else.
Um, thank you for making my point. None of the languages you mention are C++. Clearly C compatibility was _not_ a "braindead" decision.

God hates human rights.
[ Parent ]

*sigh* (3.00 / 3) (#40)
by Carnage4Life on Sat Feb 10, 2001 at 08:31:53 PM EST

Um, thank you for making my point. None of the languages you mention are C++. Clearly C compatibility was _not_ a "braindead" decision.

Most people who create COM objects use C++ while most people who use off-the-shelf COM components (objects) do it from VB, JScript or VBScript. Notice that I don't mention C, heck Microsoft doesn't even ship a Visual C product, yet developing COM objects in C++ (which is the primary language used for creating them), is hampered by braindead backwards compatibility with C.

[ Parent ]
Re: *Sigh* (3.00 / 3) (#45)
by eLuddite on Sat Feb 10, 2001 at 10:14:50 PM EST

Most people who create COM objects use C++

I don't use C++ to create COM objects. I don't use C++ for anything.

Notice that I don't mention C,

Well then, you're hard of reading. Learn to pick the threads you post to.

yet developing COM objects in C++ (which is the primary language used for creating them), is hampered by braindead backwards compatibility with C.

Piece of cake using the .NET edition of Visual Studio. What changed? C++? COM? The tools for creating COM in a language neutral fashion? Sorry, I'm confused. Maybe you can sigh some more till it sinks in for me.

God hates human rights.
[ Parent ]

layers (none / 0) (#84)
by spongman on Wed Feb 21, 2001 at 06:16:39 AM EST

it's easier in VB because VB was designed to do this. simple.

When you create your xmlDoc object the JScript dll is doing all the stuff you hate in C++ for you. It was written in C++ and it uses HRESULTS and BSTRs, pointers, v-tables, the rest of it. Just because someone's written a program to abstract away all the hard stuff that you don't fully understand, doesn't make the hard stuff ridiculous. If it was, you wouldn't have the JScript DLL to make it all easy for you. Besides, do you complain about the complexity of the assembly code that your compiler generates every time you compile a program? I hope not.

I'm sure there are many people at Microsoft who don't understand it all. But I'm sure if all they did was complain about it being too complicated they'd be regarded as rather petty...


[ Parent ]

com objects in C++ (none / 0) (#83)
by spongman on Wed Feb 21, 2001 at 05:57:25 AM EST

COM objects require a certain amount of overhead especially if you want to provide automation, persistence, properties, reflection etc... Object environments such as VB and Java handle these tasks (with overhead) for you. They also handle such things as object lifetime, memory management, etc... This support isn't just in the compiler or the language, it's also part of the runtime environment, or virtual machine.

C++ does none of these (inherently) and doesn't have a runtime environment significantly different from the operating system, so you have to do it yourself (or let a wizard help you). VC++ 7.0 (.NET) provides support for the .NET runtime, which includes Java/VB-like runtime support for object/memory management and all the other nice things that mean you can write fully managed objects in 1 page. But, of course, it's not really standard C++ any more...


[ Parent ]

Use Delphi (none / 0) (#61)
by Dacta on Sun Feb 11, 2001 at 10:04:25 PM EST

Provided you don't mind Pascal's syntax, Delphi is the Nirvana of COM development on Windows.

It's as easy to use as VB, but as powerful as C++. No wonder there are huge similarities between C# and Delphi.

Now that Kylix is (nearly) released, Delphi doesn't even tie you to Windows anymore.

(I don't work for Borland, but I used to program in Delphi, and I highly recommend it.)

[ Parent ]
COM in Kylix (none / 0) (#63)
by aphrael on Mon Feb 12, 2001 at 03:00:17 AM EST

Now Kylix is (nearly) released, Delphi doesn't even tie you to Windows anymore

There's no COM in Kylix. There might be client-side COM support in it someday, but it's relatively low on the priority list, and server-side COM is even lower. (Why would you want to do server-side COM under a freeware COM subsystem for linux done by some random third party? This is when you rearchitect for CORBA).

That said, the interface stuff is built into object pascal, so you can do interface-based programming in kylix which will look and feel like it's objects derived from IUnknown --- but there is no hand-holding for trying to talk via DCOM to something running under windows.

[ Parent ]

Interfaces in Delphi (none / 0) (#64)
by Dacta on Mon Feb 12, 2001 at 03:28:30 AM EST

And as I understand it (or last time I was checking this out - I haven't followed developments closely for 6 months or so) Interfaces in Delphi were no longer tied so closely to COM (you can now implement interfaces that no longer implement IUnknown, for instance.)

Is that true?

[ Parent ]
Yes and no. (none / 0) (#65)
by aphrael on Mon Feb 12, 2001 at 11:47:45 AM EST

IUnknown has become IInterface, to remove the illusion of depending on COM, but IInterface still requires you to implement QI, AddRef, and Release. Still, there's no direct COM tie-in there, and the nomenclature has changed.

[ Parent ]
Visual Basic (none / 0) (#82)
by spongman on Wed Feb 21, 2001 at 05:48:15 AM EST

The fact that COM programming in Visual Basic is so much easier than COM programming in C/C++ is testament to the power of the Visual Basic environment. Sure, the language itself isn't so hot, but remember OLE controls were designed as an upgrade to the original Visual Basic controls, which were programmable through the familiar drag-drop/properties model. These controls are native to VB and that's why it's easy to use them in it. The flexibility of the underlying support for these controls (COM/OLE) necessitates a certain amount of complexity, which fortunately is hidden from the programmer by the VB environment. If you want to program this functionality directly (in C/C++) you have to address the complexity directly.

I don't quite understand what you mean when you describe a necessarily complex design as short-sighted. Surely it would have been more short-sighted of them to have designed a system that wasn't flexible enough to handle the requirements that would, will, and have been made of it? After all, it all boils down to twiddling bits on a piece of silicon, which to me is extremely complex - but I don't see its design as short-sighted.


[ Parent ]

IDL-free (none / 0) (#12)
by mikpos on Sat Feb 10, 2001 at 09:29:56 AM EST

Being mostly of the Smalltalk (well, actually, Objective C) school of object-oriented programming, I've wondered if there are any IDL-less component models. In dynamically-bound languages, like Smalltalk, the object itself knows what methods it responds to, so it can transmit its interface without any outside help (i.e. IDL compiler). I've always disliked the idea "pre-compiling" code, and having an IDL completely removes the possibility of changing the object at run-time (i.e. adding new methods).

[ Parent ]
Reflection and interfaces (4.25 / 4) (#20)
by Simon Kinahan on Sat Feb 10, 2001 at 11:27:40 AM EST

I think the feature of Smalltalk and Obj-C you are talking about is reflection, which means any object can be interrogated at runtime about the methods it implements. Actually this exists in many languages, including Java, which has interface declarations similar to many IDLs. CORBA also has a reflection mechanism.

The reason people used IDLs is so that stubs can be generated that allow a client object to make requests from "server" objects without having to know anything about their location or the intervening network. That is, it supports location and network transparency, as the stub object on the client looks just like the server object. Admittedly, a system using remote reflection would allow stubs to be generated on the fly in a run-time compiled dynamic language like Smalltalk, but since the programmer needs to know the interfaces anyway in order to write to them, you may as well generate the stubs at build time.
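
For the curious, the runtime interrogation being described looks like this in Java, with the class and method names invented for illustration:

```java
import java.lang.reflect.Method;

public class ReflectionDemo {
    // An ordinary object we will treat as an opaque "server".
    public static class Calculator {
        public int add(int a, int b) { return a + b; }
    }

    // Look the method up and invoke it entirely at runtime, the way a
    // DII-style client or an on-the-fly stub generator would -- no
    // compile-time knowledge of Calculator is needed here.
    public static Object invokeByName(Object target, String method, int a, int b) {
        try {
            Method m = target.getClass().getMethod(method, int.class, int.class);
            return m.invoke(target, a, b);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeByName(new Calculator(), "add", 2, 3)); // prints 5
    }
}
```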


If you disagree, post, don't moderate
[ Parent ]
Java's Reflection? (none / 0) (#14)
by kellan on Sat Feb 10, 2001 at 10:48:16 AM EST

Is this any different from Java's reflection API? Sounds like the same idea. You have classes that know how to figure out information about other classes. Reflection is at the heart (a slow, cumbersome heart, some claim) of many of Java's new technologies, from beans to tag libraries.

Its also used by frameworks like voyager to automatically generate stubs, and enable mobile agents.


[ Parent ]

SOAP (4.33 / 6) (#3)
by Dacta on Fri Feb 09, 2001 at 10:51:45 PM EST

Another distributed object protocol worth discussing is SOAP.

Unlike all the other protocols, SOAP isn't designed for writing a full application in - rather it allows a few carefully designed objects to be made available very simply.

Unlike DCOM, CORBA and RMI, it has been specifically designed for ease of use in an environment where the network isn't necessarily under the control of a company's IT department. For instance, SOAP objects can be used via HTTP (through firewalls) or even via SMTP (email).
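
For the curious, a SOAP request is just an XML envelope carried in an ordinary HTTP POST, which is why it passes through firewalls so easily. A minimal request to a hypothetical stock quote service (modelled on the example in the SOAP 1.1 spec) might look like:

```xml
POST /StockQuote HTTP/1.1
Content-Type: text/xml; charset="utf-8"
SOAPAction: "urn:example:GetLastTradePrice"

<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <m:GetLastTradePrice xmlns:m="urn:example:stockquote">
      <symbol>SUNW</symbol>
    </m:GetLastTradePrice>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```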

Apples to Oranges (4.66 / 3) (#4)
by Carnage4Life on Sat Feb 10, 2001 at 12:30:27 AM EST

DCOM, CORBA and RMI are complete systems for creating a distributed application. SOAP and XML-RPC are simply protocols for use in communicating between objects in these systems.

That said, I'm not a hundred per cent sure I agree that replacing IIOP, JRMP, or ORPC with an XML protocol is a good solution. Theoretically it does mean that it is more feasible for objects from disparate systems to invoke methods on each other, since the arguments are marshalled as XML instead of in some proprietary binary protocol, but I'm not sure that the overhead of sending an XML string and parsing it each time you invoke a remote method is worth it.

[ Parent ]
Not quite (3.33 / 3) (#8)
by FuzzyOne on Sat Feb 10, 2001 at 01:08:33 AM EST

To a certain extent I agree that it might be apples to oranges based on early specs and a narrow definition, but you really owe it to yourself to study SOAP a little and include it. SOAP is one of those "good enough" protocols (like HTML was for presentation markup) that can become ubiquitous very quickly, and CORBA, RMI and DCOM will be little niche technologies compared to the volume of transactions flowing over SOAP and/or XML. Where IBM, Sun and Microsoft could never agree on standards involving any of the technologies you mentioned, today they are all looking towards SOAP for building distributed services in the future.

I think you cover your topic well, but it's based on technologies that are between 4 and 10 years old (and older). I'd love to see you do a follow-up (or re-work this one if it falls through) comparing your favorites with SOAP. Take a look at the recent Sun response to Microsoft regarding .NET (as posted in a /. story earlier this week.) The writing is on the wall.

[ Parent ]
Apples, Oranges and Bananas (4.00 / 4) (#18)
by Simon Kinahan on Sat Feb 10, 2001 at 11:18:37 AM EST

I think these kinds of comparisons of technologies confuse some of the issues.

SOAP is a wire protocol used for object-oriented RPC over HTTP 1.0 or better. Sure, the same DTDs could be used for transfer via a different transport, but SOAP as a protocol is dependent on HTTP as it stands. As you say, the great advantage of SOAP is that it goes through firewalls nicely. Arguably, that's more an issue of dumb firewall usage than anything else, but it does seem to be a practical fact that people will not let anything but HTTP through their firewalls.

CORBA, on the other hand, is a generalised architecture for distributed object-oriented applications. It is a set of facilities and interfaces that all ORBs are supposed to supply. It is not tied to any particular protocol, but it does require that all the features required by GIOP exist, though in practice things work OK when some of them are missing. SOAP supplies only a subset of these features, since CORBA supports naming and brokerage services to find objects, and reflection mechanisms, as well as RPC. In principle there is no reason why SOAP could not be used as part of a CORBA protocol implementation, although there would doubtless be many practical difficulties in doing so.

RMI and DCOM are at least both the same kinds of thing. They are interfaces to object oriented RPC systems specific to particular environments, Java for RMI and Windows for DCOM. RMI can be and often is implemented over CORBA.


If you disagree, post, don't moderate
[ Parent ]
Not only HTTP (none / 0) (#51)
by Dacta on Sun Feb 11, 2001 at 01:26:18 AM EST

Although most talk about SOAP is about using it via HTTP, the spec isn't necessarily tied to a single method of transport.

For instance, the Apache SOAP toolkit supports SOAP via SMTP.

[ Parent ]
Soap & Squid? (none / 0) (#22)
by kellan on Sat Feb 10, 2001 at 11:33:46 AM EST

SOAP and XML-RPC are both designed to tunnel over HTTP. So does that mean you could use something like Squid to cache requests?

Hmmm, would URLs have to be deterministic, or could the content negotiation built into HTTP/1.1 handle that intelligently? It would definitely be a boon to be able to cache expensive queries in something like Squid. It would also lighten the load of expensive SSL connections to your main server if your application needed to be secure.

What do you think? Am I off in left field, go-get-a-cup-of-coffee-and-wake-up territory?
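One wrinkle: generic HTTP caches like Squid normally won't cache responses to POST, and XML-RPC/SOAP calls ride on POST, so the usual workaround is an application-level cache keyed on a hash of the request body. A minimal sketch (all names invented):

```python
# Cache expensive query results keyed on the raw request body, since an
# HTTP-level cache can't help with POSTed RPC calls.
import hashlib

_cache = {}

def handle(request_body: bytes, compute):
    """Return a cached result, computing it only on the first request."""
    key = hashlib.sha256(request_body).hexdigest()
    if key not in _cache:
        _cache[key] = compute(request_body)  # the expensive query
    return _cache[key]

calls = []
def expensive(body):
    calls.append(body)
    return b"<result>"

handle(b"<methodCall>q1</methodCall>", expensive)
handle(b"<methodCall>q1</methodCall>", expensive)  # served from cache
print(len(calls))  # the expensive computation ran only once
```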


[ Parent ]

SOAP applicability (3.50 / 2) (#24)
by SlydeRule on Sat Feb 10, 2001 at 12:02:13 PM EST

SOAP addresses an interesting subset of distributed computing needs, but it isn't universally applicable.

SOAP not only doesn't provide a security model, it subverts existing firewall security. Any SOAP server process must be very carefully audited for security vulnerabilities.

SOAP also doesn't provide any transaction context. This is a significant problem in many applications.

So, if what you're trying to do is a machine-accessible equivalent of a Web server, SOAP looks pretty good. But if you're trying to make a machine-accessible equivalent of a transaction server, SOAP isn't even in the running.

[ Parent ]

You can use RMI via HTTP,too. (none / 0) (#46)
by pig bodine on Sat Feb 10, 2001 at 11:03:31 PM EST

But it's seldom a good idea to do that sort of thing. Better to call up your company's IT department and request a change to the firewall than to risk their ire by subverting it.

[ Parent ]

I don't see the point. (3.12 / 8) (#7)
by gblues on Sat Feb 10, 2001 at 12:54:15 AM EST

What exactly is the point behind having a single application distributed across a network? I can't speak for anyone else, but RPC (and its heathen offspring) looks to me like nothing more than a planned security hole.

Whatever happened to good old-fashioned IPC? Why would any sane programmer introduce the inherent latency, unpredictability, and insecurity of a network connection into the executable portion of his program? It's bad enough trying to debug a monolithic program running on one CPU--imagine having to troubleshoot an entire network to figure out why you're getting core dumps!

No thanks. I'm a member of the "Just because you can, doesn't mean you should" club.

... although in retrospect, having sex to the news was probably doomed to fail from the get-go. --squinky
components are neat (3.75 / 4) (#11)
by mikpos on Sat Feb 10, 2001 at 09:21:24 AM EST

IPC is a pain because you have to manually (de)marshall everything. Components allow you to get a handle on an object (from a different process) as if it were an object in your own address space. I guess you could think of it as IPC with a bunch of stubs and skeletons already written for you (by the IDL compiler).

As for distributed components, the pipe dream is that whenever you (as an application, say a word processor) need some functionality you don't have, you just ask your component server (like a file server). E.g. if you don't have a spell checker, you just ask the network "hey, give me a spell checker", and you can get one, no matter what programming language or operating system or platform that spell checker happens to use. I think the OS- and platform-independence is the big gain of distributed objects.

[ Parent ]

Yuck! (2.00 / 3) (#28)
by gblues on Sat Feb 10, 2001 at 03:18:01 PM EST

I can just see it now:

College student running MS Word Distributed Edition is up late at night working on his term paper, and tries to access the spelling checker. An hourglass appears on his screen, and a minute later he gets an error: "Spell check is not available: Object Broker could not be located. Try again? [Retry] [Cancel]"

He sighs in frustration, and decides to just save it and call it a night. So he clicks save. Another hour glass, then "Unable to save file: Object Broker could not be located. Try again? [Retry] [Cancel]"

Now he's really pissed, but figures maybe he can print it out and retype it later. So he clicks print. Another hourglass, another error: "Unable to print document: Object Broker could not be located. Try again? [Retry] [Cancel]"

Yeah, that'll fly _real_ well ;)

... although in retrospect, having sex to the news was probably doomed to fail from the get-go. --squinky
[ Parent ]
Bad example. (4.50 / 2) (#53)
by inpHilltr8r on Sun Feb 11, 2001 at 04:14:24 AM EST

For an application like word processing, RPC is a pretty dumb solution. Dictionaries are pretty static, so you have nothing to gain by storing them on a separate machine. Similarly, if the printer's in the same room, why would the print server be on the other side of the planet?

[ Parent ]
You'd be surprised. (3.00 / 1) (#59)
by gblues on Sun Feb 11, 2001 at 01:49:46 PM EST

My aunt works for an advertisement printing company called Treasure Chest. Basically they print all the advertising inserts you see in your Sunday paper or whatnot. Anyway, in order to print something to a printer 20 feet away, she has to send the print job to a server on the east coast (she's on the west coast), which sends it back to the west coast to print out on the printer 20 feet away. If the network goes down (and it does, frequently), she's fscked.

Also, if you're going through a terminal server (NT Terminal Server, Citrix Metaframe, etc), you're in the same boat. The job comes from the remote server, not the local computer, so even if the printer is directly connected to the PC, network problems (or server problems) can keep the user from printing.

So, it does happen. :)

... although in retrospect, having sex to the news was probably doomed to fail from the get-go. --squinky
[ Parent ]
Distributed Computing (4.00 / 3) (#13)
by _Quinn on Sat Feb 10, 2001 at 09:56:02 AM EST

   Tends to be more about "we can't" than "we can," in practice. We can't add more processors to our Sun E10000 -- distribute the work. We can't afford to provide all of our engineers with SP3s... distribute the work. We can't allow Joe Random Desktop User to have SuperSecureDatabase on his machine -- distribute the work. RPC and its descendants are attempts to make this kind of distribution easier, by making it (look) programmatic, rather than protocol-based.

Reality Maintenance Group, Silver City Construction Co., Ltd.
[ Parent ]
Some reasons: cacheing, trust, isolation (4.33 / 6) (#16)
by kellan on Sat Feb 10, 2001 at 11:00:51 AM EST

There are a couple of neat uses I've played with recently:
  1. Caching - make your expensive queries once, load all your results into the 10 GB of RAM on your caching server, and then have all your webservers pull results out of memory instead of off the disk or out of the database
  2. Processing power - sometimes you just can't stuff another processor inside your server
  3. Trust - we need to work together, but that doesn't mean I want to give you a privileged account on my machine; I'll just publish an API/service that you can call against.
  4. Isolation/Buggy Software - my absolute favorite reason! (and one I recently implemented) Imagine you have a neat but non-essential piece of your application that leaks memory like a sieve. Now imagine you promised your boss/users/investors that this feature was going to be there. What do you do? You put it on a remote box, and kick it every time it kills the machine. (Meanwhile the rest of your application runs merrily along)
Besides, it's fun. I've been looking forward to agents ever since I read Brin and Russel
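Reason 4 above - isolate the leaky piece and kick it whenever it dies - boils down to a supervisor loop. Here is a simulated, in-process sketch (the worker and its failure mode are invented for the demo); in the pattern described, the worker would be a process on a separate box:

```python
# Supervisor sketch: keep restarting a crash-prone worker while the
# rest of the application carries on.

def supervise(worker, max_restarts):
    """Run worker, restarting it on failure; return (result, restarts)."""
    restarts = 0
    while True:
        try:
            return worker(), restarts
        except RuntimeError:          # worker "killed the machine"
            restarts += 1
            if restarts > max_restarts:
                raise

attempts = []
def leaky_worker():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("out of memory")  # simulated leak-induced crash
    return "spell-checked"

result, restarts = supervise(leaky_worker, max_restarts=5)
print(result, restarts)  # succeeds after two restarts
```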


[ Parent ]

Isolation (4.00 / 1) (#52)
by Mihg on Sun Feb 11, 2001 at 02:28:05 AM EST

Isolation/Buggy Software - my absolute favorite reason! (and one I recently implemented) Imagine you have a neat but non-essential piece of your application that leaks memory like a sieve. Now imagine you promised your boss/users/investors that this feature was going to be there. What do you do? You put it on a remote box, and kick it every time it kills the machine. (Meanwhile the rest of your application runs merrily along)
This has got to be one of the ugliest hacks I've ever heard of. (But from a purely pragmatic point of view, it's sheer genius :-)

The HotMail address is my decoy account.
I read it approximately once per year...

[ Parent ]
It's not just a single application... (4.33 / 3) (#21)
by skeezix on Sat Feb 10, 2001 at 11:32:50 AM EST

There wouldn't be much point if it were just a single application. However, having distributed components is very powerful. In the same way that multiple applications can use a single library, many applications (clients) can use the components in a very flexible manner (in process on the same machine, out of process on the same machine, out of process across a network). I've just joined the COM group of a healthcare software company. We are working on a project to componentize our Order Management software. These components can reside on one machine, with the GUI completely separate. Clients who use our components can hook their own GUI frontend (in whatever language or platform they want) into our components. For a hospital with hundreds of workstations, it's very powerful to have all the backend components on one separate server somewhere on the network, rather than duplicate the code on each individual workstation.

[ Parent ]
Currently ... (4.00 / 1) (#23)
by Simon Kinahan on Sat Feb 10, 2001 at 11:33:57 AM EST

Mostly for administrative reasons, with a side order of reusability. An n-tier architecture, especially if you use a thin client like a web browser, makes your application much easier to upgrade than a fat client + database system, or a single isolated program, since the application logic for each tier exists in only one place.

For example, if I run an e-commerce web site that does order processing, I can use a standard database (one tier), and a standard set of EJB components, with some extra ones I write myself (two tiers), and write the presentation tier (three tiers) using Java servlets, or PHP, or whatever. I can now vary the presentation easily without messing with the back end application logic, and vary the application logic without having to reinstall anything on my client's machines.

I have to point out that you are currently using a 2 tier distributed application. I don't think Scoop separates application logic from presentation.


If you disagree, post, don't moderate
[ Parent ]
How Would NFS Work Without RPC? (4.20 / 5) (#25)
by Carnage4Life on Sat Feb 10, 2001 at 12:15:29 PM EST

Whatever happened to good old-fashioned IPC? Why would any sane programmer introduce the inherent latency, unpredictability, and insecurity of a network connection into the executable portion of his program?

You seem to have missed the point of the entire article. IPC is a way to send bytes back and forth between processes. RPC deals with calling functions in different processes, while distributed object technologies deal with interacting with objects across processes.

You probably noticed that this incremental pattern follows the trend in regular software development: assembly programming -> procedural programming -> OOP. This is because each incremental advancement reduces the complexity of building large and complex software systems.

Now if your question is simply "Why do we not use IPC for distributed computing?", here are the choices:
  1. When you want a remote server to perform a task for your application, such as retrieve remote files (NFS), perform a SQL query (any database server), or print a document: you send it a stream of bytes; the application on the other side, which must know how to parse the bytes and figure out what exactly you want, calls local procedures and then sends you back another stream of bytes, which you then must parse to figure out what occurred on the other side. (The IPC/Named Pipes solution)

  2. When you want a remote server to perform a task for your application, such as retrieve remote files (NFS), perform a SQL query (any database server), or print a document: you make a regular function call and wait for it to return. (The RPC/Distributed Object solution)
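The difference between the two choices can be sketched in a few lines (everything here is illustrative, and the "wire" is replaced by a local dispatch table so the sketch runs):

```python
# Option 1: the client and server trade raw bytes and each side parses
# them by hand.  Option 2: a stub makes the same exchange look like an
# ordinary function call.

handlers = {"print_doc": lambda doc: f"printed:{doc}"}

# --- Option 1: IPC-style, hand-parsed byte stream ---
def ipc_request(raw: bytes) -> bytes:
    op, _, payload = raw.partition(b" ")        # the server must parse...
    result = handlers[op.decode()](payload.decode())
    return result.encode()                      # ...and the client re-parses

reply = ipc_request(b"print_doc report.txt")

# --- Option 2: RPC-style, the stub hides all of the above ---
class PrinterStub:
    def print_doc(self, doc):
        return ipc_request(f"print_doc {doc}".encode()).decode()

print(reply.decode(), PrinterStub().print_doc("report.txt"))
```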

[ Parent ]
IDL (4.50 / 2) (#34)
by ksandstr on Sat Feb 10, 2001 at 05:47:28 PM EST

Personally, I like the ability to specify in the IDL source that "this message (method) takes these arguments, returns these values and may raise the following exceptions", and actually have the IDL compiler produce the marshalling code that will make the newly defined protocol (that's what object interfaces are, really) work. Instead of writing my own marshalling code, my own "put this integer in this packet here" code and my own server main loops, and then auditing them all for trivial security holes.

In a nutshell, I like CORBA because its IDL is well-defined and sufficiently simple (if you don't use any of the CORBAServices, that is). Of course, many of these object-oriented IPC thingies are completely unusable when low latency is a goal (for example, in a networked fast-action game), but for the kind of stuff I do at work they're all good enough.

[ Parent ]
Many reasons. (5.00 / 1) (#43)
by pig bodine on Sat Feb 10, 2001 at 10:02:17 PM EST

Every network application can be looked at from the point of view that it is a single application with client and server components. Even online Quake games can be viewed in this way. One thing that these technologies are aimed at is reducing the amount of time spent writing code to send and receive messages over the network. Generally speaking, once you've written TCP code for one program, you've written TCP code for all programs. There's no reason why programmers should have to devote time to this fairly repetitive portion of code for each network program they write.

Other reasons become apparent when you have a unified system for client/server programs. The advantage of running one server permanently, which can launch server objects as they are needed, is obvious. (This is similar to inetd in unix, but CORBA makes it better, in that CORBA server objects can be kept active for a while after the client exits, in case another client wants to access them. Inetd just launches a separate instance of a program for each client.)

It's important to note that CORBA and RPC both predate the web, and were developed at a time when web solutions were not available. While they are still useful, many of the applications they were designed for are now built using web interfaces (forms). These technologies mostly exist to form a middleware layer for databases. Considering the cost per connection for licenses for database systems like Oracle, it makes sense to use middleware client/server arrangements to split a small number of database connections between a large number of clients.

[ Parent ]

I can think of an easy example (none / 0) (#79)
by blackwizard on Wed Feb 14, 2001 at 11:57:23 PM EST

I work at HP, where I am helping to develop a program that discovers certain network devices and places them on a map. We're basically using RMI for a client/server implementation where the client displays the data and the server discovers the devices. Usually, the client and the server can reside on the same machine, but sometimes the network is so huge and complex, and requires so much processing and resources (for either the server or the client, really), that it would be quite useful to put the client and the server on separate machines. Or what if you wanted to run multiple clients? RMI is a great way to distribute client/server loads in this way. For the highest reliability, of course, you just get an extremely buff machine for the server and run the client on the same machine. (Of course, if the network goes down, you can always just fire up a client on the server machine. No problem.)

[ Parent ]
Hmm... (4.00 / 4) (#9)
by scriptkiddie on Sat Feb 10, 2001 at 01:11:27 AM EST

It's interesting that this issue should come up now, because I'm busy trying to sort out these very systems at the moment. I'm writing a database app, and it seems using some kind of 3-tier strategy will be necessary. Right now, I'm thinking of using a wxPython Windows app as the client, an XML-RPC server written in Python and reachable via HTTP as the middle tier, and simple SQL to communicate with an RDBMS on the other side.

So does anyone have any experience doing things like this? One obvious problem with using XML-RPC is bandwidth - XML marshals are ridiculously large. But with maybe a dozen simultaneous users, I hope that won't be an issue.

It seems to me that XML-RPC combined with Python could provide a really nice all-in-one network architecture. You start up the client (a nice, portable wxPython app), it lets you log on to an arbitrary server, the client downloads a bunch of Python plugins which run in a restricted environment and store data on the server. So your e-mail app could be downloaded when you log in, then get your e-mails using XML-RPC over HTTP, which is a nice protocol in that it is easily encryptable with SSL and can be served with a standard program like Apache, instead of the IMAP server or whatever that's on the back-end. Send an e-mail, and an XML-RPC handler is invoked on the server which does all the SMTP stuff. There's the obvious difficulty that XML-RPC is completely stateless, which makes good authentication difficult if not impossible, but this could probably be overcome with some kind of automatically-added session key parameter. It could possibly be made to work with normal Web browsers somehow also.
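The "automatically-added session key parameter" mentioned above could look something like this sketch (all names and the HMAC scheme are assumptions for illustration, not part of XML-RPC itself): the server hands out a token at login, and a thin client-side wrapper prepends it to every otherwise-stateless call.

```python
# Sketch: bolt session state onto a stateless RPC protocol by adding a
# session-key parameter to every call.
import hmac
import hashlib

SECRET = b"server-side-secret"   # known only to the server in practice

def issue_session_key(user: str) -> str:
    # handed back by the server after a successful login
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

class SessionProxy:
    """Wraps a call function; prepends the session key to each invocation."""
    def __init__(self, call, session_key):
        self._call, self._key = call, session_key

    def invoke(self, method, *args):
        return self._call(method, self._key, *args)

def server_dispatch(method, session_key, *args):
    # the server re-derives the key to authenticate before dispatching;
    # the user "alice" is hard-coded only to keep the sketch short
    if session_key != issue_session_key("alice"):
        raise PermissionError("bad session key")
    return {"get_mail": lambda: ["msg1", "msg2"]}[method](*args)

proxy = SessionProxy(server_dispatch, issue_session_key("alice"))
print(proxy.invoke("get_mail"))
```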

If anyone understood that, I'd be very surprised. Maybe I'll code it this weekend and then I can show you guys instead of writing aimless comments :).

sounds cool! a piece of feedback (4.33 / 3) (#17)
by kellan on Sat Feb 10, 2001 at 11:17:20 AM EST

Wow, that sounds like a really neat project, I hope to see what you come up with. My only suggestion is.....abstraction!

You are going to want to put a couple of layers of abstraction in there unless you want to rewrite every 6 months for the next two years.

Obviously you want a middleware layer that handles the marshalling of requests and responses (as opposed to just a very thin layer around SQL queries, which is what so many people do). This allows you to do things like switch databases, use caches, and move to a clustered architecture. (And did I mention world peace?)
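That middleware layer amounts to coding against a small storage interface rather than raw SQL; a sketch of the shape (all names invented):

```python
# The rest of the application codes against OrderStore; the backend
# (RDBMS, cache, cluster) can then be swapped without touching callers.

class OrderStore:
    """The interface the application codes against."""
    def get(self, order_id):
        raise NotImplementedError

class InMemoryOrderStore(OrderStore):
    def __init__(self, rows):
        self._rows = rows
    def get(self, order_id):
        return self._rows[order_id]

class CachingOrderStore(OrderStore):
    """Drop-in replacement that adds a cache; callers are unchanged."""
    def __init__(self, backend):
        self._backend, self._cache = backend, {}
    def get(self, order_id):
        if order_id not in self._cache:
            self._cache[order_id] = self._backend.get(order_id)
        return self._cache[order_id]

store = CachingOrderStore(InMemoryOrderStore({1: "widget order"}))
print(store.get(1), store.get(1))  # second hit comes from the cache
```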

You also probably want to wrap a layer around your communication protocol, especially if you go with XML-RPC.

The landscape is changing. It used to be that XML-RPC and RMI were the only things accessible to us mere mortals, but now it looks like SOAP is standardizing and becoming easier to use, and who knows, someday CORBA might even be usable. (Or maybe we'll all live in a Groove-enabled world.)

XML-RPC has some problems. It's lightweight by design, but that does introduce some limitations: stateless like you said, but also weak negotiation and guarantees.

Then there is the Dave Winer factor. The man has always been on the cutting edge of this "chattering applications" space, but he is also often obnoxious, opinionated, and overbearing, and as much as he would like to claim XML-RPC is an open standard, it's largely just his (very successful) project. Which means it could be obsolete in 6 months depending on how many flame wars he gets into.


[ Parent ]

you should check out Zope (none / 0) (#54)
by seb on Sun Feb 11, 2001 at 06:27:04 AM EST

Sounds like an interesting idea. But if you are trying something like this *and* you already know Python, Zope is *exactly* the framework you are looking for.

The centrepiece of Zope is an object database, to which various interfaces are defined. The ZODB has a web interface (a URL is a method call on an object), a WebDAV interface, an FTP interface, and an XML-RPC interface. The ZODB supports transactions and subtransactions transparently, and you can make any class persistent within it. All you'd have to do is write the logic inside Zope, and you'd immediately have a middle tier to which you can connect using thin clients, browsers, etc.

On top of this, Zope has pretty good interoperability with SQL RDBMS. It supports a kind of SQL object that maps between ZODB objects and SQL result sets, and there are connectors available for postgres, mySQL, Oracle, LDAP, whatever.

It has several other cool features, such as a fine-grained security model, its own python web server (plays well with apache too, though), an insane take on inheritance called acquisition, a growing library of third party products (e.g. pop/imap clients, blogging components), and a simple clustering technology whereby you can separate your data into several ZODBs served by a single ZServer.

Finally, Guido and the rest of the old pythonlabs team are now working at DC (the makers of Zope), which I reckon makes it a pretty cool company.

I should point out Zope's bad sides too, though. The learning curve is just about vertical. The documentation has improved with a new O'Reilly book which is available online from zope.org, but the source remains an important way of understanding the whole thing. Because much of your code lives in the ZODB, you can't grep it, and source control is very fiddly. It really needs a developer-friendly IDE.

Sorry - this was written in a hurry, so I haven't had time to add links - but check it out!

[ Parent ]
Re: Hmm... (4.00 / 1) (#76)
by baka_boy on Wed Feb 14, 2001 at 01:21:11 AM EST

While I admire your ambition, you might be better off sticking with a browser that supports HTTPS, with the occasional applet or DHTML where you just can't stretch basic HTML to do what you need. Python is a beautiful language, but there's no reason to start from scratch when most of the UI elements you need for a standard database application (form elements, rich-formatted text, tabular reports, etc.) are already built into Netscape, IE, Mozilla, and Opera, and a decent application interface can be put together in HTML in a few hours.

I certainly share your appreciation for the 'wow' factor of XML, and have even recently found myself trying to use it in a number of situations where another data format would have been perfectly acceptable, or even preferable. The primary reason to use a technology like XML-RPC is for actual distributed computing, where 'live' objects (or their stubs, proxies, etc.) are being passed across the wire, and you need to preserve complex data types and relationships. Of course, there could be something really tricky that needs to be exchanged between your client and server tiers; I'm just basing this on your short description of the app.

P.S.: Besides, if you really want to be cool and use Python in the most cutting-edge way possible, you should base your UI around Mozilla, and use the Python XPCOM bindings that ActiveState put together for Komodo.

[ Parent ]

Why XML-RPC not SOAP ? (none / 0) (#78)
by dingbatlabs on Wed Feb 14, 2001 at 03:12:06 PM EST

Curious - I wouldn't touch XML-RPC (even allowing for my Winerphobia)

[ Parent ]

[OT] web application servers (2.40 / 5) (#19)
by kellan on Sat Feb 10, 2001 at 11:24:35 AM EST

This is slightly off-topic, but Carnage4Life provided a link to a 1999 article at ZDNet for his coverage of web application servers.

Does anybody know of a more up-to-date and comprehensive link, preferably one with a focus on Free Software? (as opposed to the BEAs of the world)

There is Zope, and a couple of projects written in Perl (though no clear leader there).

What else is there?


(Java) App servers (none / 0) (#50)
by Dacta on Sun Feb 11, 2001 at 01:23:12 AM EST

The two major open source Java app servers are Enhydra and JBoss. Enhydra has been around longer, but has its own programming model (although I believe the next version will support EJBs), while JBoss already supports EJBs.

[ Parent ]
thanks. now where to find an overview? (none / 0) (#55)
by kellan on Sun Feb 11, 2001 at 11:45:06 AM EST

I've seen JBoss referenced as an open source EJB container, but I've never quite been sure what it means to be an EJB container.

Do you know of any place where these are compared and contrasted?

thanks, Kellan

[ Parent ]

reviews (5.00 / 1) (#75)
by Dacta on Tue Feb 13, 2001 at 09:05:08 PM EST

TheServerSide.com has a lot of reviews of EJB app servers (including JBoss).

[ Parent ]
Application servers (none / 0) (#60)
by Aquarius on Sun Feb 11, 2001 at 07:11:30 PM EST

In the non-free world, there's SilverStream, although I'd recommend against it on the grounds that it's not very stable and it makes you use its horrible, horrible WYSIWYG page designer, and there's MS Site Server. Other people have already recommended Zope, Enhydra and JBoss. I think Tomcat might be an application server, but I wouldn't swear to it; something to check out...


"The grand plan that is Aquarius proceeds apace" -- Ronin, Frank Miller
[ Parent ]
no on tomcat (none / 0) (#62)
by kellan on Sun Feb 11, 2001 at 11:34:56 PM EST

Tomcat is not really an application server; it's just a JSP container, though the line between them can get pretty blurry.

Siiiigh, I was really hoping someone was going to point me to the application server weblog where all things app server were discussed.



[ Parent ]

Perl application server (none / 0) (#66)
by lachoy on Mon Feb 12, 2001 at 01:08:07 PM EST

One more: OpenInteract. It uses Apache, mod_perl and the Template Toolkit to implement a rich environment for both hardcore developers and HTML jockeys. It's only recently been released, although I'll email Andy to get it on the TT list of applications :-)

If you look through some of the credits for OpenInteract and a supporting app (SPOPS) you might find some interesting names...


M-x auto-bs-mode
[ Parent ]
RMI was a mistake (3.20 / 5) (#27)
by krokodil on Sat Feb 10, 2001 at 02:05:06 PM EST

I think Java RMI was Sun's mistake. Existing CORBA technology could do the same and even better. I think a little later on Sun realized that and included CORBA in the Java platform, as well as RMI over IIOP. CORBA is a multiplatform, language-independent, well-established platform, and the Java CORBA bindings are pretty good. If you are starting a new Java project - use CORBA, not RMI.

I don't know about that.... (4.50 / 2) (#31)
by Carnage4Life on Sat Feb 10, 2001 at 04:10:39 PM EST

...if you are using a pure Java solution, RMI isn't bad. Since Java already has reflection and supports dynamic class loading, using CORBA's DII and DSI interfaces to do dynamic discovery of methods in remote objects and the like is unnecessary overhead.

[ Parent ]
Not really. (5.00 / 2) (#41)
by pig bodine on Sat Feb 10, 2001 at 09:44:08 PM EST

RMI is a lot cleaner than CORBA. The specs for CORBA are pretty wide-reaching, and include functionality that few people have a use for (and some functionality that I have yet to see even one realistic application for!). There are a large number of competing CORBA implementations, of which very few implement the full spec. (I think Toshiba has one with the lot, but nobody else has bothered, that I recall.) While most of the different implementations adhere to the same methods for passing data back and forth, there are problems between implementations regarding making the initial connection. The specs for this section of CORBA appear to have been too far open to interpretation.

On the other hand, RMI doesn't suffer from those problems. It does no more than it needs to, and generally does it correctly. This arises from the fact that RMI was not produced by a collaboration of 500+ companies, each with different philosophies towards distributed computing and different requirements.

It may seem to be a drawback that RMI lacks multilanguage/platform compatibility, but remember, these technologies exist largely so that you can throw together a client/server system as quickly as possible, without having to fiddle about with the network code yourself. If it takes you more than a few hours at the most to convert an RMI program to a CORBA program (not counting rewriting a Java program in a different language...just converting Java RMI to Java CORBA), either you have one of the most complex distributed programs ever, or you aren't trying very hard. Essentially, once you have written the RMI program, you have done most of the work necessary to rewrite the program as a CORBA program.

I would say that, if you were tossing up between CORBA and RMI for an all-java solution, I'd go with RMI. It's not that much effort to convert to CORBA later, should it ever become necessary. (With the obvious exception that you would probably be rewriting one side of the program in a different language. This effort would be required whether you used CORBA or RMI to start with.)

[ Parent ]

yes, but (4.00 / 1) (#42)
by krokodil on Sat Feb 10, 2001 at 09:53:41 PM EST

(With the obvious exception that you would probably be rewriting one side of the program in a different language. This effort would be required whether you used CORBA or RMI to start with.)

But with RMI, switching one part of your application to a different language will require a rewrite of the communication layer for all parts, not only that one.

As to being too advanced - I do not see this as a limitation of the software technology. It might be a limitation for the consumer market.

[ Parent ]

All parts of the communication layer? (4.00 / 1) (#44)
by pig bodine on Sat Feb 10, 2001 at 10:14:39 PM EST

CORBA/RMI should be the entire communication layer. Client functions call RMI/CORBA stubs, which communicate with the server-side CORBA/RMI functions, which call the server functions. If you've done your RMI properly, it should usually be no great job to convert to CORBA.

There's nothing wrong with being too advanced, I admit. There are things wrong with being bloated and overcomplicated for most applications. The CORBA spec is both of these.

[ Parent ]

not bloated (none / 0) (#47)
by krokodil on Sat Feb 10, 2001 at 11:24:46 PM EST

I do not think CORBA is bloated. I use it extensively, and once you have learned the basics it is fairly simple. As to bloat, just look at the code generated by the IDL compiler - you can easily understand it without knowing many details about CORBA internals.

The spec itself might look quite big, but a good, detailed spec is a good thing.

[ Parent ]

Bloated is probably not the right word (4.00 / 1) (#49)
by pig bodine on Sun Feb 11, 2001 at 12:34:03 AM EST

I'm really referring to the fact that CORBA tends to provide more functionality than most apps need, and the ORBs are, as a result, more complex pieces of software than is necessarily called for. IDL, and the generated code are generally not bloated. (Or at least, the compiled binaries from IDL code are not...the actual source often contains more than you need, since the code needed for all types of CORBA servers is generated, though you will end up using only one version in the binary. I do not consider this a problem.)

I think we are not likely to agree on the CORBA/RMI issue, since we have different priorities in mind. I would say that, unless you have a genuine need for CORBA, use RMI; I think you would say the opposite.

[ Parent ]

my point exactly (none / 0) (#48)
by krokodil on Sat Feb 10, 2001 at 11:29:07 PM EST

This is my point exactly: you will have to switch to CORBA (or something else) once you want to rewrite part of your RMI application in C++ or some other language. You will have to rewrite the communication layer in _all_ parts, not just the one you are switching to a different language.

On the other hand, if you use CORBA, all other parts will be left intact.

[ Parent ]

RMI to CORBA not *that* easy (5.00 / 3) (#67)
by jonabbey on Mon Feb 12, 2001 at 02:36:39 PM EST

At least, not the last I looked. RMI's distributed garbage collection (essential for dead client detection, at least), composite object tree serialization, and automatic class transfer need to be supported on CORBA in order for code to be completely convertible without having to considerably change a heavily RMI-dependent design.

Also, it has never been clear to me how well the primitive CORBA types map to Java types. With RMI, you're guaranteed that a String object will support full unicode, that an int type will be 32 bit two's complement, etc. I imagine that code written to use CORBA would have to deal with the fact that the data types coming across the wire might not be able to handle the full range of the Java type (Unicode characters), or that a given type might not naturally map into one of Java's primitives.

Also, you'd have to be sure the thread-mapping for your CORBA implementation fit the RMI-centric assumptions of your server. With RMI, incoming calls are automatically spread out to different threads, providing an easy to use event-oriented driver for a multi-threaded server.

Orfali and Harkey's Client/Server Programming with Java and CORBA, now in second edition, talks about the Visigenic ORB for Java, which can do some of the RMI-style tricks (particularly object passing by value) over IIOP. A solution like that might make it easier to port RMI code to CORBA, but then you are still left with the necessity of only communicating with code that can reconstitute serialized Java object graphs.

I would say that RMI is different enough from CORBA that if you really take full advantage of RMI, you'll likely have to significantly rework the semantics of your network APIs to move to CORBA.


Ganymede, a GPL'ed metadirectory for UNIX and friends.
[ Parent ]
Good reasons for using RMI, perhaps? (5.00 / 1) (#69)
by pig bodine on Mon Feb 12, 2001 at 09:46:29 PM EST

Things like serialization are good reasons for using RMI if you have the option. You benefit from the fact that it isn't required to support multiple languages and multiple ideas of what distributed computing should do.

I agree that converting from RMI to CORBA isn't going to be quite as straightforward as taking out the RMI functions and slotting in the CORBA ones. You will have to write some functions to convert some datatypes. CORBA in Java uses "holder" classes for data types, and you'll have to fix your code to deal with these. Where there are problems (i.e. Unicode strings), the functions to deal with them shouldn't be that hard to write. A Unicode string is just a sequence of integers, after all.

Things like object serialization won't be insurmountable problems. You'll need to exercise a little forethought when writing your IDL, but it isn't usually going to be a major project.
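For readers who haven't met them, here is a rough, hand-written sketch of the holder pattern referred to above (hypothetical names of my own; the real classes that the IDL-to-Java mapping generates for `out`/`inout` parameters live in `org.omg.CORBA`, e.g. `IntHolder`):

```java
// A stand-in for the holder classes CORBA's IDL-to-Java mapping generates
// for out/inout parameters (the real one is org.omg.CORBA.IntHolder).
final class IntHolder {
    public int value;
}

class StatsService {
    // A hypothetical IDL operation `void getStats(out long count)` maps to:
    void getStats(IntHolder count) {
        count.value = 42; // the "out" result is written into the holder
    }
}
```

Java has no out-parameters, so the generated code wraps each one in a small mutable object; converting RMI code means threading these holders through call sites that previously just used return values.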

[ Parent ]

rmi over IIOP (none / 0) (#72)
by Galactus on Tue Feb 13, 2001 at 05:56:10 PM EST

Haven't you heard of RMI over IIOP? You can get references to an RMI server object from a regular CORBA client by changing just a couple of lines on the server side (not using rmiregistry but a regular CosNaming server, and inheriting from PortableRemoteObject instead of Unicast...). RMI programming is way simpler than CORBA programming, and given that I can use RMI objects from any platform (using RMI over IIOP), I will always code my server objects using RMI! :)

[ Parent ]
Great write-up (2.75 / 8) (#29)
by extrasolar on Sat Feb 10, 2001 at 03:41:44 PM EST

Great write-up. This is the kind of material I like to see on Kuro5hin.

But, a lot of that went over my head. Probably because I need to study it some more.

But I was wondering, where does .NET fit into this. Is it at all related?

From what i've been able to gather (2.50 / 2) (#32)
by aphrael on Sat Feb 10, 2001 at 05:03:14 PM EST

(and I've been avoiding it because I'm doing Linux work right now), .NET is sort of related. It's a VM-based system (similar to Java) where compilers compile to a common language runtime, in which all objects are binary compatible regardless of their originating language, and from there they are compiled into native code.

MS has promised that there will be a way to access CLR objects from COM, tho.

[ Parent ]

.NET (none / 0) (#86)
by chris mahan on Wed Feb 28, 2001 at 09:40:03 PM EST

From my understanding, .NET is about creating services hosted over the Web that can be accessed from within a computer program, a web page, or a web server. It will (I believe) rely primarily on SOAP. The thing to remember is that SOAP is made of XML, and that SOAP can contain *anything*.

[read Chapterhouse Dune by Frank Herbert]

[ Parent ]
One of the most confusing things (3.50 / 6) (#33)
by aphrael on Sat Feb 10, 2001 at 05:37:13 PM EST

about comparative distributed computing is the way the terminology changes. That which COM thinks is a stub, CORBA thinks is a skeleton; that which COM thinks is a proxy, CORBA thinks is a stub.


Other side of Distributed Computing (4.33 / 6) (#36)
by admiyo on Sat Feb 10, 2001 at 06:11:27 PM EST

I've done quite a bit of DCOM and Java RMI programming, but there is another piece of the distributed computing puzzle that should be addressed. All the technologies described above are for synchronous programming, yet asynchronous programming is actually very important. Usually this technology is known as messaging. The product that leads the field here (in age if nothing else) is MQSeries from IBM. MS has MSMQ. A slew of companies have products out there, such as Vitria, WebMethods, STC (or whatever they are called now), and Sun (Java MQ, not to be confused with JMS, which is the API).

In the asynch world, messages are sent from machine to machine and must be picked up by the target process, usually through some sort of callback mechanism. The best provide guaranteed once-and-only-once delivery, as well as non-persisted queueing for events that are only relevant to the active processes. Most have both a queued-input mode (whoever reads the queue first gets the message) and a publish-and-subscribe mode of operation.

One big difference between messaging and RPC style is type safety. Most messaging was originally a large array of bytes sent to a COBOL program. The modern stuff has a way to specify the type of message. JMS allows object messages, similar to how RMI serializes objects. I don't know if you can send class definitions via a message in most systems, but basically if you are sending messages to a remote machine, you have to agree on the message format ahead of time, and thus make sure both sides are in sync.

I've worked primarily with the Java Message Service, the Java standard API for messaging, although I did a little COM-related stuff with MSMQ, and flat-file-type messages with MQSeries a long while back. I have yet to see an open source messaging product. I've been tempted to build one, but my programming cycles are spent on work-related stuff.

An example of how this would be useful is separating a web site from the fulfillment and merchandising systems. Orders are sent to the order-processing queue; product updates are sent to the product-update queue. The web site writes to the first and reads from the second. All this can be done via RPC-style integrations, but then you have to handle the once-and-only-once stuff yourself. Publish and subscribe is powerful when you want to throw an event out that must be processed by many systems: a new customer places an order that goes to the OMS system, and a new customer record goes to the customer relationship system. Sorry for the business-focused example, but that is the world I've lived in for the past four years.
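The two delivery modes described above can be sketched with a toy in-memory broker (plain `java.util.concurrent`, purely illustrative; real products like MQSeries or a JMS provider add persistence and the guaranteed once-and-only-once delivery on top of this idea):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy broker showing the two messaging modes: point-to-point queueing
// and publish/subscribe. Not durable, not transactional - a sketch only.
class Broker {
    // Point-to-point: whichever consumer polls first gets the message.
    final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Publish/subscribe: every subscriber gets its own copy.
    private final List<BlockingQueue<String>> subscribers = new ArrayList<>();

    BlockingQueue<String> subscribe() {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        subscribers.add(q);
        return q;
    }

    void publish(String msg) {
        for (BlockingQueue<String> q : subscribers) q.add(msg); // fan out
    }
}
```

In the web-site example, the order-processing queue would be point-to-point (exactly one fulfillment worker should take each order), while product updates fit publish/subscribe (every interested system gets a copy).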

Open source JMS implementations. (4.00 / 3) (#38)
by Dan Walters on Sat Feb 10, 2001 at 06:37:42 PM EST

Check out SwiftMQ and OpenJMS .

[ Parent ]
Experiences? (none / 0) (#57)
by kellan on Sun Feb 11, 2001 at 12:17:08 PM EST

Have you played with either of these implementations? Would you recommend one?

I've been looking at the O'Reilly JMS book; it looks so cool!


[ Parent ]

SwiftMQ (5.00 / 1) (#58)
by admiyo on Sun Feb 11, 2001 at 01:36:23 PM EST

Your link doesn't work, but this does: http://www.swiftmq.com/

[ Parent ]
CosEvent (none / 0) (#68)
by krokodil on Mon Feb 12, 2001 at 05:42:19 PM EST

I work with CORBA and use the CORBA Event Service implemented in ORBacus.

[ Parent ]
STC now known as SeeBeyond (none / 0) (#77)
by eWulf on Wed Feb 14, 2001 at 01:23:35 PM EST

[ Parent ]
I never see the point with web apps (3.33 / 9) (#56)
by seb on Sun Feb 11, 2001 at 11:51:09 AM EST

J2EE, CORBA et al. - I can see their value for massive systems that require legacy big iron integration, huge amounts of processing, data warehousing, etc. The benefits for *big* web sites with these kinds of requirements are also obvious.

But for most web apps, they seem like a massive amount of overkill. I'd rather spend the time on something disposable, which actually works, yet everywhere I look web agencies have J2EE oozing out of their ears, and everyone talks in UML as if it will solve all their problems by itself.

N-tier application design, granular security, and caching have all been cited as examples of why distributed computing is cool, but IMHO you can achieve these goals more easily if you never go near entity beans and the like. The amount of ceremony associated with such projects is enormous: you end up with so many tiers it takes you about half an hour to figure out where your logic is getting executed, and debugging and profiling your app gets more and more complicated.

Even for a website serving tens of thousands per day, a 2 or 3 tier application using something like php and any db, combined with a bit of load balancing, should see you right. Sometimes the amount of abstraction, interfaces and design patterns I see in a J2EE application drives me a bit crazy. Perhaps I've just never really *got* it all, but I'd rather just get down and dirty, slap something together, and throw more hardware at it when it starts grinding.

I know what you mean (4.00 / 1) (#71)
by speek on Tue Feb 13, 2001 at 04:44:12 PM EST

But try reading Martin Fowler's "Refactoring: Improving the Design of Existing Code". It sure helped me "get it".

al queda is kicking themsleves for not knowing about the levees
[ Parent ]

I disagree (5.00 / 3) (#80)
by kostya on Fri Feb 16, 2001 at 10:30:03 AM EST

Perhaps using J2EE/EJBs/CORBA is overkill for a simple website or weblog. But in most "enterprise" situations, these technologies are a God-send for the competent.

Notice the competent. I have done dozens of large-scale projects and I've seen the good, the bad, and the downright ugly. The outcome is basically a reflection of the skills and experience of the developers involved, because N-tier and distributed programming are built on complex concepts. These concepts allow for higher abstraction and optimization, but they come at the cost of complexity. I would rate the concept areas as follows, in order of increasing complexity:

  • Object Oriented Programming
  • Distributed Programming
  • N-tier architectures

While the areas overlap a bit, they are definitely distinct "stages" where things get more difficult by at least an order of magnitude. This mostly has to do with issues of design and project size. OO design may not differ from procedural design very much, but due to the "shortcuts" that OO provides, many inexperienced developers will make huge design errors that kill the project as it grows larger. (It should be noted that this is true of large procedural/C projects as well, so it is probably equal--OO design skills are just harder to find.)

Distributed programming is even harder to get the hang of, but it is definitely a valid and effective solution. I've used it effectively on several projects, and it is an excellent way to maximize hardware and software performance, as well as to design effective fail-over techniques. But DP is extremely hard to wrap your brain around, and since any DP architecture worth its salt must use threads, you just got more complex by at least two orders of magnitude.

Then you have N-tier architecture design, which usually uses DP, but not always. N-tier is a great way to design a high-performance application that is extremely flexible and robust. It provides layers of logic, allowing you to isolate and optimize. It also allows you to increase load, much like a transaction monitor--you are effectively controlling access to the "store" (database, file, LDAP), so you can then pile more load on the system. But now performance issues and load balancing become a must, and the number of places a bottleneck can appear increases. Performance tuning and load balancing are very difficult disciplines that are more art than science--so we have yet another order of magnitude of complexity.

All this is to say that an N-tier architecture is built on very complex concepts that the average "Joe Coder" is probably not experienced in. Which leads to mistakes. Which leads to bad apps. But then this is a problem with many things. I, for one, have seen these concepts used effectively, and I would have hated to try and solve the problems without these concepts.

Veritas otium parit. --Terence
[ Parent ]
OT: Thank you, thank you, thank you! (4.00 / 3) (#70)
by jabber on Mon Feb 12, 2001 at 11:19:55 PM EST

This is completely off topic, but I missed the pre-posting period to drop in an editorial.

This is EXACTLY the sort of story that brought me to K5 in the first place. I hope that this is the beginning of a trend. Thanks C4L for a truly useful and interesting piece.

[TINK5C] |"Is K5 my kapusta intellectual teddy bear?"| "Yes"

Not for the weak of heart... (3.50 / 2) (#73)
by WWWWolf on Tue Feb 13, 2001 at 06:01:14 PM EST

<WWWWolf> Hmmm... "CORBA for Dummies" -- I guess that's a bit too advanced for my tastes. Does anyone know where I could find "CORBA for complete and total idiots"? =/

(Said that in #gimp some time ago...)

I like the idea of being able to call methods remotely, I really do. It's just that CORBA isn't exactly easy to learn because the things get confusing pretty soon.

OK, I have ORBit. And ORBit-C++, which for some reason won't compile its test files (I know why, and I'm unable to fix it right now...). I have the CORBA spec, but my eyes started to bleed or something when I even thought of reading the hefty PDF from the screen. I've seen MICO and some other CORBA things.

And I'm not getting any further. =(

Maybe I just had initial trouble with the idea that I need to do many steps here. I probably got scared when I realized it wasn't just #include <corba.h> and use some calls.

Thanks for this overview; I think I need to take a better look at CORBA now.

-- Weyfour WWWWolf, a lupine technomancer from the cold north...

You need two things to learn CORBA programming (4.00 / 1) (#74)
by cezarg on Tue Feb 13, 2001 at 06:42:15 PM EST

If you want to learn CORBA you need two things: This book and mico. Ignore the "advanced" in the title; the book covers the basics of CORBA. I'm sure it's the only CORBA book worth buying, because I own quite a few.

[ Parent ]
one more distributed computing technology... (2.50 / 2) (#81)
by pustulate on Tue Feb 20, 2001 at 03:35:16 PM EST

is the defined protocol. Why go through all the hassle of learning CORBA, COM, or RMI when you can just write up your protocol, provide some C/Java libraries, and stick up a test server? All the common internet protocols can be thought of as a type of remote procedure call, with the server piece being a non-local subroutine.

The above technologies attempt to abstract away the over-the-wire stuff, but at a cost in implementation complexity. Only Java, IMO, adds any real value, because you can instantiate non-local classes (neat!). I don't know much about DCOM, but with CORBA you're basically tied to one implementation, because interoperability between CORBA implementations is low (from what I've heard). It's also complicated.

So - if you don't need the heavyweight infrastructure or learning curve of the distributed frameworks, use a message-passing mechanism.
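The "defined protocol" approach can be sketched in a few lines (a hypothetical one-message `ADD a b` protocol of my own over a plain socket; no ORB, no stubs, just an agreed wire format):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// A hand-rolled line-oriented protocol: the client sends "ADD a b",
// the server replies with the sum. The server is the non-local subroutine.
public class TinyProtocol {
    static void serve(ServerSocket ss) throws IOException {
        try (Socket s = ss.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()));
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            String[] p = in.readLine().split(" ");      // e.g. "ADD 2 3"
            out.println(Integer.parseInt(p[1]) + Integer.parseInt(p[2]));
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket ss = new ServerSocket(0);          // ephemeral port
        Thread t = new Thread(() -> {
            try { serve(ss); } catch (IOException ignored) {}
        });
        t.start();
        try (Socket c = new Socket("localhost", ss.getLocalPort());
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            out.println("ADD 2 3");
            System.out.println(in.readLine());
        }
        t.join();
        ss.close();
    }
}
```

The trade-off is exactly the one described above: you write the marshalling and the socket handling yourself, but you carry no ORB, no IDL compiler, and no framework learning curve.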

Another comparison of CORBA and DCOM (4.00 / 1) (#85)
by jonathanclark on Fri Feb 23, 2001 at 09:17:11 PM EST

A while back I researched the differences between CORBA and DCOM. The results are here: http://www.jonathanclark.com/diary/dcom_corba/
