Kuro5hin.org: technology and culture, from the trenches
Why I Like PLT Scheme

By jacob in Technology
Wed Mar 17, 2004 at 07:19:00 PM EST
Tags: Software

When I tell programmers that my favorite programming language is Scheme, they tend to react as though I'd told them my favorite snack were packing styrofoam or my favorite music were junior high orchestra recordings: those who can even bring themselves to believe me mostly wonder what on Earth I could be thinking. Some wonder if I've ever programmed in a real language like C or C++ (I have), if I'm one of those academic nuts who cares more about proving theorems about programs than actually using them for real work (I'm not), or if all those parentheses have rotted my brain (probably). So what could make me like this weird language with no for loop and so many parentheses it makes you cross-eyed?

In this article, I'll show you by way of example. By the time I'm finished I hope I'll also have convinced you that you might want to give it a shot yourself.


I'm going to assume you're a working programmer who comes from a C, C++, Java, or similar background, so I'm not going to go into the details of how Scheme differs from other more-or-less exotic languages like Common Lisp, Standard ML, or Haskell, as interesting and worthy of discussion as those languages are. Suffice it to say that many of the ideas I present here as features of Scheme are also present in some other languages.

Furthermore, there are many Scheme systems out there, and they tend to do some important things in different ways. In this article I'll be talking about a particular distribution, PLT Scheme; the code in this article will not run in other Schemes without some modification. Most serious Scheme systems have equivalents or near-equivalents to everything I use here, but the details differ.

Those disclaimers out of the way, let's get on to the good stuff.

Scheme head-first

I figured the best way to explain why I like Scheme would be to show something I've actually had occasion to use in real work: a multithreaded port scanner. Here it is, written basically the way I'd really write it if I needed to:


; scan : string[hostname] (listof int) -> listof (list int string)
; gives the number and well-known service name of each port in the given
; list that is open on the given host
(define (scan host ports)
  (map
    (lambda (p) (list p (port->name p)))
    (open-ports host ports)))

So what does this do? I've defined a function called scan that takes two arguments: a string called host and a list of numbers called ports. It gets every open port using the function open-ports we'll define later and returns each one paired with its service name. To understand how it does it, we're going to need to make a brief diversion into looping.

A brief diversion into looping

Scheme doesn't do looping the way mainstream languages do. That leads a lot of people to think that Scheme doesn't do looping at all, which isn't true — Scheme actually has a very sophisticated and flexible way of writing loops that just takes a while to get used to (no pun intended).

The "assembly language" of loops in Scheme is recursion. Recursion in Scheme is not at all inefficient and does not waste memory, and frequently leads to clear and elegant solutions to problems. For instance, here's an implementation of a range function (similar to features of Python and Perl) that takes two numbers and builds a list of all the numbers between them in order, written recursively:


(define (range low high)
  (cond
    [(> low high) null]
    [else (cons low (range (+ low 1) high))]))

This might look alien at first, but there's a very compelling logic to what the function says: a range where the low number is higher than the high number is just an empty list (represented by null) and any other range is just a list consisting of the low number followed by another range with the same high point but a low point that's 1 bigger (cons is a function that takes an element and a list and returns a new list consisting of the given element followed by the given list). It's a logically elegant way of thinking of the problem that also happens to be an executable program. (Of course the fact that it's elegant doesn't mean it's easy to write or read. It turns out, though, that with a little practice you can actually write recursive functions more easily than their iterative counterparts. See How to Design Programs for an online textbook that teaches that skill.)
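
If you want to try the recursion yourself, here's the same function written with plain parentheses and '() in place of PLT's null and square brackets, so it should run in most Schemes:

```scheme
; range : number number -> (listof number)
; builds the list (low low+1 ... high), or the empty list if low > high
(define (range low high)
  (cond
    ((> low high) '())
    (else (cons low (range (+ low 1) high)))))

(range 1 5)  ; evaluates to (1 2 3 4 5)
(range 3 2)  ; evaluates to ()
```

Tracing a call makes the logic concrete: (range 1 5) conses 1 onto (range 2 5), which conses 2 onto (range 3 5), and so on until (range 6 5) returns the empty list.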

Of course recursion isn't the end of the loop story. I say recursion is the assembly language of loops because while it's the fundamental construct that allows you to get looping to happen in Scheme, real Scheme programmers frequently use constructs built on it rather than using it directly. Since declaring a nameless function in Scheme is a simple affair that needn't even take up a full line of text, a common idiom is to write loops as functions that express the essence of a particular style of loop and take another function to use as the body.

The function map I used in the scan function is one example of such a loop. It accepts a function and a list as input, applies the function to each individual element of the list, and returns a new list containing the results. So for instance (map add1 (list 0 1 2 3)) would yield (list 1 2 3 4). It's built in, but if we wanted to implement it, we might write something like:


; map : (X -> Y) * (listof X) -> (listof Y)
(define (map func a-list)
  (cond
    [(null? a-list) null]
    [else (cons (func (car a-list)) (map func (cdr a-list)))]))

where car extracts the first element of a list and cdr returns the rest of the list without the first element. (Sorry if I just did somebody's homework, by the way.)
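
To make the car/cdr plumbing concrete, here's the same definition under the hypothetical name my-map (renamed so it won't shadow the built-in), along with a sample call:

```scheme
; my-map : (X -> Y) * (listof X) -> (listof Y)
; identical to the map sketch above, renamed so it won't shadow the built-in
(define (my-map func a-list)
  (cond
    ((null? a-list) '())
    (else (cons (func (car a-list))
                (my-map func (cdr a-list))))))

(my-map (lambda (n) (* n n)) (list 1 2 3 4))  ; evaluates to (1 4 9 16)
```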

With this in mind, we can get back to our port scanner.

On with the show

So what's going on with the scan function? It's mapping a function that takes a port number and produces a list containing a port number and a service name over the list of all open ports (the result of calling open-ports). What we'll get back is a list consisting of port-number/service-name pairs, which is exactly what we want.

Next we need to implement open-ports. Here it is:


(require (lib "list.ss")) ; for filter
 
; open-ports : string[hostname] (listof int) -> (listof int)
; returns the sublist of numbers that represent open ports on the
; given host, performing all checks concurrently
(define (open-ports host ports)
  (filter (lambda (x) (not (eq? 'closed x)))
          (threaded-map
           (lambda (port) (if (can-connect? host port) port 'closed))
           ports)))

The open-ports function checks every port in its list in parallel to see if they are open, and, when it's done, uses a library function called filter to take the closed ports out of the list. It gets all the open ports concurrently with the help of two new functions: threaded-map and can-connect?. Since can-connect? is easier, I'll explain it first.
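
The mark-then-filter idiom is worth seeing in isolation. Here's a sequential sketch with a hypothetical stub in place of the real connection test, and a hand-rolled stand-in for filter so the snippet is self-contained (all the names here are mine):

```scheme
; keep-if : (X -> boolean) * (listof X) -> (listof X)
; a minimal stand-in for the filter library function
(define (keep-if keep? l)
  (cond
    ((null? l) '())
    ((keep? (car l)) (cons (car l) (keep-if keep? (cdr l))))
    (else (keep-if keep? (cdr l)))))

; stub-can-connect? : pretend only ports 22 and 80 are open
(define (stub-can-connect? port)
  (or (= port 22) (= port 80)))

; open-ports : a sequential, stubbed version of the article's open-ports;
; mark closed ports with the symbol 'closed, then filter them out
(define (open-ports ports)
  (keep-if (lambda (x) (not (eq? x 'closed)))
           (map (lambda (p) (if (stub-can-connect? p) p 'closed))
                ports)))

(open-ports (list 21 22 23 80))  ; evaluates to (22 80)
```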

can-connect?: exceptions and sockets

The can-connect? function isn't bad at all:


; can-connect? : string[hostname] number -> bool
; determines if the host is listening on the given port
(define (can-connect? host port)
  (with-handlers ([exn:i/o:tcp? (lambda (e) #f)])
    (let-values ([(ip op) (tcp-connect host port)])
      (close-input-port ip) (close-output-port op) #t)))

It's just a small wrapper around the tcp-connect library function, which returns an input port and an output port if it succeeds at connecting and raises an exception if it doesn't. PLT Scheme has a built-in exception mechanism very similar to Java's or C++'s in that any code can raise an exception, and that exception propagates up the stack until it reaches an appropriate handler (specified by try/catch in C++ or Java, with-handlers in PLT Scheme). The can-connect? function calls tcp-connect and returns #t (true) if it succeeds. If tcp-connect raises an i/o:tcp exception (which it will do if it can't establish a connection), with-handlers catches that and returns #f (false).
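
The with-handlers pattern is easy to try on something that doesn't need a network. This sketch runs in Racket (the current name for PLT Scheme); exn:fail?, Racket's catch-all failure predicate, stands in for the more specific exn:i/o:tcp? above:

```scheme
; safe-div : number number -> number-or-#f
; converts a runtime error into #f, just as can-connect? does
(define (safe-div a b)
  (with-handlers ((exn:fail? (lambda (e) #f)))
    (/ a b)))

(safe-div 6 3)  ; evaluates to 2
(safe-div 1 0)  ; evaluates to #f instead of raising an error
```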

threaded-map: concurrency, finally

We want threaded-map to behave just like map, which we've seen before, except that while map applies a function to each element sequentially, we want threaded-map to apply the function to each element in parallel. Here's code to do that:


; threaded-map : (X -> Y) * (listof X) -> (listof Y)
; maps the given function over the given list with each computation
; done in parallel
(define (threaded-map f l)
  (let ((cs (map (lambda (x) (make-channel)) l)))
    (for-each (lambda (x c) (thread (lambda () (channel-put c (f x))))) l cs)
    (map channel-get cs)))

PLT Scheme uses concurrency mechanisms taken from Concurrent ML to create and synchronize threads. Rather than locks on shared memory, the CML primitives allow threads to send values to each other over channels and block until those values are transmitted. In this case, threaded-map creates a new channel for each item in the input list, spawns a thread for each item that runs the input function on it and then sends the result over the appropriate channel. Once all threads are spawned, the function builds a new list by getting values out of each channel in order.
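
Here's threaded-map as a self-contained snippet, applied to a pure function so the result is easy to check. It runs in current Racket, where make-channel, channel-put, channel-get, and thread still go by these names:

```scheme
; threaded-map : (X -> Y) * (listof X) -> (listof Y)
(define (threaded-map f l)
  (let ((cs (map (lambda (x) (make-channel)) l)))
    ; one thread per element, each reporting back on its own channel
    (for-each (lambda (x c) (thread (lambda () (channel-put c (f x))))) l cs)
    ; reading the channels in list order keeps the results in order
    (map channel-get cs)))

(threaded-map (lambda (n) (* n n)) (list 1 2 3 4))  ; evaluates to (1 4 9 16)
```

Because the main thread reads the channels in list order, the results come back in order even though the worker threads finish whenever they happen to finish.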

With that done, we've got everything we need to detect open ports, and all that's left is to look up service names.

port->name: libraries, HTTP communication and regular expressions

There's a little complication in implementing port->name. If I only wanted my program to run on Unixish systems I might just call the getservbyport C function through PLT Scheme's foreign function interface (it only takes a few lines, actually), but I'd like it to run on Windows and Macs and various other platforms that don't have access to getservbyport (or the /etc/services file it uses). Fortunately, the Internet Assigned Numbers Authority has an online list of ports in the same format as /etc/services, so we'll make our program fetch that if a local file isn't available. Here's the code:


(require (lib "url.ss" "net")) ; for get-pure-port and string->url
 
(define NAMES
  (let ([ip (if (file-exists? "/etc/services")
                (open-input-file "/etc/services")
                (get-pure-port (string->url "http://www.iana.org/assignments/port-numbers")))]
        [nametable (make-hash-table)])
    (while m (regexp-match "([^ \t\r\n]+)[ \t]+([0-9]+)/tcp[ \t]+([^\r\n]+)" ip)
       (hash-table-put! nametable (string->number (list-ref m 2)) (list-ref m 1)))
    nametable))
 
(define (port->name p) (hash-table-get NAMES p (lambda () "unknown")))

After loading in a library that defines some useful HTTP-related functions, we define the global variable NAMES to be a hash table that maps numbers to strings. To do that, we get a new input port that's connected to either /etc/services (if it exists) or to the IANA's web page, and we also make an empty hash table. Then we use a while loop to iterate over the input port getting each port-number/name combination (as determined by a regular expression) and add each to the hash table, and finally we return the hash table, which becomes the value of NAMES. Unfortunately, Scheme has no built-in while loop, but since it's such a nice fit for this problem we'll just define it ourselves.
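
Hash-table APIs vary across Schemes, but the shape of the NAMES lookup is easy to see with a plain association list. This portable sketch uses a hypothetical hard-coded table standing in for the parsed /etc/services data:

```scheme
; a tiny stand-in for the NAMES table, as an association list
(define NAMES
  (list (cons 22 "ssh")
        (cons 25 "smtp")
        (cons 80 "http")))

; port->name : number -> string
; assoc plays the role of hash-table-get; the "unknown" default
; mirrors the failure thunk in the real version
(define (port->name p)
  (let ((entry (assoc p NAMES)))
    (if entry (cdr entry) "unknown")))

(port->name 22)   ; evaluates to "ssh"
(port->name 137)  ; evaluates to "unknown"
```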

while: the power of macros

We've implemented lots of loops so far, but while is especially tricky because of the way we want to use it. In our previous loops we required users to pass in an anonymous function, and furthermore none of our loops introduced their own variable names. We could probably make a variant of the while loop that conformed to those expectations, but I'm stubborn and I'd rather have Scheme conform to me than to have me conform to it. If I were programming in Java or C, I'd just be out of luck, but Scheme lets me have my way. Scheme has a very unusual facility called macros (a distant cousin of C macros, the well-known black sheep of the family) that take a piece of a parse tree as input and produce another one as output, allowing programmers to automatically rewrite code as they see fit. They've got two key features that make this source-code rewriting tractable and useful: First, they operate on syntax trees, not on characters. Second, they know enough about the language to automatically avoid unintentional name collisions, a property we'll see more about in a minute.

We can use macros to make a while loop that will behave just like I want it to. Here we go:


(define-syntax (while stx)
  (syntax-case stx ()
    [(_ var test body)
     (identifier? #'var)
     #'(let loop ((var test))
         (when var body (loop test)))]))

It's scary-looking, but don't fear, it's really pretty simple once you get used to the way macros are written. There's some junk at the top that says that this is a macro and sets us up to do some work, but the important stuff begins on the third line, which specifies a pattern for what a use of while should look like. In particular, each use should look like

(while [some-variable] [some-test] [some-body])

where the variable has to literally be a legal variable name but the test and the body can be any old Scheme expression. It also gives each of these pieces a name (var, test, and body, respectively). Those names are important for the rest of the macro: starting with the #'(let ...) line, the whole rest of the code isn't actually code, but rather a template for what the output syntax should be, and it's returned literally except that every time var, test, or body appears in the template, whatever actually appeared in that blank is substituted in instead.

Once you get the hang of it, macros like these are pretty simple. For example, our use of while in defining the NAMES table turns into the following after Scheme expands it (the pieces we wrote in the original while form, the regexp test and the hash-table-put! body, are interleaved with code introduced by the macro):


(let loop ((m (regexp-match "([^ \t\r\n]+)[ \t]+([0-9]+)/tcp[ \t]+([^\r\n]+)" ip)))
  (when m
    (hash-table-put! nametable (string->number (list-ref m 2)) (list-ref m 1))
    (loop (regexp-match "([^ \t\r\n]+)[ \t]+([0-9]+)/tcp[ \t]+([^\r\n]+)" ip))))

Of course we won't actually ever see this code or type it in — Scheme generates it for us behind the scenes when it runs our program. What the expanded code actually does is use one of Scheme's built-in loops, the so-called "named let," to run the test expression and bind it to the variable m. The named let syntax lets you type in an identifier (when I program, I almost always use the identifier loop, but you could use charlie or frobnosticate or whatever) and a body, where in the body invoking the identifier as a function makes the program run another iteration of the loop. In the case of while, if m isn't false (in Scheme anything that isn't #f is true), it evaluates the body with m still bound to the test's result and then loops, checking the test again and again until eventually m is #f.
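
Named let is useful well beyond macro expansions. Here's a freestanding example that should run in any Scheme, summing 1 through n by looping over two variables:

```scheme
; sum-to : number -> number
; adds 1 + 2 + ... + n, carrying the running total in acc
(define (sum-to n)
  (let loop ((i 1) (acc 0))
    (if (> i n)
        acc
        (loop (+ i 1) (+ acc i)))))

(sum-to 10)  ; evaluates to 55
```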

One of the neatest things about Scheme macros is that variable names actually know whether they came from code written by the user or from macro-generated code, and the two kinds don't interfere with each other. So, taking the macro above for example: the identifier loop is introduced by the macro, so even if the name loop also appeared in the user's own code (for instance because the while loop was nested inside some other loop that used the let loop convention) the two names wouldn't interfere with each other. Because of that property, it's relatively simple and easy to make new syntax that you can think of as being built-in, and it allows Schemers to write huge, complex macros like object-oriented programming syntax and embedded languages that make Scheme into a programmatic PowerPoint-like slideshow definition language.
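
You can watch hygiene at work with a two-line macro written in syntax-rules, the portable cousin of the syntax-case system used above. The macro binds its own temporary named t, yet a user's variable that happens to share the name passes through untouched:

```scheme
; my-or2 : like (or a b), but evaluates a only once
(define-syntax my-or2
  (syntax-rules ()
    ((_ a b) (let ((t a)) (if t t b)))))

(define t 'mine)
(my-or2 #f t)  ; evaluates to mine: the macro's t doesn't capture ours
```

An unhygienic macro system would expand (my-or2 #f t) into (let ((t #f)) (if t t t)) and hand back #f; Scheme renames the macro's t behind the scenes, so the user's t survives.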

Anyway, since the while loop was the last thing we needed to write, we're done and we've got ourselves a complete working port scanner. Hooray!

Wait a minute, where's the I/O?

I don't need any, though of course if we wanted to we could easily add it in. Scheme, like Python, is designed around an interactive read-eval-print loop. I can open DrScheme, type in all the definitions, and then just use my function however I want. Here's an example run:


> (scan "localhost" (range 1 100))
((22 "ssh") (25 "smtp"))

Finishing touches

Now that we're finished with the scanner, we should put the finishing touches on and bundle it up as a library. That's pretty easy: in a file called scanner.ss, we'd write


(module scanner mzscheme

  (require (lib "contract.ss"))

  (provide/contract
     (scan (string? (listof natural-number?)
           . -> .
           (listof (list/p natural-number? string?)))))

  [... all the above code ...])

This declares a library module that provides only the scan function. Furthermore, it attaches a contract to scan, sort of an optional type-check that occurs at run-time rather than compile-time and can be much more flexible for that reason. Every time the function gets invoked, the contract verifies that it's being given the proper arguments (in this case a string and a list of natural numbers) and that when it returns it's giving back the right sort of thing (in this case a list of lists where each sublist consists of a natural number and a string). These contracts don't force static typing on programmers but let them get a lot of the benefit of static typing anyway. In my experience judicious use of contracts is essential for large software projects.
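
To get a feel for what that contract enforces without setting up a module, here's a hypothetical hand-rolled equivalent: a wrapper that checks argument and result shapes on every call. All the names here (all?, nat?, checked-scan, fake-scan) are mine, not part of contract.ss:

```scheme
; all? : (X -> boolean) * (listof X) -> boolean
(define (all? ok? l)
  (cond
    ((null? l) #t)
    ((ok? (car l)) (all? ok? (cdr l)))
    (else #f)))

(define (nat? n) (and (integer? n) (>= n 0)))

; checked-scan : wraps a scan-like function, checking the argument and
; result shapes on every call, roughly as provide/contract would
(define (checked-scan scan)
  (lambda (host ports)
    (unless (string? host)
      (error "scan: expected a string hostname"))
    (unless (all? nat? ports)
      (error "scan: expected a list of natural numbers"))
    (let ((result (scan host ports)))
      (if (all? (lambda (entry) (and (nat? (car entry))
                                     (string? (cadr entry))))
                result)
          result
          (error "scan: promised a (listof (list nat string))")))))

; try it on a fake scan that claims port 22 is open
(define fake-scan
  (checked-scan (lambda (host ports) (list (list 22 "ssh")))))

(fake-scan "localhost" (list 22 80))  ; evaluates to ((22 "ssh"))
```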

Further reading

There's lots more reading you can do that elaborates greatly on issues I've barely scratched the surface of in this article. Two great books that introduce programming using Scheme are available free online: Structure and Interpretation of Computer Programs by Abelson, Sussman, and Sussman, and How to Design Programs by Felleisen, Findler, Flatt, and Krishnamurthi. Both cover the foundations of Scheme well and are very accessible.

For more advanced topics, I'd recommend you go to (read scheme), an online bibliography of Scheme-related research. There you'll find much deeper treatments of many of the topics I've talked about here as well as comprehensive discussions of important topics I've omitted entirely (continuations, web programming, object oriented programming, efficient language implementation, behavioral contracts, and many more).

Of course you can also read the de facto Scheme standard, called the R5RS, online. Another good starting point is www.schemers.org.

Executing the code

If you actually want to execute this code, install DrScheme (when it asks what language level to use, set it to "(module ...)"), open it, and copy all the code from this article into the definitions window (the one on top). Hit the "execute" button and you're off. You can use any of the functions from the program directly in the interactions window.

Some conclusions

So, we've written a multithreaded port scanner, nicely bundled up as a library for use in our other programs. In the process we've touched on a lot of my favorite features of Scheme (and PLT Scheme in particular): its clever loop constructs, its higher-order functions, concurrency, macros, modules, and contracts. The interesting thing about all these features is that at nearly every stage of the process Scheme gave me the ability to say what I meant, or failing that gave me the tools to let me build a way to say what I meant. Using that feature I could make the language do things my way rather than it making me do things its way. Schemers take full advantage of that ability, and it's hard to find a group of programmers more stubborn in their pursuit of writing down exactly the concept they are trying to convey rather than having to clutter their programs with extraneous concepts.

Those of you who've seen Scheme before, especially as part of a college class, might be surprised at how different this code looks from what you saw there. That's another interesting point: people frequently learn Scheme as though it were a functional language. It's true that Scheme lets you program in a functional style, but I think it's really more properly thought of as the quintessential multiparadigm language than a functional language. You can write declarative, imperative, object-oriented, aspect-oriented, logical-reasoning, lazy, strict, functional-reactive, or any other kind of programs you want, all using exactly the syntax and semantics you want, all in Scheme, all working together. To my way of thinking, that's a great reason to like Scheme. Give it a shot.

Related Links
o Common Lisp
o Standard ML
o Haskell
o PLT Scheme
o How to Design Programs
o Concurrent ML
o Internet Assigned Numbers Authority
o an online list of ports
o Structure and Interpretation of Computer Programs
o (read scheme)
o online
o www.schemers.org
o DrScheme
o Also by jacob


Why I Like PLT Scheme | 236 comments (210 topical, 26 editorial, 3 hidden)
Great - what do you do with it? (none / 2) (#1)
by waxmop on Wed Mar 17, 2004 at 09:59:08 AM EST

The only real-life applications I've found that use scheme are the reports in GnuCash, which are now being ported to Perl, because nobody knows scheme.

What is scheme good for? CGI? GUI stuff? Text parsing? Multi-threaded real-time crap? Climate simulations? natural language processing?

I'm not trying to be sarcastic; I'm really curious.
--
We are a monoculture of horsecock. Liar

lots of things (3.00 / 8) (#2)
by jacob on Wed Mar 17, 2004 at 10:32:07 AM EST

I don't think Scheme is in the "languages with a narrow purpose" category that Perl and PHP are; it's more of a general-purpose application language along the lines of Python. I use it for writing GUI apps and CGI. PLT Scheme is awesome for CGI; I'd take it any day over any of the other languages I've seriously tried for that purpose (Python, Perl, PHP, ColdFusion, PL/SQL). It's also very good for writing embedded languages, for the reasons I mentioned in the article: it's easy to build application-specific abstractions so you can make your own little language inside Scheme for scripting computer-game interactions, writing slide shows, exploring programming language semantics, or whatever. This facility is exploited heavily by Lisp and Scheme users, hence jokes like "Out of Work Lisp Programmer: Will Write Programs that Write Programs that Write Programs for Cash" and so on.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Scheme CGI (none / 1) (#138)
by city light on Thu Mar 18, 2004 at 04:33:02 PM EST

I'd like to try using it for web dev too, but as I don't run my own webserver it's rarely an option. :(

[ Parent ]
good question (3.00 / 4) (#34)
by horny smurf on Wed Mar 17, 2004 at 06:32:43 PM EST

Gimp (anyone remember that?) originally used scheme for the scripting language (script fu). Many of the scripts were more iterative though (eg, abundant use of letrec).

Nowadays, I believe there are perl and python bindings.

For a while, guile (gnu ubiquitous interpreted language environment or something) was supposed to be the next big thing. guile was a scheme implementation.

I think I heard somewhere that orbitz was written in scheme (or lisp), but I wouldn't vouch for it.



[ Parent ]

some of orbitz is written in scheme (none / 2) (#38)
by jacob on Wed Mar 17, 2004 at 08:06:47 PM EST

and some games have started using it (Crash Bandicoot and Jak and Daxter come to mind).

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Very close: Crash & Jax use lisp (none / 1) (#72)
by spasticfraggle on Thu Mar 18, 2004 at 01:49:38 AM EST

PR guff from Franz

--
I'm the straw that broke the camel's back!
[ Parent ]
Both were written in Common Lisp IIRC (none / 1) (#80)
by voblia on Thu Mar 18, 2004 at 05:56:26 AM EST

Yet I am pretty sure about it.
Soldier boy made of clay now an empty shell ...
[ Parent ]
lisp again. (none / 0) (#225)
by voltairemodern on Mon Mar 22, 2004 at 11:55:31 AM EST

Come on, everyone who's coding professionally uses lisp. Let's not forget that the venerable emacs was written in lisp as well. How can we forget?

[ Parent ]
Try ITA and Common Lisp (none / 0) (#235)
by voodoo1man on Wed Mar 24, 2004 at 10:56:11 PM EST

ITA's airfare search engine is mostly written in Common Lisp; they provide it to Orbitz and a few other ticket merchants. Naughty Dog uses a custom dialect of Lisp (it's supposed to be very Scheme-like, with multiprogramming based on some kind of cooperative multitasking, or so it looks from some code snippets I saw in a Game Developer mag article on them). They use Common Lisp to write the compiler and tools for it.

[ Parent ]
It's used for stuff (none / 1) (#236)
by voodoo1man on Wed Mar 24, 2004 at 11:09:48 PM EST

For example, DrScheme (the graphical PLT Scheme environment) is written in Scheme, and it seems to do GUI and text parsing stuff pretty well. They also have (had?) some support for making programming language interpreters/compilers (Lisp generally tends to be very good at this). Recently, Chris Double has been blogging about using a Scheme that runs on top of the JVM for web apps; that way, he can package up web sessions as continuations, and serialize them or send them to another machine on the network. I know Transmeta used (maybe they still do) Scheme for chip verification. There have also been more than a few robots at MIT and Northeastern programmed in Scheme.

[ Parent ]
Why I like perl (2.66 / 6) (#4)
by czth on Wed Mar 17, 2004 at 10:59:32 AM EST

package Scan;
use base qw(Exporter);
our @EXPORT = qw(scan);

use IO::Socket::INET;

sub scan {
  my $host = shift;
  map { [$_ => scalar getservbyport $_] } open_ports($host,\@_)
}

sub open_ports {
  my ($host,$ports) = @_;
  grep IO::Socket::INET->new(PeerAddr => "$host:$_", @$ports)
}

1;

Scheme may be beautiful, but perl's pragmatic.

czth

Yes, there's a misplaced ) -nt (none / 1) (#5)
by czth on Wed Mar 17, 2004 at 11:00:03 AM EST



[ Parent ]
is that multithreaded? (2.60 / 5) (#6)
by jacob on Wed Mar 17, 2004 at 11:04:52 AM EST

I suspect not. Also it uses getservbyport rather than a roll-your-own version. The Scheme version with those modifications is basically the same as that, except the syntax for detecting that a port is open is a little nastier (though does yours handle closing the ports in a timely manner so that this program would play well in a larger app?).

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Answers (2.83 / 1) (#10)
by czth on Wed Mar 17, 2004 at 11:14:50 AM EST

No, it's not multithreaded, perl threads are admittedly sort of unreliable (they may be better now)... this is a feature of the implementation, not the language. Score one for whoever wrote your scheme implementation.

Of course it uses getservbyport, I'm not much for reinventing the wheel. Since getservbyport is a primitive, non-Unix versions of perl will probably implement it in some useful manner.

The port is closed when the IO::Socket::INET object is destroyed, and while reference counted GC may not be ideal for a lot of things, it does mean in perl's case that objects get destroyed as soon as they (that is, the last reference to them of course) go out of scope.

By the way, very nice article; I wasn't trying to say "perl is better", just to show another version - and also how similar the perl version is, since of course it borrows map, grep, etc. from functional languages, but it emphasises pragmatism over being able to build everything from first principles.

czth

[ Parent ]

The reason he asked about multithreading ... (2.85 / 7) (#27)
by tmoertel on Wed Mar 17, 2004 at 02:38:09 PM EST

... is because without it, a portscanner becomes pretty much useless for the real world. Lots of firewalls silently drop packets to non-public hosts/ports, and this has the effect of stalling the TCP connection attempt until it eventually times out. If you're scanning several thousand ports, you must then wait for several thousand consecutive timeouts to occur, which could take hours.

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
differences (none / 2) (#33)
by ucblockhead on Wed Mar 17, 2004 at 06:11:45 PM EST

Which highlights the danger of language comparisons. If you aren't talking about real-world issues, there's no point.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
Except, of course, (none / 3) (#35)
by it certainly is on Wed Mar 17, 2004 at 07:41:02 PM EST

only the most naive portscanners make actual connect() calls. Real portscanners roll their own TCP packets and fire them off. It's pretty simple to make your own TCP state machine and you have no need to invoke resource-wasting nonsense like threads to get around your poor choice of system calls that block.

kur0shin.org -- it certainly is

Godwin's law [...] is impossible to violate except with an infinitely long thread that doesn't mention nazis.
[ Parent ]

good reasons to use connect() (none / 3) (#37)
by jacob on Wed Mar 17, 2004 at 08:00:15 PM EST

typically non-privileged users don't get to do anything fancier than connect(), so full-open scanners have a definite ecological niche. I actually wrote an early version of this program long ago for precisely that reason: at the time, there were no freely available multithreaded portscanners for Windows NT that would run for non-privileged users, and it was quicker to write this than it would've been to convince a sysadmin to give me permission to spew any old packets I wanted onto the network.

In any event, a portscanner is a naturally concurrent program (meaning it is composed of a bunch of semi-related tasks that logically complete in a nondeterministic order), so writing it as a threaded program makes sense. If your language doesn't support that, you'll have to bend your program to its wishes rather than bending the language to your wishes.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

yeesh (none / 1) (#71)
by Work on Thu Mar 18, 2004 at 01:45:24 AM EST

multithreading is so darn useful and increasingly common these days that it's hard to imagine any modern language not having good support for it.

[ Parent ]
Heh. (2.71 / 7) (#73)
by it certainly is on Thu Mar 18, 2004 at 02:37:47 AM EST

Software always devolves into one of two things: a deterministic state machine, or a non-deterministic state machine. The former can be proven correct. The latter can be transformed into the former.

Multithreading is one of those things, like garbage collection, that makes the developer think "cool!" and then spend endless days trying to fix incredibly tricky race-condition bugs and such. I dislike language/library combos that try to force the programmer to use the threading "cool language feature" by removing all the non-blocking system calls. Ideally, all calls should be non-blocking. Ideally, all software should be distinct DFSAs linked by message passing.

Instead, you have the enormous kludge of "multithreading", which then requires you to make your code "multithreading safe". You shouldn't have the opportunity to make it unsafe.

kur0shin.org -- it certainly is

Godwin's law [...] is impossible to violate except with an infinitely long thread that doesn't mention nazis.
[ Parent ]

eh.. (none / 2) (#119)
by Work on Thu Mar 18, 2004 at 11:53:25 AM EST

If you sit down and do some thinking about the design before you write it, race conditions and other issues don't have as much of a chance to appear.

A lot of programmers also don't know how to properly use the thread tools they're given, or even how threading really works and the whole idea behind notify/signal and notifyAll/broadcast. Or they don't think about things before writing them. I'm always somewhat dismayed when I see someone say "hey, I'm going to write this program..." and jump right into an IDE without even planning out what they want beforehand.

As for transforming an NDFSM into a DFSM, have you tried to do this *in practice* on non-trivial machines? It is not simple by any means.
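For what it's worth, the notify/notifyAll (a.k.a. signal/broadcast) idiom mentioned above fits in a few lines of Java. This is only an illustrative sketch with made-up names (`SignalDemo`, `establish`): one thread waits for a condition inside a loop, another establishes the condition and broadcasts.

```java
// Sketch of the wait/notifyAll ("broadcast") idiom. Hypothetical class
// and method names, for illustration only.
class SignalDemo {
    private final Object lock = new Object();
    private boolean ready = false;
    boolean observed = false;   // set by the waiter once it sees the condition

    void await() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {    // always re-check in a loop: wait() can wake spuriously
                lock.wait();
            }
            observed = true;
        }
    }

    void establish() {
        synchronized (lock) {
            ready = true;       // change the condition while holding the lock...
            lock.notifyAll();   // ...then wake every waiter (the "broadcast")
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SignalDemo demo = new SignalDemo();
        Thread waiter = new Thread(() -> {
            try { demo.await(); } catch (InterruptedException ignored) { }
        });
        waiter.start();
        demo.establish();
        waiter.join();
        System.out.println(demo.observed ? "condition observed" : "missed");
    }
}
```

The while loop around wait() is the part people most often get wrong: notify is a hint that the condition may now hold, not a guarantee.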

[ Parent ]

You should be asking your sysadmin anyway. (none / 1) (#74)
by it certainly is on Thu Mar 18, 2004 at 02:49:38 AM EST

Port-scanning is an act of network aggression that can easily be misinterpreted. Unless you actually tell your sysadmin what you're up to and seek his explicit permission, he must assume you are an intruder scanning the network for vulnerabilities.

No doubt you chose a port-scanner as your example as it's very easy to write (in any language) and yet somehow has appeal to the l33t C and C++ kiddies who write 80% of OSS. But that doesn't mean you should be using it. Just use nmap for NT.

kur0shin.org -- it certainly is

Godwin's law [...] is impossible to violate except with an infinitely long thread that doesn't mention nazis.
[ Parent ]

yes indeed (none / 1) (#99)
by jacob on Thu Mar 18, 2004 at 08:07:52 AM EST

I agree. Kids, scan ports responsibly and if you're using this rather than nmap for your h4X0ring needs you should probably consider brushing up on your h4X0ring skills.

Anyway, you're basically right about why I chose it. It was a short, neat program that I had actually used for work rather than being a toy.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Perl threaded-map (none / 1) (#114)
by czth on Thu Mar 18, 2004 at 10:05:49 AM EST

OK, you have a point, so here's a threaded-map implementation in perl:

sub threaded_map(&@) {
  my $fn = shift;
  map threads->new($fn,$_)->join(), @_
}

Disclaimer: untested, since I don't have a threaded perl build handy, just a copy of the perlthrtut perldoc page :^).

czth

[ Parent ]

Bad news. (none / 2) (#157)
by tmoertel on Thu Mar 18, 2004 at 11:32:08 PM EST

OK, you have a point, so here's a threaded-map implementation in perl:
sub threaded_map(&@) {
  my $fn = shift;
  map threads->new($fn,$_)->join(), @_
}
No, that's a serial version, too.

What will happen is that map will apply threads->new($fn,$_)->join() to each element of @_ serially. That means that after each thread is created, the main thread will wait for it to terminate (that's what join() does) before moving on to the next element.

What you want is this:

map {$_->join} map {threads->new($fn,$_)} @_;
That way, you will create all of the threads before you attempt to join the first one.
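The two-phase pattern described above — spawn every thread first, join them afterwards — is the same in any language with threads. A rough Java sketch (the `threadedMap` name is invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Illustrative sketch of a parallel map: start ALL threads first, then
// join them, so the work actually overlaps. (Joining each thread right
// after starting it would serialize everything, as noted above.)
class ThreadedMap {
    static int[] threadedMap(IntUnaryOperator f, int[] xs)
            throws InterruptedException {
        int[] out = new int[xs.length];
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < xs.length; i++) {
            final int j = i;
            Thread t = new Thread(() -> out[j] = f.applyAsInt(xs[j]));
            t.start();                      // phase 1: spawn everything
            threads.add(t);
        }
        for (Thread t : threads) t.join();  // phase 2: collect results
        return out;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = threadedMap(x -> x * x, new int[]{1, 2, 3, 4});
        System.out.println(java.util.Arrays.toString(r));  // [1, 4, 9, 16]
    }
}
```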

--
My blog | LectroTest

[ Disagree? Reply. ]


[ Parent ]
Good point, thanks -nt (none / 1) (#169)
by czth on Fri Mar 19, 2004 at 10:32:11 AM EST



[ Parent ]
That's why I like TCL... (none / 3) (#64)
by claes on Wed Mar 17, 2004 at 11:20:48 PM EST

# start nmap, collect output
set p [open "| nmap -O 10.19.59.*"]
while {[gets $p line] >= 0} {
   #regexp your way to the answer
}

And if you use fileevent, you can have a whole herd of nmaps blasting away at firewalls in parallel.

-- claes (I really do use TCL pretty often)

[ Parent ]

Why I like Java (none / 1) (#159)
by Mindcrym on Fri Mar 19, 2004 at 01:39:47 AM EST

The following is a multithreaded port scanner written in Java that I just threw together.  Unlike its Perl and Scheme counterparts, this is readable by anyone who remotely understands anything about programming languages.

To compile:

javac -d . PortScanner.java

To run:

java -cp . org.kuro5hin.Mindcrym.PortScanner <hostname>

  Enjoy,
   -Mindcrym

-----------------------------------------

package org.kuro5hin.Mindcrym;

import java.io.IOException;
import java.net.Socket;

public class PortScanner extends Thread {
    private String _host;
    private int _port;

    public PortScanner(String host, int port) {
        _host = host;
        _port = port;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0;i <= 65535;i++) {
            System.err.print(i + "\r");
            new PortScanner(args[0], i).start();
        }
    }

    public void run() {
        try {
            Socket sock = new Socket(_host, _port);
            sock.getInputStream();
            System.err.println("Connection succeeded on port " + _port);
            sock.close(); // release the socket; thousands of leaked descriptors add up
        } catch (IOException stan) {
        }
    }

}

[ Parent ]

Remotely understands? (none / 1) (#174)
by Smerdy on Fri Mar 19, 2004 at 10:49:24 AM EST

I think you're suffering from an inability to recognize what you take for granted. Someone who had only learned functional, non-OO languages would probably find your code very hard to understand. On the other hand, someone who really understands "programming languages" would probably know enough different language paradigms to understand both your code and that in the article.

[ Parent ]
and how many people would that be? (none / 1) (#184)
by Mindcrym on Fri Mar 19, 2004 at 12:01:11 PM EST

How many people learn functional languages before learning a procedural or OO one?  I'm not trying to be a smartass; I really want to know.  Seems to me that the concepts of a procedural and/or OO language are more quickly understood, and therefore more likely to be taught/learned before functional programming.  But that could just be me, since I learned the ways of procedural programming long before I knew what functional programming was.
  -Mindcrym

[ Parent ]
Naw. (none / 1) (#187)
by tkatchev on Fri Mar 19, 2004 at 12:17:19 PM EST

Point is that the functional paradigm is a paradigm that is distinctly rooted in the U.S. academic system.

Most people follow the British model, i.e. straight-forward imperative languages and Turing machines. This makes sense if you consider the fact that the British created the first programmable computer and the first P.C., and also if you consider the size and influence of the former British Empire.

But at the core, it is nothing but a question of culture and tradition. All computational models are equivalent, and none of them are more "complex" or more "difficult to understand". It's more a question of habit.


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Equivalence (none / 1) (#198)
by Tayssir John Gabbour on Fri Mar 19, 2004 at 02:45:50 PM EST

All computational models are equivalent, and none of them are more "complex" or more "difficult to understand".

You have a good point, and I further think different ones act more "complex" depending on the problem.

Some argue* that different paradigms make certain solutions easier.  Basically the "weak" interpretation of Sapir-Whorf.  They further argue that programming through incremental changes of state, imperative programming, is generally not close to how people normally think.  It may be how computers think at a bare level, but it's not how every problem should be attacked.  Therefore they advocate that languages support more paradigms to aid the so-called power user.

[*] Timothy Budd does so in Multiparadigm Programming in Leda.  He points out that imperative programmers think in terms of boxes or pigeonholes, and claims that's often not close to how humans solve problems.  Except maybe postal workers and pigeon breeders.


[ Parent ]

Functional languages first (none / 1) (#197)
by Smerdy on Fri Mar 19, 2004 at 02:17:50 PM EST

There is some good empirical evidence that functional programming (in this case, with Scheme) is much easier to learn than imperative programming with C++ or Java.

Also, a very significant portion of top universities worldwide use functional languages for their first programming courses. MIT and Berkeley use Scheme, and I know of many European universities starting with ML or Haskell.

[ Parent ]

Me, for one (none / 0) (#222)
by mutualaid on Sun Mar 21, 2004 at 02:23:50 PM EST

I first learned ANSI C when I was an adolescent by buying a couple books and doing a lot of reading. After that I moved on to win16 GUI programming with Borland C and was so traumatized that I gave up C. Later I screwed around with Visual Basic 3.0 before giving up programming entirely.

Nowadays I use Python whenever I'm forced to write something.

[ Parent ]

Oh, and I just noticed.... (none / 2) (#176)
by Smerdy on Fri Mar 19, 2004 at 10:56:35 AM EST

Your code doesn't do what the article code does! First of all, you aren't giving the text names associated with numbered ports. Second of all, the original collects all open ports and their descriptions in a list. Yours just prints to the screen, which requires much less synchronization. The example from the article could be shortened if it just printed instead of collected.

Your example is already about the same length as the Scheme version, so you seem to be in trouble if you want to show that Java isn't an inexpressive mess.

[ Parent ]

My original one did that, in fact (none / 1) (#178)
by jacob on Fri Mar 19, 2004 at 11:10:50 AM EST

it was 8 lines. :)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
heh (none / 3) (#194)
by jacob on Fri Mar 19, 2004 at 01:53:14 PM EST

Actually 7.


(define (open? h p)
  (with-handlers ([exn:i/o:tcp? (lambda (x) #f)])
    (let-values ([(ip op) (tcp-connect h p)])
      (close-input-port ip) (close-output-port op) #t)))
 
(define (check-port h p) (when (open? h p) (printf "port ~a is open\n" p)))
 
(define (scan host ports)
  (for-each (lambda (p) (thread (lambda () (check-port host p)))) ports))


--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

code length (none / 1) (#189)
by Mindcrym on Fri Mar 19, 2004 at 12:49:45 PM EST

Your example is already about the same length as the Scheme version, so you seem to be in trouble if you want to show that Java isn't an inexpressive mess.

Who said I was trying to make the code smaller than the Scheme version?  What I was trying to show is that people who do not have a firm grasp of Java can still read it and understand what it does.  You don't need to dig up Java in a Nutshell to figure out the syntax.
  As a human being who understands English, I find it much more enjoyable to write code in a programming language that my brain understands without much strain.  This allows me to be more productive and get shit done.  Also, someone else who might come along after me is likely to have an easier time figuring out what my Java does than figuring out what my Perl does.
  Oh, and the code I wrote was not meant to do exactly what the Scheme code did.  I saw the post with the Perl code and the ensuing discussion about multithreaded port scanners.  So I wrote one.
  -Mindcrym

[ Parent ]

verbosity (none / 2) (#195)
by ucblockhead on Fri Mar 19, 2004 at 02:07:03 PM EST

You might want to check out COBOL, then.
-----------------------
This is k5. We're all tools - duxup
[ Parent ]
Bad premise (none / 1) (#196)
by Smerdy on Fri Mar 19, 2004 at 02:14:26 PM EST

Someone who has not studied a language similar to Java would not understand your code. I consider almost all mainstream programming languages "similar to Java" in this respect: they mostly share a nearly identical syntax and mental model of code execution.

Try to think back to before you learned programming, or ask some people with no programming experience, if you don't believe me.

[ Parent ]

Since this story has been posted (2.60 / 5) (#36)
by jacob on Wed Mar 17, 2004 at 07:45:42 PM EST

Tayssir John Gabbour's excellent related links are no longer visible. So, I thought I'd repost them:

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

It looks like LISP (none / 3) (#39)
by lukme on Wed Mar 17, 2004 at 08:58:33 PM EST

Maybe I am an idiot, but I really don't get why Scheme is so great.


-----------------------------------
It's awfully hard to fly with eagles when you're a turkey.
Scheme is a LISP (2.75 / 4) (#40)
by jacob on Wed Mar 17, 2004 at 09:06:36 PM EST

McCarthy LISP is the parent of both Scheme and Common Lisp. That's why they look so similar.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
But recursion is inefficient? (none / 2) (#41)
by porkchop_d_clown on Wed Mar 17, 2004 at 09:33:04 PM EST

You can get into real trouble with recursive coding on certain binary formats and CPU architectures - building and tearing down the stack frames can be expensive in both time and memory.

I remember having real problems with UNIX programs ported to other environments - without thinking, I was passing 16k buffers on the stack and exhausting the "entire" 64k region reserved for the call stack on the other OS.

Oops.

After that I learned not to pass arrays and to avoid recursion.

Is there something about Scheme which addresses this problem?

Will we line up for Grand Theft Auto 5 if it's the exact same thing, only with prettier texture-mapped bruises on the whores? -- David Wong

Tail recursion (none / 2) (#43)
by Smerdy on Wed Mar 17, 2004 at 09:36:26 PM EST

All non-toy functional language implementations compile tail recursion to use constant stack space.

[ Parent ]
Hrm. I don't have much experience with (none / 2) (#44)
by porkchop_d_clown on Wed Mar 17, 2004 at 09:37:40 PM EST

building functional languages - how do you handle scoping if you don't build new stack frames?

Will we line up for Grand Theft Auto 5 if it's the exact same thing, only with prettier texture-mapped bruises on the whores? -- David Wong
[ Parent ]
What tail recursion is (none / 2) (#45)
by Smerdy on Wed Mar 17, 2004 at 09:39:30 PM EST

Tail recursion is recursion where the only recursive calls are those used to compute return values directly. For such calls, it should be clear that you can just rewrite your own arguments and "do a goto" to "restart the function."
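The "rewrite your own arguments and do a goto" transformation is mechanical enough to show in any imperative language. Here is a sketch in Java (hypothetical class and method names), using summation as the example:

```java
// Sketch of the tail-call-to-goto transformation. Names are invented
// for illustration; Java itself does NOT perform this rewrite for you.
class TailRec {
    // Non-tail-recursive: the addition happens AFTER the recursive call
    // returns, so each call needs its own stack frame.
    static long sumRec(long n) {
        if (n == 0) return 0;
        return n + sumRec(n - 1);
    }

    // Tail-recursive: the partial result rides along in an accumulator,
    // and the recursive call IS the entire return value.
    static long sumAcc(long n, long acc) {
        if (n == 0) return acc;
        return sumAcc(n - 1, acc + n);
    }

    // That is exactly why a Scheme compiler may rewrite the arguments and
    // jump back to the top of the function - i.e., a loop:
    static long sumLoop(long n) {
        long acc = 0;
        while (n != 0) { acc += n; n -= 1; }  // rewrite args, "goto" the top
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(1_000_000));  // constant stack space
        // sumRec(1_000_000) would likely overflow the Java call stack
    }
}
```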

[ Parent ]
That's not what jacob's claiming, though. (none / 2) (#49)
by Estanislao Martínez on Wed Mar 17, 2004 at 10:13:52 PM EST

His range function is not tail-recursive (though it's easy to write a tail recursive version). The claim he's making is that ordinary, non-tail recursion in a language like Scheme is more efficient than in procedural languages.

--em
[ Parent ]

careful now (none / 3) (#61)
by jacob on Wed Mar 17, 2004 at 11:10:26 PM EST

That's not exactly what I'm claiming. I'm saying that implementations of programming languages can use tricks other than tail-call optimization to make function calls more efficient than they'd be in a naive C compiler. However, the range function isn't meant to be efficient, and my implementation is simply a worse performer than a tail-recursive version would be (I'm not positive, but I suspect that in PLT Scheme it's about equivalent to writing a recursive C function to do the same thing). I wrote it that way because it just doesn't matter: you're not going to do ranges big enough for its suboptimality to matter that much (on my system it takes about a quarter second to compute (range 1 66000), and if I were running it without DrScheme's debugging annotations it'd be about 3 to 5 times faster), and my version was easier to explain in an article than the accumulator-style recursion would've been.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
yeah, but (2.25 / 4) (#65)
by Estanislao Martínez on Wed Mar 17, 2004 at 11:29:08 PM EST

If you're making an argument about recursive functions not incurring a large performance hit, I don't think you're allowed to go back and say that your demo recursive function does not need to run fast.

--em
[ Parent ]

why? (none / 1) (#93)
by jacob on Thu Mar 18, 2004 at 07:49:59 AM EST

Those aren't exclusive. Actually, though, that section wasn't supposed to be about how recursion doesn't incur a large performance hit; it was about how looping is done with recursion in Scheme. It was just a side point that recursion doesn't have to incur a large performance hit.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Out of interest... (none / 1) (#117)
by sab39 on Thu Mar 18, 2004 at 11:32:33 AM EST

Is it possible to write a Scheme macro that will translate (certain kinds of) non-tail-recursive functions into accumulator-based tail-recursive ones?

E.g., is there any way that a macro could usefully recognize a recursive function like the first version of "length" posted by evanp elsewhere in this thread, and transform it into the second one if possible? I can't quite decide whether that's necessarily an AI-complete problem or not...

If it is possible for such a thing to be done for some useful subset of functions, presumably Scheme interpreters/compilers could use that trick to interpret/compile more efficiently.
--
"Forty-two" -- Deep Thought
"Quinze" -- Amélie

[ Parent ]

Maybe... (none / 1) (#128)
by joto on Thu Mar 18, 2004 at 03:45:05 PM EST

I'm not too much into the hygienic macros of Scheme, so I don't know how general they are, or how easy they are to work with compared to traditional macros. But with more traditional Lisp macros, it's certainly possible. All you need to do is write a complete Scheme implementation as a macro, and make sure that implementation does the above-mentioned optimization.

But would it be worth it? This would only work for simple cases, as understanding any Turing-complete language is an impossible task, and only a trivial subset of programs would be able to be converted by it. IMHO, you are better served by mapcar, reduce, etc., and some macros for looping.

But if you really think it should be done anyway, it should be added to the implementation, not as a macro, as the real-world benefit of wrapping all your code in a (tailrec-convert ...) is nada. This can of course be said of any other possibly-global optimization too.

[ Parent ]

I don't know if that's possible (none / 1) (#130)
by jacob on Thu Mar 18, 2004 at 03:48:57 PM EST

theoretically. If it is, you can write a PLT Scheme macro to do it.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Illusion of functions (none / 2) (#50)
by Tayssir John Gabbour on Wed Mar 17, 2004 at 10:14:40 PM EST

I think the best way to describe it is this way:  stacks are useful in building the illusion of functions that act reasonably like what people are used to.  However, there is no law that Function Calls Must Grow Stack.  And you might find for yourself scenarios where stack growth is actually unnecessary when you call a function.

http://c2.com/cgi/wiki?TailCallOptimization
http://c2.com/cgi/wiki?TailRecursion

[ Parent ]

Indeed; moreover, (none / 1) (#106)
by tkatchev on Thu Mar 18, 2004 at 08:43:55 AM EST

tail recursion is a much more powerful and more general tool than loops. More efficient, as well.

The fact that C's recursion support is broken only means that C compilers are laughably inefficient, nothing more.


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

A bunch of things (none / 3) (#48)
by jacob on Wed Mar 17, 2004 at 10:03:33 PM EST

In addition to the tail-recursion answers given by others, it helps to remember that the entirety of a Scheme system can be built with the idea that it's going to have to deal with recursion a lot. One trick that SML/NJ (an implementation that faces this same problem) uses is to have a tuned generational garbage collector and allocate all "stack frames" on the heap: sounds insane until you realize that heap allocation with this GC is just a pointer increment in the average case, and that you never explicitly deallocate (the garbage collector just comes along and sweeps up stale frames every once in a while; this can actually work out to be cheaper overall, and certainly isn't more expensive).

Also, in Scheme you never construct values on the stack; there are references on the stack, but that's it (imagine it being like a C program where you only pass pointers and can't use null). You pay the penalty of a possibly-unnecessary dereference for each variable access (unless the compiler gets clever, of course), but in exchange you never have to worry about running out of memory due to passing things on the stack or things like that.
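To make the "just a pointer increment" point concrete, here is a toy sketch of nursery-style bump allocation (a hypothetical `BumpAllocator` class, in Java for familiarity): allocating is one bounds check plus one add, and a minor collection with no survivors reclaims every dead frame at once rather than one by one.

```java
// Toy model of bump allocation in a generational collector's nursery.
// Hypothetical class; a real GC handles a full nursery by copying live
// data out (a minor collection), not by failing.
class BumpAllocator {
    private final int[] nursery;   // the flat young-generation region
    private int top = 0;           // the "allocation pointer"

    BumpAllocator(int words) { nursery = new int[words]; }

    // Return the offset of a fresh n-word "frame", or -1 when the nursery
    // is exhausted (where a real collector would run a minor collection).
    int alloc(int n) {
        if (top + n > nursery.length) return -1;
        int addr = top;
        top += n;                  // the entire cost of an allocation
        return addr;
    }

    // A minor collection with no survivors is just a reset: stale frames
    // are reclaimed wholesale, with no per-frame deallocation work.
    void minorCollect() { top = 0; }
}
```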

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

It is more expensive. (none / 2) (#113)
by joto on Thu Mar 18, 2004 at 09:39:48 AM EST

Using GC instead of a stack does have a cost. While allocation is just as fast, deallocation is much more costly using a GC, no matter how well tuned the GC is. Doing a full or generational sweep is much more costly than just adding n to the stack pointer. Also, using a stack has the added benefit of increasing cache efficiency, as you are constantly reusing the same memory.

There is a reason Common Lisp, Bigloo, and OCaml don't support continuations: it avoids having to use continuation-passing style in their implementations, which allows them to use a traditional stack, except for (parts of) closures. But the fine-tuning of things like this still seems to be more of a black art than a science, much like the tuning of garbage collectors.

The most efficient solution as of today is probably to use a traditional stack, coupled with compiler-supported region-based memory management (region inference, as it's typically misnamed). But region inference only works for compiled languages, and even then, it isn't always too practical. So, an optional GC is probably needed for a more general solution.

[ Parent ]

How so? (none / 1) (#192)
by Vulcannis on Fri Mar 19, 2004 at 01:47:09 PM EST

That small constant cost of incrementing a stack pointer is done per deallocation.  In a generational GC, entire swaths of memory can be swept at once.  Though of course, memory usage patterns or a mistuned GC could screw this up.

Your argument about cache efficiency sounds right, though.  But I personally would rather program to a more abstract machine model, and let increasingly clever implementations and better hardware deal with optimizations, than restrict myself to a model that caters to current hardware.  Hardware moves too fast, and optimizing for today's can cause grief down the line.

---
If it's not black and white, you're not looking close enough.
[ Parent ]

So (none / 1) (#202)
by joto on Fri Mar 19, 2004 at 05:15:33 PM EST

That small constant cost of incrementing a stack pointer is done per deallocation. In a generational GC, entire swaths of memory can be swept at once.

Oh please. An integer add for a register is so cheap, you shouldn't even think about it, especially when paired with a branch instruction at function exit. If you expect a GC sweep to be faster, you'd have a mighty weird computer, or memory usage patterns. Just try to imagine a situation where the GC wouldn't have to do at least exactly the same number of instructions...

Your argument about cache efficiency sounds right, though. But I personally would rather program to a more abstract machine model, and let increasingly clever implementations and better hardware deal with optimizations, than to restrict myself to a more current-hardware-catering model. Hardware moves too fast, and optimizing for todays can cause grief down the line.

I pretty much agree with this. Optimizing for current hardware is boring. But we are talking about language implementation here, and in that case, that's exactly where it's needed, and where it's done. If you don't care that much about speed, there are plenty of nice languages that offer "acceptable" speed.

Secondly, optimizing for cache efficiency is not really that hideous, it just means: use less memory, even at some (moderate) extra processor cost (unless you are the kind of guy that knows the cache line size for your current processor and takes advantage of that).

And cache efficiency will be important until someone invents a revolutionary new kind of fast, cheap memory. I have no doubt that will happen some time (and various companies claim to already have done so). But I doubt I'll be able to buy it in the next 5 years. And that's almost an eternity in computer years :-)

[ Parent ]

Read this (none / 1) (#203)
by Smerdy on Fri Mar 19, 2004 at 05:52:33 PM EST

Garbage Collection can be Faster than Stack Allocation, Andrew Appel, Information Processing Letters, 25(4):275-279, June 1987.

[ Parent ]
AAAAAAAAUUUUUUUGGGGGHHHHHH!!!!!!!! (none / 1) (#208)
by Bill Barth on Fri Mar 19, 2004 at 07:41:36 PM EST

Yeah, but did you read that paper?!?!?!??!

You only need about an order of magnitude (really about 7 or 8x) more physical memory than you plan to have allocated in your program at once! In fact, the paper suggests that if you have enough memory the garbage collector will never run, and everything is FREE....FREEEEEE I say!

And this quote from the conclusions is a gem

But in a time when 50 megabytes of memory chips can be obtained for under four thousand dollars, there is less need for complex garbage collection algorithms, or special garbage collection hardware. Such techniques as reference-counting, ephemeral garbage collection, closure analysis, etc., may not really be necessary now that it is possible to use massive memories.
What kind of crack were these guys smoking in '87? Where's their foresight? Just because you can have "free" garbage collection when you have more memory than God, doesn't mean that you should plan on actually having almighty amounts of RAM.

I really don't think that paper presents a viable way of making GC better than putting things on the stack.

Yes...I am a rocket scientist.
[ Parent ]

I agree... (none / 1) (#214)
by joto on Sat Mar 20, 2004 at 07:10:04 AM EST

Even at the time the paper was written, it must have been intended at least partially as a troll (well, in more polite terms: to provoke debate). Appel is a clever guy, and I doubt he believes the conclusions of this paper today.

[ Parent ]
My brain still has the scars... (none / 3) (#60)
by claes on Wed Mar 17, 2004 at 11:08:17 PM EST

from doing image processing on a 286-based Xenix system (hey, it was all the Unix I could afford after my Ironics 68010 box died).

Near pointers, far pointers, huge pointers. Nothing fit. What a nightmare. Everything had to be in 64 k chunks. Between Intel, Microsoft and SCO, what a mess.

Luckily, 386 prices came down and Linux came on the scene, and it's been a nice flat addressing model ever since.

-- claes

[ Parent ]

Tail recursion in Scheme (none / 2) (#66)
by evanp on Thu Mar 18, 2004 at 12:11:35 AM EST

Compliant implementations of Scheme are required to implement tail calls without growing the stack - effectively compiling them to gotos (!). So recursion in Scheme can be very natural and efficient.

Tail recursion is when you return the value of another function call unmodified as your own function's value. This list-length function is not tail recursive: it adds one to the result of the recursive call:

  (define (length l)
    (if (null? l)
      0
      (+ 1 (length (cdr l)))))

This one is:

  (define (length l)
    (letrec ((len (lambda (l acc)
                    (if (null? l)
                        acc
                        (len (cdr l) (+ acc 1))))))
      (len l 0)))

[ Parent ]

Stupid question (none / 1) (#85)
by bugmaster on Thu Mar 18, 2004 at 07:25:27 AM EST

What's "letrec" ?
>|<*:=
[ Parent ]
"let recursively". (-) (none / 1) (#86)
by tkatchev on Thu Mar 18, 2004 at 07:27:35 AM EST


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

letrec (none / 2) (#110)
by joto on Thu Mar 18, 2004 at 09:21:42 AM EST

In a normal "let" form, the variables are bound in parallel (at least logically), just like the parameters in a function call. With "letrec", the bindings know about each other (and themselves), so you can write recursive functions and sets of mutually recursive functions in them.

"Letrec" can be thought of as a "let", with each binding unset (or set to an arbitrary value), and then a number of "set!" forms, to establish the bindings. Hopefully your compiler will do something smarter (or be able to optimize it anyway).

A somewhat similar distinction exists between "flet" and "labels" in Common Lisp.

[ Parent ]

Why Scheme? (none / 2) (#42)
by Smerdy on Wed Mar 17, 2004 at 09:33:38 PM EST

Your article is rather short on actual discussion of why Scheme is a practical choice for software development. I would certainly see proponents of mainstream programming as being justified in saying that you make no argument that has relevance to their development choices. However, I'm not in that category of people, so I'll ask a different question: Why do you prefer to use Scheme over ML?

macros and heterogeneous s-expressions (none / 3) (#46)
by jacob on Wed Mar 17, 2004 at 09:53:38 PM EST

Macros are #1. I can't stand programming in languages without a macro system anymore. (Nor can I stand programming in languages without first-class functions, but of course SML has those.)

#2 is that Scheme is untyped. This is a mixed blessing, of course, but for a lot of problems I find SML's type system catches the easy bugs I'd've found anyway and makes the cool things a big pain. For instance, I find it much easier to write XML-processing code in Scheme than in SML because in SML you've got to make and use a big thicket of datatype definitions before you can even start with your program. (Of course for that you get the static guarantee that you won't build a broken XML datum, but I think the pain is worse than the safety net.)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Response (none / 2) (#47)
by Smerdy on Wed Mar 17, 2004 at 10:01:19 PM EST

You say that you can't stand programming in languages without macro systems, but you don't give any practical reason for this feeling. Do you have one?

I'm unconvinced by your justification of dynamic typing. Static typing in ML makes it very easy to change a project's code in one place and instantly find all the places affected by the change, for instance. The types also provide a useful way to perform high-level program design as you code, in a way that is sanity-checked automatically.

As for the SML example, I don't know if you already know about it, but this statically typed language is designed specifically for XML processing, complete with static guarantees that it is impossible to construct an object violating an XML DTD: http://xduce.sourceforge.net/

[ Parent ]

Every Language War Ever / soft typing (none / 2) (#51)
by Tayssir John Gabbour on Wed Mar 17, 2004 at 10:29:11 PM EST

I believe the PLT Scheme folks are working on various soft-typing systems.

The enormous problem with these discussions is it falls into the Every Language War Ever trap.  Yes, there are advantages to static typing, as there are advantages to dynamic.  Dynamic typers do not want a compiler which rejects programs it can't prove correct.

If you want to be persuasive, argue the merits of static analysis in a way that does not nuke the benefits of dynamic type systems.  I'd like a metaprogram which I can run at runtime so that I do not partition bughunting between "compiletime" and runtime.

[ Parent ]

Answered with a complexity-theory-style reduction (none / 3) (#53)
by Smerdy on Wed Mar 17, 2004 at 10:39:27 PM EST

For all uses of dynamic typing in Scheme that don't use very esoteric language features, you can get the same flexibility in ML by creating a simple variant type that has branches for all the dynamic types that Scheme supports. Save for unimportant syntax differences, you can then program with essentially the same abstract syntax that you use in Scheme in some places, while keeping the benefits of ML's type system in other places.
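In Python terms, a loose sketch of the encoding described here (an illustration only, with hypothetical names; the real point is that ML checks the tags statically, which Python of course does not):

```python
from dataclasses import dataclass
from typing import Any

# One "universal" variant type with a branch per dynamic type.
@dataclass
class Int:
    val: int

@dataclass
class Nil:
    pass

@dataclass
class Cons:
    head: Any
    tail: Any

def car(x):
    # In ML a call on a non-Cons is a compile-time type error;
    # here it only fails at runtime.
    assert isinstance(x, Cons)
    return x.head

def add(x, y):
    assert isinstance(x, Int) and isinstance(y, Int)
    return Int(x.val + y.val)

# Scheme: (+ (car (cons 0 nil)) 18)
print(add(car(Cons(Int(0), Nil())), Int(18)).val)  # prints 18
```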

[ Parent ]
Then ML family doesn't reject unprovables? (none / 2) (#57)
by Tayssir John Gabbour on Wed Mar 17, 2004 at 11:00:15 PM EST

That's a pleasant surprise.  Most advocates of ML argue that the compiler will reject programs whose types it cannot prove.

I would like this analysis to occur at runtime because it would provide hints to error clustering, when combined with observation of runtime behavior.  Plus, type safety has never been high on my list of desired static guarantees.

That would be a reason to shift attention to an ML dialect sooner than I had planned.

[ Parent ]

warning (none / 3) (#59)
by jacob on Wed Mar 17, 2004 at 11:03:47 PM EST

the argument he's making is a theoretical one, not a practical one. You really shouldn't think of SML as supporting untyped programming in practice, though he's right that it's possible to build an essentially untyped sublanguage.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Oh No (none / 1) (#67)
by unknownlamer on Thu Mar 18, 2004 at 12:18:27 AM EST

You don't know Smerdy. He worked on an ML compiler for a while (not sure if he is still working on it). He is a freakin genius and can do anything.

I bet Smerdy could write an ML compiler in BrainFuck. In fifteen minutes.



--
<vladl> I am reading the making of the atomic bong - modern science
[ Parent ]
well (none / 1) (#95)
by jacob on Thu Mar 18, 2004 at 07:52:59 AM EST

Not trying to diss Smerdy of course. But the style he's suggesting is a giant pain in the ass in a practical sense.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
I'm talking about something like this: (none / 2) (#63)
by Smerdy on Wed Mar 17, 2004 at 11:20:10 PM EST

datatype scheme = Int of int | Nil | Cons of scheme * scheme

fun car (Cons (x, t)) = x
fun cdr (Cons (x, t)) = t
fun add (Int x) (Int y) = Int (x+y)

So, instead of this:
(+ (car (cons 0 nil)) 18)

You'd write this:
add (car (Cons (Int 0, Nil))) (Int 18)

[ Parent ]

Yeah (none / 1) (#58)
by jacob on Wed Mar 17, 2004 at 11:01:21 PM EST

Have you ever tried actually programming in that style? I have. There's a reason nobody does it. :)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
by the way (none / 1) (#96)
by jacob on Thu Mar 18, 2004 at 07:58:52 AM EST

We are indeed working on so-called "soft typing": type inference and static checking that you can run optionally. In fact we already have one, a tool called MrSpidey, but it's no longer maintained and doesn't cover the whole language any more. Currently we've got two other research projects going: one to add a Hindley-Milner-style typechecker, and another that uses set-based analysis; the latter is very mature but not quite released yet. All of these are optional extra phases: you have your program in DrScheme, click the type-check button, and the analysis colors problem areas red (and prints some other information) so you can decide for yourself whether they're real problems or not.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Q: (none / 1) (#104)
by tkatchev on Thu Mar 18, 2004 at 08:36:40 AM EST

Is the typing info used for making code more optimised? That's all I care about, actually. :)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

No (none / 1) (#107)
by jacob on Thu Mar 18, 2004 at 08:56:54 AM EST

The PLT people aren't that concerned with doing crazy tricked-out optimization. The PLT interpreter isn't even really supposed to be fast; Bigloo or Chez Scheme or especially Stalin are better choices if that's what you're after. (I find that people frequently worry about optimization more than they actually need it; my Scheme programs are always plenty fast enough for me, though the equivalent SML/NJ ones would probably be faster.)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
I'm not talking about 'crazy-tricked-out'... (none / 1) (#111)
by tkatchev on Thu Mar 18, 2004 at 09:22:37 AM EST

...optimisations.

The point is, if you're doing things the way they're supposed to be done, your performance will be on par with a decent C compiler. If not, then your compiler design is flawed somewhere.

Look at OCaml if you don't believe me. They don't do any optimisations at all, except for designing their compiler properly from the ground up. Oberon, as well -- all the goodness of Java with a speed that exceeds optimising C compilers. The Oberon folks also managed to get all this without using any special optimisation tricks.

(This is not a flame, BTW, just an honest observation.)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Well (none / 1) (#112)
by jacob on Thu Mar 18, 2004 at 09:29:40 AM EST

The difference is that those languages are typed as part of their core design. In an untyped language, where the runtime is assumed to be making dynamic checks implicitly, removing those checks whenever you can prove statically that only particular values flow into a particular location in the code is a fairly heroic optimization.

In any event, it's really a question of priorities. The PLT people just aren't concerned enough with that level of optimization to put large amounts of effort into making it work. Maybe it'd be worth it, I don't know, but as it stands the PLT distribution has never been, or attempted to be, the fastest or most efficient Scheme implementation, just the one that's easiest to program in. It's really no slower than Python or Perl anyway.

(Not that I'm disagreeing that there's a place in the language spectrum for statically typed languages or for compiler optimizations that rely on knowing types statically.)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

I agree. (none / 1) (#115)
by tkatchev on Thu Mar 18, 2004 at 10:47:18 AM EST

PLT Scheme is really friendly and very nice to program in. Much better than Python, in many respects.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Purpose of symbolic expressions (none / 1) (#52)
by Tayssir John Gabbour on Wed Mar 17, 2004 at 10:36:31 PM EST

There is something deeper here than macro systems: the fact that symbolic expressions are a data format, like XML but readable.  That lets you apply techniques like normalization to code, along with various convenient data-directed techniques.

Short example:  http://lambda.pentaside.org/article/symbols.html

This does not contradict ML.  I have little doubt ML could make use of a notation which exploits symbolic expressions.  As I understand, OCaml users are rather happy with their preprocessor; I assume it's not symbolic but it still seems to give them power.

[ Parent ]

That's a rather bad example... (none / 1) (#54)
by Smerdy on Wed Mar 17, 2004 at 10:45:22 PM EST

...because you can implement essentially the same technique in ML with some modifications to the precise lexical tokens used. Instead of using a general programs-as-data macro system, you can just make use of ML's infix operator facilities.

[ Parent ]
Example? (none / 2) (#55)
by Tayssir John Gabbour on Wed Mar 17, 2004 at 10:53:52 PM EST

Could you kindly point to an example, one that executes cross-platform across language implementations?

I'm not particularly interested in infix binary operators; it's kind of sad seeing Bird/Wadler explaining their complexity in their Haskell introduction.  But if ML systems can handle them well, I'm sure you can present them well.  I'd be grateful.

[ Parent ]

OK (none / 2) (#62)
by Smerdy on Wed Mar 17, 2004 at 11:14:36 PM EST

Here's a standard parser combinator implementation for the first example on the page you linked. In a "real" implementation, the grammar type would be something like a function from streams to parse trees. Here, I just make this type a literal representation of the grammar, for simplicity. This leads to the fun fact that calling any of the nonterminal functions that I define leads to an infinite loop, which is not a problem that would be present in an implementation that produces an actual parser.

infix ++

datatype grammar =
   @@ of string
 | ++ of grammar * grammar

fun $ x = x ()

fun Sentence () = $NP ++ $VP
and       NP () = $Article ++ $Noun
and       VP () = $Verb ++ $NP
and  Article () = @@"the" ++ @@"a"
and     Verb () = @@"eats" ++ @@"drinks"
and     Noun () = @@"man" ++ @@"woman" ++ @@"dog" ++ @@"blog"

[ Parent ]

my programming style (none / 3) (#56)
by jacob on Wed Mar 17, 2004 at 10:58:51 PM EST

I use macros constantly. Today, for instance, I was modifying some code that embeds a sublanguage for describing and visualizing programming-language semantics into Scheme -- I added a pass that deconstructs a literal syntax expression, looks for ellipses, and does some rewriting around them when it finds them. I have written a reasonable-sized language that provides a sublanguage for defining web-based online questionnaires, based largely on macros that do static analysis of terms to identify important properties during expansion. In the process of building that language I wrote a quasistring macro package that provides somewhat Perl-like interpolated strings in Scheme. I once implemented Haskell-style list comprehensions as a Scheme macro because I needed them for something. Almost every Scheme program I write these days uses a macro for something you cannot easily get from higher-order functions alone (sometimes lazy evaluation would do it, other times not). My programming would literally be entirely different if I didn't have macros. (I know this is a topic I barely touched in this article, and my while example gets the point across very lamely. But see the new issue of Dr. Dobb's Journal for better coverage of PLT Scheme's macro system.)
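For readers who haven't met list comprehensions, a sketch in Python (whose built-in comprehension syntax computes the same thing such a macro provides; this is an illustration of the idea, not the Scheme macro itself):

```python
# Haskell-style: [(x, y) | x <- [1,2,3], y <- [1,2,3], x < y]
pairs = [(x, y) for x in [1, 2, 3] for y in [1, 2, 3] if x < y]
print(pairs)  # prints [(1, 2), (1, 3), (2, 3)]

# Without the syntax, the same computation is nested loops plus a filter,
# which is roughly what a comprehension macro would expand into:
expanded = []
for x in [1, 2, 3]:
    for y in [1, 2, 3]:
        if x < y:
            expanded.append((x, y))
assert expanded == pairs
```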

As for types: like I say, it's a mixed bag. I don't want to give the impression that I don't like static typing -- I like it very much -- but I find that it's frequently more pain than it's worth. I don't want to have to use some new non-embedded sublanguage every time I come up with a novel kind of data or a new way of looking at data, and I don't want to have to wait for people to come up with great solutions to as-yet unsolved research problems either. (Not that I don't think XDuce is a cool idea.) As it stands, in Scheme I can do whatever I want, and occasionally I get bitten by a long hunt for the source of an error that a static type system would have found immediately. With SML, I have to endure lots of syntactic pain all the time and occasionally get stopped from structuring my program the way I want to.

(I'm deliberately stating my opinion more strongly than I actually believe it here. I really like type systems, honestly, and SML's type system does have a lot of benefits I appreciate.)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Those wacky macros (none / 1) (#118)
by Smerdy on Thu Mar 18, 2004 at 11:49:53 AM EST

Recently I tried a little "challenge" with a strong Scheme advocate. His task was to show me why I should want macros in the languages I use. He kept naming SRFIs, and I kept showing how these were already easy using features of ML. He was trying to show how core Scheme can have few features yet allow macros to build up some very useful stuff. I was largely unconvinced, since I don't care how the useful stuff that I use came into being as long as it's there.

I'd be curious to see you choose one of the examples you named that you think would be a pain in ML and give more detail on the problem and how you solved it in Scheme. Anything that is almost identical to a feature of ML, Haskell, etc., is a poor example, since I would then argue that it just argues for using that language or whatever next generation functional language comes along and combines the best of ML/Haskell/wherever else you find the feature. (Of course, if we count Template Haskell as part of Haskell, then it should be easy to match any reasonable macro that you come up with, but I'd agree that that would be "cheating." ;-)

[ Parent ]

There is some truth to that... (none / 1) (#126)
by joto on Thu Mar 18, 2004 at 03:05:24 PM EST

Lisp evolved macros partly because S-expressions are so horrible to work with that macros became necessary. You wouldn't want to work with a programming language with XML syntax either.

For many of the most frequently showcased uses of lisp macros, there probably already exists a domain-specific language with similarly nice (or nicer) syntax for expressing your problem.

So I can agree that examples of while loops (huh, you actually have to code such a basic thing in scheme?), pattern matching (almost all functional languages do it more nicely), or rule-based databases (prolog and most expert systems do it more nicely) in lisp aren't really a killer feature. You can use the other language instead. What you are asking for is examples of lisp programmers creating new mini-languages that don't already exist. Depending on the programmer, this happens almost never, or almost all the time.

Some lisp programmers see almost any programming problem as something best solved by a new mini-language, and go on to build one. Others are mostly happy to use what the base system offers, and think in ways similar to what you would do in other languages. So the benefits vary from programmer to programmer.

Unfortunately, I don't think there's any way to "grok" the benefits of macros without actually using them in an (at least close to) real-world situation. It would be kind of like preaching the benefits of functional programming to an old BASIC programmer with 10 years of experience abusing GOTO. But don't take this the wrong way; I am not comparing you to a cave-man. Not all problems lend themselves to macros, and it just might be that for your programming situation the benefits are small and you don't need them.

If you want to look at some real-world examples of lisp code that uses macros in actually useful ways (i.e. not just because S-expressions are a pain and you are devoted to using lisp anyway), I can recommend Norvig's book. There is also Graham's book, which focuses only on macros, but from your perspective I suspect the examples there won't impress you much: you will learn to write all kinds of macros from it, but not necessarily why it's a good idea. By the way, both of these books are about Common Lisp, not Scheme, but that's not just my bias; I am not aware of similar texts for Scheme.

But if you just want the essence, the main benefits of (lisp) macros are that:

  1. It's easier to cook up a new mini-language with them than it is with lex&yacc.
  2. Your new mini-language gets a fast, high-quality compiler as soon as you finish writing the parser macros, since it is compiled the same way as all your other lisp code.
  3. They can generate code for you without an external tool (e.g. lex&yacc again).
  4. The language you write macros in is the same language you write programs in (unlike C++ templates or the C preprocessor). I'm not sure to what extent this is true of Scheme's hygienic macros, though...


[ Parent ]
Misses the issue (none / 1) (#129)
by Smerdy on Thu Mar 18, 2004 at 03:45:53 PM EST

It's quite easy to build embedded domain-specific languages in Haskell and ML. I've given an example alternate implementation of something from Norvig that someone pointed me to (see my reply "OK" in this thread). Why would I gain much from Lisp macros when I have user-defined infix operators?

[ Parent ]
Exploratory programming... (none / 1) (#133)
by joto on Thu Mar 18, 2004 at 04:12:02 PM EST

In Norvig's book, he presents several programs working on data like this. The initial version treats it purely as data. Another version compiles the data, as your code example does. And there's nothing stopping you from creating a parser in the same way. The grammar is a mini-language that can be used for more than one purpose.

Your ML example is still just a program; if you want to change implementation strategy, or use the grammar for a different purpose, you will have to rewrite your grammar in a different way.

If you could write your example as

val np = ...
val vp = ...
val sentence = np ++ vp

...and still be able to use it to generate code in the same way, then I would agree that you have the same power as lisp macros here.

By the way, your implementation is slightly buggy. It may not be obvious, but stuff inside parentheses is concatenation, while stuff without parentheses is alternation. So you would need another infix operator for that, e.g. ||.

[ Parent ]

I could make it like that. (none / 1) (#134)
by Smerdy on Thu Mar 18, 2004 at 04:18:05 PM EST

I'd just change references to non-terminals to use strings or some other kind of indirection, then make the whole grammar a set of rules named by strings, for instance. I could get exactly the sort of syntax you're asking for, modulo some straightforward substitutions for specific lexical tokens.

[ Parent ]
I'm not sure I understand you here... (none / 1) (#137)
by joto on Thu Mar 18, 2004 at 04:27:24 PM EST

I'd just change references to non-terminals to use strings or some other kind of indirection, then make the whole grammar a set of rules named by strings, for instance.

Yes, that would be even better. Put each rule into a hashtable or something like that, keyed by the name of the nonterminal. That would be in Norvig's spirit.

I could get exactly the sort of syntax you're asking for, modulo some straightforward substitutions for specific lexical tokens.

Yes, you get the syntax. But can you still compile the grammar the way your original example does? Can you add another version that uses the same data to build parse trees instead of sentences? In the same program? Without preprocessing of any kind?

[ Parent ]

The lack of understanding is mutual. ;-) (none / 1) (#148)
by Smerdy on Thu Mar 18, 2004 at 07:54:01 PM EST

The syntax that I'm proposing (and the Lisp syntax from the original example) has no special semantic associations. It's about the simplest form of raw data to describe a grammar.

Something like this would compile just as easily, with appropriate definitions much like those in my original code:

val mygrammar = [("nonterminal", $"nonterminal" ++ $"terminal"), ("terminal1", @@"hi" || @@"everyone")]

It's easy to write functions that operate on the grammar data type. Yes, it is possible to write multiple functions that operate on a single data type in the same program, with no preprocessing.

[ Parent ]

Hmmm.... (none / 1) (#185)
by joto on Fri Mar 19, 2004 at 12:06:21 PM EST

It's been a while since I used ML, so you'll have to forgive me if I'm guessing in the wrong direction. But is your idea that you now redefine your $, ++, and || operators in such a way that they build, say, multiple functions instead of just one, as your first example did?

And then you can access the first function (e.g. generate sentences) as #1 (hd mygrammar), and the second function (e.g. generate syntax trees) as #2 (hd mygrammar). Or perhaps stuff those generated functions away somewhere more convenient, and access them from there...

I guess that is possible. Hmm, nasty stuff... But so are macros ;-) Of course, all of this is possible because ML shares an important trait with lisp: the full ML environment is available at compile-time, something most "common" languages don't have.

The remaining uses of lisp macros I can think of would be "utilities" (e.g. while), saving keystrokes (as with fancy use of a preprocessor), or speed hacks (e.g. a map macro that doesn't need a closure argument but takes an expression that can be inlined). Since we've eliminated utilities, and keystroke savings with infix operators are about as good as with macros, the only remaining category is the speed hacks, but those few hotspots can be inlined by hand anyway. I agree: you won the debate!

[ Parent ]

Almost (none / 1) (#193)
by Smerdy on Fri Mar 19, 2004 at 01:48:23 PM EST

I am suggesting having a single definition of the operators that will always be used. These operators build an abstract syntax tree for the grammar. You then write functions to transform the abstract syntax trees into particular representations useful for particular tasks, such as parsing functions. This is really analogous to macros, except that we manipulate values in a type/language of our own definition rather than a single hard-coded language (Scheme).
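A loose Python sketch of that pipeline (hypothetical names throughout, not the ML code from this thread): constructors build the grammar as a plain data tree, and a separate function transforms that tree for one particular task, here random sentence generation.

```python
import random

# Node shapes: ("lit", w) = terminal, ("ref", n) = nonterminal reference,
# ("seq", a, b) = concatenation, ("alt", a, b) = alternation.
grammar = {
    "Sentence": ("seq", ("ref", "NP"), ("ref", "VP")),
    "NP":       ("seq", ("ref", "Article"), ("ref", "Noun")),
    "VP":       ("seq", ("ref", "Verb"), ("ref", "NP")),
    "Article":  ("alt", ("lit", "the"), ("lit", "a")),
    "Verb":     ("alt", ("lit", "eats"), ("lit", "drinks")),
    "Noun":     ("alt", ("lit", "man"), ("lit", "dog")),
}

def generate(rules, node):
    """One transformation of the grammar tree: a random-sentence generator."""
    tag = node[0]
    if tag == "lit":
        return [node[1]]
    if tag == "ref":
        return generate(rules, rules[node[1]])
    if tag == "seq":
        return generate(rules, node[1]) + generate(rules, node[2])
    if tag == "alt":
        return generate(rules, random.choice(node[1:]))
    raise ValueError(tag)

print(" ".join(generate(grammar, ("ref", "Sentence"))))
```

A second transformation, say one that builds parse trees, would be another function walking the same `grammar` data, with no preprocessing involved.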

[ Parent ]
Thanks (none / 1) (#200)
by joto on Fri Mar 19, 2004 at 04:41:22 PM EST

I think we both make sense again now. I really enjoyed this debate.

[ Parent ]
macros (none / 1) (#135)
by jacob on Thu Mar 18, 2004 at 04:20:04 PM EST

can do a lot more than functions, even the more-computationally-powerful infix functions. :-P

PLT Scheme macros are arbitrary source-code transformations performed at compile time, which is provably more expressive than any function. Consider that there are several complete OO systems, with different semantics and just as powerful as Java's or Smalltalk's, written entirely as macros (there's also at least one aspect-oriented system out there); that we've got a lex/yacc replacement as an embedded piece of syntax that generates a regular old function you can map or whatever; that you can implement the quasistrings I pointed to in another post purely as a macro; and that you get all of these things working together for free rather than as separate non-interoperable compilers. I want to program in a language that lets me do those things easily if need be; if you don't see that as compelling, I don't really know what will convince you.


--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Indeed, I don't see it as compelling. (none / 1) (#147)
by Smerdy on Thu Mar 18, 2004 at 07:48:33 PM EST

I believe that the set of useful features of the kinds you described is very small, and I expect each to be implemented only once. Therefore, I would prefer them to be implemented as part of base language implementations. This removes complexity from the language exposed to the average coder: one less rope to hang yourself with, and one less way for the people who write the libraries you use to trip you up. I think macros are a much more dangerous and confusing rope than the sum of all the features found in ML or Haskell, for example.

[ Parent ]
Meh (none / 1) (#151)
by jacob on Thu Mar 18, 2004 at 08:41:30 PM EST

I use macros all the time for exactly that reason. I'd say that 90% of the projects I write these days have a new macro or two somewhere (and of course they all use other people's macros extensively, because most of PLT's syntax is defined in terms of macros). Several have been nothing but giant macros (e.g., a cgi-let form that bound the values returned by a form submission to given names, but looped with an error message if the submission violated certain validation requirements). When adding a new language construct doesn't mean hacking on the dark and foreboding recesses of a compiler to make your own nonstandard, hard-to-distribute version, or building a new language from scratch, people find more and more reasons to do it.

In any event, the "rope to hang yourself with" argument is the same argument the Python people made at LL3, but I frankly think it's condescending, the sort of mindset that pushes you down the road to Java (not that I dislike Java particularly; it's just a good example of a language designed by people who've decided they know better than you). Maybe there's an ecological niche for languages that won't let you do things someone has decided you can't handle, but I don't want to program in one. Besides, the hypothetical horrors of incomprehensible, mystical, program-destroying macros running amok and turning everything into an unintelligible mess seem only to be conjured up by people who've never used them. Frankly, I find e.g. PLT Scheme's lex/yacc macros much nicer to work with than ML-Yacc (you get reasonable error messages! No "bizarre error blah blah on line 8000 of some file you've never seen before involving variables you've never heard of"!), and the macro facilities make life easier on the implementation's maintainers. For instance, there's nowhere in the PLT Scheme implementation where you'll get an error that doesn't refer to source syntax, even when the code that caused the error was introduced by a macro; that's still not true of SML/NJ, despite its much longer lineage, because it's just not easy to deal with generated code in SML/NJ. (It's ironic that Scheme would be better than ML at any part of being a metalanguage, but in the case of letting programmers develop seamless syntactic abstractions it's true.)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Static typing == high performance. (none / 1) (#87)
by tkatchev on Thu Mar 18, 2004 at 07:29:25 AM EST

It is true. The biggest performance hit in modern languages comes from trying to figure out the type at every memory access. Not only that, but the memory overhead incurred is also enormous.


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Latent typing? (none / 1) (#94)
by Tayssir John Gabbour on Thu Mar 18, 2004 at 07:51:50 AM EST

Languages exist that allow one to specify types as an optimization step.  Common lisp does this today: you are meant to code well algorithmically, and when the profiler turns up a hotspot, you give the compiler type information to speed it up.

Every so often I hear lispers interested in using type annotations for safety, in the sense static typers use the term.  There are certainly some lessons to be derived from the static typing camp, just as lisp was inspired by Simula/Smalltalk/etc for OOP.

[ Parent ]

Yes, but. (none / 1) (#103)
by tkatchev on Thu Mar 18, 2004 at 08:35:11 AM EST

The problem is that it is extremely difficult to make typing "optional" while preserving type safety. The best solution, IMO, is simply to do it the way ML does and infer types automatically.


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Sure, I'm fairly agnostic (none / 1) (#109)
by Tayssir John Gabbour on Thu Mar 18, 2004 at 09:15:13 AM EST

I feel I should write a disclaimer that I'm not out to flamewar, despite being a lisper. ;)  Lisp for me is a substrate on which to hang good ideas, but it certainly doesn't have the monopoly on them.  At the same time, I hope other languages adopt good ideas from lisp.

One theme in (common) lisp is that of flexibility.  For example, it has a quality OOP system, but people are not prodded to use it.  And many eschew functional style for imperative, with all these crazy iteration constructs.  I naturally code with map/filter/fold, but often people have a different view.

Therefore I believe the lisp world will have to find a clever way in which to absorb the lessons learned by good static languages.

Anarchy or multiparadigm: the choice of words to describe this reflects personal taste.  And I cannot imagine a world where people boringly agree.

[ Parent ]

what's lambda (none / 2) (#68)
by alias Mark on Thu Mar 18, 2004 at 12:46:58 AM EST

don't know the language, but you use it a few times without mention.

---
"Gentlemen, you can't fight in here. This is the War Room!"
Lambda (3.00 / 4) (#69)
by kesuari on Thu Mar 18, 2004 at 01:07:53 AM EST

Lambda is, essentially, a function without a name, so defining a function is like creating an anonymous function and then binding it to a variable. There's a formal calculus, the lambda calculus, from which the Lisps' lambda comes.
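The same idea shows up in Python, for instance: lambda builds the nameless function and an assignment binds it, which is essentially what an ordinary definition does (a quick sketch, not Scheme):

```python
# An anonymous function bound to a variable...
square = lambda x: x * x

# ...behaves the same as an ordinary named definition.
def square2(x):
    return x * x

print(square(5), square2(5))  # prints: 25 25
```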

[ Parent ]
Why the defun style syntax? (none / 1) (#70)
by mge on Thu Mar 18, 2004 at 01:20:35 AM EST

A style question: why (define (foo bar baz) ...) instead of (define foo (lambda (bar baz) ...))?

The former .. (none / 1) (#78)
by Peaker on Thu Mar 18, 2004 at 05:01:01 AM EST

.. allows sticking the function name into the function object.

[ Parent ]
Not true (none / 2) (#98)
by jacob on Thu Mar 18, 2004 at 08:04:17 AM EST

(define (fun args ...) body)

macro-expands in one step to (essentially)

(define fun (lambda (args ...) body))

They're completely equivalent. I just like the more compact style better.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Note I said "allows" (none / 1) (#199)
by Peaker on Fri Mar 19, 2004 at 02:59:30 PM EST

I do not know how a specific Scheme implementation expands it, but the compact definition *allows* sticking the name in, and many Lisps (and other languages with anonymous functions) do this.

[ Parent ]
put this in wikibooks too! (none / 2) (#75)
by The Shrubber on Thu Mar 18, 2004 at 03:18:50 AM EST

Perhaps you would consider donating your article to the Wikibooks project?  For example, the one on Scheme or the one on functional programming?

wrong urls (none / 2) (#76)
by The Shrubber on Thu Mar 18, 2004 at 03:20:33 AM EST

Functional programming and
Scheme

[ Parent ]
Just out of curiosity (none / 1) (#79)
by Richard Stallman on Thu Mar 18, 2004 at 05:09:37 AM EST

I know you said you didn't really want to get into it, but is there some reason for preferring Scheme over Common LISP? I'm not too clear on the differences, never having used Scheme.

There is no way of telling which is better, yet (none / 1) (#82)
by voblia on Thu Mar 18, 2004 at 06:19:51 AM EST

The main differences are:
Scheme has hygienic macros; CL has not.
The Scheme specification is about 50 pages; the ANSI spec of CL is 1500.
Scheme has one namespace; CL has separate namespaces for functions and variables.
Scheme treats its code as strings; CL treats code as a list.

Scheme is more academic than CL, and CL is more academic than C/C++, whatever that means ;)

Imho it's just another Holy War. So you can definitely google a few flamewars about each topic.
--
Soldier boy made of clay now an empty shell ...
[ Parent ]

I've never heard anyone claim the last one (none / 0) (#97)
by jacob on Thu Mar 18, 2004 at 08:01:39 AM EST

but all the others are certainly reasons.

And I'd just like to point out how funny it is that "Richard Stallman" is asking me this. :)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

l-1, l-2 (none / 1) (#102)
by PigleT on Thu Mar 18, 2004 at 08:33:10 AM EST

Scheme is a lisp-1. CL is a lisp-2. Compare:

CL:
(defvar foo "moo")
(defun foo () "quack")
and evaluate:
foo
(foo)

Scheme:
(define foo "moo")
(define (foo) "quack")
and evaluate:
foo
(foo)

Basically, in CL any one symbol has both a function and a variable nature, and you can distinguish between them with the ' and #' quoting forms. In Scheme, though, one symbol is either a function or static data.

And now for the really trivial difference: I prefer scheme, partly because of the above, partly because the standard (R5RS) core of the language is pleasantly tiny, and partly because IME CL-ers are a bunch of offence-{giving,taking} gits who prefer to play at Zen-like games "maybe the problem is what you're doing", rather than admit their language still has no simple database APIs and cross-platform support is rather limited, while it's the schemers who are helpful and bang out code to help solve newcomers' problems.

But I still seem to be hacking most in Ruby, this month... :)
~Tim -- We stood in the moonlight and the river flowed
[ Parent ]

Offensive gits (none / 1) (#122)
by Tayssir John Gabbour on Thu Mar 18, 2004 at 12:41:56 PM EST

I don't disagree with your point about offensive gits.  While I think it was just one person... well anyway..

I used to think it was pretty entertaining to go through that, actually.  The most interesting thing for me is to question myself, especially the truly foundational things.  There are people who did enjoy that, and perhaps you simply had a little more sanity than them.  So I miss it and find it a little bit less wild now.  It's more pleasant for most people.

[ Parent ]

mmmm...lists. (none / 0) (#224)
by voltairemodern on Mon Mar 22, 2004 at 11:46:18 AM EST

"Scheme treats its code as strings - CL treats code as a list." This is a reason to like clisp, not to hate it. It's just so damn useful in these languages.

[ Parent ]
Common Lisp sucks. (none / 1) (#89)
by tkatchev on Thu Mar 18, 2004 at 07:34:38 AM EST

Really, it's the C++ of its time, and needs to die as soon as possible.


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Yes there are some reasons... (none / 2) (#108)
by joto on Thu Mar 18, 2004 at 09:04:24 AM EST

And mostly they depend on taste. Do you like a clean, simple (sometimes too simplistic) language stemming from academia (scheme), or a more practical language unfortunately bloated from a misguided attempt at achieving backwards compatibility with older lisps (common lisp).

In the end, scheme isn't actually useful for anything, although some scheme implementations can be very practical. PLT scheme, and bigloo, are both very useful implementations. PLT is good for scripting (as Python, Perl, etc), whereas bigloo is fast, and can be used as a replacement for C.

In common lisp, you get speed (sbcl, cmucl and most commercial implementations are as fast as C (or bigloo)), a more complete standard, meaning it makes sense to switch implementation during development (clisp is small and light, and good for quick development), and practicality (the standard was written by people having enormous experience with lisp programming). What you don't get is something as simple and conceptually clean as scheme.

Unfortunately, there seems to be nothing that shares the best traits of both languages. Various attempts at doing so, such as EuLisp, have mostly failed at being useful enough to attract a critical mass of users. Hot new lisp dialects to watch out for are goo, and arc.

[ Parent ]

I've not looked (none / 1) (#81)
by gazbo on Thu Mar 18, 2004 at 06:19:12 AM EST

But most languages' compilers are written in the language they compile. So I would imagine that a Scheme compiler would be written in Scheme.

-----
Topless, revealing, nude pics and vids of Zora Suleman! Upskirt and down blouse! Cleavage!
Hardcore ZORA SULEMAN pics!

Er... (none / 2) (#83)
by skyknight on Thu Mar 18, 2004 at 07:16:59 AM EST

I am pretty sure that Perl's compiler/interpreter/whatever is written in C. I'm pretty sure that Java's compiler and JVM are not written in Java. I think we're going to need a pretty loose definition of "most".

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
JVM perhaps not (none / 1) (#90)
by gazbo on Thu Mar 18, 2004 at 07:35:13 AM EST

But I'm equally pretty sure that javac is written in Java. Have to see which one of us is prettier, as it were. As for interpreters, I would hope that they /weren't/ written in a non-compiled language, as the performance would be horrible (not to mention the potential for certain logical impossibilities). The difference between compilers and interpreters is somewhat important to bear in mind here.

-----
Topless, revealing, nude pics and vids of Zora Suleman! Upskirt and down blouse! Cleavage!
Hardcore ZORA SULEMAN pics!

[ Parent ]

Just to clarify (none / 1) (#91)
by gazbo on Thu Mar 18, 2004 at 07:38:15 AM EST

When I said "JVM perhaps not", what I meant was that it also falls under the same category as interpreters (really it is just a jazzed up bytecode interpreter), and so it wouldn't make sense to have it written in Java.

-----
Topless, revealing, nude pics and vids of Zora Suleman! Upskirt and down blouse! Cleavage!
Hardcore ZORA SULEMAN pics!

[ Parent ]

true (none / 1) (#143)
by pantagruel on Thu Mar 18, 2004 at 05:28:26 PM EST

those are after all half of the known languages in the world! I think as soon as you have a compiler for your language it will often seem reasonable to write a compiler for that language in that language, hence a lot of languages that have been around for a while use compilers written in that language. I think though that this division between languages that have compilers written in that language and languages that have compilers written in some other language will most naturally fall between languages that are academic or community developed, and languages that are proprietary.

[ Parent ]
An interesting exception to all of that IIRC... (none / 2) (#144)
by skyknight on Thu Mar 18, 2004 at 05:40:44 PM EST

is Pascal which, if I understand correctly, had its original compiler written entirely in Pascal. It was an incremental process. A bare-bones version of the language was first written in Pascal, and the code was hand-compiled, and then that initial incarnation of a primitive compiler bootstrapped the development of the full language. At least, that's what I remember some CS prof telling me years ago...

It's not much fun at the top. I envy the common people, their hearty meals and Bruce Springsteen and voting. --SIGNOR SPAGHETTI
[ Parent ]
I don't know about Sun's JVM... (none / 1) (#190)
by Vulcannis on Fri Mar 19, 2004 at 01:31:27 PM EST

...but Jikes is mostly written in Java. And it has very good performance to boot.

(That's an old link, unfortunately I can't seem to reach any of the more recent sites right now)

---
If it's not black and white, you're not looking close enough.
[ Parent ]

... I don't think so (none / 0) (#227)
by Peter Moore on Mon Mar 22, 2004 at 10:59:11 PM EST

AFAICT, jikes is entirely in C++. A quick browse of the CVS only shows .cpp. I would be very surprised if it has any java since it starts up so quickly.

[ Parent ]
Some (none / 1) (#84)
by bugmaster on Thu Mar 18, 2004 at 07:23:22 AM EST

I don't know about compilers, but writing a Scheme interpreter in Scheme is actually pretty easy, and a requirement to pass CS61A. I am actually not sure how a compiler for any kind of Lisp dialect could be written... Every time I try to imagine it, my mind boggles.

Maybe someone can write an article about this (how Scheme compilers work) ? I'd very much like to read an article like that.
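For what it's worth, the interpreter half really is small: a toy evaluator for a tiny subset (numbers, symbols, if, one-argument lambda, application) fits in a dozen lines. This is only a sketch, not CS61A's actual assignment:

```scheme
;; A toy evaluator for a tiny Scheme subset, written in Scheme --
;; just enough to show why self-interpretation is easy.
;; env is an association list of (symbol . value) pairs.
(define (tiny-eval expr env)
  (cond ((number? expr) expr)                       ; numbers are self-evaluating
        ((symbol? expr) (cdr (assq expr env)))      ; look variables up in env
        ((eq? (car expr) 'if)                       ; (if test then else)
         (if (tiny-eval (cadr expr) env)
             (tiny-eval (caddr expr) env)
             (tiny-eval (cadddr expr) env)))
        ((eq? (car expr) 'lambda)                   ; (lambda (x) body)
         (lambda (arg)
           (tiny-eval (caddr expr)
                      (cons (cons (caadr expr) arg) env))))
        (else                                       ; application: (f arg)
         ((tiny-eval (car expr) env)
          (tiny-eval (cadr expr) env)))))

(tiny-eval '((lambda (x) (if x 1 2)) 42) '())  ; => 1
```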
>|<*:=
[ Parent ]

A compiler can be written in anything. (none / 1) (#88)
by tkatchev on Thu Mar 18, 2004 at 07:33:48 AM EST

Anything that can read in text files and output binary can be used to write a compiler.

A compiler is nothing but a fancy text processor that expands macros and outputs machine code.


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

There is a difference between can and able. (none / 1) (#213)
by Doc Olczyk on Sat Mar 20, 2004 at 02:51:29 AM EST

There is a difference between *can write* and *being able* to write.

In theory most programming languages can be used to implement a compiler. You can use Fortran and COBOL to write compilers, but most people are not able to do so.

The first FORTRAN compiler took seven years to write (in assembler).

It is also possible to write a compiler in Perl or Python, but I am not going to use said compiler to compile anything over 100,000 lines.

[ Parent ]

No, no. (none / 1) (#215)
by tkatchev on Sat Mar 20, 2004 at 09:04:25 AM EST

A compiler is nothing but a text processor that outputs assembly code.

Thus, anything that is able to output a text file of assembly code can be used to write a compiler.

The efficiency of such a project (both in terms of machine time and programmer time) is a different matter.

But in reality, a compiler is a very trivial program that is no more difficult to write than, say, a Gaussian transform for a matrix.

(I think this "eliteness" aura that compiler writing has stems from incredibly complex, clumsy and stupid tools like yacc. Stop using yacc and all your problems will go away.)

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Not so trivial (none / 0) (#229)
by bugmaster on Tue Mar 23, 2004 at 05:37:01 AM EST

True, it's reasonably easy to take high-level code and output assembly. What's hard is making that assembly code actually work in a reasonably efficient manner. This requires both high-level optimizations (dead code elimination, subexpression combining, etc.) and low-level ones (aligning instructions to minimize pipeline stalls, etc.). That's what makes compilers quite a bit more difficult than "hello world".
>|<*:=
[ Parent ]
compilers (none / 1) (#140)
by jovlinger on Thu Mar 18, 2004 at 04:37:56 PM EST

It is a lot like writing a compiler for an OO language.

In scheme, the trick is how to compile a first-class function (call this a closure); one which has references to lexically scoped variable bindings, and which takes some arguments.

So the closure has two slots: one which points to some compiled code for the function, and one which points to a record (or chain of records) which points to all variables in scope at that point.

Compare this to how you would represent a class instance: two slots, for a pointer to a table of compiled method bodies, and one which points to a set of records for the fields.
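That two-slot representation can be sketched directly in Scheme itself (the names and layout here are purely illustrative, not any particular compiler's):

```scheme
;; A "closure" modeled as an explicit pair: compiled code + environment
;; record. The compiled body receives its environment as an explicit
;; first argument instead of capturing it implicitly.
(define (make-closure code env) (cons code env))
(define (apply-closure clo arg) ((car clo) (cdr clo) arg))

;; What a compiler might emit for (lambda (x) (+ x n)) with n in scope:
;; slot 0 of the environment vector holds n.
(define (adder-code env x) (+ x (vector-ref env 0)))

;; Building the closure where n = 3, then calling it:
(define add3 (make-closure adder-code (vector 3)))
(apply-closure add3 4)  ; => 7
```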

Scheme has other issues (like call-cc) which make it a bit of a pain to compile; that is mostly engineering, tho.

[ Parent ]

Engineering (none / 1) (#207)
by bugmaster on Fri Mar 19, 2004 at 07:25:23 PM EST

Yeah, well, space colonization is also "mostly engineering", and we still can't do it :-)

Basically, I understand how to make a simple Scheme compiler; what I don't know is how to actually make it efficient -- what with all the recursion and the bindings and stuff. I know that existing compilers claim to produce binaries that are as fast as, if not faster than, pure C -- and I have no idea how they do it... Guess I should do more research.
>|<*:=
[ Parent ]

Scheme is fundamentally inefficient. (none / 1) (#216)
by tkatchev on Sat Mar 20, 2004 at 09:06:22 AM EST

Mostly due to the dynamic typing. If you can figure out a way to infer the types in a Scheme program statically, then I think all your problems will magically solve themselves.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

sounds fun (none / 1) (#141)
by jacob on Thu Mar 18, 2004 at 04:42:09 PM EST

I'll have to do that at some point. I agree, I had a lot of trouble imagining how a Scheme compiler might work before I learned how to do it. Scheme just seems too mushy to have any association with something as concrete as assembly. :)

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
lisp compiler (none / 1) (#175)
by OneEyedApe on Fri Mar 19, 2004 at 10:53:50 AM EST

CMUCL (http://www.cons.org/cmucl/) is written mostly in Common Lisp. A nice implementation, too.

[ Parent ]
It's in SICP (none / 1) (#201)
by Piquan on Fri Mar 19, 2004 at 05:02:18 PM EST

You know that SICP book that 61A only covers about a third of? Check out chapter 5. It's all about how to compile Scheme code.

It uses a simulated register machine with a Lisp-like assembly syntax. But it's designed to be fairly straightforward to adapt to real assembly systems.

[ Parent ]

PLT Scheme (none / 0) (#92)
by jacob on Thu Mar 18, 2004 at 07:47:16 AM EST

is mostly written in Scheme, with some C++ for windowing and bootstrapping.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

QED what? (none / 1) (#105)
by tkatchev on Thu Mar 18, 2004 at 08:40:59 AM EST

The OS is written in C or C++, so you're forced to use C++ if you want your code to interface with the operating system.

In Windows, if I recall correctly, you cannot bypass their C++ libraries even if you code in assembly.

If Windows were written in Scheme, then all your C and C++ compilers would have been forced to use a Scheme backend to interface with such an OS. (Just wait till Longhorn comes out, and if Microsoft keeps their promises -- all C compilers will be forced to output IL, or whatever it is called.)


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Yes, I do. (none / 2) (#120)
by tkatchev on Thu Mar 18, 2004 at 11:57:47 AM EST

Show me the code, d00d.

There are NO syscalls under Windows. Even assembly programs have to be routed through kernel32.dll.

(Which is usually no big deal, since the assembly calling convention is roughly compatible with the DLL stdcall. It could be a major pain if you plan on using a different calling convention and/or optimising function calls.)

Secondly, you are a clown and a know-nothing nincompoop. Have a nice day and be sure not to fail out of undergrad CS.


   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

I sincerely doubt (none / 1) (#150)
by aphrael on Thu Mar 18, 2004 at 08:39:03 PM EST

that there is no assembly-level code in the interior of kernel32.dll.

[ Parent ]
Of course there is. (none / 1) (#162)
by tkatchev on Fri Mar 19, 2004 at 05:23:54 AM EST

The problem is that Microsoft does not expose it to the outside world.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Dumbass. (none / 1) (#163)
by tkatchev on Fri Mar 19, 2004 at 05:24:27 AM EST

Come back when you finish your intro to CS for engineers, dumbass.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

plonk (none / 1) (#217)
by tkatchev on Sat Mar 20, 2004 at 09:06:51 AM EST

I think that is what your kind says in the situation. Good bye.

   -- Signed, Lev Andropoff, cosmonaut.
[ Parent ]

Your favourite language is SCHEME?!?!?! (1.00 / 21) (#101)
by James A C Joyce on Thu Mar 18, 2004 at 08:21:42 AM EST

Are you some kind of FAG?!?!?!

I bought this account on eBay

I like fancy languages... (3.00 / 7) (#121)
by jmzero on Thu Mar 18, 2004 at 12:01:03 PM EST

And then I remember what it is I actually spend my time doing.  No matter what the language, I'm still going to have to talk to Bob to find out if the approval has to go to Mary, and whether it's OK to have a different address in this box than in that box.  And I can't pass these decisions on to the users - not really because they're lacking technical ability, but because they don't understand the system and how it works.  

10 years ago, I spent a lot of time doing stuff like validating input or parsing this or fiddling with that - stuff that would maybe have been easier in "some other" language.  Now, my job is to formalize business rules and processes, and integrate them into a coherent system.  My hobby job is writing games, and similarly I find the work I do is at a higher and higher level - almost all outside of compiled code.  

It's difficult to express this - I guess what I'm saying is that programming, whether you're writing a game or a business app, should now largely be a matter of "content creation" rather than "engine creation".  A fancy base language like Scheme is great, but should be largely irrelevant for most programmers because "the engine" isn't where you should be spending your time - it's a sign you're doing something wrong.  

I guess my point is that people spend far too much time talking about programming tactics (which should be something that you have down), and too little time talking about programming strategy (design), which is what most programmers flub.  Too often, these two are confused, and language becomes part of design rather than tactics - and this is a mistake.
.
"Let's not stir that bag of worms." - my lovely wife

Speak for yourself (none / 1) (#123)
by Smerdy on Thu Mar 18, 2004 at 01:15:09 PM EST

You're only talking about limited segments of software development. There are plenty of people developing fundamentally new software, myself included. Regardless of the rarity of being in this position, we can still make objective statements about what tools and methods such people should use.

I'd also like to mention that much of what you think of as "design" for projects implemented in mainstream languages can be handled much more efficiently when folded into the process of programming in a language like ML, where you have a rich type system allowing you to express relationships between parts of your code in a machine-checkable way.

[ Parent ]

Meh. (none / 1) (#131)
by jmzero on Thu Mar 18, 2004 at 04:00:32 PM EST

You're only talking about limited segments of software development. There are plenty of people developing fundamentally new software, myself included

I've actually done a bunch of different kinds of things over my career - from embedded work to boring business apps to machine learning to games.  While individual projects have been more "engine" projects, on balance most of the time is spent doing "content".  I think the idea of maximizing "content" and minimizing "engine" is valid across most software projects.  In the remainder, often the content is simply going to come from someone else.  This is just one strategy idea that has helped me, and I didn't mean to suggest it's a tremendous insight.

And, to be clear, I didn't mean to say that different languages couldn't have important advantages or that they are not worthwhile.  I think it's important to choose an appropriate language and an appropriate development environment for a task.  I just think that a lot of people have notions that language dictates design, or that it supplies strategy.  

For example, a lot of people program in C++ and are convinced that they are programming in a superior object-oriented sort of way.  Often, they move similar functions into objects, and sometimes will find some use for inheritance.  Meanwhile, their actual strategy is the same as it would have been under C - they're just calling their functions in a different way.

I'd also like to mention that much of what you think of as "design" for projects implemented in mainstream languages can be handled much more efficiently when folded into the process of programming...

Strategy still comes before tactics.  If a particular language means you can implement a solid strategy in an efficient way, that's great - and that should be a factor in picking that language.

Regardless of this, good design exists outside of the scope of language.

I'd also be interested in what "fundamentally new software" you're making.  I'm guessing we would define this phrase somewhat differently.
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Answer (none / 1) (#132)
by Smerdy on Thu Mar 18, 2004 at 04:04:58 PM EST

I don't agree that design is outside the scope of programming languages. You can get a lot of mileage through conceptualizing a project within the ML module system. If you have never used ML, Haskell, or a relative, then you probably have no idea how this could be. If so, I would recommend reading an ML tutorial.

The "fundamentally new software" that I am working on is program verification stuff: programs that generate proofs that other programs behave in certain desired ways.

[ Parent ]

Huh. (none / 1) (#136)
by jmzero on Thu Mar 18, 2004 at 04:22:22 PM EST

I tried CAML a while ago, but I'm no expert and certainly haven't kept up.  I don't remember how the module system worked - but it's true that I don't remember a lot of things these days.  If the ML module system helps you conceptualize a good design, that sounds like a very good thing.

"fundamentally new software" that I am working on is program verification stuff:

Yeah.  That's about what I figured.  You should make sure your program halts when it runs on itself, or if you rename it to Quake3.exe.

Happy trails.
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

Let me guess... (none / 1) (#124)
by Gluke on Thu Mar 18, 2004 at 01:33:09 PM EST

You're one of those folks who never worked on their own car. Because, *gasp*, you're actually paying someone to work on it, right? You're just a driver, then.</sarcasm>

[ Parent ]
The syntax of your markup is incorrect [n/t] (none / 1) (#125)
by melia on Thu Mar 18, 2004 at 01:49:22 PM EST


Disclaimer: All of the above is probably wrong
[ Parent ]
Wuh? (none / 1) (#127)
by jmzero on Thu Mar 18, 2004 at 03:25:20 PM EST

You're one of those folks who never worked on their own car.

In this case, I built the car.

you're actually paying someone to work on it

If the car breaks down, I'd still be the one to fix it.  However, if I did good design, hopefully nobody will have to look under the hood for a while.

Good design is about making it so you're able to put in a radio without knowing anything about the transmission.  You still might take it to the shop to install (I am the shop in this case), but the shop isn't even going to have to pop the hood.

It isn't a very good analogy really, but you're the one who brought it up.
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

OT: regarding the analogy (none / 1) (#139)
by Gluke on Thu Mar 18, 2004 at 04:36:43 PM EST

I brought up the analogy because it seemed to me that you were referring to "tactics" and "strategy" as if they ultimately belong on different levels of abstraction away from the extensional world of the machine.

The way I see it, both can be either on the same level, or on different levels, or on all levels at the same time; they are not mutually exclusive.

You mention "spending your time in the engine" as if it's something that cannot belong on the same level of abstraction in the project as "design" and I disagreed, albeit in a non-explicit way. Sorry about that. It's already way OT, anyway.



[ Parent ]
I suppose I wasn't clear (none / 1) (#142)
by jmzero on Thu Mar 18, 2004 at 04:59:50 PM EST

I didn't mean to relate engine/content and strategy/tactics.  The balance of "engine" and "content", neither of which I bothered to define, is a strategy (design) issue - and I only brought it up as an example of a such an issue.  

My point, I guess, was that language is a tactics thing rather than a strategy thing and that it gets inordinate attention.  It appears from other comments that some people are more tied to language in conceptualizing their designs than I am.  Perhaps that makes me wrong and I'm OK with that.  

In any case, have a good day.
.
"Let's not stir that bag of worms." - my lovely wife
[ Parent ]

I pretty much agree (none / 1) (#145)
by Tayssir John Gabbour on Thu Mar 18, 2004 at 05:56:40 PM EST

Here's something ontopic.  At a recent lisp meeting, I made a similar point.  I mentioned that while there's something addictive about lisp, I definitely feel it is dangerous to be typecast as a "computer person."  I do not want to be working on pure computer things my entire life.  There is so much out there to model, which requires an understanding of the outside world.  So there is the fear of becoming a "lifer":  Qualified for nothing but glorified machine plumbing.  It keeps you from being a complete person.

Now that said, I disagree with some specifics of your point.  What is wrong with passing time talking about something good?  It entertains me to watch this odd academic war between MLers and lispniks, which is often played out on usenet.  The only thing I care about in computing is beautiful formalism, which far outstrips the inelegant notation from the math world, and that happens to be lisp for me.  (I believe the intro to Stoy's denotational semantics book explains this well.)  I accept that I am somewhat overfocussed from a computer tinkerer's perspective, but the computer world does not mostly contain what I love.

[ Parent ]

"Academic"? (none / 1) (#149)
by Smerdy on Thu Mar 18, 2004 at 07:56:12 PM EST

Why do you qualify "war" with "academic"? Do you think that the participants are somehow not using their respective languages for "serious work"?

[ Parent ]
Oh, stop that (none / 1) (#161)
by Tayssir John Gabbour on Fri Mar 19, 2004 at 04:51:27 AM EST

I would hope that you are all using your languages for "serious fun."

Do you personally think that the academic world has produced nothing of enormous value, or believe that academic debates have no serious value?  Because I wonder why you think I've used the word "academic" in its negative connotation.  Business debates, which I suppose would be the opposite notion, would likely have a far worse meaning.

[ Parent ]

What I meant (none / 1) (#173)
by Smerdy on Fri Mar 19, 2004 at 10:46:44 AM EST

I think the rules of discourse are such that you use a word to describe something only if the original something did not adequately express your point of view. I am interested in what makes an "academic war" different from a "war."

Your use of "serious fun" seems to validate my view of your original post. It is absolutely not the case that Scheme and ML are any less used than mainstream languages for about any kind of program there is, adjusting for the number of people who know them well compared to how many people know the others. (And, no, we don't create this wide range of programs "just for fun" any more than the authors of C++ counterparts do.)

[ Parent ]

Ahh, now it's clear (none / 1) (#183)
by Tayssir John Gabbour on Fri Mar 19, 2004 at 12:00:38 PM EST

Smerdy wrote:
I think the rules of discourse are such that you use a word to describe something only if the original something did not adequately express your point of view. I am interested in what makes an "academic war" different from a "war."

If you wish to take a formal view of language (I am more an English user than an academic), I would say that a "war" connotes something rather physically violent.  I might be claiming that when a lisper goes to an ML biker bar, she had better be armed.

With those engaged in an "academic war," the scenario would be the same, except the lisper need only be armed with wit.

Smerdy wrote:
Your use of "serious fun" seems to validate my view of your original post.

I am sure you can validate any view you choose.

Smerdy wrote:
(And, no, we don't create this wide range of programs "just for fun" any more than the authors of C++ counterparts do.)

Then you have my condolences.  I sincerely hope you find work in a better line of fun.

After all, a dead Greek explained, "Man is most nearly himself when he achieves the seriousness of a child at play."

[ Parent ]

Misunderstanding (none / 1) (#191)
by Smerdy on Fri Mar 19, 2004 at 01:40:28 PM EST

And man is most nearly alive when he spends some time worrying about practical matters that help him survive. "Just for fun" does not imply lack of fun, but rather that the utility of software is also an important criterion.

[ Parent ]
Incidentally (none / 1) (#188)
by Tayssir John Gabbour on Fri Mar 19, 2004 at 12:34:02 PM EST

If you take offense from the viewpoint that I claimed the discussion below was "pointless," I sincerely did not have that in mind.  I just think calling a disagreement "war," especially in these times, is just as exaggerated as someone who calls a person who copies a number a "pirate."

Really, I was enlightened and I'm glad you began the thread.  While I did perceive humor in how frequently I see MLers and lispniks argue, that is second-order and does not diminish the educational value of such confrontations.  Mainly I was entreating the person I responded to, to take it all with a dose of humor, because there is nothing wrong with discussing the finer points of what you wish.

I reiterate that I had no intention of trivializing any argumentation that occurred within this article, and I ask no one take that message from my post.

[ Parent ]

Actually (3.00 / 4) (#153)
by JayGarner on Thu Mar 18, 2004 at 09:27:07 PM EST

Ever since I started I noticed a kind of 'brain rot' programmers suffer from where they get so tied in to the language they are using they can't think any other way. So when a new language or technology comes along, instead of thinking of how to really use it, they focus on 'how do I make it act like my other programming language'.

Many program languages are similar enough you can get away with that, kind of, but fancy languages are nice for getting you to think in a different way. So, kind of nice for the brain: broccoli for your brain.

[ Parent ]

program languages -> programming languages (none / 1) (#154)
by JayGarner on Thu Mar 18, 2004 at 09:28:18 PM EST

Program language is: 'Tonight Stella is played by Theresa Jones'.

[ Parent ]
Bad experince trying to use DrScheme (none / 0) (#158)
by MonkeyMan on Fri Mar 19, 2004 at 12:13:19 AM EST

I have an application where I would like to define pure n-cons structures. A normal cons structure is a 2-cons structure. A wider structure such as a 3-cons one has three pointers per node rather than 2. A "pure" n-cons structure (for some fixed value of n, say m) has only m-cons structures as the values of the pointers of an m-cons structure (i.e. no atoms, integers, or other types). This is kind of cool because the set of all pure m-conses forms a non-distributive lattice bounded by a 0 (in which all of the pointers of 0 point to the same 0) and a 1 (in which all of the pointers of 1 point to copies of 1). And it turns out that the lattice has lots of atoms (elements with nothing between them and 0) and lots of increasing sequences (which can be used to define integers), so external definitions of atom and integers are not needed.
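For what it's worth, the 0 element of that lattice can be built directly in Scheme as a self-referential structure (a sketch using a 3-slot vector for a 3-cons node; mutation is needed to tie the knot):

```scheme
;; The 0 of the pure 3-cons lattice: a node whose three pointers
;; all point back to the node itself.
(define zero (vector #f #f #f))
(vector-set! zero 0 zero)
(vector-set! zero 1 zero)
(vector-set! zero 2 zero)

;; Every slot of 0 is 0 itself:
(eq? (vector-ref zero 1) zero)  ; => #t
```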

I like Lisp and Scheme. Years ago I worked on a Lisp Machine, and editing, testing, and finding out the definitions and requirements of functions was very easy.

So, I downloaded DrScheme v206p1. Then I tried to define both a 3-cons structure and a generic m-cons structure, and maybe write some macros so I could use the reader to suck in a text file containing macro definitions of say 5 named explicit 3-cons structures.

However the help seems to be next to useless for someone who does not know the required keywords (e.g. is it "def-" or "define-"). And the editor has lousy access to the function definitions (on the LM this was Meta-Point in Emacs). For example, some of the documentation accessible from the help button says something about displaying things as vectors which I think I want to use but I have found no way of searching the documentation for information about vectors.

If some kind person would like to tell me how the active documentation system is easier to use than I found it, and how to set up structural definitions and define macros so that I can use the reader to read in a file of say 5 particular named 3-cons elements, I would greatly appreciate it.

And finally, is there a good XSLT package for Scheme? I searched but wasn't able to find one. I would like to spit out results as XML and then use an XSLT homomorphism to map my XML to HTML for display. I don't want to write extra Scheme code in order to emulate a simple transformation.

"vectors" yields no documents (none / 0) (#164)
by jacob on Fri Mar 19, 2004 at 07:56:15 AM EST

but "vector" yields a ton of them. Be warned that there is a lot of documentation in PLT Scheme and some of it is irrelevant to you -- the "beginner", "intermediate", and "advanced" documentation are for use with How to Design Programs and for your purposes the R5RS and MzScheme Language Manual docs will probably suit you better. (Though skimming HtDP will probably answer your questions about how to implement n-conses too.)

When you're just starting out, it's better to browse through the manual by chapter rather than using the search feature. Open Help Desk, click "Manuals" under the "Software" heading, and then take a look at any of the first four manuals: the tour gives you an overview of some of DrScheme's basic features, Teach Yourself Scheme in Fixnum Days gives an introduction to Scheme, R5RS gives you standard Scheme in reference form, and PLT MzScheme: Language Manual tells you all you could ever want to know about DrScheme's substantial extensions to R5RS (except for the GUI stuff, which is in the MrEd language manual, and library documentation, which is in various manuals under the "Libraries" heading below).

It's unfortunate that the search button isn't as powerful as, say, Google's, and I get frustrated by that myself sometimes, but the help is all accessible in other ways. I think you'll find the help system is actually very good compared to just about any other programming language distribution.

By the way, you might want to look up "define-struct" for your n-cons needs.
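To give a flavor of it, here's a quick sketch (the struct and field names are made up for illustration; define-struct also generates a predicate and mutators):

```scheme
;; define-struct generates make-3-cons, 3-cons?, and one accessor
;; per field automatically
(define-struct 3-cons (first second third))

(define c (make-3-cons 'a 'b 'c))
(3-cons-second c)  ; => b
```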

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Scheme is not for the masses (none / 3) (#165)
by grumbel on Fri Mar 19, 2004 at 08:10:07 AM EST

While I agree that Scheme is all nice and cool and fun to work with, I still find it rather impractical for real work, for a number of reasons:
  • Scheme isn't very well standardized. Sure, there is R5RS, but that doesn't go very far: no package system, no libraries; it just specifies the core language, so for any real work one has to get unportable. Even the simplest example can become unportable across Scheme implementations:

    (define (range low high)
      (cond
       [(> low high) null]
       [else (cons low (range (+ low 1) high))]))

    My Scheme implementation of choice doesn't like the []-brackets; it only accepts (). Neither does it like null; instead it wants '(), while some other implementations are also happy with (). And this example didn't even start to use functions that are not part of the core library, so for any real work it gets far worse.

  • Recursion is ugly. Well, sure, for a lot of problems recursion is really nice, but for counting from 1 to 10 it sure is not. Sure, I can write myself a nifty macro that gives me a standard for construct, or use one of the loop or do things that some Schemes provide, but then again I get unportable; R5RS doesn't provide iterative constructs. Besides that, recursion as taught in beginner books doesn't work that well. Again, see the simple example above:

    guile> (range 1 1000)
    <unnamed port>:53:1: In procedure cond in expression (range 1 1000):
    <unnamed port>:53:1: Stack overflow

    So let's rewrite it:

    (define (range low high)
     (define (myrange ret low high)
      (cond
       ((> low high) (reverse ret))
       (else
        (myrange (cons low ret) (+ low 1) high))))
     (myrange '() low high))

    Ok, this one works and doesn't stack overflow. But now I used a local helper function, which I think isn't very portable either. Correct me if I am wrong, but I have seen all kinds of constructs for local functions, so I am not sure how a standard-conforming implementation has to handle them.

  • My biggest complaint, however, is the lack of structured data types. Sure, lists and vectors are all nifty and good, but there is nothing similar to a simple C-like struct. With macros one can patch up something similar, but it never gets the same, so one ends up with overly verbose names in Scheme programs most of the time, see:

    (define-struct sometypename ...)
    (define foo (make-sometypename ...))
    (set-sometypename-someelementname! foo 5)

    While in other languages one just has to type:

    (set! foo.someelementname 5)

    This lack of structured datatypes makes a lot of issues much more verbose than necessary. Especially with OOP it really gets ugly. Again, one can try his luck with a bunch of macros, lambdas and procedure-with-setter (unportable again?) and fix up something like:

    (set! (foo 'someelementname) 5)

    But that wouldn't win a prize in a beauty contest (think of structs containing structs), and anyway, again it isn't standard but a homebrew construct, so every library will use something else. A related problem exists with the vector data type, which again is overly verbose:

    (vector-set! data 5 2) (vector-ref data 5)

    Why so complicated, and not instead:

    (set! (data 5) 2) (data 5)

    Which again can be built up relatively easily with:

    (define (create-vector . lst)
      (let ((data (apply vector lst)))
        (make-procedure-with-setter
         (lambda args
           (cond ((null? args) data)
                 ((= (length args) 1)
                  (vector-ref data (car args)))))
         (lambda (i val)
           (vector-set! data i val)))))

  • Last but not least, there are also the parentheses. People may claim what they like, but they are a problem: even after some years of playing around with Scheme, I still often get them wrong, yes, even with an editor with parenthesis highlighting and such. The fact that some Schemes allow [] instead of () alone shows that they are a real problem. One good thing about them is that they allow an extremely powerful macro system, but giving up the readability of the language itself for macros is probably a bit too much.

It pretty much boils down to the fact that I love Scheme probably as much as I hate it. It provides a whole lot of really interesting stuff, but on the other side it provides equally many things that make it impractical or at least ugly. Scheme is a language that is really worth a look, but I really wouldn't recommend it for any real work, since it's simply too weird for most people, and even after years of toying around with it I am still far from feeling comfortable in it.



Mostly agreed (none / 1) (#166)
by jacob on Fri Mar 19, 2004 at 09:16:54 AM EST

I totally disagree about recursion being ugly -- recursion is just a different way of going about things. I also think that, as with any language with weird concrete syntax, once you get into it the parentheses or whatever just kinda fall away and it's no longer a big deal. I find explicit grouping of everything basically elegant these days. There's none of this "I don't remember whether || or ++ or -> has the higher precedence, so I have no idea what this line does" stuff; every grouping is mandatory and there's no possible ambiguity.

That said, your other criticisms are definitely valid of Scheme. You'll note, though, that I was talking about PLT Scheme in particular, not Scheme in general, and there was a reason for that :) The various Schemes are fractured enough that I think it's valid to consider, say, Bigloo and Guile to be completely different languages that share a common base, like C and C++.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

recursion doesn't work for all data types (none / 0) (#177)
by grumbel on Fri Mar 19, 2004 at 11:03:56 AM EST

Well, yes, recursion is just a different way of going about things; the problem is that it's Scheme's only way to go about things, and for a lot of things/types it's not the most convenient one. How, for example, do you iterate over a vector?

(let loop ((i 0))
  (if (< i (vector-length vec))
      (begin
        (display (vector-ref vec i))
        (loop (+ i 1)))))

vs.

(for (i 0 (vector-length vec))
  (display (vector-ref vec i)))

The second one is a lot clearer, since the first one is just a kludge to get the counter going from start to end, which is not something you want to do manually very often. Scheme fails to provide a way to handle this situation out of the box. It's just a simple convenience function/macro missing here, but it kind of hurts badly in the long run. Ok, in the long run it might be even better to have a generic for-each that can traverse all kinds of sequences.

Recursion is nice if you want to go over a recursive type, but for a vector that's not possible in a nice way.

About the parentheses, no, I don't think one ever gets used to them. I have tried for maybe four years now and am still not comfortable with them. The reason is rather simple: while in C you have different kinds of parentheses and delimiters for blocks, for function calls and for variable definitions, in Scheme you have only one kind of parenthesis, so it's much easier to mix them up, i.e. you remove a few lines here and copy a few lines there, and soon your code 'floats' into the block of the variable definitions without your noticing it. Automatic indentation helps a bit, and so does parenthesis matching in an editor, but I still ran into situations like this quite frequently. The lack of static type checking makes it even worse, since some syntax errors are then only detectable at runtime, not at compile/parse time, i.e. is (foo 5) a function call, or do I define a variable foo with value 5? Errors like this can happen easily when one restructures a let-construct a bit.

[ Parent ]

recursion (none / 0) (#179)
by jacob on Fri Mar 19, 2004 at 11:12:48 AM EST

Yes, I agree, you sometimes want loops more expressive than recursion. In fact, I wrote a whole section about that in the article to which this comment is attached. :)

As for parens, I think we just have to disagree here. I like explicit parens much more than I like C's syntax (or ML's, for that matter, though not Haskell's) for everything except arithmetic.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Loops more expressive than recursion? (none / 0) (#180)
by Smerdy on Fri Mar 19, 2004 at 11:42:04 AM EST

Are you talking about something not possible with fold functions?

[ Parent ]
i meant (none / 0) (#181)
by jacob on Fri Mar 19, 2004 at 11:46:35 AM EST

'expressive' in the colloquial rather than the technical sense - simply that you convey more information to human readers with (map f x) or with while (x = getchar()) { ... } than with the equivalent recursions written out explicitly. (Of course it's the other way around if you mean 'expressive' technically.)
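To make the comparison concrete, here is the one-liner next to the recursion it abbreviates (a portable R5RS sketch; map-add1 is a made-up name):

```scheme
;; the concise version:
(map (lambda (n) (+ n 1)) '(1 2 3))  ; => (2 3 4)

;; the same loop written out as explicit structural recursion:
(define (map-add1 lst)
  (if (null? lst)
      '()
      (cons (+ (car lst) 1) (map-add1 (cdr lst)))))
(map-add1 '(1 2 3))  ; => (2 3 4)
```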

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
OK... (none / 0) (#182)
by Smerdy on Fri Mar 19, 2004 at 12:00:13 PM EST

I still don't understand your post that I replied to. For instance, the "problem" given with recursion on vectors is handled in ML with standard functions for iterating, mapping, folding, etc., over vectors.

[ Parent ]
right (none / 0) (#186)
by jacob on Fri Mar 19, 2004 at 12:10:59 PM EST

that's how the problem is handled in Scheme too. You build those out of recursion in Scheme, but then you use them as regular old loops.
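For instance, a vector-for-each only has to be written with recursion once; afterwards it reads like an ordinary loop everywhere it's used (an R5RS sketch; the name is my own here, though PLT's libraries offer similar helpers):

```scheme
;; traverse a vector by counting an index up with a named let
(define (vector-for-each f vec)
  (let loop ((i 0))
    (if (< i (vector-length vec))
        (begin
          (f (vector-ref vec i))
          (loop (+ i 1))))))

(vector-for-each display (vector 1 2 3))  ; prints 123
```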

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Scheme is a beautiful language (none / 1) (#167)
by cpghost on Fri Mar 19, 2004 at 09:58:45 AM EST

My biggest complaint however is the lack of structured data-types

That's simply not true. How about this?

guile> (define some-employee
         '((name xyzzy)
           (phone 555-2323)
           (salary 12345.11)))

guile> some-employee
((name xyzzy) (phone 555-2323) (salary 12345.11))

Now use your favorite accessors to extract data.
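For instance, with nothing fancier than R5RS assoc (repeating the definition so the snippet stands alone; note that 555-2323 is read as a symbol, not a number):

```scheme
(define some-employee
  '((name xyzzy)
    (phone 555-2323)
    (salary 12345.11)))

;; assoc finds the matching sublist; cadr pulls out the value
(cadr (assoc 'phone some-employee))   ; => 555-2323
(cadr (assoc 'salary some-employee))  ; => 12345.11
```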

You can also implement recursive data structures in Scheme. It's very expressive!


cpghost at Cordula's Web
[ Parent ]
Self-built stuff doesn't replace a standard library (none / 1) (#171)
by grumbel on Fri Mar 19, 2004 at 10:38:43 AM EST

The problem is not so much the lack of a structured type; hell, you can build thousands of different ones if you like. The problem is that there is no standard one, and especially no standard shortcut syntax to access it, i.e. you can't do:

(set! some-employee.name 5)

but instead you have to do:

(assoc-set! some-employee 'name 5)

Which is a lot less readable, especially when you go to nested datatypes, mix them with vectors and such, and soon:

(some-employee.name.lastname 0)

becomes:

(vector-ref (assoc-ref (assoc-ref some-employee 'name) 'lastname) 0)

And anyway, the problem is not that you can't build something in Scheme, but that you have to build pretty much everything yourself, 'cause there is no standard library that comes with all this stuff.

[ Parent ]

assuming you want it to be portable across (none / 0) (#172)
by jacob on Fri Mar 19, 2004 at 10:40:54 AM EST

Scheme implementations, that is.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
Your range function. (none / 0) (#206)
by Estanislao Martínez on Fri Mar 19, 2004 at 07:25:05 PM EST

You incur a performance hit with (reverse ret). Instead, build the list backwards, from high to low, like the version here.

--em
[ Parent ]

I like scheme, but not PLT scheme. (none / 1) (#168)
by cpghost on Fri Mar 19, 2004 at 10:20:31 AM EST

Scheme (as defined in R5RS) is a beautiful language. Studying it using the Wizard Book is not only enlightening, it's also a lot of fun.

What I really dislike are some implementations of the language, especially the so-called "extensions" to the language. Square brackets in PLT are IMHO one of the most ugly examples of gratuitous syntactic sugar. A Scheme program with square brackets simply doesn't look right.

Other implementations are better than that, though they also introduce a lot of idiosyncrasies and idioms that prevent portability. Even guile (which I like a lot) is not innocent w.r.t. creeping featurism.

What I'd like to see in a revised Scheme report is the standard ability to call functions in dynamic libraries: functions with C (and C++) linkage, just like Python.

I would also like to have a repository of Scheme modules, just like Perl's CPAN. Want to access your OpenLDAP server? Download an interface module from your nearest CSAN server, and you're set. Want to interface with Qt? Get that module, and voila, your Scheme program can access a nice GUI framework.

If the Scheme standard specified such an external interface, and if all Scheme runtimes/interpreters/compilers adhered to it, a Scheme program would not only be portable across all Scheme implementations, it would also be able to use the gazillions of external libraries on Unix, Windows and other OSes.

It's relatively easy to whip up a Scheme interpreter in ISO C[++] that is R5RS compliant. The problem here is not the coding, it's the lack of standardization in this very important area. It's irrelevant whether PLT or Guile or others provide their own way to access our favorite .so or .dll...


cpghost at Cordula's Web
hm (none / 2) (#170)
by jacob on Fri Mar 19, 2004 at 10:38:29 AM EST

What I really dislike are some implementations of the language, especially the so-called "extensions" to the language.

You can learn with R5RS Scheme, but you can't write useful programs in it. You need extensions to get real work done. Nobody -- not even the R5RS authors -- seriously disagrees on this point.

Square brackets in PLT are IMHO one of the most ugly examples of gratuitous syntactic sugar. A scheme program with square brackets simply doesn't look right.

Meh. Suit yourself.

What I'd like to see in a revised Scheme report is the standard ability to call functions in dynamic libraries: functions with C (and C++) linkage, just like Python.

There is an R6RS committee and the process is happening, but I really doubt that'll be part of the standard. There are so many different Schemes that do this in different ways, and the differences in FFIs come from such fundamental implementation differences, that I suspect it'd be impossible to come up with a single standard everybody can agree on. Python gets around this just because there has been essentially one implementation of it ever, and whatever that implementation happens to do becomes the language definition; if there were a standard and 20 different non-coordinating implementations, the story would be different.

I would also like to have a repository of scheme modules, just like perl's CPAN. Want to access your OpenLDAP server? Download an interface module from your nearest CSAN server, and you're set. Want to interface with Qt? Get that module, and voila, your scheme program can access a nice GUI framework.

Heh. Funny you should mention that. Check back with PLT Scheme when version 300, the next major release, comes out.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

SRFIs go a long way (none / 0) (#238)
by voodoo1man on Thu Mar 25, 2004 at 08:23:14 PM EST

In terms of implementing extended functionality, that is. However, I have to disagree with your assertion about FFIs. Python and Perl are good examples of languages with well-specified APIs, but that is because they are single-implementation languages with single-language APIs (well, Python supports C++ really well, so I guess you can say it has one and a half). I could go on about how you can't really interface with Pascal and Fortran (in practice you can, since the GNU Compiler Collection has a pretty standard function-calling convention), but it's not necessary when there's Jython: it may have a really nice interface to Java, but there's nothing for C++, and while you can wrap some glue around Java's FFI facilities, the results will likely differ in functionality, performance and consistency (bugs) from the Python FFI.

Consider also that there are quite a few Scheme implementations designed for embedded use and robotics; they really don't need or want an FFI.

[ Parent ]

Currying (none / 1) (#204)
by azul on Fri Mar 19, 2004 at 06:26:00 PM EST

Lisp is one of my two favorite programming languages as well (the other would be Haskell).

I find Scheme slightly preferable over Common Lisp.  Scheme seems to be much cleaner but CL seems to be much more advanced and the implementations seem to be much better (and the code is much more portable across implementations).  My favorite implementations of CL would be CLisp and CMUCL.  My favorite Scheme implementation would be Chicken.

I, however, have used Scheme far more than either CL or Haskell.

Something I really miss from Haskell when coding Scheme is currying, along with some advantages of being purely functional. Consider this example Haskell code:

opvals x y = opval (lookval x) (lookval y)
opvallist x l = map (opvals x) l

What would be the equivalent Scheme code?

The following definitely is not:

(define (opvals x y) (opval (lookval x) (lookval y)))
(define (opvallist x l)
  (map (lambda (y) (opvals x y)) l))

There is a very important difference: the Haskell version only needs to evaluate ``(lookval x)'' once, whereas the Scheme version is forced to evaluate it once for every element of l, since each call could have side effects! This is a nice side effect of being purely functional, I suppose.

To get the same computation (with the same number of operations) you'd have to explicitly make it so ``(lookval x)'' only gets executed once, which costs you readability.

Sure, in this simple example the loss is not that big, but I've found that in practice it tends to add a lot of visual clutter to my programs.

Do you know about delay and force? (none / 1) (#218)
by jacob on Sat Mar 20, 2004 at 09:44:29 AM EST

delay suspends a computation, force runs it and remembers its value. The computation only actually gets run once and then its value is cached, just like in call-by-need languages.


> (let ((p (delay (begin (printf "hi!\n") 72))))
    (+ (force p) (force p)))
[to stdout:] hi!
144

See R5RS sections 4.2.5 and 6.4 for more details.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]

Certainly (none / 0) (#230)
by azul on Tue Mar 23, 2004 at 10:48:15 AM EST

Yes, I know about delay and force and have used them for a long time.

They do not help at all in this situation as they still force you to capture the results in variables and add complexity to your code.

Consider my simple example:
(define (opvals x y) (opval (lookval x) (lookval y)))
(define (opvallist x l)
  (map (lambda (y) (opvals x y)) l))

To make (lookval x) execute just once, you'd need to write something like:
(define (opvals x)
  (let ((lx (lookval x)))
    (lambda (y) (opval lx (lookval y)))))
(define (opvallist x l)
  (map (opvals x) l))

delay and force provide no help here, really. All they could perhaps do (in this context) is avoid computations that are not needed. But they don't help keep the code clean.

Imagine that opvals has a lot more code than just mapping lookval to its arguments and then applying opval.  The code starts to get ugly with these manipulations that explicitly calculate and "cache" the values.

If opvals received a variable number of arguments (or a list), it would get even messier.

Granted, you could rewrite things passing complexity from opvals to the caller, but that's precisely what I'm trying to avoid as it will certainly make the code dirtier.

Thanks for taking the time to suggest I look them up, though.

[ Parent ]

PLT might support this (none / 0) (#233)
by Tayssir John Gabbour on Tue Mar 23, 2004 at 08:22:18 PM EST

I dunno if they currently support this feature, but if you can get the PLT guys to support it, you can code a little command that grabs a current function definition and replaces it with a caching version of itself.  That way it does a simple table lookup after it's been evaluated with the same arglist once.

This is a current common lisp feature, so it's nothing radical.
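The idea can already be sketched in plain R5RS; this version keys a cache on the full argument list (an association list for simplicity; a real version would use a hash table):

```scheme
;; wrap f so repeated calls with equal? arguments reuse the old result
(define (memoize f)
  (let ((cache '()))
    (lambda args
      (let ((hit (assoc args cache)))
        (if hit
            (cdr hit)
            (let ((result (apply f args)))
              (set! cache (cons (cons args result) cache))
              result))))))

;; e.g. (define lookval (memoize lookval)) would then evaluate each
;; distinct lookup only once
```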

[ Parent ]

His result degrades gracefully (none / 0) (#210)
by Smerdy on Fri Mar 19, 2004 at 07:57:54 PM EST

By only doubling memory usage, you can get a negligible performance difference between garbage collection and stack allocation. I.e., it is small compared to the difference between speeds of average computers from one year to the next.

I think I support normal stack allocation myself, but I don't find the SML/NJ approach intrinsically flawed.

Whoops.. wrong thread (none / 0) (#211)
by Smerdy on Fri Mar 19, 2004 at 07:58:51 PM EST

I meant this to be a reply to the reply to my link to the Appel garbage collection paper.

[ Parent ]
Why I don't use Lisp/Scheme languages (yet) (none / 1) (#223)
by Anonymous Hiro on Mon Mar 22, 2004 at 09:32:00 AM EST

Coz it seems you're expected to be a genius who can write everything yourself - and often do coz everything else sucks anyway, or nothing similar has been written by anyone else before.

With perl, I get to be the lazy idiot who "leverages" modules written by great programmers. ( How's that for practical object/code reuse ;) ).

Maybe one day, I'll join the elite ranks of the Lisp Wizards and build my own Wizard Towers molecule by self-building recursive molecule in a blinking of an eye, and they shall pierce the heavens in their perfection and self-completeness.

But till then, I shall resort to prefab and lots of duct tape.

(While Java has tons of modules/libraries, too many of them looked like they were written only because Project Leader said that XYZ has to be there, not because the actual programmer of XYZ needed it to do something useful - things seem to be improving as programmers actually start trying to use stuff ;) ).

I agree with you 100% (none / 0) (#228)
by jacob on Mon Mar 22, 2004 at 11:06:42 PM EST

and I've actually been doing something about it. Watch PLT in the near future.

--
"it's not rocket science" right right insofar as rocket science is boring

--Iced_Up

[ Parent ]
There is an answer... (none / 0) (#231)
by pattern on Tue Mar 23, 2004 at 11:35:43 AM EST

At least for Common Lisp:

http://www.cliki.net/asdf-install

Works kind of like CPAN for perl.



[ Parent ]
A bit sparse (none / 0) (#240)
by Anonymous Hiro on Fri Mar 26, 2004 at 08:50:15 AM EST

If CLSQL is the DBI/ODBC for Common Lisp, then that's a bit sad.

Looking at http://clsql.b9.com/manual/

How would you quote stuff so it isn't interpreted as SQL?

Anything like bind variables or better?

To me this should be a required part of a standard RDBMS interface; otherwise you'd have SQL injection issues and crap like that (look at PHP: PEAR DB was a bit late, so there are tons of vulnerable PHP apps).

I'll wait till the Lisp geniuses fix this or come up with something even better and make it available for free. If Lisp is as powerful and easy as its proponents say, then I wouldn't have to wait long right? I mean they could practically write it in their sleep or something. Whereas I'd take years.

Unless the Lisp geniuses have moved from Lisp to other programming languages? Or are very tightfisted with their code. Perl/CPAN has grown in usage and range through the generosity of many contributors.


[ Parent ]

I don't like using Scheme (none / 0) (#226)
by trav on Mon Mar 22, 2004 at 02:55:31 PM EST

Hiring seasoned professionals to do Scheme is nearly impossible.  If you do find them, they're usually very academic and not so good at practical software development, or with other languages like C++.  We've ended up with a severe shortage of Scheme talent, so bug fixes and feature additions take forever and rarely work at all, while the C++ stuff flies by with few problems.

There doesn't seem to be a single Scheme interpreter that supports everything we want to do - we have to choose the best one and then hack together the remaining features (like database access) with external C++ programs. Not to mention that none of them seem to be as well ported as GNU C++; the amount of work we do to port and maintain our Scheme interpreter alone is horrendous. And God forbid a new version is released, as we then have to go and port all of our patches to the new version, THEN re-port it all to the systems we use.

As a language it's probably perfectly usable. In reality, it's just not supported well enough.

What interpreter are you using? (none / 0) (#239)
by voodoo1man on Thu Mar 25, 2004 at 08:36:04 PM EST

And why not switch to one that translates to C or sits on top of the JVM? That should simplify a lot of things. From what you say, it sounds like you use some kind of open-source implementation; if so, why bother upgrading if you're doing customization anyway? Have you considered forking? Last of all, why don't you just stop using Scheme? What you're doing sounds pretty painful, and if for some reason you can't find people to work on it, well, too bad. You seem lucky enough to have good C++ developers (in which case, I suggest that you do not adopt Java, as less than a third of your potential employee pool actually knows the language).

[ Parent ]
scheme (none / 0) (#241)
by trav on Tue Mar 30, 2004 at 07:11:46 PM EST

We use Gambit Scheme. I don't think we have to worry about future upgrades, as it has apparently been about six years since the last one, so maybe I spoke hastily there. I don't think switching to another interpreter would be feasible, as there doesn't seem to be a lot of compatibility between them, not to mention our custom patches and the various kinky hacks that always work their way into code. There would be a lot of porting work that we can't afford.

We can't stop using Scheme because we don't have the resources to do so. It's a big system; rewriting the whole thing in another language would be a non-trivial project. And who are we going to find who understands Scheme and another language well enough to do the work? It's not like we have a spec or anything. And on top of that, what do we do with the people who designed the system in the first place? I'm not going to be the one who goes to the big boss and suggests firing a team of long-time employees because their programming language of choice sucks.

It's sort of like having a high interest loan. We don't like it, but we pay the fee because we can't afford to pay the whole loan off. Know what I mean?

We also have some Fortran code kicking around, but it's FAR more reliable and manageable than the Scheme stuff. I think there are more Fortran developers out there than Scheme ones. New development is generally done in Perl, C++ or (for web services) Java. Perl might be a little bit TOO accessible, as we now have non-developers writing (predictably low-quality) software. But that's another story.

I didn't mean to turn this into an attack on Scheme; I just want to point out that a language spec itself is only a small piece of the big picture. As flawed as C++, Java or Perl may be, they are far more practical than Scheme in most situations.

[ Parent ]

Why I Like PLT Scheme | 236 comments (210 topical, 26 editorial, 3 hidden)