Why didn't Common Lisp fix the world? (quora.com)
83 points by weeber on May 17, 2016 | 124 comments


I'll cons this onto my list of reasons why Lisp "failed". Other explanations are:

Worse is Better: https://www.jwz.org/doc/worse-is-better.html

The Bipolar Lisp Programmer: https://groups.google.com/forum/#!topic/comp.lang.lisp/eicqv...

The Lisp Curse: http://winestockwebdesign.com/Essays/Lisp_Curse.html

Where Lisp Fails: http://www.loper-os.org/?p=69

Lisp is still my language of choice. I'm more productive in it than other languages (C, Java), not because I know the other languages less well, but because they're less expressive. So, for me, Lisp fixes the world.

Why aren't people using it? There are three things often missed.

(1) People's choice of programming languages perhaps isn't as rational as they might like to believe. There's a large amount of personal taste involved. For a personal project, that's no barrier, you can use your favourite language, but if you're working with a dozen other people, Lisp is unlikely to be adopted because more people on the team will prefer Java, C++, or Python. Multiply over all the commercial projects and you can see why you can't build a career around Lisp programming.

(2) There are accidents of history. When DEC decided to discontinue the PDP-10, the days of the hacker community built around it were numbered. Those involved then went on to build Lisp machines, but they were undercut by cheaper, commodity hardware which ran Unix. Then, the AI winter arrived. It was bad for Lisp, but Prolog fared far worse.

(3) Linguistic relativity. The Sapir-Whorf hypothesis might be dubious for natural language, but for programming languages it certainly does apply. If you're uncomfortable with Lisp, you won't be tackling problems at which Lisp excels, or (at best) you'll do them in a completely different way, and it will take you longer. The result is that people think their language of choice is fine for everything that matters, and the problems they don't tackle don't matter to them.


(2) There are accidents of history. When DEC decided to discontinue the PDP-10, the days of the hacker community built around it were numbered. Those involved then went on to build Lisp machines...

Nit on this bit of history: the PDP-6/10/DECSYSTEM-20 architecture was a dead end due to its maximum address space of 1 MiB (of 9-bit bytes), but it was still a very healthy line of computers when the Lisp Machine effort started circa 1974, about a decade before DEC dumped the whole line, by then a 100-million-a-year business. (A lot of us looked at that and decided DEC's business sense was so poor we weren't going to invest any more than we had to in their systems, which of course turned out to be absolutely right.)


For those who don't know Peter Norvig, let me mention that he wrote one of the best Lisp books:

http://norvig.com/paip.html


Not just one of the best Lisp books, but one of the best books on programming, period.


I think it's a programming book that happens to use Lisp as an example language to explain AI concepts and algorithms, rather than a Lisp book.


Actually not. The book is full of beautiful code, combined with deep Lisp lore on how to write that code.


Why on earth would it fix the world? It was a step forward in the history of programming languages; but somehow the end of it, the final fix? Why would anyone expect that?

Programming languages are about providing a useful way for humans to formulate and implement their ideas. Starting by turning things on their head, prefix (Polish) notation style, is a bad start, as it is counterintuitive to nearly everyone.


but somehow the end of it, the final fix? Why would anyone expect that?

Have you met a Lisp advocate?

More seriously, programming is as you say a way of turning thoughts into mechanical implementations. But not everyone thinks the same way or finds the same things intuitive. So some people stumble across Lisp or Forth and find it matches their way of thinking so perfectly that they can't understand why not everyone agrees.

Pragmatism counts for a lot in programming language deployment too. There was no big platform that required or encouraged Lisp in the way that Sun did for Java, the browser does for Javascript, or web backend did for PHP and Perl.


My theory is that because programming is mostly about fighting incidental complexity, it is very easy to believe that if there were some way of getting rid of it, programs would write themselves and programming would be reduced to sipping daiquiris on a beach.

Lisp (especially compared to C) indeed helps to reduce incidental complexity to some extent. So it is easy to fall into an almost religious belief that Lisp is the way to pure programming bliss. Of course the promise is never fulfilled, because domain complexity, and even incidental complexity, is here to stay.


> helps to reduce incidental complexity to some extent

This extent stays largely untapped. You cannot even imagine how far this ability to eliminate complexity extends.

Of course, it's not just Lisp, it's Lisp (or any other meta-language) combined with a certain design methodology.

A methodology which pretty much boils down to a notion that "everything is a compiler". And, since compilers are trivial and there are well known techniques for eliminating any complexity you can find in a chain of compiler transforms, this way you can eliminate all possible complexity, no matter what your domain is.

Yet, I find it amusing that only a handful of people are even aware of this ultimate methodology.


since compilers are trivial

No.

and there are well known techniques for eliminating any complexity you can find in a chain of compiler transforms

Any complexity? Really?


> No.

Why would you say so? There is nothing simpler than compilers. They can be broken down into pieces as small as you like which are still independent and fully sequential. Most of the rewrites you'll ever need to do can be done in a total language - i.e., you don't even need Turing-completeness for most of your code.

> Any complexity? Really?

I have not met a kind of complexity that cannot be decomposed into trivial pieces mechanically using this technique. And I've seen a lot of very different things.

P.S. And try not to confuse a size with a complexity. E.g., C++ spec is huge, and implementing a compiler for the full language is a daunting task. Long, but not complex.
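To make that concrete, here is a minimal sketch in Common Lisp of what "a chain of compiler transforms" amounts to; the pass names in the final comment are hypothetical placeholders, not a real pipeline:

    ;; A compiler as a chain of small, independent, sequential rewrites.
    (defun run-passes (form passes)
      (reduce (lambda (form pass) (funcall pass form))
              passes :initial-value form))

    ;; (run-passes source (list #'expand-macros #'fold-constants #'remove-dead-code))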


Well, one thing I can tell you is that I've heard that much of the woe that came from the Common Lisp standard started with the line "any sufficiently advanced compiler..." in a discussion.


Are we talking about optimisations now? Or still about compilation? Optimisations are a totally different story. They can bring some complexity in, but the nice part is that they're totally optional and not required for solving the problem.


Minimally acceptable optimizations, because if your implementation is too slow, they're not optional.

The examples are legion; to go to another domain, look at one of the reasons the VLIW Itanium failed: except for some numeric code, the compiler(s) couldn't schedule enough operations for those Very Long Instruction Words.

(Perhaps a more primary reason was that x86 code ran too slow, denying customers an easy upgrade path that due to volume sales might have resulted in more effort being put into the VLIW compilers.)


Remember, we're not talking about high performance C compilers and all that here.

We're talking about the programming paradigm where everything is a compiler. Network protocol parsing is a form of compilation, all file format parsing is a form of compilation, reacting to UI events is a form of compilation, processing data in whatever way is a form of compilation.

Yet, you can only invent any optimisations at all for a small subset of compiler transformations, so for most of the cases optimisation is just totally irrelevant.

As for the minimally acceptable optimisations, they're all trivial anyway. What you'd normally expect to see in most of the IRs is: constant propagation/folding (trivial), ADCE (trivial), algebraic simplification (trivial, no matter how ugly it is in, say, instcombine pass in LLVM - they're just doing it wrong), partial evaluation (trivial), inlining (trivial), backtracking for the latter two (trivial, but rarely seen, most would resort to awfully complex heuristics instead), CSE (trivial), escape analysis (trivial).
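To illustrate how small the first of those can be, here is a rough sketch of constant folding over raw s-expressions (a real pass would work on a proper IR and handle more operators and edge cases):

    (defun fold-constants (form)
      ;; Recursively fold arithmetic sub-forms whose arguments are all constants.
      (if (atom form)
          form
          (let ((folded (mapcar #'fold-constants form)))
            (if (and (member (first folded) '(+ - * min max))
                     (every #'numberp (rest folded)))
                (apply (first folded) (rest folded))
                folded))))

    ;; (fold-constants '(* (+ 1 2) x (- 10 4)))  =>  (* 3 X 6)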

What is not trivial is the stuff you'd rarely need in the high level languages, and this paradigm is all about the high level languages indeed.

Not trivial: loop fusion, polyhedral analysis, vectorisation, and, finally, everything related to the code generation: instruction selection, register allocation, instruction scheduling, VLIW packing, all that.

You'll never see this stuff in any high level domain specific language - you already have a common low level backend to take care of it for you anyway.


try not to confuse a size with a complexity

Before we go any further you're going to have to define complexity.


The kind of complexity that is important for software engineering is defined by the size of the atomic modules you can get (those that cannot be broken down any further and still be practical) and number of dependencies between such modules.

This complexity defines how well the labour can be split between team members - i.e., defines the development scalability. This complexity defines how easy it is to understand any individual module and to change anything.

Of course we can start nitpicking, go into a definition of Kolmogorov complexity and all that (but remember, Chaitin defines this complexity with Lisp, btw), and it will all be irrelevant to the problems of software engineering.


That is a brave statement to make. How about you write a Lisp program that (serving as a perfect embodiment of "code is data" philosophy) takes another Lisp program as input and determines whether it would terminate? That would be just a handful of Lisp macros, right?


Now you're talking about an infinite complexity. It cannot be reduced no matter how hard you try. It has nothing to do with the kind of complexity that software engineering has to deal with.

But, for all things practical, yes, it's just a handful of Lisp macros. Because in most cases it is worth designing your DSLs as total languages, and a totality proof is totally trivial, as is a proof of the opposite.


Now that's moving the goal posts. You then claim that I am nitpicking and Lisp eliminates not all complexity but only the complexity software engineers care about (not true either: even Lisp programs don't write themselves and someone still has to type in the letters). After that the conversation moves into wishy-washy domain of opinions which can be debated forever without ever reconciling them.


Pay attention to what you're answering to. It was you who moved the goal posts.

> there are well known techniques for eliminating any complexity you can find in a chain of compiler transforms,

Can you imagine finding a working termination proof in a compiler pipeline and finding yourself in dire need of simplifying this complexity down to something manageable?


vitaly is that you?


Is it a problem? Is it even relevant at all?


Well, sorry for shying away from technical discussion and mildly doxing you. But it is truly remarkable that your style remains recognizable across decades, languages, nicknames and discussion forums. I am actually honored to have a small discussion with you :) Hope you are well.


By "fixing", I suppose, they meant "to eliminate the known evil that is rooted deeply in the software world, such as unsafe languages, unexpressive boilerplate languages, and so on". Making sure the evil is dead for good would have been quite an achievement, sufficient to be called a "fix". Definitely not a final fix, but a step in the right direction indeed.


>Why on earth would it fix the world? It was a step forward in the history o

Exactly. Haskell took until 1998 to really come together! ;-)

(And cabal hell lasts forever.)


> (And cabal hell lasts forever.)

Obligatory: "have you tried Stack?"


Obligatory: I'm dealing with a package so old that lts-1.0 has a Base too recent for what I'm dealing with. But it's research code whose recent versions have gone in a totally different direction from what I need right now, so I have to cope.


I often think it's because Common Lisp was one of the last pre-web languages to be designed (or standardised anyway). Newer languages like Java/Javascript/Clojure/etc. had the advantage of being able to build a strong community on the web. Whereas Lisp was just on its way out as the web was rising.

The hardware and the long standardisation process didn't help either. That, combined with it being both an old and a new language, meant that when the spec was done there wasn't the excitement you get with a new language like Python or Clojure. It missed the boat on building a web community and missed again on building hype.

However, Norvig is completely correct in saying that just because the CL language itself may not have taken off, doesn't mean the ideas didn't make it everywhere.


There's also the factor that the over-promising of what expert systems and AI in general could do, which led to the AI Winter, was conveniently blamed on Lisp, seeing as how it couldn't have been simple failure to execute, or to execute the impossible.

Or so goes one version of the Official Story; I wasn't really a witness to more than the beginning of that, when they were grossly over-hyped. Or they solved the wrong problem, my favorite example there being DEC's configuration -> part-numbers-to-order expert system. Sun made it simple: at the lowest level of a workstation, the major choices of the sort DEC was optimizing came down to which keyboard and power cord you ordered to fit your country's standards.


Perl and Python, and C++. Perl especially was influenced by newsgroups, so maybe that doesn't count.


Yeah, but they arose as the web was starting in the 90s. Whereas Lisp had been around since the 50's. The standardisation effort wasn't the development of an exciting new language, just the finalisation of an old one.


While I do like the idea of DSLs, and they certainly do have a place, what I would love to have is the ability to enforce a strict subset of a language in a piece of code.

Where first there was discipline and conventions that are hardly ever passed on, there would now be statically enforced rule coercion.

I deeply believe that the utility of a language is not in what it enables, but in what it forbids. For example, Clojure and Rust have neat mechanisms to deal with mutability, where you need something extra to mutate a value. Contrast this to C, where you need to go to extra lengths to make things const (i.e. nobody will do it).

Is this even possible without writing your own tools?


Racket.

It's built as a platform for defining arbitrarily simple or complex languages and integrating them together. You should take a look at it. See http://www.ccs.neu.edu/home/matthias/Thoughts/Racket_is____.... :

> To a language designer, Racket is a programming language laboratory. This does not mean that the language is unstable. The designers do not change the language in a whimsical manner. That is, Racket comes with a unique collection of linguistic mechanisms that enable the quick construction of reliable languages, language fragments, and their composition. These tools are so easy to use that plain programmers can design a language after a little bit of instruction. So when a well-trained programmer decides that none of the available dialects is well-suited for a task, he designs a new dialect and writes his program in it. As Paul Hudak said, “the ultimate abstraction is a domain specific language.”


You can have both a flexibility of DSLs and strict rule enforcement. Just write a restricted DSL.


Some languages have this. For example, SPARK is a subset of Ada that can guarantee certain properties about your code. Similarly D has a subset that the compiler will verify is type and memory safe.


It's interesting to see Peter Norvig replying, but his answer is a little unsatisfying "oh but it did..."

The more interesting and honest reply, in my opinion, is the one ranked in second place by Robert Smith - it sounds like he did some proper reflecting on what it is to be a Lisp programmer nowadays and what the language's (or implementation's, since this question is about CL specifically) shortcomings are.


"And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?" -- Guy Steele, Sun Microsystems Labs (about Java)

http://people.csail.mit.edu/gregs/ll1-discuss-archive-html/m...


If Java is halfway to Lisp it's no surprise Lisp didn't take over the world.


In what world is Java halfway to lisp?


In a world filled with COBOL, C, Powerbuilder, CICS. Oh, and TELON.


And there may have also been a language GOOFBOL (where BOL is Business Oriented Language), but I may have just made that up.


The world of its time.


Having recently been introduced to the Lisp world (~1 year ago), I have found it a revelation. It really changed the way I see programming and my expectations of it.

I do not understand how it has not taken over the world.

Background: I have ~15 years of experience in programming. In my spare time I learn new languages.


> I do not understand how it has not taken over the world.

> In my spare time I learn new languages.

A moment's reflection on the latter would explain the former.


i feel like a lot of lisps are trying to drag this "everything is a list" abstraction way further than they should. clojure takes a more practical and pragmatic approach of giving native residency in the language to useful data structures like arrays, maps and sets, and it (along with many other features) has totally won me over. it's incredibly important for a language to be convenient.

edit: and not only me it seems: https://www.google.com/trends/explore#q=%2Fm%2F03yb8hb%2C%20...


>i feel like a lot of lisps are trying to drag this "everything is a list"

I think you are speaking out of ignorance here. "Everything is a list" is just a metaphor. Everything is either a list or an atom, and almost everything is an atom.


Common Lisp has a fairly rich set of standard data types - including hash tables, streams, arrays and the ability to define structures - and then there is the exciting world of CLOS on top of all of that...

https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node15.html


> Everything is either a list or an atom, and almost everything is an atom.

how can you both say this and that i'm talking out of ignorance? you're basically confirming my words.

the larger point i was making is that the "everything is a list" abstraction/metaphor influenced the decision not to include native representations of other data structures in the language. because parens are enough to represent everything. well, they aren't enough for people looking for convenient, practical and simple languages.


No, you're wrong. Common Lisp has hash tables, arrays and pretty much any data structure you need. But they are not lists, they are atoms. Hence almost everything is an atom. It's well known that linked lists are inefficient in many ways. Nobody writes programs using only lists.


absence of data literals. i'm not sure how i can state it more clearly.


Atoms in Lisp aren't what you think they are. All data structures except dotted pairs (from which lists are made) are atoms. For example, arrays and hash tables are atoms.


It's the Lisp nomenclature that confuses you.

In Lisp everything is either an atom or a list. The `atom` type is equivalent to `(not cons)`.

CL has plenty of native data structures including hash tables, streams, files, functions, arrays, strings, numbers, characters, structures, ...

http://clhs.lisp.se/Front/Contents.htm
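A quick REPL check makes the terminology concrete:

    (atom (make-hash-table))   ; => T   (a hash table is an atom)
    (atom (make-array 3))      ; => T   (so is an array)
    (atom '(1 2 3))            ; => NIL (only conses are not atoms)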


having functions to manipulate some data structures doesn't mean they're native to the language. by that standard everything is native everywhere.

lisp stretches the meaning of "native" by the power of macros, but something being possible still doesn't make it practical or convenient.


Consider this snippet of C code:

    struct foo {
        int x;
    };
Do you care how the structure definition is represented in your compiler? Is it a list, a vector or a custom data structure? You don't know, and it does not matter to you. Besides, it is totally different from what you manipulate at runtime (for type definitions, not much). Consider the second snippet:

    for(int i = 0; i < n; ++i) {
        // something
    }
Should the internal representation for the block delimited by braces necessarily be the same as the one used to represent the fields in the previous structure? I mean, do the brace characters always map to the same internal data structure? Probably not. In C, the concrete syntax has generally nothing to do with the abstract syntax. This is not the case in Lisp, because each syntax is used to parse a specific type of data.

In Clojure, you write #{...} and the reader (LispReader.java) sees the sharpsign followed by brace and builds a set containing the values inside the braces. That set is contained inside the AST, and acts as a literal value. That value is shared among all invocations of your code, meaning that if you call the enclosing function at different places, the same literal object is shared. Since this data is immutable, this is not really a problem because when you add more elements at runtime, the original literal is not modified. OTOH, you can still invoke EvalReader, #=(), which evaluates code at read-time. I believe that you can put mutable data-structures into the AST and obtain funny results by mutating it during execution.

How is it different from Common Lisp? Not much.

Common Lisp is less restrictive because the readtable can be customized. Also, the standard data structures are mutable (however, the behavior is undefined if you try to mutate literal data; some compilers warn you about that). I don't think I ever needed to have a hash-table as a literal; it seems practically useless, because if you want a hash-table in your source code, odds are that it contains few elements. If so, an association list is simpler (and more efficient).

If you really really need a complex data structure in your source code, you are more likely to use LOAD-TIME-VALUE anyway, because it works better with a compiler: not all data structures are (nor should be) automatically serialisable in the object file (they could contain transient data, for example). In other words, your original code has little to do with the machine code that is produced and eventually loaded (possibly in another environment). LOAD-TIME-VALUE is a way to execute code at load time to produce such data structures in your code: I have a parser which pre-computes some regexes, and this is not something that has an equivalent in Clojure, as far as I know.

So my first point is: you can have literals of any type in your source code in Common Lisp (like in Clojure). All of them have dedicated syntax, like vectors of arbitrary dimensions #2A((1 2)(3 4)), bit vectors #*1111, complex numbers #C(0 1), pathnames #P"/tmp/foo", and of course strings. Likewise, you have a generic syntax for structures. Suppose you define a structure FOO with a single slot X, then #S(foo :x 10) is a literal of that type.

However, not all data-structures have such syntax: maybe it was not deemed necessary to represent literal hash-tables in source code, or maybe it did not make it to the standard, I don't know. I personally don't miss it and I am happy to use auto-complete the few times I need a hash-table. However, if you want you can customize your Lisp easily; for example, load the FSET library and provide a custom syntax over immutable data-structures. If you look at Maxima, it has a lot of custom syntax. And I am not talking about macros, but about changing the readtable which reads Lisp objects from a stream. An example of this is RUTILS[0], which provides the #h(equal "k1" v1 "k2" v2) syntax for hash-table. But note that this notation is only used to produce a list, namely the code required to produce a hash-table at runtime and populate it.
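For example, a #h(...) hash-table literal can be added in a few lines by extending the readtable (a rough sketch, not how RUTILS actually does it; the helper name is made up):

    (defun read-hash-literal (stream subchar arg)
      (declare (ignore subchar arg))
      ;; Read the following (k1 v1 k2 v2 ...) form and turn it into
      ;; code that builds and populates a hash table at runtime.
      (let ((pairs (read stream t nil t)))
        `(let ((table (make-hash-table :test #'equal)))
           ,@(loop for (k v) on pairs by #'cddr
                   collect `(setf (gethash ,k table) ,v))
           table)))

    (set-dispatch-macro-character #\# #\h #'read-hash-literal)

    ;; Now #h("k1" 1 "k2" 2) reads as code producing that hash table.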

Which brings me to my second point: there is a big difference with Clojure in that Common Lisp doesn't use different types to provide syntactic sugar. For example, in Clojure [] is for vectors, {} is for maps and #{} is for sets. That literally means that when you have a binding, you allocate a vector. When you destructure a map with map-notation, you use internally a map. So the language is mixing two concepts: syntactic sugar and actual internal representation. I find this rather hackish but I can live with it because it probably has little incidence. I think it adds complexity to the compiler which has to walk different kinds of code. Besides, the usage of such syntax is not regular: sometimes vectors are for unevaluated data, sometimes for bindings or destructuring, so you still need context to know what you are doing. Those are minor points, and I understand that Clojure can look more convenient to use. However Common Lisp being a little more wordy is not a problem for me.

[0] http://lisp-univ-etc.blogspot.fr/2016/05/improving-lisp-ux-o...


you're talking from the perspective of an experienced lisper. i'm talking from the perspective of somebody who wasn't convinced by CL but was later convinced by Clojure. that is the whole point of the thread - why hasn't lisp caught on, and i'm an example of both failure and success in that regard.

i mostly agree with all you've said, just few comments:

> I think it adds complexity to the compiler which has to walk different kinds of code.

either you deal with complexity or your users will have to. it's not always clear where the boundary should be, though i like where clojure puts it.

> I understand that Clojure can look more convenient to use. However Common Lisp being a little more wordy is not a problem for me.

s/me/experienced commonlisper/

aaand that's the whole point.


> you're talking from the perspective of an experienced lisper.

If I am an experienced Common Lisper, it is because I was once a noob Common Lisper and already not bothered by syntax. Maybe that's because I already had experience with several other languages, which all have some superficial negative aspects. That being said, I prefer to use CL but I don't mind using Clojure.

> either you deal with complexity or your users will have to.

I did not say the complexity was necessarily present in one case or another. Biased as I am, I see no complexity with a simple, regular usage of parenthesis. Brackets and braces are good in the eyes of other kinds of people, sure.

> that is the whole point of the thread - why hasn't lisp caught on, and i'm an example of both failure and success in that regard.

Well, should we generalize from your example then, when it is not appropriate to do so in my case?

Besides, I was mostly responding to your posts about literal values.


> Biased as I am, I see no complexity with a simple, regular usage of parenthesis.

> If I am an experienced Common Lisper, it is because I was once a noob Common Lisper

> should we generalize from your example then, when it is not appropriate to do so in my case

if we do so carefully? obviously there is some spectrum of how mentally exhausting various aspects of a language's syntax are for people, and how prepared/willing different people are to wrap their heads around those aspects.

a crude analogy would be saying the unary number system is better than decimal because it's simpler (it has fewer "things"), all operations are more intuitive and there is no complexity in dealing with all those digits.

just as we use decimal instead because we have physiological reasons to do so, it's not outlandish to suppose that we have psychological reasons to prefer syntax that is richer than what CL provides out of the box, but also not too rich, because that is mentally exhausting. clojure seems to be hitting the sweet spot (at least within the lisp family) and while many factors are at work here, the evidence is that clojure quickly became more popular than cl.

to be clear, i'm not saying here that popular = better, of course.


While Lisp had big, influential ideas, it wasn't at any time the shortest path to getting their immediate task done, for "enough" programmers. The successful languages managed to be the shortest path to "done", for more use cases.

(Which makes me think about my project for managing the world's knowledge, http://onemodel.org and how to be more practical like fortran or cobol were in their prime, and not primarily idealistic like Lisp in its prime. Hmm.)


It lacks a graphical application development framework that's portable and usable. Java eventually solved this problem (more or less) but neither Garnet nor CLIM ever reached a level where most Gtk/Qt/* programmers would seriously consider using them. Web front ends are fine for certain things, but for a lot of the application domains where you might consider Lisp you want a proper desktop GUI application. (This would have been especially true in the early years.) The expressiveness and power of Lisp look a lot less attractive when you run headlong into the practical problems of combining that power with a modern GUI.

There are still a few problem domains (like Computer Algebra) where Lisp provides enough benefits to be worth bridging the gap between its world and modern operating system environments, but by and large the inability to be a complete solution I think serves as a disincentive to use Lisp.


For portable and easy to use there is LTk, and of course there are bindings to Gtk and Qt. I think though, if the Lisp community were a bit larger, these bindings would see more development and more refinement.


It might not have fixed the world, but it has been an influence on other PLs and programming in general. It helped bring several interesting concepts to the forefront and it at least provided an alternative, which is nice and healthy. LISP didn't "fix the world", but in my view, it helped make it slightly better.


Lisp is only a general idea about syntax and macros. If you want numerical computations you want a big library like BLAS, and the problem is about efficient memory allocation, cache and garbage collection. If you want deep learning you want a big library.

Lisp is like arithmetic, without more concepts or libraries it doesn't work, you have to reinvent the wheel.

R provides nearly 8000 useful packages; that's what makes a language a useful tool for solving a problem, and it can call routines in C or Fortran, or use C++ with Armadillo. To sum up, a language without a great community and specific libraries is not a way to fix the world, and on top of that there are few jobs using Lisp. Clojure is trying to take advantage of all those Java libraries, which is a good step in the right direction. Clasp is another idea, trying to use C++.


For the same reason a lot of people can play the guitar but not the violin; it's easier (at least to get started) and in many cases gets the job done, so the other option is not missed.


I'm pretty sure the biggest reason it hasn't caught on is the syntax.

http://jordi.platinum.linux.pl/piccies/lisp.png

It's the most frequent complaint uttered against Lisp for a reason. All of the other things mentioned in the top responses are addressed by modern Lisp variants. If Rust or Swift were implemented with sexps and everything else about them were equal, they would have a lot of difficulty winning hearts and minds.


Usually people who complain don't spend time counting the number of times () [] {} appear in their beloved programming languages.

   print(args) => (print args)

   if (cond) { exp1} else {exp2 } => (if (cond) (exp1) (exp2))
   
   array[index] => (index array)

And so on. On average there is probably about the same number, but visually it kind of appears to be more.


I think it's not just the number of special symbols; syntax matters. When the eye sees array[index], the brain can make some assumptions - 'array' is some sort of, well, array or something else indexable, and 'index' is most probably some simple type. With (array index) one must pause and think: is array a function? An array? Something else? Is index some simple type? Or another function? Something else entirely?

I know that in the end there's no distinction (Lisp syntax is the AST, and every other source code ends up as an AST), and the idea that someone would accept syntax restrictions as a good thing is probably strange to the true Lisper, but people care about syntax. By eliminating syntax and forcing the programmer to write the AST directly, Lisp gave enormous power to the user, but in the end not enough programmers were willing to give up comfort to acquire all that power.


Haha, yes. The number of times I've looked at javascript with something like this:

  	})
      })
    });
...trailing almost every function definition is equally absurd. At least with Lisp I only have to deal with one delimiter type and just count the number, instead of the })})}); mess of javascript and some other languages.

Although I do have to say I think Python got it right in trying to do away with as much of it as feasible.


You took the examples where the number of parentheses is the same. Here is a more honest example:

    if (a && !b) {expr} => (if (and (a) (not b)) (expr))
That's a simple expression and I'm not even sure I managed to match the () correctly


I agree it's hard to write lisp without a proper editor. Having seen the light I just can't go back now. The power, ease, and simplicity are not something I've experienced with non lisps.


I don't know much about Lisp, but at ILC 2014 [1] there were about 40 participants. In a show of hands, absolutely all of them used Emacs to write their Lisp. I've never seen Emacs unanimity anywhere else.

--

[1] http://ilc2014.iro.umontreal.ca/


Incidentally, I also use Emacs for editing Lisp (Emacs Lisp and Clojure). Learning the hotkeys/concepts behind structural editing of Lisp code was a life-changing metamorphosis, but quite difficult overall. There's been some fantastic work done by the Parinfer [1] team to bring the awesomeness of structural editing of Lisp to the masses. The basic idea is that your parentheses should infer where they belong from your indentation.

I installed Parinfer on my project, CLJSFiddle [2]; it's not what I'm used to and doesn't have everything, but for a novice it is highly usable - check it out!

[1] https://shaunlebron.github.io/parinfer/ [2] http://cljsfiddle.com


Your example is also misleading. The `a` and `expr` are written as variables in the left example, but as function calls in the Lisp version. More correctly it would be either

  if (a && !b) { expr } => (if (and a (not b)) expr)
Or

  if (a() && !b) { expr() } => (if (and (a) (not b)) (expr))
Now the number of parentheses is either 4/6 or 8/10, which is not a very big difference (and caused only by the syntactic sugar for `not`), especially considering that the left example uses `!`, `&&` and two kinds of parentheses (perhaps a `;` as well).


You're absolutely right, I didn't know the difference between "a" and "(a)", sorry.

My point was more about the two extra parentheses for each && or ||. Imagine converting

   if (a && b && c && d || e)
to a LISP.


    if (a && b && c && d || e) {
      doSomething();
    }
becomes

    (if (or (and a b c d) e)
      (doSomething))
It's just a little more explicit because operator associativity and precedence aren't worked out for you. (For better or for worse.)

EDIT: I think you were imagining something like this:

    (if (or (and a (and b (and c (and d)))) e)
      (doSomething))
...which is definitely not the way it's done.


Adding to this, in a real situation those variables are probably not going to be nice single letter variables. With longer names the Lisp syntax with unambiguous structure/order of precedence becomes much nicer.

  (if (or (and foobar
               (some-predicate-p quux)
               (= qwerty 432))
          (a-long-function-call-with-two-args bar ytr))
      ...)
Compared to something like:

  if (foobar &&
      somePredicate(quux) &&
      (qwerty == 432) ||
      aLongFunctionCallWithTwoArgs(bar, ytr)) {
      ...
  }


Don't put parentheses around "a"; that would make it a function call.

   (when (and a (not b)) 
     expr
     ...
     expr)


This is absolutely true. Worse, those brackets each define a different modal state. Worse still, some languages ran out of brackets for all the purposes they wanted brackets for, so they invented their own - is there anything less thought-out than the double underscores in Python? No, there is not.

In general, the reliance of syntax on random punctuation instead of meaningful symbols (à la APL) or clear, human-parsable text is maddening - LISP and its derivatives do it far less than other modern languages, but still too much for my taste.

I'd also prefer clearly spelled-out syntax in this age of autocomplete, rather than a jumble of weird abbreviations, and, while I'm wishing for unicorns, languages with code editors that do a good job of visually representing the flow of code rather than relying on punctuation, white space or some unholy combination.

Programming seems kinda primitive compared to other aspects of computer-enabled content creation at times.


I think in terms of adoption it might be much more important than it is made out to be. A lot of potential new users are people who have many other choices and very little free time. If the language is such that they cannot immediately recognise a fundamental pattern like a function call, then they are going to stop reading. Even if they keep reading for a couple of days and still find it hard to relax and read fluently, by that time they will probably give up because they simply don't have the time. So s-expressions are part of what gives the language its power, and Lisp should not change, but don't be surprised if it has fewer users, because 80% of the usual first-time users of a language are lost at the start. It is very hard to make it back up from there.

I think the reason people find it hard to adjust to the new pattern is that they have been trained all their lives to recognise the pattern where the name of something is distinct and outside of the thing itself. In mathematics obviously people have been used to expressing functions using f(x) since an early age (10?). But physically also. Think of these things:

- The name of a shop or a village tends to be on a sign on the outside.

- The name of a book is on the cover.

- The title of a folder is on the outside.

- The name of an OS file etc.

Naming things is very important. It's the basis of a lot of communication, understanding and abstraction, and people are used to the name of something being fundamentally distinct from and outside of the thing itself.


The point being made in the image is that experienced Lispers don't notice the parentheses. They look at the indentation.

Lisp doesn't really have a syntax. If you want something which looks more like Python or C, you can either rewrite read, or use read macros. This has been attempted several times (Lisp 2, CGOL, Dylan), but in the end, the parentheses won.


Agreed, but it's not just the parentheses - "car-or-zero"? Really? If you have a language where you can say "at least it's not APL", that's not a feature...


That's where Common Lisp's roots show: they go back to the first version, which ran as subroutines to punched-card FORTRAN on a vacuum-tube IBM computer. "Contents of the Address part of Register" and "Contents of the Decrement part of Register" -> car and cdr; for normal list processing they're often renamed or aliased to first and rest.

BUT, they hang around because of old code, and because they're composable, e.g. the 2nd Lisp Machine design, and the first to be semi-mass-produced, was the CADR, as in (car (cdr list)), or (first (rest list)), i.e. the 2nd item in the list.
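E.g.:

    (cadr '(a b c))           ; => B
    (first (rest '(a b c)))   ; => B, the same access spelled out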

So, yeah, if you don't want to have to learn a lot of arcane stuff that's incidental complexity purely because of history, go for newer dialects like Scheme and Clojure which had the opportunity to rationalize a lot of this (but Scheme at least still has cadr and company because they're too useful).


Oh, I know the history, I just dislike "because we've always done it this way" as a rule. Imagine code full of "moo x > 5", with "moo" instead of "if", because that was the sound made by the favorite pet of the initial programmer...


I bought Graham's "ANSI Common Lisp" book and read it briefly, and tried to write some code as well.

I also read "Successful Lisp" briefly.

Hoping to give constructive feedback and not to start a flame, here are some opinions/impressions.

Please note that :

1. These are mostly impressions, not complete opinions.

2. I am only talking about Common Lisp. I am not talking about Scheme, Clojure or other stuff.

Ok then:

1) Lisp feels... Messy. Especially when you're learning it.

2) Maybe... Too many implementations? This is not a problem per se, but sometimes not all the implementations implement all of the features, or they implement them differently enough that you have to write the same code separately for different interpreters.

Example: https://github.com/stumpwm/stumpwm/blob/master/make-image.li... see the various #+clisp, #+sbcl and stuff.

3) Batteries not included. The ANSI standard defines the language and pretty much nothing more. Yeah you have libraries and quicklisp, but it feels fragile and quite "opaque". Also, I always used SBCL and I am not 100% sure all the libraries work with all of the interpreters/compilers.

I feel it would be nice to have a new standard that defines a set of additional APIs that implementors should provide.

Ideally, it would be nice for Common Lisp to be "batteries included" like Python. But please let's avoid the Scheme SRFI mess.

Consider this: it is a fact that MIT's introductory programming class switched from Scheme to Python because, among other things, students were spending too much time reading library manuals and implementation references instead of writing code.

4) The Common Lisp HyperSpec. Let's just say I find it to have poor usability.

5) Stuff is generally poorly documented, and fragmented. While things like Quicklisp work, it's visible that they are basically one-man projects. This isn't reassuring.

6) Given the heritage, there is no such thing as a "Lisp community", and the lack of governance is quite evident in the lack of direction and lack of uniformity in pretty much everything.

7) Emacs. "It's Lisp!". Well, actually Emacs Lisp and Common Lisp are different languages, and the differences will come up.

Emacs is my editor of choice, but I'm starting to grow worried about it.

These are some of the reasons why I decided not to "invest" in Common Lisp.


Consider this: it is a fact that MIT's introductory programming class switched from Scheme to Python because, among other things, students were spending too much time reading library manuals and implementation references instead of writing code.

Nope, the decision was entirely political, driven by panic when post-dot.com crash enrollment dropped by more than half after being a steady 40% of the undergraduates for decades.

And I believe your specifics are entirely wrong, but I don't know exactly how the course evolved, e.g. they threw in a couple of weeks of OO that isn't in SICP as that became popular. But at least in the beginning of the course, it's all self-contained in the book.

Much of the rest of what you say is spot on, and are some of the reasons I gave up on mainline/Common Lisp 30+ years ago.


Norvig nailed it - instead of being sissy about macros altogether, the mainstream languages should rather enforce a certain discipline. And the rules for doing macros right are very simple and well known.


Honestly, I don't get the fuss about macros. I haven't seen many (any?) use cases for them, where they don't make the code harder to understand while doing something that isn't already straightforward in the base language.


> Honestly, I don't get the fuss about macros.

DSLs are an ultimate solution. And DSLs are best implemented with macros.

> where they don't make the code harder to understand

Macros are there to make code easier to understand.

I just explained it in detail elsewhere: https://news.ycombinator.com/item?id=11705170


Yes, I get theory, but I haven't seen much use for them in practice.


It puzzles me too, that people are not using the most powerful development technique.

I cannot imagine not using macros for pretty much everything I do.


I've written a handful of Clojure and Emacs Lisp, but I don't get macros. How is a macro different than passing a quoted list to a function?

Maybe I just need to start slinging more lisp, but I don't see how they're particularly different from functions.


> How is a macro different than passing a quoted list to a function?

A macro is a function that is executed at compile time. Think of it as a way of extending your compiler with new functionality. For example, say your language does not have support for pattern matching. That's not a problem - you can define a macro that adds such functionality to the language. You cannot do that with functions - the construct must be compiled and optimised before execution, and functions cannot introduce new identifiers into a scope.

Another simple example would be something like list comprehensions - also introducing variables and also requiring optimisation at compile time.

And at the extreme, macros can be used to turn your host language into something totally different. Not just adding a few new features, but turning it into another language. Lisp can become an ML or Haskell, or Prolog, or Java, or whatever else you can imagine. Macros can change syntax and can implement new semantics. None of this can be done with just functions.
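A tiny sketch of the "new construct" case in Common Lisp: a WHILE loop is one macro away, and no plain function could provide it, because the test and body must not be evaluated before the loop runs:

    (defmacro while (test &body body)
      ;; Expand into a plain LOOP that exits when the test becomes false.
      `(loop (unless ,test (return))
             ,@body))

    (let ((i 0))
      (while (< i 3)
        (print i)     ; prints 0, 1, 2
        (incf i)))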


Yes. I have. People with gray beards, strong views on programming languages, politics and UFOs.

I always assumed they were dressed up for the occasion and not really real; like someone doing a role-playing game or maybe being Santa for the kids during Christmas.


We detached this subthread from https://news.ycombinator.com/item?id=11712496 and marked it off-topic.


You should meet Robert Smith or Peter Norvig, then, shatter some of those stereotypes.


Hmm.. my beard isn't fully grey yet.


(Don't worry - it will be). Just making a joke about what a "Lisp advocate" is.

But seriously: if you meet people who really understand programming language design, they will have full understanding of other ways of designing a language than whatever their specialty is. After all, there is a reason why PL is still a developing field.


> (Don't worry - it will be).

This assumes that weavie actually has a beard.


Ok; HN really doesn't like jokes :-(


Common Lisp doesn't have type checking or well-defined namespaces. And it has eval. That's certainly a no-go for people like me.

Most programmers in the world don't have to invent/implement a radical new idea in five minutes.


Eval? The entire point of Lisp macros is toying around with evaluation in a safe fashion. It's the way other languages do it so dangerously that gives eval a bad name. Lisp is the shining example of how to do it right.


AFAIK macros in Lisp are Turing complete and executed at compile time.

This means (in principle) that there is no guarantee that a Lisp compile will ever terminate.

I don't know why the OP is being downvoted. Lisp is not for everyone and every application; I think that is a perfectly reasonable point of view.


C++ templates are also Turing complete, executed at compile time, and can result in extremely long compile times: http://cpptruths.blogspot.co.uk/2005/11/c-templates-are-turi...

I don't think that failure to terminate compiles is too big a problem in practice; treat it as just another form of compile-time error and fix your program.


It is a problem for tool support; it's one of the reasons why Java and C# have better editors.


It's only a problem if your tools are done the wrong way. The proper tools can benefit from the macros instead. Of course, some discipline in how macros are implemented is required.


It is even better: Common Lisp macros are not written in a special macro language, but are plain Lisp functions which are run at compile time. This is important, because it means you can write proper code in your development language for macro expansion, instead of in some obscure macro-expansion language. This makes the process of writing macros rather robust and, of course, powerful. But of course this means that, like any program you write, you cannot guarantee that they terminate. For real-life macros that is a non-issue, as they usually contain just a few lines of mostly simple code.
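To illustrate (a sketch with made-up names): the expander below is ordinary Lisp code using LOOP and list building, and the compiler simply calls it when it encounters the macro form:

    (defmacro define-constants (&rest name-value-pairs)
      ;; Ordinary list processing, run at compile time.
      `(progn
         ,@(loop for (name value) on name-value-pairs by #'cddr
                 collect `(defconstant ,name ,value))))

    (macroexpand-1 '(define-constants +width+ 80 +height+ 25))
    ;; => (PROGN (DEFCONSTANT +WIDTH+ 80) (DEFCONSTANT +HEIGHT+ 25))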


The OP is being downvoted because he lies.

Common Lisp does have strong and optional static typing. It does have namespaces.

As for CL having `eval` - well, this is true at least. But I fail to see how in the world could that be a bad thing...


A good implementation also has total dynamic type checking, and the better ones allow you to relax that for run-time speed (although it was nearly free on Lisp Machines, and can be pretty cheap on out-of-order superscalar CPUs like Intel's, as long as you don't enable Hyper-Threading, which tends to use up the otherwise-free execution units that get used for it).


If a language has an eval construct it is hard/impossible to reason about types, security and correctness. It also makes it hard to compile it to machine code.


That would be true if we were talking about normal languages eval'ing strings. Lisp, on the other hand, treats code as data and has the full compiler available at runtime. It can check types at both runtime and compile time, and it will still compile down to nice machine code on the fly.

Lisp's eval is not the same as other languages'; you use it implicitly all the time. Any occurrence of macros, or something like `(1 ,(+ 2 3) 4), is playing around with evaluation behind the scenes.
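A small sketch of what "code as data, with the compiler available at runtime" means in practice:

    (let ((form (list '+ 1 2 3)))   ; build code as an ordinary list
      (eval form))                  ; => 6

    ;; Compile a freshly built lambda expression on the fly and call it:
    (funcall (compile nil '(lambda (x) (* x x))) 7)   ; => 49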


Macros do not need to be implemented with eval, unless they're lexically scoped macros. They're just normal functions, normally compiled just like anything else.

What is important is incremental compilation, one top-level statement at a time.


Yes, I didn't mean to imply that you need to explicitly call eval in macros. Rather, the nature of macros themselves is to say: eval this bit at compile time and that bit at run time. You are implicitly and safely doing what most languages would need an explicit eval call to do.


Parsing Lisp is not so hard, but as you said, it requires a JIT compiler or an interpreter. There are still some platforms/scenarios where that is not wanted.


sbcl has `eval`, but it does not have a JIT compiler or an interpreter.


Common Lisp has both strong dynamic type checking and optional static type checking. For example, you can declare the static types of function parameters and return results. For a good compiler using static type declarations, look at SBCL, which makes excellent use of type information to get up to C-like performance from the compiled code. It is also very good about type inference, so if it knows the input types for a function call, it usually can infer the function's result type and use that for further type reasoning.

Common Lisp of course has defined name spaces. They are called packages.

Yes it has eval, but I have not seen a Lisp program in years which used it. And it would be easy to lint programs so it is not used, if you had a reason to be afraid of it.
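For function parameters and results specifically, the declarations look roughly like this (a minimal sketch; the function name is made up, and SBCL flags call sites that conflict with the declared types at compile time):

    (declaim (ftype (function (fixnum fixnum) fixnum) add-indices))

    (defun add-indices (i j)
      (+ i j))

    ;; A call like (add-indices 1 "two") is then flagged by SBCL at
    ;; compile time as a type conflict, before anything runs.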


How does static type checking work in CL?

One of the most important use cases for static typing is safe modification of existing code, like during refactoring. So a typical example: I have some struct with a field of one type, say a string. I use it in a few functions, reading and writing it. Now I change the field to a different type, say an array of strings instead. I want the compiler to tell me about all the code that's now invalid. How do I do that?

I quickly wrote up a simple example of the (statically) untyped code I mentioned: http://pastebin.com/f7KWETvT (I can also write a matching example in some common static language (like C or ML), but I think it's clear what I mean.)

What type annotations do I add to make that basic use case work? So what do I add such that it first type-checks when name is just a string, then change the annotation to make name an array of strings, and get two compile-time errors because the functions are now wrong.

(Unfortunately, I only know a little bit of CL (and even then mostly due to familiarity with Emacs Lisp), so my example might be a bit un-idiomatic. Feel free to turn it into idiomatic CL if something is too weird!)


If you wanted to define a type-safe person struct, you could do it with:

    (defstruct person
      (age 0 :type fixnum)
      (name "" :type string))

Now every access to that person structure should be checked by the compiler. You can create a person like:

    (setf p (make-person :age 7 :name "fred"))
    => #S(PERSON :AGE 7 :NAME "fred")

But not with:

    (setf p (make-person :age 7 :name 8))

which causes SBCL to error with:

    The value 8 is not of type STRING.
    [Condition of type TYPE-ERROR]

Equally, if you tried wrongly to define

    (defun print-hello (person)
      (let ((name (string-upcase (person-age person))))
        (format t "Hallo ~a!~%" name)))

you get at least a warning you should heed:

    ; caught WARNING:
    ;   Asserted type (OR (VECTOR CHARACTER) (VECTOR NIL) BASE-STRING SYMBOL CHARACTER)
    ;   conflicts with derived type (VALUES FIXNUM &OPTIONAL).

So, for refactoring, you would change the type signatures of your struct and then recompile all code to see whether the compiler generates errors/warnings.


Thanks! I vaguely remembered there being some problems with that, but playing around with it, it works fine and as I'd expect. Sweet! Gotta revisit CL soon then.


I don't even remember when I last saw eval being used outside of macros. And the latter is also quite a specific and rare use case.


And I don't remember when I saw an "eval" used inside a macro either. There are almost no reasons to use "eval" in any Common Lisp program.


A use case for eval inside a macro is, for example, when you want to evaluate into a constant some part of the code constructed by the macro, but you do not want to do macro staging for whatever reason.

I only did it a couple of times.


I take it, you don't use JavaScript, then?

What do you mean with “well defined namespace”?


Late edit: Sorry, I meant that CL doesn't have well-defined scoping, not namespaces.




