
It is true that macros are regarded as a last resort mechanism: whenever you can use functions, or generic functions, you are encouraged to do so.

Also, macros are nothing more than functions: they operate on the syntax tree, at compile time, but they are still functions. All language implementations (compilers, interpreters) need to manipulate syntax trees somehow; in CL, this facility is exposed to the programmer (hence the "programmable programming language" quote from John Foderaro).

> If they need more things, add those things as first-class features,

That's not the philosophy behind Lisp. You don't wait for a new release of the standard (C++11, C++14, ...) and for compiler support: if you need to do crazy things with syntax trees, you are welcome to. Then you can publish your work and other people will use your libraries, as is done for any other library in any language.

> they still lead to unreadable code because you can't tell what a given piece of code does until you understand which parts are macros and understand all of them

So, in order to understand a piece of code, I must: (1) know what is, or isn't, a macro, and (2) if it is a macro, know what it does. The same applies to any function: don't apply a function if you don't know what it does.

You can rely on docstrings, macroexpand, and live-coding techniques (describe, inspect) to help you understand complex operations. But saying that macros make code inherently "unreadable" is not justified: what about monads? Do they simplify code or complicate it? Like macros, they introduce abstractions (loop) and can help with consistency (with-open): some people dislike that, because they feel they lose sight of what is "really" happening, but that does not make it a bad idea in itself.
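For example, MACROEXPAND-1 shows you exactly what a macro does to its subtree. A minimal sketch, using a made-up WITH-DOUBLED macro:

    (defmacro with-doubled (var value &body body)
      `(let ((,var (* 2 ,value))) ,@body))

    (macroexpand-1 '(with-doubled x 21 (print x)))
    ;; => (LET ((X (* 2 21))) (PRINT X))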

> But don't throw up your hands and let every library rewrite code willy-nilly

Strawman: people don't really do that.

Regarding your example, I really would like to know how typeclasses can be used to solve `(setf (car list) x)`. If you have a concrete example of what you mean, for example in Haskell, please explain.
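For reference, this is the place-based assignment in question: SETF knows how to update the place named by (car list), so reading and writing use the same accessor syntax.

    (let ((list (list 1 2 3)))
      (setf (car list) 10)   ; update the place named by (CAR LIST)
      list)
    ;; => (10 2 3)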



> So, in order to understand a piece of code, I must: (1) know what is, or isn't, a macro, and (2) if it is a macro, know what it does. The same applies to any function: don't apply a function if you don't know what it does.

The problem is the nonlocality. A function call can only ever have an effect at the exact point in the syntax tree where it is (side-effecting functions are bad); you can work your way up, understand each argument in isolation, then understand the function call applied to those arguments, and then you understand the value of the whole function call expression and can continue working your way up the tree. There's no such assurance with macros, because a macro arbitrarily far up in the tree could be changing the meaning of a token many lines further down.

> Regarding your example, I really would like to know how typeclasses can be used to solve `(setf (car list) x)`. If you have a concrete example of what you mean, for example in Haskell, please explain.

I'm a Scala guy rather than Haskell. You can't do it with language-level state, but you shouldn't be using that anyway; lenses (inside a state monad) give you that symmetry between get and set (apologies if I mix up the syntax, there are a couple of different implementations):

    (for {
      x ← list_ |-> head toState
      y = x * 2
      _ ← list_ |-> head := y
    } yield {}).run(someList)
:= is just a method and Lens is just a typeclass; you can write your own instances (though for most "normal" cases you just derive them with a 1- or even 0-liner) for your own types and then the same state-manipulation code will work when that state is contained inside one of your objects.


I forgot about lenses, thanks.

About non-locality of macros vs. functions:

Macros, being functions, provide this kind of assurance too, at the syntax-tree level: you only operate on the local, current subtree. They can indeed modify the meaning of symbols in that subtree, but I think you are taking things to the extreme. Of course, an arbitrarily nasty macro "far up in the tree" (how deep are typical functions, anyway?) could do horrible things. By that logic, we should also ban sugar, because someone who eats too much sugar dies.

However, the average usage is more subtle. For example, when you DEFUN a function, you are registering a function in your current environment (a side effect) through a macro, which does not dramatically change the semantics of the function's body compared to, say, an anonymous LAMBDA (the only change is, I think, the ability to RETURN-FROM a named function).
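The exact expansion of DEFUN is implementation-specific, but conceptually it does little more than this (a sketch, not any implementation's actual expansion):

    (defun double (x) (* 2 x))

    ;; behaves roughly like:
    (setf (fdefinition 'double)
          (lambda (x)
            (block double        ; this BLOCK is what RETURN-FROM targets
              (* 2 x))))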

CL is not a functional language: you can loop over lists or across vectors, you can setf special variables and mutate strings. You have static and dynamic non-local exits (return, throw/catch, signal) and even gotos (tagbody). That does not mean you should use them all and go against best practices and common sense. Just because a feature could be misused does not make it irrelevant.

For the record, I have nothing against functional languages (or any other language, really, except of course PHP ;-)). It just seems unfair to single out macros. They should be avoided whenever possible, but they have their uses: C has macros, C++ has templates, and even Haskell and Scala have grown their own:

http://scalamacros.org/usecases/index.html

https://downloads.haskell.org/~ghc/7.6.1/docs/html/users_gui...

http://neilmitchell.blogspot.fr/2007/01/does-haskell-need-ma...

Finally: macros are not a hack used to fix a supposedly missing feature of the language (e.g. "I don't need macros because I have lazy evaluation", etc.). Macros allow programmers to interact with the compiler (reader macros, compiler-macros): they are the primitive provided to perform meta-programming tasks.


Yeah, I think they're a mistake in Scala too. Hopefully I'm wrong.

I think "the primitive" is a good characterization; I view writing a macro the same way I view using a mutex directly, or as I said, goto (or, less trollishly, manually expressing a program in continuation-passing-style). That is, these things are occasionally necessary, but it's usually a sign that your higher-level constructs are inadequate.


Sure, macros can help define otherwise lacking features: you don't have promises in CL? (ql:quickload :cl-async-future). Here, you have promises.

But that is not my point. What I tried to say is that macros are orthogonal to high-level constructs because they are used at the compiler level, for meta-programming purposes.

For example, take cl-ppcre, a.k.a. Perl-like regexes.

When you call (ppcre:regex-replace "<my regex>" ...) with a constant string, a compiler-macro calls create-scanner at compile time to avoid parsing the regex and building the scanner at runtime.

Maybe you think that regexes should be built in and that a sufficiently smart compiler should be able to do the same job. But how is this different from what we have in CL? The compiler is sufficiently smart precisely because we allow the programmer to interact with the compilation process.
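A hedged sketch of the technique (MY-SCAN and MAKE-SCANNER are hypothetical stand-ins, not cl-ppcre's actual code):

    (defun my-scan (regex string)
      (funcall (make-scanner regex) string))

    ;; When the regex argument is a literal string, shift the parsing
    ;; and scanner construction to compile/load time.
    (define-compiler-macro my-scan (&whole form regex string)
      (if (stringp regex)
          `(funcall (load-time-value (make-scanner ,regex)) ,string)
          form))   ; otherwise fall back to the ordinary call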

Macros are adequate high-level constructs when you want to manipulate programs as data.


A sufficiently smart compiler probably should be able to inline constant expressions, yes. But that's a transform we know is safe, and I would want my compiler to have quite a lot of assurance that its constant-inlining worked correctly (probably strong typing and high test coverage). I think it's worth imposing a bit of structure on any code that runs in the compiler: compiler plugins at least have a test suite and a release process, and even then I wouldn't use a plugin (other than maybe a read-only analyzer) on production code. And of course I'd like my compiler code to be structured well, such that it could be used as a library (or indeed, such that it is a library, from the point of view of those compiler plugins). But having a good library for transforming source code doesn't mean I want to allow random one-liner snippets to transform my source code.


The regex example is more about partial evaluation than constant inlining. So instead of commenting on "random one-liner snippets" and "structure", which I find boring, I'd prefer to share an example of partial evaluation techniques applied to Haskell:

http://repository.readscheme.org/ftp/papers/coutts_transfer_...

The whole essay is interesting. Then, at section 4, we discover that the author relies on Template Haskell:

> Using the Template Haskell infrastructure confers a number of advantages. There are the straight-forward obvious advantages that TH provides an abstract syntax tree, a parser, pretty printer and a monad providing unique name supply and error reporting. Because it is part of a compiler, the compiler also does the other ordinary semantic checks including typechecking. This is obviously useful since our partial evaluator can only be expected to work for well formed and typed Haskell programs and so we are spared from performing these checks. A less obvious advantage is that we may be able to do binding time checking in terms of a slightly modified Haskell type system and implement it using the existing compiler's type checker rather than having to write one from scratch.


> That is, these things are occasionally necessary, but it's usually a sign that your higher-level constructs are inadequate.

Could you clarify with some examples? What constructs are you referring to that are inadequate?


If you're using a mutex directly, it's because your higher-level concurrency constructs - task queues, actors etc. - are inadequate. If you're using goto, it's because your higher-level control flow constructs - if/for/while, exceptions - are inadequate. Likewise if you're using macros, I'd argue it's usually because your within-language constructs are inadequate; things like LINQ (which I've seen people use macros to emulate in other languages) are important and common enough that there should be a standard way of doing them in the language.


Maybe I'm not seeing the forest for the trees here but I still don't understand what constructs you are referring to which are inadequate. Most Lisps are built from a very small selection of "special forms" and the rest is built in Lisp itself. From this perspective it's not a very large language at all. Is there a specific construct you had in mind?

Mutexes, goto... I'm afraid I don't understand. CL, for example, is an ANSI specification and makes no mention of threads or threading. That hasn't stopped implementations from providing OS support for threading and library authors from maintaining cross-implementation compatibility libraries. Mutexes are just one well-known method for solving a very specific problem with shared-memory threads. It's not even a primitive in the language itself. Macros helped us make threading possible without having to call together a new ANSI committee to extend the language specification.

In a similar vein I'm sure there are plenty of LINQ packages available in CL with all of the attendant parallelization patterns, etc.


> Is there a specific construct you had in mind?

No, what I have in mind is the things people use macros for. I've talked about assignment or LINQ; maybe you should talk about a specific case where you think macros are a good solution?

> Macros helped us make threading possible without having to call together a new ANSI committee to extend the language specification.

ANSI committees aren't just for fun. Maybe you should've called one - threading is a pretty fundamental language feature and worth standardizing across all implementations.


> I've talked about assignment or LINQ; maybe you should talk about a specific case where you think macros are a good solution?

If SETF didn't convince you (and it's much more interesting than the original author made it out to be) then perhaps a Lisp->GLSL compiler will[0]?

Or perhaps the various binding macros (WITH-OPEN-STREAM, WITH-OPEN-FILE), which handle the bindings and cleanup over the primitive forms for you?
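For instance, WITH-OPEN-FILE binds the stream and guarantees it is closed even on a non-local exit, roughly by expanding into OPEN plus UNWIND-PROTECT:

    (with-open-file (in "/etc/hostname")
      (read-line in))
    ;; IN is closed when the body exits, normally or not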

Or a more contentious example: LOOP and FORMAT. LOOP generalizes almost any looping pattern imaginable into a small DSL. Some dislike it because you can't write macros that expand into its clauses, but that's why we have ITERATE. Whichever you choose ultimately compiles down to the primitives.
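A small taste of the LOOP DSL, combining iteration, filtering, and accumulation in one clause chain:

    (loop for x in '(1 2 3 4 5)
          when (evenp x)
            collect (* x x))
    ;; => (4 16)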

There is already a significant body of literature which investigates the amazing benefits of Lisp macros. I suggest reading a few. Let Over Lambda is a particularly interesting tome.

I ask for your reasoning only because I've never encountered it before. I've heard the usual "it's too powerful" argument enough not to be swayed by it anymore. But macros as a sign of the inferiority of the language itself? That one is rarer and, it appears, unfounded.

> ANSI committees aren't just for fun. Maybe you should've called one - threading is a pretty fundamental language feature and worth standardizing across all implementations.

Plenty of people new to Lisps have waltzed into c.l.l or #lisp over the years and asked why a new ANSI committee hasn't been formed to "modernize" the language. The answer I've heard repeated ad nauseam is that the specification was drafted so that another committee wouldn't need to be formed. The language definition only defines a very primitive baseline and the rest is extensible by the implementers and users.

Threading works absolutely fine in any supporting implementation. I've written highly threaded code without a hitch in CL. It's not a problem that it's not defined in an ANSI specification.

And perhaps I'm wrong, but I don't think threading is a language issue so much as a facility provided by the operating system and underlying hardware. POSIX is the standard that defines the threading API for most Unix-like operating systems; Windows defines its own.

And I'm pretty sure the IEEE committee didn't publish the POSIX threads standard until 1995, a year after the ANSI CL specification. I would hazard a guess that the committee at the time didn't see the point in crystallizing the specification around an incomplete, as-yet-unpublished standard for a single platform. That was something the implementers could handle on each platform they supported... and the library authors could handle with cross-implementation compatibility packages.

So no, I don't think I need to call together an ANSI committee and I'm pretty sure if I did I would be laughed out of #lisp.

Threading is important and it was taken care of. I don't really have a problem with the way it was built in CL. I just use the BORDEAUX-THREADS package if I'm writing a library, or stick to the implementation's API if I'm writing an application for a specific platform... which is, oddly, how you'd do it if you were writing a threaded C application too.

[0] https://www.youtube.com/watch?v=hBHDdYayOkE&list=UUMV8p6Lb-b...


> perhaps a Lisp->GLSL compiler will[0]?

I'm all for having multiple compilers and reusing the same AST-manipulation code in them. I just think the application of that to code should be a bit more structured, and not rely solely on programmer restraint.

> Or perhaps the various destructuring macros (WITH-OPEN-STREAM WITH-OPEN-FILE) which handle the bindings and returns over the primitive forms for you?

I'd expect a language to be able to do that; in Scala I do it with a library and for/yield. For/yield is still a bit magic, but it strikes a very good syntactic balance - code in a for-yield looks very similar to normal code, but the slight difference (← instead of =) makes it clear that something special is going on, even if you're many lines down from the line that tells you which monad it is.

> Plenty of people new to Lisps have waltzed into c.l.l or #lisp over the years and asked why a new ANSI committee hasn't been formed to "modernize" the language. The answer I've heard repeated ad nauseum is that the specification was drafted so that another committee wouldn't need to be formed. The language definition only defines a very primitive baseline and the rest is extensible by the implementers and users.

And as a result there are multiple incompatible solutions to the problem, and I get the impression that's contributed to Lisp's unpopularity - admittedly I'm not a Lisp expert. I've watched a similar scenario cause problems with async I/O in Perl: multiple incompatible implementations, libraries that attempted to abstract over them but added their own layers of incompatibilities, and a language that became very unapproachable. I think the approach Python has taken to async I/O - allow a lot of libraries to experiment with their own approaches initially, but then, once the issues have been worked through and reached some level of consensus, standardize the API as part of the language - is a better one and makes the language more accessible.

> Threading is important and it was taken care of. I don't really have a problem with the way it was built in CL. I just use the BORDEAUX-THREADS package if I'm writing a library or stick to the implementations APIs if I'm writing an application for a specific platform... which is oddly how you'd do it if you were writing a threaded C application too.

I'm spoiled by Java (well, the JVM), which did standardize threading cross-platform as part of the language, and I think has reaped rewards from doing so.


> I'd expect a language to be able to do that

CL does do that. Those macros are defined by the specification. If they weren't then you could write them yourself.

If you need special syntax to help you understand your program you have access to the reader and can dispatch special macros to transform your syntax into ordinary lisp forms. I've done it to implement readers for little assemblers.

I don't see how the comparison to Scala is relevant. We're talking here about what the weakness in Lisp is that macros cover up. There are only 27 primitive forms described in the CL specification that everything ultimately "compiles" down to. Do you think there is a form which doesn't exist or one that is impaired which if replaced or fixed would make macros unnecessary?

Macros are such an intrinsic part of the language that I'm surprised to hear criticism from someone that they're a symptom of a design problem!

> I'm spoiled by Java (well, the JVM), which did standardize threading cross-platform as part of the language, and I think has reaped rewards from doing so.

It's really not that big of an issue. There are plenty of highly-threaded cross-platform applications written in nearly every language under the sun. CL does better than most because it can abstract, at the language level, the differences in platform implementations without touching the underlying implementation or specification.

I think the CL spec could actually have left more things under-specified, like file systems. One ugly wart in CL is the pathname specification. At the time, the committee felt it was important to crystallize the part of the specification dealing with the multitude of file-system addressing schemes available then. A couple of decades later we find that wasn't really necessary, as few of those file systems are in use today... yet a conforming implementation must still provide this API. Terrible.

On the other side of the coin, the JVM designers decided that TCO wasn't necessary. Today they are wringing their hands trying to get it into the next release. The ANSI CL specification doesn't require implementations to implement TCO, and yet most do, without harm to anyone's code. The JVM decided the right level of abstraction was bytecode, and now they have to be super careful not to break the world whenever they make a change. Lisp chose the language itself as the abstraction... which makes sense, since it's a symbolic-processing language ("list processing", I believe, is a misnomer and an artifact of history).

They didn't specify a threading API and we have much more flexibility today than if they had settled on one.

What remains astounding to me is the quality of machine code some of the open source implementations can produce. SBCL is a wonderful compiler. CCL is also really good.


> We're talking here about what the weakness in Lisp is that macros cover up. There are only 27 primitive forms described in the CL specification that everything ultimately "compiles" down to. Do you think there is a form which doesn't exist or one that is impaired which if replaced or fixed would make macros unnecessary?

Macros as a compiler implementation technique for standardized features are great; it's non-standardized macros I object to. That said, I do think that in this case things would be improved by a single language-level feature, the equivalent of do notation or for/yield (and sure, probably implemented via a (single) macro). Then WITH-OPEN-FILE/WITH-OPEN-STREAM could become ordinary library functions, and the common part would be evident, rather than macros that happen to have similar implementations without that similarity ever being exposed to the programmer. And a programmer wanting to add support for a similar WITH-MYCUSTOMTYPE construct would only need to write a function, not a macro, just as they do in Scala.


The common part is the special operator UNWIND-PROTECT. You can use it to define a function (not a macro). This is in fact a recommended way to design your programs: define a CALL-WITH-X function, then write a simple WITH-X macro that expands into a funcall of that function, with a more convenient syntax (see http://random-state.net/log/3390120648.html).
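A minimal sketch of that pattern (CALL-WITH-LOG-FILE and WITH-LOG-FILE are illustrative names, not standard operators):

    ;; All the actual work lives in an ordinary function...
    (defun call-with-log-file (path thunk)
      (let ((stream (open path :direction :output
                               :if-exists :append
                               :if-does-not-exist :create)))
        (unwind-protect
             (funcall thunk stream)
          (close stream))))          ; runs even on a non-local exit

    ;; ...and the macro only provides the convenient binding syntax.
    (defmacro with-log-file ((var path) &body body)
      `(call-with-log-file ,path (lambda (,var) ,@body)))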

Scala's resource-management techniques are built upon try/finally blocks, aren't they? If you want, you can use Scala ARM, which is a non-standardized library. Then you can use a monadic approach, an imperative one, etc. This boils down to conventions and idioms, which are not standardized either.

You seem to think that there should be an authority acting as gatekeeper between the good things and the bad things added to a language, and that the ability to "mess" with the abstract syntax tree leads to chaos (I guess your alignment is Lawful Good). We disagree. CL is designed to evolve smoothly over time, and macros are one of the features that enable this.


scala-arm is not standard but it's just a library; it provides ordinary functions that still obey the kind of locality I was talking about. That is, any construct that's using a managed resource has a "<-" at the point of use, alerting you that something funny's going on.



