Programmer as wizard, programmer as engineer (2018) (tedinski.com)
166 points by matfil on Jan 2, 2019 | hide | past | favorite | 74 comments


It would be interesting if programmers took "sketching" to be a valuable and necessary part of programming. It's common practice for painters to make a pencil draft first. It's common in industrial design to produce prototypes.

However, when it comes to code, we treat it similarly to writing. We may have a first draft, but the final draft is often nothing more than a cleaned-up version of it. I could be wrong; I never wrote professionally.

It would be interesting if we had languages that would be great for prototyping but designed to be unusable in production. However, I'm having a hard time imagining properties that don't already exist in languages like Python and JS. You'd want weak typing, of course, and you'd be OK with poor security. Maybe we'd add some nice features that would make the language run slowly, since running in prod would be a non-goal.


I'm a big fan of literally sketching. One of my favorite things to do when approaching a hard programming problem is to enumerate a list of things I already know and then try to piece those together into an algorithm. I typically do this on paper, making diagrams and examples of how things will fit together.

But my "notation" here is always miles away from what code would look like, and I think "sketching" in pseudo-code or real code would fail to provide this same springboard. One of the biggest things with writing code is that data structures (especially in the form of objects) quickly lock you down to a certain design and it becomes progressively harder to think of the problem in any other way than your initial view of it. Plus, the hard parts typically require huge amounts of infrastructure to be in place before you can run them for the first time, which works much worse than an abstract brain model in my experience.

>It would be interesting if we had languages that would be great for prototyping but designed to be unusable in production.

The reality here is that people would find a way to make it usable in production and this vision of a good first draft would quickly fall apart :/


I think that TDD is in a way "sketching" for programmers: you draw the outline (the expected results), and then you fill in and refine the details one by one to make the code work. When dealing with complex code I also have a habit of structuring the code with comments first: I write a comment for each step in the flow (like a subtitle for that block of code, e.g. parse input, calculate this, get that, output result), and only when that logic makes sense as a whole do I start to implement the actual code below the comments, one by one. I find this very helpful for visualizing the logic flow.
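As a rough illustration of the comment-first habit (the function and steps here are invented for the example), the comments go down first as subtitles, and the code is filled in underneath each one afterwards:

```python
def summarize_orders(raw_lines):
    """Turn raw "customer, amount" lines into per-customer totals."""
    # 1. Parse input: split each line into (customer, amount).
    parsed = []
    for line in raw_lines:
        customer, amount = line.split(",")
        parsed.append((customer.strip(), float(amount)))

    # 2. Calculate: accumulate totals per customer.
    totals = {}
    for customer, amount in parsed:
        totals[customer] = totals.get(customer, 0.0) + amount

    # 3. Output result: return totals sorted by customer name.
    return dict(sorted(totals.items()))
```

Before any of the bodies exist, the three comments alone already read as the logic flow.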


Part of the reason I like C-style header files for declarations is that I use these as a 'sketch' / 'story map' for the piece of code I'm writing. I tend to spend much more time per line writing my headers than my code, because once I've thought through the public interface and it's close to its final form, the internal code pretty much writes itself.
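A rough Python analogue of this header-first habit (the `Cache` example is invented for illustration) is to write the public interface as abstract stubs first, and only fill in bodies once the interface feels final:

```python
from abc import ABC, abstractmethod
from typing import Optional

class Cache(ABC):
    """Public interface sketched first, like a C header; bodies come later."""

    @abstractmethod
    def get(self, key: str) -> Optional[str]:
        """Return the cached value, or None on a miss."""

    @abstractmethod
    def put(self, key: str, value: str) -> None:
        """Store a value under the given key."""

class DictCache(Cache):
    # Once the interface above is settled, the implementation is mechanical.
    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def put(self, key: str, value: str) -> None:
        self._data[key] = value
```

The abstract class plays the role of the header: the thinking happens in the signatures and docstrings, and the concrete class "writes itself".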


I notice that I have two modes of programming. This is clearest with Haskell.

For simple code I make type errors into runtime errors and let the compiler figure out type signatures. This emulates dynamic languages with a great linter pretty well.

For complex code I write rigid type signatures first and then play type tetris. This works best with dependently typed languages and is called type driven development. Idris can fill in large parts of the program with this - since type signatures are equivalent to propositions and programs to proofs this is basically proof search.


> It would be interesting if programmers took "sketching" to be a valuable and necessary part of programming. It's common practice for painters to make a pencil draft first. It's common in industrial design to produce prototypes.

The issue is that in art or industrial design, someone can't reasonably sell the prototype as a completed work. With software, particularly anything that is web-based, as soon as a sales person sees anything that vaguely resembles something they can sell, they will sell it. Sales means bookings/income, so that prototype becomes the MVP.


There is a saying, "real writing is rewriting."

Ideally the final draft is something that has been very aggressively refactored, multiple times, with input from a Refactoring Engineer.

(Did I just invent a new job category? I don't think there's currently any equivalent of an Editor in the Software Engineering world; code review is a chaotic approximation.)

Unfortunately there is usually pressure, and maybe also desire, to just make it work, maybe with tests, and move on to the next thing, at least in companies.


I believe that a Refactoring Engineer would be similar to a contractor specializing in performance, security, or systems optimization. I've had the pleasure of working with a couple of these people, and I have to say that I felt an initial (pride|arrogance)-based resistance, but they were all friendly and had some really valuable insights, especially since their expertise seemed to be lower-level than mine and they were able to look at working code and find places to improve it.

In essence I agree with you that this should be an on-staff role, but perhaps the reason we are not seeing this is that the job is usually fairly quick (a couple of weeks/months), and for the business it's hard to justify a full-time staff member to perform this service. There is also something to be said for how people in this role are able to gather a wealth of experience by working on a high number of code bases instead of being stuck continuously working on the same few software projects all year.


"Build one to throw away"

- The Mythical Man-Month


"Can't we just spruce up the working prototype and deploy it to production?"

-- Management


If it only were an actual question...


I've learned in time to never make a working UX in my prototypes: just raw data showing the innards working, and that's it. A functional or pretty UX just gets shipped to prod no matter what.


Don't programmers do this when they create a prototype and iterate from there until they have a fully functional product? Unlike painting, programming can refactor parts and rearrange everything. There is no need for an additional sketch if the sketch can become the painting.


> Unlike painting, programming can refactor parts and rearrange everything.

You've never seen a "//I don't know why this works, but don't touch it" in code?


I mean, some writers like to make up outlines and stuff to plan out books, though so do stream of consciousness style and clean it up in later drafts. I can see that planning out your code can be beneficial and honestly ideal. Just break it down in a sort of outline what all the program needs and then what each part of the code would need, etc.


Well, I sometimes start by writing pseudo-code first before turning it into actual code later.


How about all the libraries and tools turn over every two months? Oh, there's JavaScript!


Node.js turns 10 in a few months. TypeScript is 6 years old. Webpack is 6 years old. React is 5 years old. Redux is 3 years old.

Front end JS and TS went through a phase of rapid change 2 to 6 years ago, but I don't think that rapid change is still happening. Certainly there are new libraries, and maintainers still like to play fast and loose with backwards compatibility, but it's not obvious that you have to keep up to date with every single new framework anymore.


Isn't that just domain modeling in UML?


I'm sceptical of his argument that "we've gone from dynamic being trendy back to typed (Java), because 'people' had to maintain dynamic codebases".

An equivalent, but probably equally not-the-real-explanation, argument would be that we've gone from an era of opportunity into an era of oligopoly as the internet titans have emerged, and so the "coolest kids" whom everyone cargo-cults have gone from being fast-growing startups to members of the big-5 elite. It's kind of the same as his argument, just with detail on who his "people" are and why, but it changes the implications.

I think it's more, as he touches on, that the two are converging. I think eventually some sort of pluggable typing will win (the "proofs" on your system won't be a single compilation pass, but maybe different typing for different parts of the system, with specific proofs run between compile and runtime as the two blur), which will look more like gradual typing.


According to the definitions provided, it seems that engineering is the wrong approach for the vast majority of systems (especially popular web-based platforms). We should be leaning more towards wizarding.

Most systems change all the time. So long as a company needs executives to make decisions and steer in different directions depending on the economic conditions, companies also need the flexibility to change their code.

Even the Linux Kernel which is now decades old is still being changed all the time. If you use tools which assume that every line of code you write is not going to change, then unless you're programming an aircraft or a medical hardware device you're probably using the wrong tools.

You should not assume that just because some low level module is deeply nested within the code, that it means that it should not be changed or thrown out.

That's why I prefer dynamically typed languages for web systems; they start with the right assumption about the ever-evolving nature of the project.

If JavaScript was designed to be statically typed from the beginning, web browsers would not have attained the usefulness or popularity that they have today.


I disagree with your assertion that dynamic languages are easier to change. In my experience programs written in a typed language are much easier to change, because IDEs can exactly tell you how something interconnects (and thus where breaks might happen) and the compiler can give you some confidence that you didn't miss anything, or alternatively throw an error if you did miss something.

You can partially provide the same benefits in dynamic languages with tests, but at that point you're paying the same cost as with a type system.
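To make that trade-off concrete (a toy example; `total` and its guarantee are invented for illustration), the same safety can come either from an annotation that a checker like mypy verifies before the code ever runs, or from a hand-written test exercising the failure at runtime:

```python
def total(prices: list) -> float:
    """With type annotations, a static checker flags a call like
    total("oops") before runtime; here prices is expected to be a
    list of numbers."""
    return sum(prices)

def test_total_rejects_strings():
    """The dynamic-language equivalent: pay for the same guarantee
    with a test that provokes the failure at runtime."""
    try:
        total("oops")  # sum() over the characters raises TypeError
    except TypeError:
        return True
    return False
```

Either way a cost is paid; the question is only whether it is paid in annotations or in test code.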


Statically typed languages encourage bad programming practices precisely because they make it easier to track type references across many files.

Ideally, types should not traverse too many files; there should be a clear class hierarchy and each level should provide more abstraction. Also it's better to encourage passing simple types like strings, numbers or clones of objects instead of active instances... Because if an instance of a class is referenced in many parts of the code, it's difficult to track which part of the code was responsible for changes made to that instance; also it's harder to maintain your train of thought when traversing many files to debug a simple operation related to a specific instance type.
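The "pass clones, not live instances" point can be sketched like this (the `Order` class and `audit` function are invented for the example): handing a deep copy to other code means no one else can mutate your instance behind your back.

```python
import copy

class Order:
    def __init__(self, items):
        self.items = items

def audit(order):
    # This function mutates what it receives; because the caller
    # passed a deep copy, the mutation cannot leak back out.
    order.items.append("audit-marker")
    return len(order.items)

original = Order(["book", "pen"])
audit(copy.deepcopy(original))            # the clone is mutated...
assert original.items == ["book", "pen"]  # ...the original is untouched
```

The price is copying cost and the loss of intentional shared state, which is why it works best for simple value-like data.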

On a related note, I've noticed at multiple companies that when they force developers to use specific IDEs which make it easy to find stuff, the directory structure of the projects tends to suffer (since existing developers don't rely on the directory structure to find things; so they stop caring about it) - This makes it harder for newcomers to make sense of the code and makes the project totally dependent on the IDE.


I feel like "each level should provide more abstraction" is at odds with "encourage passing simple types like strings, numbers".

Also, I'd go further than "pass clones of objects instead of active instances" all the way to "only use mutable objects when performance demands it, and keep the scope of the mutability small".
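One way to keep mutability scoped like that in Python (a sketch; the `Point` type is a made-up example) is a frozen dataclass, where "changing" a value means constructing a new object:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: float
    y: float

p1 = Point(1.0, 2.0)
p2 = replace(p1, x=5.0)       # "mutation" yields a fresh object
assert p1 == Point(1.0, 2.0)  # the original is unchanged
assert p2 == Point(5.0, 2.0)
```

Attempting `p1.x = 9.0` raises `FrozenInstanceError`, so any remaining mutation has to be explicit and local.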


Agreed. In my experience, the cost of reasoning about a code base is much higher than the cost to type the code in an editor. And reasoning about a code base in a language like Java using an IDE like IntelliJ is much less costly than reasoning about a large code base in a language like Perl using VIM and the terminal.


> That's why I prefer dynamically typed languages for web systems; they start with the right assumption about the ever-evolving nature of the project.

I agree with your assessment that we need to design for evolution.

I disagree with your assessment that dynamic languages provide that. Dynamic languages certainly do hit a sweet spot for fast iteration on small projects. As soon as your project gets large or long running, the assurances that static types provide get really nice. Witness how much effort has gone into building type systems for dynamic languages: Typescript (MS) is exploding and had vigorous competition from Flow (Facebook). Python has mypy plus a bunch of corporate backing: MonkeyType (Instagram) and PyAnnotate (Dropbox).

I feel much more comfortable aggressively refactoring in static languages.


To add to your point, PHP has been growing support for types more and more with every major release. Starting with 5.0, then 5.1, 5.4, 7.0, 7.1, and 7.2, each of the aforementioned releases added greater type hinting in function definitions and support for specifying types.


Another problem is highlighted by Joe Armstrong and Rich Hickey, which is that as soon as you move beyond the confines of a single program and start dealing with a system of programs, type guarantees become less reliable. So you have these types that may make complete sense in your program's world, but they drift out of sync as the larger system evolves. This is a general problem with any sort of proof mechanism. I still like types and think they can be useful, but they are only a single tool in designing a robust system. They have limitations and drawbacks like everything else.


> types that may make complete sense in your program's world, but they drift out of sync as the larger system evolves.

At the risk of being inflammatory, I would say that you've been looking at poorly designed systems.

A running joke about Google engineers is that all they do is shuffle data from one ProtoBuf to another. It's largely true. The upshot of that is that every single service/system has common, typed definitions of the data structures they use and require. Data never drifts out of sync.

To be sure, a lot of the problems I dealt with there were around data structure migration on large systems — but there was never any uncertainty about the structure of the data itself.

I like JSON APIs, but I think JSON especially contributes to people getting hand-wavy and casual about the structure of data across systems in stupid ways (see NoSQL vs SQL).


I don't think that's inflammatory at all. I tend to agree with you that a very good engineering organization would enforce data-first design and maintenance practices and build types around that. But unfortunately a lot of places don't operate that way and so my comment was meant in more of a general sense, not in a "types cannot possibly be useful at a systems level" way.


A robust system is typed. A robust ecosystem is typed. Everything else is evaluated and dependent upon some agency.


I'm not sure to what extent I believe this.

There are a lot of overlapping distinctions, some blurry, I think. He chooses some for wizardry vs engineering, but I don't know whether they're the best ones. For example he puts "magic" = implicit, but I would normally say highly implicit code is "engineering", because it implies that someone has understood and explored the problem enough to write a very tuned framework or library. I think the reasons for putting very implicit code in with wizardry could be that the person using it might not understand it, and that it's somewhat in opposition to strong type systems.

The distinctions: wizardry/engineering; high uptime/downtime possible; efficiency important/efficiency not important; large codebase/small codebase; not implicit/implicit (I disagree most with this one); problem is understood/code is exploratory (I think this is the most important distinction); high specialism/low specialism of coders; changes slowly/changes quickly; typed/untyped.

I would say websites are engineering, because the problem space is well understood, they need high uptime, and they tend to be written by specialists (React guy, Django guy, etc.).


> Most systems change all the time.

The way I think of it is "wizardry" has good initial velocity (as it relates to change), but velocity decreases with time, whereas for "engineering" velocity is terrible at first (initial investment), improves a bit over time, but most importantly it settles to a constant level.

Engineering is O(log n), wizardry is O(n^2).


That may be true, but immediately makes me think of Rich Hickey's "constants matter".


Mostly they matter if they're a different order of magnitude, and here they're not.


>That's why I prefer dynamically typed languages for web systems; they start with the right assumption about the ever-evolving nature of the project.

The inverse can be said: a statically typed language makes it easier to replace parts later, do automatic refactorings, etc., and know you got everything right.

Dynamic just lets you have less ceremony (no type definitions) but forces you to keep all this in your head (you still don't want to pass the wrong type of thing to the wrong receiver) and doesn't give you any assurances.

With type inference and/or autocomplete, statically typed languages are faster to write and easier to get right than dynamic.


Gradual typing does not seem to work very well. It requires a runtime check every time you pass data from a "dynamically-typed" to a "statically-typed" part of the program, and then a conversion step from statically typed to tagged data for the converse. These things eat into performance; in the end it's no better than what dynamic types would give you, and a lot worse than what one could achieve with static types.
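The boundary cost described here can be sketched as a decorator standing in for the guard a gradual type system inserts where untyped code calls into a typed function (this illustrates the idea only; it is not how any particular gradual typing implementation works):

```python
import functools

def checked(param_type, return_type):
    """Simulate the runtime guard inserted at the dynamic/static
    boundary: every call pays for an isinstance check both ways."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(arg):
            if not isinstance(arg, param_type):      # cost on every call
                raise TypeError(f"expected {param_type.__name__}")
            result = fn(arg)
            if not isinstance(result, return_type):  # and on every return
                raise TypeError(f"expected {return_type.__name__} result")
            return result
        return wrapper
    return decorate

@checked(int, int)
def double(n):
    return n * 2
```

`double(21)` pays the checks and returns 42; `double("x")` raises `TypeError` at the boundary rather than somewhere deep inside.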


"Wizard, Engineer" reminds me of Yegge's "Software Liberal/Conservative" approach to risk. https://plus.google.com/110981030061712822816/posts/KaSKeg4v...

Albeit, rather than "wizards like implicit/magic, engineers prefer explicit/boilerplate/maintainability", the difference Yegge suggests is the management of risk.


Thanks, that's a pretty interesting take.

(Also a reminder that we're probably going to lose some interesting stuff when Google+ goes kaboom...)


Lisp is the ultimate "wizarding" programming language.

But when I was miraculously called upon to maintain an enterprise code base in Common Lisp, it was an absolute joy. Whenever they encountered a roadblock in maintenance, the Lisp wizards who had come before me just wizarded up a solution. One of the things that stuck out was that it had its own custom test framework, head and shoulders above xUnit, Mocha, or any other commonly-used test framework. Adding a new test was virtually a one-liner; the test would generate test data, send it to the server, and check the server's response against an XML template provided by the test case.
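The "adding a test is a one-liner" property usually comes from a table-driven harness; a minimal sketch of the idea (the request/expected shapes are invented here, and the real system compared responses against XML templates rather than dicts):

```python
def run_cases(handler, cases):
    """Each test is one (request, expected) row; the harness does the rest."""
    failures = []
    for request, expected in cases:
        actual = handler(request)
        if actual != expected:
            failures.append((request, actual, expected))
    return failures

# Adding a new test is literally one line in this table:
CASES = [
    ({"op": "add", "a": 1, "b": 2}, {"result": 3}),
    ({"op": "add", "a": -1, "b": 1}, {"result": 0}),
]

def handler(req):
    # Stand-in for the server round-trip in the real framework.
    return {"result": req["a"] + req["b"]}

assert run_cases(handler, CASES) == []
```

All the ceremony lives in the harness once, so each additional test costs only its data.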


A really good alternative to Python and C: Clojure + Java

Mostly because

- Clojure is very very terse

- Java has the battle-tested libs

- they run on the same (J)VM - so no FFI required in your code


I think the problem with this is, if I'm going to say "completely rewrite this [function/class/module] into another language", then there's a good chance you want that language to be about as fast as possible. I guess because a lot of problems fall straight from "speed absolutely doesn't matter" to "this is the bottleneck of the whole thing".

I think that is a reason why there's a lot of Python/C++ in hedge fund land. I've written some Clojure but don't know the C interop story for it.


> completely rewrite this into another language

You just opened Pandora's box, for a million reasons most of which are unbeknownst to both of us :) But for the sake of argument I will continue under this assumption.

> language to be about as fast as possible

If you're interested in Mathy stuff like Machine Learning, FFT, ... then maybe. But even for those you usually have JNI bindings, so it's easy to use most of those mathy C libs if necessary.

But I guess that 95% of all software isn't about speed but about something else (correctness, maintainability, safety against threats, portability, ...) because costs today are usually dictated by manpower costs or those arising from safety/security incidents and much less often by hardware costs compared to say a decade ago.


My limited experience taught me that doing anything mathy in Java is painful due to types (well, more the lack of math-specific built-in types). The input/output is usually going to be in primitive types, while the library likely uses some home-brew custom typing (or Commons, if you're lucky) for complex numbers, matrices, etc. So you have to do the type-conversion song-and-dance.

The Python libraries never seem to care. Just give them a list of a list of number-ish values and off they go.

Java might have some great ML/Math libraries, but the fact that Python dominates data science suggests that my experience is a real-world pain-point.


When I say completely rewrite, I mean rewrite a class, method or library which is what we're talking about (since you probably started writing in python/clojure). Sorry, it was poorly phrased.


Arguably you could mix and match any of the jvm languages to your liking. Jython, JRuby, Kotlin, Scala are all reasonable alternatives to Java while being easier to hire for than Clojure.


Back then I used to use BeanShell for the most frequently changing part of the code (prototypes/logic).


> What do we do?

For me, it's "simple": I never ever ever ever write one to throw away (and I do mean that in a literal, absolutist sense, which is rare for me). This does mean that I often can't use wizard tools (so it can be less fun to build). I never use dynamic languages, and build on the JVM (Scala or Java), because I know it will scale, and there are battle-tested libraries that do nearly everything under the sun floating out there. (If your org has a different "blessed" platform for production services, then use that.) It isn't quite as quick to build the MVP as if I was just hacking something together. But I can do it fast enough, and still end up with a maintainable, evolve-able code base.

It's not a perfect process. My first version usually has only a few tests that verify behaviors that I had trouble modeling clearly in code and didn't feel confident about. Sometimes I miss error cases here and there that someone else has to find and deal with later. Also note that, because I use strongly-typed languages, I can push a decent amount of correctness verification onto the type system, so the compiler catches a ton of errors that I'd need a giant test suite to catch using many dynamic languages. The tests that I do write focus on logical correctness, not code correctness.

But at the end of the day, I deliver products on time that I feel much more comfortable being robust in a production environment than build-one-to-throw-away prototypes. Stuff that I'm fine holding a pager for if I need to. I have several "prototypes" that are still running in production several years after my first release, maintained by other people after I've moved on. And by and large they still contain a lot of the original code, and the design remains close to (and/or continues to be heavily influenced by) the original design.

On the flip side, I've had to deal with code that's been thrown together with the expectation that it could be thrown away later (of course it never can be), and it's incredibly difficult to bring it up to a robustness level that would be deemed acceptable for a generally-available product. These code bases constantly set off pagers for dubious reasons and write unactionable crap to logging systems... and it doesn't have to be that way!


I paint (mostly acrylic) and code. To me, the notion of building a practice run is very liberating; it enables me to think about components, how they should be named/organized, how they interact without the pressure of "getting it right the first time". The time-box for throwaway work should be small. Small enough such that you feel a firm confident grip on the plan at hand. If you have something fully end-to-end operational then more than likely the throwaway has been overworked.

I encourage you to consider trying out more throw-away work before writing the real thing. You find yourself with not only a more lucid vision of what goes in to the "real thing" but a little more muscle memory in getting started down the right path.


Unfortunately you cannot throw away a building with a fresco, and many useful applications are of this sort of size.

(Even something as "simple" as email.)

The best you can do is paint the fresco on paper, digitize, design the building outline and iterate 3D building designs. And you have not considered materials and structural design at this point and do not even have a scale physical model.

Prototypes work for small, enclosed apps with limited functionality. Not even video games most of the time and these are relatively small and specific. Word processor? Good luck. DAW? Oh my. CAD/CAM tool? You've got to be kidding me. IDE? Nope. A compiler? You'll rewrite it a few times. Desktop environment? A good recipe to lose users. Even CRUD...

You cannot iterate a painting into a building with a fresco.

The thing is that currently an MVP is definitely huge, as much as startups would like everyone else to believe otherwise.


In my experience even things which are explicitly prototypes can be dangerous, as it can appear to be in the short term interest of the business to take a prototype and modify it as little as possible to get it shipped. I've seen this result in massive headaches and thousands of wasted man hours, and for what? Shipping an "MVP" that can't be effectively iterated on 2 weeks sooner? The worst case of this I saw, the technical aspects of this were bad enough that it was (in my opinion) what caused the product to fail.


If you go the prototype route, it is critical that management knows that it is an exploration and no running code will result from it - just knowledge. I am a super big fan of prototypes in my domain. They take under a week usually. We use them to crack the hardest nuts on an upcoming project and usually learn something(s) very valuable that guide the design of the v1 product/solution. Usually these are the learnings we would not have gotten to until mid-way or more in a real project, but we can prototype it, preventing all the potential rework.


I don't think that this is a constructive post. It is probably totally true that, in whatever seat you're sitting in, there's no situation where code is exploratory, but unless you're saying something about how common that is, it's not interesting for the problem. I think you might be implying that there are no/few seats where an exploratory approach is useful, in which case I really disagree.


Cute way to put it. Like many, I'm in the midst of a painful transition from a quick-and-dirty MVP to a well-engineered end product.

Everything Rich Hickey has been giving talks on, and Clojure itself, seem to be the best _solution_ I've seen.


There are quite a few false assumptions here.

> People have certainly managed to create test suites that make it harder to maintain the code.

Sure, but pretending that this is the norm is just not true. And arguing against an extreme case can be done against anything.

> I get the impression Google has been able to migrate a lot of C++ and Python to Go using this approach.

Not only is this heavily suspect (I'd love to read a citation for this) but if there's a language that represents the "engineering" side (?) pretty clearly, it is C++.

> Gradual type systems has started to garner a lot more interest.

No they haven't. Those have existed for over five decades. Another problem with these articles is how they seem oddly ignorant of the history of computer science. This is especially odd coming from a PhD (assuming that I looked up the right person on Google).

And, last but not least, some of the best engineered pieces of code are precisely shells, dynamic languages and frameworks.

It's things like these that make these articles seem like they were written by Java developers annoyed because Java is no longer the trendiest toy.


> No they haven't. Those have existed for over five decades.

There is nothing in "started to garner more interest" that implies something is new. It's even the other way around.


Using PyTorch (and the broader space of machine learning algorithms under the “deep learning” category) really makes me feel like a wizard. But the downside to being Python-dependent is that putting PyTorch stuff into products is not easy. I hope PyTorch 1.0 will change that.

Note: This article was not written with Machine Learning in mind, and I will have to re-read the article to better articulate my thoughts on “Machine Learning Wizardry” and juxtapose my own ideas with those of the article author.

Kudos to author: The article’s main metaphor is excellent because it got my creative juices flowing (i.e., brain working at 110% for a few brief moments).


You can always use ONNX to convert a PyTorch-trained model to other formats. https://onnx.ai/


A universal, component-based application is the answer to the "boundary" question in the article. You get both the wizard and the engineering solution, which is simple, easy to delete, and FUN.


Brad Cox described this in “Object-oriented Programming”, a book I first read in 1989. Building applications simply by plugging together off-the-shelf components remains a beautiful vision, and it is almost entirely unrealised 30 years on.


> “Object-oriented Programming”, [...] Building applications simply by plugging together off-the-shelf components

I've read just here on HN that microservices architecture is none other than an implementation of the original concept of OOP.


Possibly. But my understanding of “the concept of OOP” involves a separate identity for each object.

Suppose the thing you are dealing with is “Customers”, then with OOP each Customer is a separate object, while with MSOP you must communicate to the service which Customer you are talking about. The thing you talk to and the thing you are talking about have different identities.


Is it what DLL files on Windows are/used to be?


My understanding is that it is more like COM, CORBA and NetBeans. I guess it is one of those things that can mean different things to different people. At its core, it's just separation of concerns, usually branded and (re-)packaged. Often times, that core idea seems to get lost somewhere around the way...


The core idea of integrating components written in multiple languages got totally lost along the way from COM (a way for components written in different languages to interoperate while avoiding the DLL Hell problem) => ActiveX (a web-friendly marketing name for COM because you couldn't google for "COM") => Java Beans (a marketing name for vaporware to express the idea that Sun had an alternative to ActiveX, which was misleadingly positioned as a replacement for AWT, but when it was finally implemented was only useful for web servers, and certainly wouldn't ever work with anything but 100% Pure Java) => POJO (Plain Old Java Objects, which have nothing to do with user interface widgets or any other language than Java).


I will assume that you meant JavaBeans (the component classes) and not NetBeans (the IDE).


Indeed, thanks!


I wholeheartedly agree that software development as a whole, especially in the past few years, has been more in the spirit of 'wizardry'.

But with the speed at which the sheer amount of new software 'stuff' comes out each year, there simply hasn't been enough time to develop rigorous engineering specs or best practices for all of those tools.

Perhaps we're starting to see an initial version of a universally accepted model, at least for the frontend, with the mentioned Typescript + Flux (I'll just say Redux).

Many assume that Redux is only a state container, and at face value it is, but more semantically it is (when implemented correctly) a very logical boilerplate that puts everything related to state in its proper place, so any developer can look at the code base and immediately get a general idea of where state is set, what events exist, and how state is used throughout an application.
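Redux's discipline is language-agnostic: all state changes flow through one pure reducer over (state, action) pairs. A hedged Python rendering of the pattern (the action names and state shape are invented for illustration):

```python
def reducer(state, action):
    """Pure function: (old state, action) -> new state; never mutates."""
    if action["type"] == "INCREMENT":
        return {**state, "count": state["count"] + 1}
    if action["type"] == "SET_USER":
        return {**state, "user": action["payload"]}
    return state  # unknown actions leave state untouched

state = {"count": 0, "user": None}
state = reducer(state, {"type": "INCREMENT"})
state = reducer(state, {"type": "SET_USER", "payload": "ada"})
assert state == {"count": 1, "user": "ada"}
```

Because every possible event is enumerated in one place and every transition is a pure function, reading the reducer tells you where state is set and what events exist.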

I'd like to see things like these very strict patterns emerge for other tools, like Node. For example, I could imagine a snippet of THE universally accepted express boilerplate for a login, given a specific backend... (can an email + password login really get _that_ customized?)

...argh, on second thought I suppose it can, but even then such a boilerplate could have sections with a freebie spot for customizations

Eh, it's late and I wonder if such universal pattern ideas are a pipedream... I suppose only time will tell...


"Python + C": I am taking this approach while working and learning with the Raspberry Pi. Although it can do both Python and C, I know that in electronics, working with low-level C will be a lot more beneficial for my learning and for the code itself. When I migrate to more advanced topics like scripting and ML, I will use both Python and C to implement what is needed.
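One common way to mix the two on a Raspberry Pi (or any Linux box) is ctypes, which loads a C library directly from Python. This sketch calls the C math library's `sqrt`; the library name lookup is an assumption that can differ per platform:

```python
import ctypes
import ctypes.util

# Locate and load the C math library (libm on Linux; the fallback
# soname is an assumption for systems where find_library returns None).
libm_name = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libm_name)

# C functions need their signatures declared before use, or ctypes
# will truncate doubles to ints.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

assert libm.sqrt(144.0) == 12.0
```

The same pattern extends to your own compiled C: build a shared object with `gcc -shared -fPIC`, load it with `ctypes.CDLL`, and declare the signatures the same way.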


It sounds like optionally typed languages like Typescript should be really good if you want to start in "wizarding" mode and then change the code more into an "engineering" solution.


A wizard then should have a spellbook, one filled with all sorts of spells written out for immediate use. Maintainable hacks if you will. Those are probably just scripts though I suppose?


The problem with spellbooks is one seen in media on the subject of wizardry, that the wizard spends a lot of time memorizing spells. This is why the most useful spells get turned into artifacts that the wizard does not have to always memorize to know how to cast correctly.

What is needed is an artifact like spellbook, that the wizard when faced with a situation could describe it to the spellbook and get back the correct spell or combination of spells to solve the situation. Attempts have been made to create such an artifact, but unfortunately the resulting spellbooks still take a long time to find the correct spell and when reading the spells you often find that there are missing ingredients or a complicated set of gestures that must be performed to make the spell work, and you have to read all about these gestures in turn to figure out which ones you really need.


Have you heard of hoogle (Haskell's type signature search)? It almost exactly feels like what you've described.


No, thanks for telling me!



