Yes, CHIP-8 is kind of the standard "I want to get into emulators" first project. In my latest book, Computer Science from Scratch, we go CHIP-8 -> NES in chapters 5 and 6. GBA is quite a step up from CHIP-8. I would suggest doing NES or Game Boy next, but of course with today's LLM help GBA is very reasonable if you are going that route.
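For a sense of why CHIP-8 is the usual starting point: the core of an interpreter is just a fetch/decode loop over two-byte opcodes. A rough sketch in Python (only a few of the ~35 opcodes shown; display, timers, and input omitted):

    # Minimal sketch of a CHIP-8 fetch/decode loop.
    memory = bytearray(4096)   # CHIP-8 machines had 4 KB of RAM
    V = [0] * 16               # sixteen 8-bit registers, V0..VF
    pc = 0x200                 # programs conventionally load at 0x200

    def step():
        global pc
        opcode = (memory[pc] << 8) | memory[pc + 1]  # two bytes, big-endian
        pc += 2
        if opcode & 0xF000 == 0x1000:    # 1NNN: jump to address NNN
            pc = opcode & 0x0FFF
        elif opcode & 0xF000 == 0x6000:  # 6XNN: set register VX to NN
            V[(opcode & 0x0F00) >> 8] = opcode & 0x00FF
        elif opcode & 0xF000 == 0x7000:  # 7XNN: add NN to VX (wraps mod 256)
            x = (opcode & 0x0F00) >> 8
            V[x] = (V[x] + (opcode & 0x00FF)) & 0xFF
        # ...the remaining opcodes follow the same pattern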
Syntax is not the focus of your testing, but it’s often a prerequisite to speaking the same language clearly and accurately. Think of it not as taking off points for a missing semicolon, but as checking that the student understands the difference between the syntax for a method call and a property access. The different syntax conveys different meaning, so we should expect some basic level of accuracy in the language in question. At least that’s how I see it.
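To make that concrete, a toy Python example (the class and names are invented purely for illustration):

    # The two call sites below look similar but mean different things,
    # and that's the distinction a student should be able to make.
    class Account:
        def __init__(self, balance):
            self._balance = balance

        @property
        def balance(self):           # property: read a value
            return self._balance

        def withdraw(self, amount):  # method: perform an action
            self._balance -= amount

    acct = Account(100)
    print(acct.balance)   # property access: no parentheses
    acct.withdraw(30)     # method call: parentheses and an argument
    print(acct.balance)   # prints 70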
It’s hard to maintain that consistency when marking at scale, though. It’s not whether the exam writer sees it that way; it’s whether the markers understand intent and objective over pedantic nuance.
I agree with your premise about why accurate evaluation matters, but your post comes across as pretty bitter. Unless you work with him, you really don’t know that the job he has at Booz Allen is an “I just need to show up” job. Perhaps he has other great traits, like high social or emotional intelligence, that make him good at his job beyond whatever was being evaluated on those projects you helped him with.
I think you're missing the point. The majority of jobs at companies like Booz Allen are sort of like Kabuki theater and don't require any technical competence. The main responsibilities are to show up on time and present a certain image to customers.
He definitely didn't have the book ghostwritten. It does have advice on issues that go beyond faith. But I think it's much more useful as a guide to the faithful than the non-faithful. We interviewed him last year about the book:
To be fair, comments here are graded on kindness, civility, curiosity, intellectual gravity, technical merit, novelty, thoughtfulness, substantiveness, objective fact, not fulminating, not cross examining, steelmanning vs strawmanning, not containing memes, not containing humor, not expressing positive emotion, not expressing negative emotion, not being snarky, sneering, overly cynical, not cynical enough, being "curmudgeonly", class bias, political bias, religious bias, cultural bias, not using "flamewar style" and many other heuristics.
If you followed all of the guidelines for comments to the letter, you would wind up sounding wooden, if not entirely like an AI.
Hi, I'm the one who originally wrote Ladybird's AppKit UI. Just FYI, it was written long before Ladybird split from SerenityOS, and even longer before Swift was on the table. I only chose Objective-C++ because it was the language I was familiar with at the time :)
No, I don't think you're missing anything. He never answered the question in the title of the post ("Faster Than Dijkstra?"). Instead he went off on a huge tangent about his experience writing software for routers, and he is dismissive of the algorithm because the router problem space he worked in never had a node count high enough to warrant a more complex algorithm. Dijkstra's algorithm is used for problem spaces with far more nodes than he mentions. Basically, it's an article that touches on some interesting things but doesn't say much about its central question.
So your opinion is based on just reading the table of contents? I always find it disconcerting when someone writes a multi-paragraph commentary on a work they didn't actually read or see.
I understand that you're commenting on the approach more than the contents, but you're pretty dismissive of it without actually reading the details of how they went about things.
You're not quite judging a book by its cover, but you're not that far beyond that.
> you're pretty dismissive of it without actually reading the details of how they went about things...
I wasn't really trying to be dismissive (other than saying that I personally would not recommend it to a young person interested in programming and deep learning). I was mostly trying to start a discussion about the best way to teach/learn this subject. I hoped to attract more knowledgeable commenters (such as yourself). A day later I still stand by my personal opinion that it's probably best to learn the mathematics first, and to use the lingua franca of the domain.
I'd like to add that, from my perspective, this pseudo-critique was a very small part of my comment. I was mostly trying to say "It's very important and difficult to keep early students' interest. Bravo on the novel approach taken in the book. It's much better than what I had as a college student." That might not have been clear in my comment.
> You're not quite judging a book by its cover, but you're not that far beyond that.
Fair. But there wasn't anything else to read in the submission and I was trying to start a curious conversation. Despite my good intentions it was a bad comment.
Ironically, this post comes across to me as written by an LLM. The em-dashes, the prepositions, the "not this, that" lines. As a college instructor, I can usually tell. I put it through GPTZero and it said it's 96% LLM-written. GPTZero is not foolproof, but I think it's likely right on this one, and I find it very ironic.
I find this knee-jerk reaction, where anything showing certain stylistic choices falls under suspicion of "might be generated", to have become a tedious cliché.
Unless the author had live-streamed the writing process, how could we know? Humans have been exposed to LLM-generated texts on nearly all channels for more than three years, so by now it would be a surprise if there had not been a reciprocal reaction. Writers imitate what they read; the tools we shaped may have started to shape us.
The article starts with a philosophically bad analogy, in my opinion. C -> Java != Java -> LLM, because with previous transitions the intermediate product (the code) changed its form. LLMs still produce the same intermediate product. I expanded on this in a post a couple of months back:
"The intermediate product is the source code itself. The intermediate goal of a software development project is to produce robust maintainable source code. The end product is to produce a binary. New programming languages changed the intermediate product. When a team changed from using assembly, to C, to Java, it drastically changed its intermediate product. That came with new tools built around different language ecosystems and different programming paradigms and philosophies. Which in turn came with new ways of refactoring, thinking about software architecture, and working together.
LLMs don’t do that in the same way. The intermediate product of LLMs is still the Java or C or Rust or Python that came before them. English is not the intermediate product, as much as some may say it is. You don’t go prompt->binary. You still go prompt->source code->changes to source code from hand editing or further prompts->binary. It’s a distinction that matters.
Until LLMs are fully autonomous with virtually no human guidance or oversight, source code in existing languages will continue to be the intermediate product. And that means many of the ways that we work together will continue to be the same (how we architect source code, store and review it, collaborate on it, refactor it, etc.) in a way that it wasn’t with prior transitions. These processes are just supercharged and easier because the LLM is supporting us or doing much of the work for us."
I think we can already experience a revolution with LLMs that are not fully autonomous. The potential is that an engineering-like approach to a prompt flow can let you design and review (not write) a lot more code than before. Though you're 100% correct that the analogy doesn't strictly hold until we can stop looking at the code, the way a JS dev doesn't look at what the interpreter emits.
What would you say if someone had a project written in, let's say, PureScript, and they used a Java backend to generate (and overwrite) Java code that they also kept under version control? If they claimed this was a Java project, you would probably disagree, right? Seems to me that LLMs are the same thing, provided you also store the prompt and everything else needed to reproduce the same code generation process. Since LLMs can be made deterministic, I don't see why that wouldn't be possible.
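As a rough sketch of what storing "everything else" could look like, assuming the Hugging Face transformers library; the model name and prompt here are just placeholders:

    # Sketch: reproducible code generation by pinning the model and
    # using greedy decoding. Model name and prompt are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "codellama/CodeLlama-7b-hf"  # pin an exact revision in practice
    PROMPT = "Write a Java method that reverses a string."

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    inputs = tokenizer(PROMPT, return_tensors="pt")
    # do_sample=False means greedy decoding: the same weights plus the
    # same prompt yield the same tokens (modulo hardware/kernel quirks).
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    print(tokenizer.decode(out[0], skip_special_tokens=True))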
PureScript is a programming language. English is not. A better analogy: what would you say about someone who uses a no-code solution that behind the scenes writes Java? No-code -> Java is similar to LLM -> Java.
I'm not debating whether LLMs are amazing tools or whether they change programming. Clearly both are true. I'm debating whether people are using accurate analogies.
> PureScript is a programming language. English is not.
Why can’t English be a programming language? You would absolutely be able to describe a program in English well enough that it would unambiguously be able to instruct a person on the exact program to write. If it can do that, why couldn’t it be used to tell a computer exactly what program to write?
> Why can’t English be a programming language? You would absolutely be able to describe a program in English well enough that it would unambiguously be able to instruct a person on the exact program to write
Various attempts have been made. We got COBOL, BASIC, SQL, … A programming language needs to be formal, and English is not.
I don’t think you can do that. Or at least if you could, it would be an unintelligible version of English that would not seem much different from a programming language.
I agree with your conclusion, but I don't think it'd necessarily be unintelligible. I think you can describe a program unambiguously using everyday natural language; it'd just be tediously inefficient to interpret.
To make it sensible you'd end up standardising the way you say things (words, order, etc.) and probably adding punctuation and formatting conventions to make it easier to read, as in the sketch below.
By then you're basically at a verbose programming language, and the last step to an actual programming language is just dropping a few filler words here and there to make it more concise while preserving the meaning.
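As a hypothetical illustration (everything here is invented), the "standardised English" in the comments ends up mapping almost one-to-one onto the code:

    # Each line of "standardised English" (the comments) maps almost
    # one-to-one onto the statement next to it.
    prices = [3, 5, 2]         # "Let prices be the list three, five, two."
    total = 0                  # "Let total be zero."
    for price in prices:       # "For each price in prices,"
        total = total + price  #     "add that price to total."
    print(total)               # "Print total."  -> prints 10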
However, I think there is a misunderstanding between being "deterministic" and being "unambiguous". Even C is an ambiguous programming language, but it is "deterministic" in that it behaves in the same ambiguous/undefined way under the same conditions.
The same can be achieved with LLMs too. They are "more" ambiguous, of course, and if someone doesn't want that, then they have to resort to exactly what you just described. But that was not the point I was making.
I'm not sure there's any conflict with what you're saying, which I guess is that language can describe instructions which are deterministic while still being ambiguous in certain ways.
My point is just a narrower version of that: where language is completely unambiguous, it is also deterministic when interpreted in some deterministic way. In that sense plain, intelligible English can be a sort of (very verbose) programming language if you just ensure it is unambiguous, which is certainly possible.
It may be that this can still be the case if it's partly ambiguous but that doesn't conflict with the narrower case.
I think we're agreed on LLMs in that they introduce non-determinism in the interpretation of even completely unambiguous instructions. So it's all thrown out as the input is only relevant in some probabilistic sense.
Here's a very simple algorithm: you tell the other person (in English) literally what key they have to press next. That way you can easily have them write all the Java code you want in a deterministic and reproducible way.
And yes, maybe that doesn't seem much different from a programming language, which... is the point, no? But it's still natural English.
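A toy sketch of that dictation scheme in Python (the instruction format is made up; any fixed convention would do):

    # A rigid subset of English that maps deterministically to keystrokes.
    NAMED_KEYS = {"space": " ", "newline": "\n", "semicolon": ";"}

    def press(instruction):
        # "press x" -> "x", "press semicolon" -> ";"
        key = instruction.removeprefix("press ").strip()
        return NAMED_KEYS.get(key, key)

    steps = ["press i", "press n", "press t", "press space",
             "press x", "press semicolon"]
    print("".join(press(s) for s in steps))  # prints: int x;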
English CAN be ambiguous, but it doesn't have to be.
Think about it. Human beings are able to work out ambiguity when it arises between people, given enough time and dedication, and how do they do it? They use English (or another equivalent human language). With enough back and forth, clarifying questions, or enough specificity in the words you choose, you can resolve any ambiguity.
Or, think about it this way. In order for the ambiguity to be a problem, there would have to exist an ambiguity that could not be removed with more English words. Can you think of any example of ambiguous language, where you are unable to describe and eliminate the ambiguity only using English words?
Human beings are able to work out the ambiguity because a lot of meaning is carried in shared context, which in turn arises out of cultural grounding. That achieves disambiguation, but only in a limited sense. If humans could perfectly disambiguate, you wouldn't have people having disputes among otherwise loving spouses and friends, arising out of merely misunderstanding what the other person said.
Programming languages are written to eliminate that ambiguity because you don't want your bank server to make a payment because it misinterpreted ambiguous language in the same way that you might misinterpret your spouse's remarks.
Can that ambiguity be resolved with more English words? Maybe. But that would require humans to be perfect communicators, which is not easy, because again, if it were possible, humans would have learnt to communicate perfectly with the people closest to them first.
A deterministic prompt + seed used to generate an output is interesting as a way to record deterministically how code came about, but it's also not a thing people are actually doing. Right now, everyone is slinging around LLM outputs without trying to be reproducible; no seed, nothing. What you've described and what the article describes are very different.
Yes, you are right. I was mostly speaking in theoretical terms; currently people don't work like that. And you would also have to use the same trained LLM, of course, so using a third-party provider probably doesn't give that guarantee.