Hacker News | cpburns2009's comments

It reads like a fictional story. There are curious signs of AI editing starting in his February posts: a significant increase in em-dash usage, fewer parentheticals, and fewer obvious punctuation errors.

PyPy is limited to maintenance mode due to a lack of funding and contributors. In the past, I think a few contributors or a bit of funding is what helped push out "minor" PyPy versions. It's too bad PyPy couldn't take the federal funding the PSF threw away.

> It's too bad PyPy couldn't take the federal funding the PSF threw away.

The PSF is primarily a political advocacy organisation, so it wouldn't make sense for them to use the money for Python.


From what I gather, the maxxer/maxxing suffix is young Gen Z slang for hyper-fixation. Looksmaxxing is being obsessed with your looks. Jestermaxxing would then be playing an outrageous jester or clown for the sake of it? Maybe it's a synonym for rage-baiting? I'll return to guarding my lawn.

One problem with Linux is there are so many editors to choose from. I assume you want to exclude the Java IDEs, which rules out Eclipse and NetBeans. The basic editors are Gedit and Kate. The native IDEs are Geany and KDevelop. Then there are the Scintilla-based editors/IDEs, which are probably closer to what you're looking for: SciTE (text editor) or CodeLite (IDE).

Exactly this. A family member of mine had no good option before Starlink. Dial-up is obsolete, traditional satellite internet was not available due to some angle of a valley or treeline. A 4G signal booster can only do so much with a poor signal.

> What if a non-native English speaker uses the help of an AI model in the formulation of some issue/task?

How can you be sure the AI translation is accurately conveying what the speaker wrote? The reality is you can't accommodate every hypothetical scenario.

> What about having a plugin in your IDE that rather gives syntax and small code fragment suggestions ("autocomplete on steroids")? Does this policy mean that the programmers are also restricted on the IDE and plugins that they are allowed to have installed if they want to contribute?

Nobody is talking about advanced autocomplete when they want to ban AI code. It's prompt-generated code.


The clearly LLM PRs I receive are formatted similarly to:

    ## Summary
    ...

    ## Problem
    ...

    ## Solution
    ...

    ## Verification
    ...
They're too methodical, and they duplicate code whenever the change is longer than a single-line fix. I've never received a pull request formatted like that from a human.

Don't you use the dialectic?

What quantization are you running on the 5080? I'm waiting to receive mine.

If you want a chance at running a large model, it needs to be quantized. The unsloth user on Hugging Face maintains popular quantizations for many models, Qwen included, and I think they developed dynamic GGUF quantization.

Take Qwen/Qwen3.5-35B-A3B, for example. It's 72 GB, while unsloth/Qwen3.5-35B-A3B-GGUF has quantizations ranging from 9 to 38 GB.
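As a back-of-envelope sketch (the bits-per-weight figures below are approximate, not exact GGUF numbers), file size scales roughly linearly with bits per weight relative to the 16-bit original:

```python
def quantized_size_gb(bf16_size_gb: float, bits_per_weight: float) -> float:
    """Rough estimate: GGUF file size scales ~linearly with bits per weight (BF16 = 16)."""
    return bf16_size_gb * bits_per_weight / 16

# A 72 GB BF16 checkpoint at a few common quantization levels
for name, bits in [("Q2_K", 2.6), ("Q4_K_M", 4.8), ("Q8_0", 8.5)]:
    print(f"{name}: ~{quantized_size_gb(72, bits):.0f} GB")
```

That puts the estimates at roughly 12, 22, and 38 GB, which lines up with the 9-38 GB spread of the published quants.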


Does llama.cpp support Qwen3.5 yet? When I tried it before, it failed saying "qwen35moe" is an unsupported architecture.


Yes, but make sure you grab the latest llama.cpp release.

New model archs usually involve code changes.
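A sketch of building from the latest source, which picks up new architectures faster than waiting on a packaged release (the backend flag depends on your GPU; this assumes the standard llama.cpp CMake build):

```shell
# Build llama.cpp from current source so newly added model
# architectures are recognized
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_HIP=ON    # AMD; use -DGGML_CUDA=ON for NVIDIA
cmake --build build --config Release -j
./build/bin/llama-cli --version
```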


If you're running Ollama, you'll have to wait a little longer for its embedded version of llama.cpp to catch up. It can be a couple of days or weeks behind.


Awesome! It looks like the llama.cpp-hip AUR package was updated today to b8179, and it works.


You would need the Dynamic 2.0 GGUF as discussed in the article.

But mmmmmm, Q8_K_XL looks mighty nice.

