trvz's comments | Hacker News

This could’ve been a nice native, lickable app, as befits what it does.

Instead (and I’m not against AI), we get AI slop that isn’t native, with awful design and awful font decisions.

Someone should take the idea but implement it properly.

And a Cover Flow view is a must.


They're annoyed at people shamelessly publishing low quality crap. Calling it out is a way to raise standards back up.

More accurately, the risk has increased by at least one order of magnitude, but the confidence of the public has largely stayed the same.

Real men rawdog with root.

Could be a font rendering error, or it could be someone being really aggressive about not ignoring Slavic diacritics. Considering the topic of the article, I'd say it could be either.

If it were the latter, why would it be misspelled in the title at the top of the page?

Installing with pip on macOS is just not an acceptable option. It'll mess up your system just like npm or gem.

This needs to go on homebrew or be a zip file with an app for manual download.


Agree with you; a slightly more maintainable way to use it now is with uv or mise. I've used `uv tool install unsloth` for this one.

Yep - uv is a better fit - and you get parallel downloads as well

It's not just uv; it will also drop an nvm install in your home directory :(

Hey we're still working on making installation much better - appreciate the feedback!

We come from Python land mainly so packaging and distribution is all very new to us - homebrew will definitely be next!


uv helps you out here though. Use a pyproject.toml and uv sync. Everything will be put into the venv only, nothing spread across the whole system.

The pyproject.toml can even handle the build env for you, so you no longer need a setup.sh that installs 10 tools in a specific order with specific flags to produce a working environment. A single uv sync, and the job is done.

Plus the result is reproducible: if uv sync works this time, it will also work next time.

Highly recommend if you are still on pip.

Note: As an example, I used this to install unsloth with a ROCm setup based on unreleased git dependencies and graphics-card-specific build flags, all of which was handled with one command, `uv sync`. Doing it any other way would require a big pile of shell scripts. https://github.com/unslothai/unsloth/issues/4280#issuecommen...
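For readers unfamiliar with that workflow, a consumer project's pyproject.toml for this kind of setup might look roughly like the sketch below (the project name, URL, and branch are illustrative, not the exact config from the linked issue):

```toml
[project]
name = "finetune-env"            # illustrative project name
version = "0.1.0"
requires-python = ">=3.11"
dependencies = ["unsloth"]

[tool.uv.sources]
# Pin an unreleased git revision; `uv sync` then reproduces it exactly
unsloth = { git = "https://github.com/unslothai/unsloth", branch = "main" }
```

With this in place, `uv sync` creates the venv, resolves everything (including the git dependency), and writes a lockfile, so the next sync resolves to the same versions.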


I recommend installing uv first, then you can install any Python code you want inside a virtual environment to keep it isolated from the rest of the system.

Yep, uv pip install unsloth works as well - we probably should have just made that the default. In fact, Unsloth dynamically makes its own venv using uv if you have it installed.

I think the website should probably mention those installation presets in unsloth's pyproject.toml though. The website instructs you to install dependencies separately, but it turns out there are dedicated presets that install specific ROCm/CUDA/XPU versions in the project.

or `uv tool install unsloth` for a safe 'global' installation

On my Linux systems I use venv to avoid affecting system packages; is that not an option for this situation?
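For reference, the stock venv workflow the comment describes is short; a minimal sketch (the `.venv` directory name is just a convention):

```shell
# Create an isolated environment; system site-packages stay untouched
python3 -m venv .venv

# The venv ships its own pip, so installs land inside .venv only, e.g.:
#   .venv/bin/pip install unsloth

# The venv's interpreter reports its own prefix, not the system one
.venv/bin/python -c "import sys; print(sys.prefix)"
```

Activation (`. .venv/bin/activate`) is optional; calling the venv's binaries by path works just as well in scripts.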


I know the whole package system across most languages is a dumpster fire but for Python, uv solves a lot of problems.

    uv init
    uv add unsloth
    uv run main.py  # or whatever


Yep, uv is fantastic - we should have just made that the default!

Ah yes, came to say something similar. Python dependencies are an absolute nightmare; even with uv it feels like there's always a battle to make other people's Python apps install.

Update: It looks like it doesn't work with the current Python version, you might have to downgrade to Python 3.13 (however even then I still get `error: unexpected argument '--torch-backend' found`)


Agreed, feels like a vibe-coded frontend based on already given backend features.

Also, I've never seen any Unsloth-related software in production to this day. It feels strongly like a non-essential tool for hobbyist LLM wizards.


You would be surprised: we're the 4th largest independent distributor of LLMs in the world, and nearly every Fortune 500 company has used either our RL fine-tuning package or our quants and models. We collaborate directly with large labs, for example, to release models with bug fixes.

Unsloth provides the best and most reliable libraries for fine-tuning LLMs. We've used it for production use cases where I work; definitely solid.

Glad it was helpful!

Even a brief reading of their site would have spared you this embarrassment.

You seem to have insight into the size of OpenAI’s models.

Care to share the parameter counts for them?


Lane position should be managed by putting files into different folders.

Name and dates can also be stored in the filename and file metadata.


Working on additional metadata in the file. What makes you prefer to have lanes managed by file moves? I considered it, but was concerned about potential data loss if there are unsaved changes in the file, or the user accidentally moving a file while the agent is writing to it. I will consider it further though.
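For concreteness, the folders-as-lanes scheme being discussed might look like this on disk (lane and file names are illustrative):

```shell
# One directory per lane, one file per task
mkdir -p board/todo board/doing board/done
printf '# Fix login bug\n' > board/todo/fix-login.md

# Moving a card between lanes is just a rename; -n refuses to
# clobber an existing file, guarding against accidental overwrites
mv -n board/todo/fix-login.md board/doing/
```

A rename within one filesystem is atomic, which helps with the concurrent-write worry, though it does nothing for unsaved editor buffers still pointing at the old path.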


Seems like a silly thing to say right when x86 is getting pummelled to death by Apple and Valve, maybe slowly but steadily, while the rest of the gang looks on.


> Valve

This is a funny thing to say when Valve hasn't actually released any ARM device yet, and the Steam Deck is still fully reliant on x86. The ARM hardware they do plan to release relies on x86 emulation, an approach that historically has rarely panned out.


Worked very well for Apple in their transition to Apple Silicon.


for real, rosetta is crazy good


Apple Silicon is nice in that it has partial x86 support in hardware: the cores can run in x86's total-store-ordering memory mode.

Since they had control over the hardware, they could punt on one of the hard parts of Rosetta and bake it into the silicon.

Understanding the memory ordering requirements from binary without source and without killing performance by being overly conservative (and hell, the source itself probably has memory ordering bugs if it was only tested on x86) sounds next to impossible.


> Understanding the memory ordering requirements from binary without source and without killing performance by being overly conservative (and hell, the source itself probably has memory ordering bugs if it was only tested on x86) sounds next to impossible.

It is hard, but Microsoft came up with a hack to make it easier. MSVC (since 2019) annotates x86 binaries with metadata describing the code's actual memory ordering requirements, to tell emulators when they need to be conservative and when they can safely relax ordering. That was obviously intended to assist Microsoft's Prism emulator, but the open source FEX emulator figured out the encoding (which I believe is undocumented) and implemented the same trick on their end.

Emulators still have to do it the hard way when running older MSVC binaries of course, or ones compiled with Clang or GCC. Most commercial games are built with MSVC at least.


Steam Frame is using ARM. Not sure exactly what their reason was for doing it there.

They also use emulation backing this project: https://github.com/FEX-Emu/FEX


That is actually addressed in the article. Several architectures "pummelled" x86 before; PowerPC, for example. They did not stand the test of time, though.


What they did not win was the popularity contest, mostly thanks to Windows - the Wintel market was just too massive to compete with.

But that’s changed somewhat: Apple has managed a larger mind share and market share (while switching to ARM), the vast majority of use cases are now available on the web, which is CPU agnostic, and there is a huge amount of open source software available.

The only things for which x86 still shines a little brighter are games and native Office. But Office is mostly available on the web, on Mac, and on Windows on ARM. So, games. Which aren’t a big enough market to sustain x86’s popularity, and are a segment (soon) under attack by Valve.


> The only things for which x86 still shines a little brighter are games and native Office. But Office is mostly available on the web, on Mac, and on Windows on ARM. So, games. Which aren’t a big enough market to sustain x86’s popularity, and are a segment (soon) under attack by Valve.

You've missed a huge segment:

Random in-house apps or niche vertical-market apps that are so closely tethered to a business workflow that replacing them is a massive undertaking, where the developers at best aren't interested in improving anything and at worst no longer exist.


No, I did not miss it. That has moved to the web, either directly or through an RDP/VNC interface where the actual Windows virtual machine is hidden.

Embedded/hardware is the last segment still not replaced by web.


It absolutely has not. I agree that most of those kinds of apps could move, and those that can probably should, but more than enough have not.

I support a lot of dental practices using Patterson Eaglesoft, and they still don't officially support VMs in any form, even for the server (despite it working fine), while they have removed all support for using terminal services. Obviously the basic application works fine, but a dental practice needs to be able to take digital X-rays. Shocker: the sensor drivers only exist for Windows, and back when RDP and Citrix were supported, it required a special bridge running on both the client (which of course still had to be Windows) and the server.

We used some thin clients back in the day for front desk stations and hygiene rooms that didn't need any special hardware, but the main practice rooms and the pano stations always needed full Windows PCs.

The client app is built with PowerBuilder so it'd require a deep rewrite to support any other platforms.

The server side is a Sybase SQL Anywhere database and a SMB file share so it could easily be run natively on Linux but the vendor can't be bothered.

This is a company that still insists that every user needs local admin privileges, despite literally nothing going wrong when they don't have it, and who usually doesn't support new Windows releases until a few months after it becomes the default for new PCs.

---

There are other dental platforms that do have web interfaces intended mostly to enable the use of iPads and other tablets but switching platforms is far from straightforward for practices with years of data, custom integrations, etc. Even if you are willing to go through the trouble (or starting fresh) those platforms, to my knowledge, still require Windows PCs for digital x-ray support.


I did write “embedded/hardware”. Yes, you need special drivers for your X-ray/drill/whatever, so you've earned another decade of Windows.

But in the places I frequent (back office, municipal, finance) it’s all gone web, plus RDP-through-web (which is web in the sense that it doesn’t require Windows on the client), with centralized administration and minimal (not quite self-serve but reasonably close) thin-client users.


Most people outside the US and similar G8 countries aren't going to pay Apple.


No, but Microsoft is also going ARM. Where the US goes, the world eventually goes.


Microsoft has been trying to go ARM for a decade already; most Windows devs and consumers don't care, and backwards compatibility rules the PC world, regardless of Prism and ARM64EC.

Additionally, beware what you wish for, as Copilot+ PCs are locked down with the Pluton security processor, from Xbox and Azure Sphere.


The x86 emulation fallback is (I’ve heard; not tried) usable for the first time.

Microsoft tried in the past without a Rosetta equivalent; Apple succeeded twice with Rosetta. They did not try to switch cold turkey the way Microsoft did.


It doesn't cover device drivers, nor stuff like DAW plugins.

Apple doesn't care about backwards compatibility the way the PC world does; whoever doesn't move gets left behind.


Are you familiar with anyone who uses a 20-year-old DAW?

Retro Amiga trackers don’t count.


servers


I'd add AWS Graviton to that list as well.


Lately I've been making some AWS Lambda functions to do some simple things in Python, and chose the ARM-based instances because there wasn't any reason not to.
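For reference, opting a function into Graviton in an AWS SAM template is a one-line property (the resource name and handler below are illustrative):

```yaml
Resources:
  HelloFunction:                  # illustrative resource name
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.12
      Handler: app.handler        # illustrative handler
      Architectures:
        - arm64                   # Lambda defaults to x86_64
```

Pure-Python functions usually need no other changes; only native-extension dependencies need arm64 builds.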


Yeah, we migrated a bunch of compute to Graviton a few years ago at a previous employer and the result was better performance at a lower price point.


Anecdotally at work (SME) we are pretty much all in on ARM. MacBooks with M-series, AWS Graviton instances, even our CI runners are now ARM to match local development.


How? x86 leads on performance. It's reasonably low power now, too; perhaps not the best, but it's not aughts-era power consumption.


People should look into consumer market share numbers before commenting.


And the article explains why they'll never "win."


I believe it’s more from the point of view of kernel, compiler, and driver developers, not manufacturers and users. Standards, while not very flexible, are good for building ecosystems.


What does Valve ship without x86?


Nothing yet, but the upcoming Steam Frame VR headset is ARM based. The relevant detail is they're bankrolling the open source FEX x86 emulator, with the goal of bringing the whole Steam back-catalogue to ARM systems.


Now that Google and Apple have to (more-or-less) allow other app stores, I wonder if Valve is bankrolling FEX with the intent of selling games on mobile?


The Steam Link was ARM-based.


Wasted tokens are preferred for local models; I need the GPU mainframe in my bedroom to heat it, as I live in a third-world country with unreliable heating (Switzerland).


