
Incredible, I have never heard of std::autodiff before. Isn't it rare for a programming language to provide AD within the standard library? Even Julia doesn't have it built in; I wouldn't have expected Rust, of all languages, to experiment with it in std.

It makes use of https://github.com/EnzymeAD/enzyme, which is an LLVM plugin. Since Rust also uses LLVM in its backend, the plugin can be enabled in the Rust toolchain when autodiff is turned on. So it is a bit of compiler black magic rather than a direct implementation in the standard library.

You can read some motivation for it at the following link:

https://rust-lang.github.io/rust-project-goals/2024h2/Rust-f...

Note that it also discusses `std::offload`, which might be of interest as well.
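
For a taste of the API, here is a minimal sketch based on the examples in the project-goals discussion. Hedge: the attribute is experimental and nightly-only, the exact names have been in flux, and you need a toolchain built with Enzyme support.

```rust
#![feature(autodiff)]
use std::autodiff::autodiff;

// Asks the compiler (via Enzyme) to generate `d_square`, the
// reverse-mode derivative of `square`. `Duplicated` marks the input
// as having a shadow argument that accumulates its gradient;
// `Active` marks the return value as the quantity differentiated.
#[autodiff(d_square, Reverse, Duplicated, Active)]
fn square(x: &f64) -> f64 {
    x * x
}

fn main() {
    let x = 3.0;
    let mut dx = 0.0;
    // The trailing 1.0 seeds the derivative of the active return.
    let y = d_square(&x, &mut dx, 1.0);
    println!("square({x}) = {y}, d/dx = {dx}"); // 9 and 6
}
```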


First, you don't have to feel guilty about anything, since forking open source projects to make changes tailored to your use case is as old as open source itself. It is, in fact, the primary benefit of open source.

Second, it is not a given that your change would be accepted regardless of who wrote it. Maybe the feature is too niche for its complexity, or maybe it is better implemented with more generality or extensibility than makes sense for your own use. In those cases, your change might have been rejected upstream anyway, so having it only locally is a perfectly fine solution.

Third, if you believe it is actually useful to a broader audience, open an issue requesting the feature, and say an LLM implemented it in an hour. Then the maintainers can prompt their own LLM to implement it with ease, or do whatever they want with their project.


You could post a comment or open a discussion explaining what you did and asking if they would be interested in the feature or a PR.

Apple had device attestation deployed like a year before Google even proposed it: https://httptoolkit.com/blog/apple-private-access-tokens-att...

hacker news when discovering that apple deployed WEI, for ages, with beloved IT company Cloudflare, affecting hundreds of millions of users: "aww, you're sweet"

hacker news when reading that google is doing the same thing for the rest of the userbase: "hello, human resources?"


I thought that Cloudflare system worked on any hardware and that the tokens were anonymous. Did that change at some point? If it didn't, then yeah, it should get a very different reaction!

(Edit: it looks like the new system is still private and still interlinked with the old system that lets you use any hardware? I think?)

Also, I don't know how you could have missed the widespread criticism of Apple and especially Cloudflare on this site.


Apple has blessed Cloudflare's WAF with backend access to the Apple ID service tokens that it manages for things like iMessage authenticity.

I think it has also blessed Amazon's WAF.

Cloudflare has a product called Turnstile that I'm sure uses this Apple IDS token.

Mobile Safari generally is not shown Cloudflare captchas or similar because of Apple-Cloudflare cooperation. It's not complicated.

Apple calls it a "Private Access Token", but that makes it sound more like a DRM scheme (which it sort of is: it manages your right to a free-as-in-beer access scheme) than a broad web environment integrity solution.


Were you attempting to give us an example of the Goombah Fallacy? Because this is a picture-perfect one.

Really. I think HN hates Cloudflare with a searing passion (quite unjustified, if you ask me).

> In 2008, the Department of Homeland Security (DHS) contacted Unspam Technologies, asking, "Do you have any idea how valuable the data you have is?" The DHS' email served as the impetus for Cloudflare, a technology company Prince co-founded with Holloway and fellow Harvard Business School graduate Michelle Zatlyn the following year.

https://en.wikipedia.org/wiki/Matthew_Prince#:~:text=In%2020...

They're literally a government surveillance program larping as a private company; many such cases.


You want Typst: https://github.com/typst/typst

It's like the JSX of LaTeX: markup embedded in a programming language, not a programming language pretending to be markup.


> I used Typst for a few weeks. It already feels much more understandable, consistent, hackable, and customizable. I guess that is the difference between an ad hoc macro system and an actually thought through programming language.

> The only drawback I can see is the ecosystem being smaller and less mature. That is, however, counteracted by being able to do things on your own, without immersing yourself deeply in LaTeX for years. Also, it will improve with time.


> learn SQL, even though it's out of fashion

In what world is SQL out of fashion??


There are now ~2 generations of professional engineers for whom SQL was rarely or never a thing to learn. Between the hard(er) split among front-end and back-end developers, ORMs improving, and the (flawed) idea that NoSQL would make SQL irrelevant, it has become somewhat of a niche skill.

Think about Firebase. One can be full stack on an app built on Firebase and be successful without ever touching SQL. Firebase is very popular, and has been for some time.

Source: I have worked with a set of otherwise solid engineering teams and can say that SQL familiarity has given me a leg up on very smart engineers who nonetheless do not do relational databases.


The only way you can get away with creating an application without touching SQL is if you offload the logic to your backend language, and then I don't think you'd be efficient enough to scale.

Also, can someone actually understand the logic of joins, indexes, PKs, etc. well enough to create an efficient and scalable DB, and not simply have learned SQL by proximity? For concreteness, here is what those three concepts look like in a few lines; see the sketch below.
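
This is just an illustrative sketch, assuming Rust with the rusqlite crate (the crate choice is mine, purely to keep the example self-contained); any SQL interface would show the same thing:

```rust
use rusqlite::{params, Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;

    // Primary keys define row identity; the foreign key links the tables.
    conn.execute_batch(
        "CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
         CREATE TABLE orders (id INTEGER PRIMARY KEY,
                              user_id INTEGER NOT NULL REFERENCES users(id),
                              total REAL NOT NULL);
         -- Without this index, the join below scans all of `orders` for
         -- every user; with it, lookups by user_id stay cheap as data grows.
         CREATE INDEX idx_orders_user_id ON orders(user_id);",
    )?;

    conn.execute("INSERT INTO users (id, name) VALUES (1, 'ada')", params![])?;
    conn.execute(
        "INSERT INTO orders (user_id, total) VALUES (1, 9.99), (1, 20.00)",
        params![],
    )?;

    // The join itself: match orders to their owning user via the FK.
    let mut stmt = conn.prepare(
        "SELECT u.name, SUM(o.total)
         FROM users u JOIN orders o ON o.user_id = u.id
         GROUP BY u.id",
    )?;
    let rows = stmt.query_map(params![], |row| {
        Ok((row.get::<_, String>(0)?, row.get::<_, f64>(1)?))
    })?;
    for row in rows {
        let (name, total) = row?;
        println!("{name}: {total}"); // ada: 29.99
    }
    Ok(())
}
```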


> The only way you can get away with creating an application without touching SQL

Please look at app platforms like Firebase[1]. There are absolutely complex Web applications running at scale that do not use SQL anywhere in the stack.

Aside from that, MongoDB and Redis are 17 years old; CouchDB is over 20. NoSQL is well-established at this point. All of the hyperscalers offer proprietary NoSQL databases, and have done so for years. A large number of developers use those databases in production.

In our API-centric environment, there are a lot of apps that don't do much in the way of managing their own data directly at all, using mixtures of APIs for auth and other key application functions.

> can someone actually understand the logic of joins, indexes, pks, etc enough to create an efficient and scalable db

If you are not using a relational database, these concepts do not necessarily apply.

1 - https://firebase.google.com/docs/firestore/query-data/querie...


Damn, we get it, the USA is a dystopia. No need to keep scaring us with those stories.


It just occurred to me that some of the car-hating comments on HN might be motivated by a yearning for a more communal way of life (the expression of which has been suppressed by the US's ethic of freedom for the individual).


They do have their own: https://hal.science/

It is actually quite common to come across HAL in subfields of mathematics in my experience.


HAL is decidedly second-tier. Given the option, everyone would pick arXiv over HAL. Hence, HAL hosts lots of stuff that didn't (even) make it to arXiv => lots of subpar dredge.


I agree that dredge is a huge problem with HAL, but it's getting better. Meanwhile, arXiv is still stuck with an unfriendly UI.


> HAL is decidedly second-tier. Given the option, everyone would pick arXiv over HAL.

Can you elaborate on that?


That’s great. People will use whichever one is better.


Turns out that "better" for many people means "better moderated", since static hosting is hard to differentiate. And at present arXiv is winning that one (at the expense of considerably higher running costs due to said moderation).


Talk about a bubble. No one outside of programmers knows what the heck Claude is. In Asia, ChatGPT and Gemini dominate LLM usage, followed by Perplexity.


I suspect we're underestimating the number of users Deepseek has in Asia.


Microsoft released a report with some numbers on Deepseek adoption globally. They say it's got ~90% market share in China, and is growing in popularity across Africa.

https://www.microsoft.com/en-us/corporate-responsibility/top...


What you are describing is called linear (or affine) types in academic parlance: a value must be used exactly (or at most) once, e.g., by being passed to a function or having a method invoked, after which the old value is destroyed and no longer accessible. The most common examples are probably move semantics in C++ and Rust.
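
As a minimal sketch of the affine flavor in Rust (the names are made up for illustration): the type below is not Copy, so passing it by value moves it, and the compiler statically rejects any later use of the old binding.

```rust
// A resource that can be used at most once (affine): it is not Copy,
// so passing it by value moves it and invalidates the old binding.
struct Token {
    secret: String,
}

// Consumes the token; after this call the caller no longer owns it.
fn redeem(t: Token) -> usize {
    t.secret.len()
} // `t` is dropped (destroyed) here

fn main() {
    let t = Token { secret: String::from("once-only") };
    let n = redeem(t); // `t` is moved: this is its one permitted use
    println!("redeemed {n} bytes");
    // redeem(t); // compile error: use of moved value `t`
}
```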


I am not the most ardent supporter of LLMs, but the whole article reads like a critique of macOS idiosyncrasies and its aversion to the CLI and text formats. Why does macOS tell you to use the GUI so much?

Sure, the GUI is more accessible to average users, but none of the tasks in the article are going to be done by the average user. And for the more technical users, having to navigate System Settings to find anything is like Dr. Sattler plunging her arms into a pile of dinosaur dung.


Power users can use CLIs quite easily on macOS. The official documentation is geared towards non-power users, but information about most tasks a power user wants to do in a CLI is available; it just requires the power-user skill of searching for it.

It's a good filter: keep things simple and easy for the vast majority of people, and have tools for the advanced ones to use.


> macOS idiosyncrasies and its aversion to the CLI

But people using OSX often also know the command line quite well, at least better than most Windows users. I saw this again and again in university.


It also helps that OSX has FreeBSD underneath (so, practically, Linux).


>FreeBSD underneath (so, practically, Linux).

BLASPHEMY


>Why does macOS tell you to use the GUI so much?

Because its whole point is that it's a graphical OS.

If you just used the CLI Unix userland, you might as well use Linux.

