Hacker News | cpeterso's comments

I like Basecamp’s framing of software development time as management’s “appetite” for a new feature: how much time they are willing to spend on a project, as opposed to an estimate of how long it will take. This helps time-box development and control project scope.

https://basecamp.com/shapeup/4.5-appendix-06


> using u8 prefixes would obligate us to insert casts everywhere.

Unfortunately, casting a char8_t* to char* (and then accessing the data through the char* pointer) is undefined behavior.


Yes, reading the actual data would still be UB. Hopefully this will be fixed in C++29: https://github.com/cplusplus/papers/issues/592

My guitar teacher has a Line 6 HX Stomp multieffects pedal. In addition to programming effects patches using Line 6’s HX Edit desktop application, he also uses ChatGPT to generate patch files (they’re just JSON) by describing the effect or referencing a specific artist or song by name.

As I understand it, the difference between that and the pedal above is that “patches” on the Line 6 describe a chain of pre-existing effects, like phaser -> delay -> reverb. On the Polyend pedal you’re actually able to write custom DSP, so you can build new units to chain together.

But there are other pedals that do custom code, rather than just custom patching, see my other comment https://news.ycombinator.com/item?id=46727231


This is really cool. I have a Line 6 Helix, and have wanted to explore something like this. Do you know if there is anything online about doing this?

I don’t have a specific recommendation, but a quick search found threads on Reddit and the Line 6 forums about generating patches with ChatGPT.

Similar: here is a YouTube video of an amusing reverse Turing test with four LLMs and a human. To make the test more interesting, the players pose as famous historical characters (Aristotle, Mozart, da Vinci, Cleopatra, and Genghis Khan) on a train in Unity 3D.

https://youtu.be/MxTWLm9vT_o


Neither Apple's nor Google's announcement says Siri will use Gemini models. Both announcements say, word for word, "Google’s technology provides the most capable foundation for Apple Foundation Models". I don't know what that means, but Apple and Google's marketing teams must have crafted that awkward wording carefully to satisfy some contractual nuance.


Direct quote from Google themselves:

"Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards."


This clarifies nothing?


Direct quote from their joint statement: "Apple and Google have entered into a multi-year collaboration under which the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year."

Source: https://blog.google/company-news/inside-google/company-annou...


I think Apple would also still like to publicly say it's Apple's model, not Google's.


‘Based on’ implies a LoRA or some fine-tuning.


It clarifies exactly what was being questioned.


Apple likely wants to post-train a pre-trained model, probably along with some of Google's heavily NDA'ed training techniques too.


> "Google’s technology provides the most capable foundation for Apple Foundation Models"

Beyond Siri, Apple Foundation Models are available as an API; will Google's technology thus also be available through that API? Will Apple reduce its own investment in building out the Foundation Models?


Check again: https://x.com/NewsFromGoogle/status/2010760810751017017?s=20

"These models will help power future Apple Intelligence features, including a more personalized Siri coming this year."


I see what you mean, though I think “these models” refers to Apple’s Foundation Models, which “will be based on Google's Gemini models and cloud technology.” I guess it depends on what “based” means.


Most likely the wording was crafted by an artificially intelligent entity.


If a new programming language doesn’t need to be written by humans (though it should ideally still be readable for auditing), I hope people research languages that support formal methods and model-checking tools. Formal methods have a reputation for being too hard or not scaling, but now we have LLMs that can write that code.

https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...


Absolutely agreed. My theory is that the more tools you give the agent to lock down the possible output, the better it will be at producing correct output. My analogy is something like starting a simulated annealing run with bounds and heuristics to eliminate categorical false positives, or perhaps like starting the Sieve of Eratosthenes with a prime wheel to lessen the busywork.
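The prime wheel half of the analogy can be sketched concretely. A toy Python illustration (not from any project mentioned here): a wheel for 2*3=6 discards two-thirds of the candidates before the sieve does any real work, the same way restrictive tooling rules out whole classes of invalid output up front.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes over a mod-6 wheel."""
    if n < 2:
        return []
    primes = [p for p in (2, 3) if p <= n]
    # Wheel: only numbers congruent to 1 or 5 (mod 6) can still be prime,
    # so multiples of 2 and 3 never even enter the sieve.
    candidates = [k for k in range(5, n + 1) if k % 6 in (1, 5)]
    is_prime = {k: True for k in candidates}
    for k in candidates:
        if is_prime[k]:
            primes.append(k)
            # Cross off multiples, starting at k*k as usual.
            for m in range(k * k, n + 1, k):
                if m in is_prime:
                    is_prime[m] = False
    return primes

assert primes_up_to(30) == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```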

I also think opinionated tooling is important. For example, in the toy language I'm working on there are no warnings and no ignore pragmas, so the LLM has to confront error messages before it can continue.


It should be impossible for an LLM to generate invalid code, as long as you force it to only generate tokens that the language allows.


Tokens do not encode semantics.


You can choose which tokens to sample based on language semantics: you simply don't sample invalid ones. So the language should be restrictive enough about which tokens it allows that invalid code is impossible.
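As a toy illustration of that kind of constrained decoding (hypothetical tokens, logits, and grammar; real systems such as grammar-constrained decoders hook into the model's sampling loop instead):

```python
import math
import random

def allowed_next_tokens(prefix, vocab):
    """Toy 'grammar': expressions like 1+2. Digits anywhere; '+' only
    after a digit. Stands in for a real parser pruning invalid tokens."""
    if not prefix or prefix[-1] == "+":
        return {t for t in vocab if t.isdigit()}
    return set(vocab)

def sample_constrained(logits, vocab, prefix):
    # Mask out tokens the grammar forbids, then softmax-sample the rest.
    allowed = allowed_next_tokens(prefix, vocab)
    masked = {t: logits[t] for t in vocab if t in allowed}
    z = sum(math.exp(v) for v in masked.values())
    r, acc = random.random(), 0.0
    for tok, v in masked.items():
        acc += math.exp(v) / z
        if r <= acc:
            return tok
    return next(iter(masked))

vocab = ["1", "2", "+"]
logits = {"1": 0.1, "2": 0.2, "+": 5.0}  # model strongly prefers "+"
tok = sample_constrained(logits, vocab, "")  # but "+" is masked at the start
assert tok in {"1", "2"}
```

This guarantees syntactic validity; whether token masking alone can enforce full semantics (name resolution, types) is exactly what the replies below dispute.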

> You can choose which token to sample based on language semantics

Can you though?

> the language should be restrictive on what tokens it allows

This is a restriction on the language syntax, not its semantics.


The website doesn’t scroll in Safari on iOS.


It should be fixed now. Thanks for flagging!


Is there an advantage for trb code to use colons for type annotations like:

  def greet(name: String): String
Instead of the arrows that RBS files use? Why diverge from precedent?

  def greet: (name: String) -> String


TypeScript. I imagine most people writing Rails applications are also writing TypeScript for front-end code, so being able to use the same muscle memory for Ruby typing seems highly desirable. That is what stood out to me when I saw this site: it looks like they are taking the very positive lessons from TypeScript and applying them to Ruby.

I agree with other posters here. I don't need everything typed - Ruby's duck typing is an awesome feature - but I do wish that some of the more important interfaces in our code were more strongly self-documenting and enforced.


Chromium uses the BSD license. Google could take Chromium closed source tomorrow without needing to change the license.


I used Notational Velocity for years. I loved its free form approach to note taking and searching, but I needed a cross platform solution with files that could be shared using Dropbox.

https://notational.net/

I now just use three text files open in Sublime Text: todo-today.txt, todo-this-week.txt, and todo-later.txt. I review them daily and promote todos to the next file when appropriate.

