kbolino's comments | Hacker News

This forms a closed set of types (A, B, nil -- don't forget nil!) but the compiler doesn't understand it as such and complains that the following type-switch is not exhaustive ("missing return"):

  func Foo(s SumType) bool {
    switch s.(type) {
    case A: return true
    case B: return true
    case nil: return true
    }
  } // compiler error: missing return
Also, you, the package author, may know what constitutes SumType, but the consumers of your package don't, at least not without source. Moreover, you can spread A, B, and any other implementations of SumType across many source files, making it hard to answer the question even with source. This is even a problem for the standard library, just consider go/ast and its Decl, Expr, and Stmt interfaces, none of which document what types actually implement them.

> but the compiler doesn't understand ...

Right — while it does have sum types, it doesn't have some other features found in other languages.

But, of course, if one wanted those features they would talk about those features. In this discussion, we're talking specifically about sum types, which Go most definitely does have.

> nil -- don't forget nil!

This is why alternative syntax has never been added. Nobody can figure out how to eliminate nil or make it clear that nil is always part of the set in a way that improves upon the current sum types.


Go does not use LLVM at all and never has. It uses its own compiler and its own (quirky) assembler. Of course, this does not matter to most people, but LLVM compilation can be both a blessing (good interop with C/FFI) and a curse (changes to LLVM internals causing issues for languages other than C/C++).

> They also failed to initially offer any automatic porting tooling which could have increased adoption.

Maybe it wasn't very good, but 2to3 was there from the start:

https://docs.python.org/3.0/library/2to3.html


Huh, so it was. Thanks for the correction.

From nbio's README:

  For regular connection scenarios, nbio's performance is inferior to the standard library due to goroutine affinity, lower buffer reuse rate for individual connections, and variable escape issues.
From gnet's README:

  gnet and net don't share the same philosophy in network programming. Thus, building network applications with gnet can be significantly different from building them with net, and the philosophies can't be reconciled.
  [...]
  gnet is not designed to displace the Go net, but to create an alternative in the Go ecosystem for building performance-critical network services.
Frankly, I think it's unfair to argue that the net package isn't performant, especially given its goals and API surface.

However, the net/http package is a different story. It indeed isn't very performant, though one should understand that assessment in relative terms; net/http still runs circles around some other languages' standard approaches to HTTP servers.

A big part of why net/http is relatively slow is also down to its goals and API surface. It's designed to be easy to use, not especially fast. By comparison, there's fasthttp [1], which lives up to its name, but is much harder to work with properly. The goal of chasing performance at all costs also leads to questionable design decisions, like fiber [2], based on fasthttp, which achieves some of its performance by violating Go's runtime guarantee that strings are immutable. That is a wild choice that the standard library authors would/could never make.

[1]: https://pkg.go.dev/github.com/valyala/fasthttp

[2]: https://pkg.go.dev/github.com/gofiber/fiber/v3
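To make that last point concrete, here is a minimal sketch of the zero-copy []byte-to-string trick such libraries rely on, and why it breaks the immutability guarantee. This is not fiber's actual code; b2s is a hypothetical helper, shown only to illustrate the class of technique:

```go
package main

import (
	"fmt"
	"unsafe"
)

// b2s reinterprets a byte slice as a string without copying.
// Requires Go 1.20+ for unsafe.String/unsafe.SliceData.
func b2s(buf []byte) string {
	return unsafe.String(unsafe.SliceData(buf), len(buf))
}

func main() {
	buf := []byte("GET")
	s := b2s(buf)
	fmt.Println(s) // GET

	// Reuse the buffer, as a zero-allocation server might between requests.
	copy(buf, "PUT")
	fmt.Println(s) // PUT -- the "immutable" string changed underneath us
}
```

Any code that stored s (say, as a map key) now silently sees it mutate, which is exactly the class of bug the standard library refuses to expose users to.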


The standard library is built on goroutines, whereas these performance-oriented networking libraries are built on a main reactor loop. Hence the need for refactoring, not just tweaking.

Something like http/v2 and net/v2. I know gnet had (has?) issues with implementing TLS because of how the entire standard library is designed to work. At the time, it was a great piece of software, but by now, it is slow and outdated. A lot of progress has been made since in networking, parsing, serialization, atomics, and so on.


Where and how they spent their money is on p. 21 of this PDF [1] which can be obtained from this official source [2]. This is just a high-level breakdown, but it does illustrate that, for example, more than twice as much is spent on "Donation processing expenses" ($7.5M) as "Internet hosting" ($3.1M), and that the largest line item, by far, is "Salaries and benefits" ($106M).

[1]: https://wikimediafoundation.org/wp-content/uploads/2025/04/W...

[2]: https://wikimediafoundation.org/annualreports/2023-2024-annu...


Well, obviously salaries will be the highest expense in any organization like this. The more interesting question is whether those salaries go to security programmers or to teachers at an African women's coding bootcamp (yes, they did spend money on that, and yes, it's probably useful, but hardly what people think of when they see those "donate now to keep Wikipedia alive" banners). A big percentage probably goes to their CEO, who does who knows what.

There are a couple of ways to approach this information. One is to compare to the past. For example, comparing with 2008-2009 [1], they now spend 3.75 times as much on hosting, but 48 times as much on salaries, illustrating a more-than-tenfold relative growth in salaries compared to hosting. While hosting is not now nor ever was their only relevant expense, it is a good anchor point.

Another key difference over the last 15 years has been the introduction of awards and grants, which didn't exist then but now comprise $26.8M (15%) of their expenditures. This is where most of the ideological/controversial spending actually goes, rather than the salaries per se, but even more to the point, this one line item is more than 3 times their entire inflation-adjusted budget from 15 years ago ($5.6M times 150% CPI = $8.4M) and is still more than if we adjusted their entire budget using the hosting cost as an index ($5.6M times 3.75 = $21M).

[1]: https://upload.wikimedia.org/wikipedia/commons/a/a4/WMF_Annu...


Look, I'm not defending Wikipedia, I'd just like to point out that comparing hosting to salaries is quite a strange metric. Hosting is cheap and relatively constant; adding features to the site or paying admins to maintain the quality of edits is what scales. How does throwing more money at hosting make a better product? It's not like the servers can't handle the requests.

Using hosting costs as an index is nonsensical. I wasn't able to find numbers for 2009, but since 2015 the monthly page views have remained almost exactly constant. So you might as well claim that they're vastly overpaying for hosting since inflation from 2008 is way less than 3.75x.


I picked hosting because it's a line item that exists across all of their budgets, it's a rough proxy for a web business's non-salary expenses, it's a big part of what you think you're donating to based upon Wikipedia's own language in their fundraising drives, and if nothing else, it's way more forgiving to the growth of their expenses than consumer price inflation is.

Ultimately every person has to decide for themselves whether they think WMF is a worthy recipient for their donations, but it is in no way operating on a shoestring budget nor staffed by volunteers anymore.


It is not always necessary to explicitly use prepared statements, though. For example, the pgx library for Go [1] and the psycopg3 library for Python [2] will automatically manage prepared statements for you.

[1]: https://pkg.go.dev/github.com/jackc/pgx/v5#hdr-Prepared_Stat...

[2]: https://www.psycopg.org/psycopg3/docs/advanced/prepare.html


The Apple monitor will likely have better speakers, and I'm not even sure the others will have microphones at all. Apple also does a better job with color accuracy/consistency, at least historically. There's still a sizeable markup, but it's not entirely for nothing.

Back in the day (~15 years ago), when 4K monitors were unheard of and even Apple's high-end displays were still 1440p, you could get a bottom-dollar monitor using one of their panels (e.g. Yamakasi Catleap Q270) for about a third of the price. However, it came with no amenities, a single connector (dual-link DVI only), a questionably legal power cable, and no built-in scaling. The vendors, presumably to prevent refunds, even asked for your graphics card model before selling it to you, because it wouldn't work with low-end cards. Oh, and there were very few in the U.S., so you were typically getting them shipped straight from abroad, customs duties and all.

We've definitely come a long way.


Apple monitors are one of those things that are absolutely worth buying on release, but every month after that they become a worse and worse value.

After a few years, the "cheap ones" have usually caught up, if you're willing to do the research.


I disagree: the software and excellent ecosystem integration have always differentiated Apple, and even years later, models from ASUS are still headaches when it comes to everything outside the panel. It's like when gamers used to compare Apple spec by spec (i.e. CPU, RAM, disk) and valued all the software they provide at $0.

These days they still value software at $0 but the specs have become quite competitive and many times exceed what the rest of the market offers.


Sure, all I'm pointing out is that the prices don't go down, so you might as well buy as soon as they're released and get the most value.

Whereas with their laptops and almost everything else you might as well wait if you can, next year's is gonna be better and/or cheaper.


This is true, but it feels like a mistake. It's too late to change now, of course, but I feel like (0, nil) and (!=0, !=nil) should both have been forbidden. The former is "discouraged" now, at least. It does simplify implementations to allow these cases, but it complicates consumers of the interface, and there are far more of the latter than the former.

The author doesn't touch on it, but the bigger problem with things like Foo|Bar as an actual type (rather than as a type constraint) is that every type must have a default/zero value in Go. This has proven to be a difficult problem for all of the proposals around adding sum types, whether they're used as errors or otherwise. For example, to support interface{Foo|Bar} as a reified type, you'd have to tolerate the nil case, which means you either couldn't call any methods even if they're common to Foo and Bar, or else the compiler would have to synthesize some implementation for the nil case, which would probably have to just panic anyway. And an exhaustive type-switch would always have to have "case nil:" (or "default:") and so on.
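A minimal sketch of that nil problem, using an ordinary interface as a stand-in for a hypothetical reified interface{Foo|Bar} (all names here are illustrative):

```go
package main

import "fmt"

type Foo struct{}
type Bar struct{}

func (Foo) Name() string { return "foo" }
func (Bar) Name() string { return "bar" }

// FooBar stands in for a hypothetical reified interface{Foo|Bar}.
type FooBar interface{ Name() string }

func main() {
	var v FooBar          // zero value is nil, neither Foo nor Bar
	fmt.Println(v == nil) // true

	// v.Name() would panic here: there is no dynamic type to dispatch on.

	// So an "exhaustive" switch over Foo|Bar still needs a nil arm:
	switch v.(type) {
	case Foo, Bar:
		fmt.Println("real member of the sum")
	case nil:
		fmt.Println("zero value strikes again")
	}
}
```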

> every type must have a default/zero value in Go

Hot take, maybe, but this is one of the few "mistakes" I see with Go. It makes adding QoL things like you mentioned difficult, requires shoehorning in pointers to allow for an unset condition, leaves some types, like maps, without a safe default/zero value, and makes comparisons (especially generic ones) overly complex.


Go specifically does not want to add QoL things because it means the compiler team has to spend time implementing that extra syntax and semantics versus making a minimal set of features better.

The problem with the zero value business is that it also makes adding these QoL things in libraries difficult or outright impossible. Case in point, I tried building a library for refinement types, so you can have a newtype like,

  type AccountName string
except you write it like (abridged)

  type AccountName refined.Scalar[AccountName, string]

  func (AccountName) IsValid(value string) bool {
    return accountNameRegexp.MatchString(value)
  }
and that enforces an invariant through the type system. In this case, any instance of type AccountName needs to hold a string conforming to a certain regular expression. (Another classical example would be "type DiceRoll int" that is restricted to values 1..6.)

But then you run into the problem with the zero value, where the language allows you to say

  var name AccountName // initialized to zero value, i.e. empty string
and now you have an illegal instance floating around (assuming for the sake of argument that the empty string is not a legal account name). You can only really guard against that at runtime, by panic()ing on access to a zero-valued AccountName. Arguably, this could be guarded against with test coverage, but the more insidious variant is

  type AccountInfo struct {
    ID int64 `json:"id"`
    Name AccountName `json:"name"`
  }
When you json.Unmarshal() into that, and the payload does not contain any mention of the "name" field, then Name is zero-valued and the validation never gets a chance to run. The only at least somewhat feasible solution that I could see was a library function that walks freshly unmarshaled payloads and looks for zero-valued instances of any refined.Scalar type. But that gets ugly real quick [1], and once again, it requires the developer to remember to do it.

[1] https://github.com/majewsky/gg/blob/refinement-types-4/refin...

So yeah, I do agree that zero values are one of the language's biggest mistakes. But I also agree that this is easier to see with 20 years of hindsight and progress in what is considered mainstream for programming languages. Go was very much trying to be a "better C", and by that metric, consistent zero-valued initialization is better than having fresh variables be uninitialized.


> The only at least somewhat feasible solution that I could see

You can use pointers and then `encoding/json` will leave them as `nil` if the field is missing when you `Unmarshal`. I believe the AWS Go SDK uses this technique for "optional" fields (both input and output.) Obviously more of a faff than if it supported truly "unset" fields but it is what it is.

(see https://go.dev/play/p/rkLqnEmyuVE )


Go was trying to be a better C++. In C++ there are infinitely many different constructors, and that was too complicated, so they made a language with only one constructor. Go isn't the way it is because nobody knew any better; it's because they deliberately chose to avoid adding things that they thought weren't beneficial enough to justify their complexity.

You're missing the point: Go does not want these QoL features. Arguing about why they are hard to add is pointless because, philosophically, they are undesirable and not going to be accepted.

Go 1.26 just added new(expr), which simplifies passing values to places that expect pointers.

The old guard is slowly stepping away, for good or ill. QoL changes are not off the table, but they do have to fit into the language.


Why not just define default(Foo|Bar) as default(Foo)?

There are cases where privileging the first alternative makes sense (like Maybe[T] = None|T with None = struct{} e.g.), and there are cases where it doesn't (like Either[L,R] = L|R). There has been an extensive amount of discussion about a number of different proposals, e.g. https://github.com/golang/go/issues/54685, https://github.com/golang/go/issues/57644, etc., but so far, much like "better" error handling, no workable consensus has yet emerged.

The ability to grow without copying is already part of how slices work. Every slice is really a 3-word tuple of pointer, length, and capacity. If not explicitly set with make, the capacity property defaults to a value that fills out the size class of the allocation. It just so happens that, in this case, the size of the Task type doesn't allow for more than 1 value to fit in the smallest allocation. If you were to do this with a []byte or []int32 etc., you would see that the capacity doesn't necessarily start at 1: https://go.dev/play/p/G5cifdChGIZ
