Interesting! Does it touch on why people initially became so opinionated about serif/sans readability? And what’s a meaningful characteristic if not serifs?
- serif was claimed to lead to better horizontal tracking... so better for long prose readability
- sans serif was claimed to lead to better spot-recognition of characters... so better for spot-character/word recognition and legibility
Those effects were never very strong, and varied depending on the exact fonts in use (and for digital, font rendering characteristics).
There's also probably an effect based on what you're used to. If most of the books you read are serif (which they would be for older people, since almost all printed books were serif), and your exposure to sans serif was largely via the internet, and you don't like most of what's written on the internet, that might sway you toward serif. Conversely, if you mostly read modern internet text, you might have the opposite bias.
I can confirm the latter effect; prior to the internet, my primary exposure to sans serif fonts was government documents and forms, and advertisements, neither of which inculcated an association with any virtues.
Perhaps you should compare government documents and forms from different governments. UK government forms are extraordinarily beautiful, and welcoming, and are easy to fill out. US government forms, on the other hand, seem almost calculated to be unfriendly, and are incredibly difficult to fill out even when you use supporting instructions. It almost seems like they have been deliberately designed so that they cannot be filled out without the assistance of a lawyer. Canadian forms seem pretty neutral, and practical, but are nowhere near as pleasant to fill out as UK forms are.
Most of those 160 pages is a repetitive mishmash of various historical research (much of it of questionable quality) on typeface readability, loosely grouped by theme and retold in a way that makes the results, the quality, and whether the testing conditions support any good conclusions even less clear.
There's little value in reading it all unless you follow the references and read what the quoted research actually did and said. The chapters have different themes, but the content and conclusions are very samey: a bunch of questionable research, plus research that was inconclusive or didn't observe a significant overall advantage of serif over sans serif.
As for where it came from, to me it very much feels like the defense of serif typefaces is largely typographers defending the existence of their craft, and people talking past each other with overgeneralized claims. There is definitely value in the art and craft of typography and I respect that. It would be too bland if everything used plain sans serif fonts that barely differ from each other, and you can definitely mess up typography and make text hard to read when done badly. But I also believe that there are plenty of things based on tradition and "everyone knows x because that's how we have always done it".
As for sans serif on screens, the obvious reason, and the thing that comes up multiple times, is low resolution text. At certain resolutions there are simply not enough pixels for serifs. The author of the paper suggests that with modern high resolution screens this argument doesn't stand.
My personal opinion is that it's not a big issue at sufficiently large text sizes. But even on a somewhat modern 2560x1440 screen I can find plenty of UI elements whose labels are only 7-8 pixels high. Not everyone is using retina displays, and not everything is long-format text. Screen resolutions have increased, but so has information density compared to early computer screens, although there is a recent trend of simplifying UI to the point of dumbing it down and adding excessive padding all over the place.
There are other screens besides computers and mobile phones, many of them not very high resolution even by the standards of early computer screens. It doesn't make sense to put a high resolution screen and a Linux computer in every little thing. The problem is made worse by the lack of antialiased text, sometimes due to the screen, sometimes due to MCU memory and compute limitations. You are probably not going to have a modern font rendering stack on something like a black and white washing machine screen, a gas station pump, or a thermostat.
The research mentioned things like low resolution multiple times, but it hardly ever quoted hard numbers in a meaningful way. How many pixels does a typeface need to comfortably represent serifs? How many arcseconds? Surely there must be research on that. This might be part of the problem for some comparative research: you can't compare the readability of serif and sans serif if there is no serif typeface at those resolutions. Stuff like 10 point or 12 point without additional details is meaningless.
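The arcsecond framing is easy to make concrete: the angular size of one pixel depends only on pixel density and viewing distance. A minimal sketch (the ppi values and viewing distances below are illustrative assumptions, not figures from any study):

```python
import math

def pixel_angle_arcsec(ppi: float, viewing_distance_mm: float) -> float:
    """Angular size of a single pixel, in arcseconds, at a given
    pixel density (ppi) and viewing distance."""
    pitch_mm = 25.4 / ppi  # physical width of one pixel
    angle_rad = 2 * math.atan(pitch_mm / (2 * viewing_distance_mm))
    return math.degrees(angle_rad) * 3600

# A 27" 2560x1440 desktop monitor (~109 ppi) viewed from 60 cm:
desktop = pixel_angle_arcsec(109, 600)   # ~80 arcseconds per pixel
# A ~460 ppi phone viewed from 30 cm:
phone = pixel_angle_arcsec(460, 300)     # ~38 arcseconds per pixel
```

Numbers like these would let studies report "serifs need N arcseconds" instead of the ambiguous "10 point", which says nothing without ppi and distance.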
Some personal anecdote: text antialiasing has a huge effect. I made a sample text in a serif and a sans serif font and zoomed out to the point where lowercase letters are ~6px high. I wouldn't expect there to be enough resolution for serifs, but you can perceive a surprising amount of detail in the letter shapes. Zoomed in on a screenshot it's a blurry mess, but at the normal zoom level the serif letters are fine. It's readable, but I wouldn't consider either of the two comfortable. When scaled up to 8px, both pieces were still harder to read than same-height text in UI labels. Why is that? Why is one sans serif text much more readable than another of identical height? Are UI labels better pixel aligned? Is it due to subpixel antialiasing? That's on a 90-degree rotated screen; is subpixel antialiasing even working properly there?
Just for fun, I switched the OS UI font to a serif. Due to font sizing inconsistency it ended up being 1 pixel shorter (7px) than the same-size default UI font. Can those even be considered serifs when they are hardly a pixel each? It felt weird, nowhere near as bad as I expected, but still weird.
In both cases it is based on some evidence, even if the two are completely different (one is a question of definition, the other of measurement and observation): for Pluto, it is a round lump of rock going around the Sun on its own separate orbit; for serif vs sans serif, the argument is that serifs help the eyes with line tracking, depending on the line spacing and line length.
For a meta-study finding a different result, it'd be great to qualify how the previous research was wrong so we learn something from it.
I've marked it as something to pick up, as I am very curious.
> There are a lot of "soft" sciences that get increasingly softer every year. Social sciences, gender and women's studies, political science, some of the fast and loose use of "economics" these days.
I don’t think anyone is claiming these are sciences, except perhaps economics. I think you’re fighting a straw man.
People get doctorates in these fields and post studies in journals that get picked up by thinktanks and media outlets. It's "science" for all intents and purposes; they're used as a source of authority based on data and analysis and formal papers.
I think your objection is that these fields inform policy and other decisions, and you feel that only science should do that. I think I disagree that only science should inform decisions. Non-science things can inform decisions; we’re generally opposed to murder/racism/bigotry even though there’s no double-blind study we can run to determine the correct morality. These fields can impact decisions, and yet not be “science”.
"Science for all intents and purposes" is the most accidentally flattering thing anyone has said, and is hilarious in context. Wasn't expecting women's studies to get glazed on HN but I'm here for it :)
I think this may be a 'bug': as you zoom into the US west coast, SAN is visible before LAX. But LAX serves many more people every day, so a random person is much more likely to care about LAX. Intuitively, it seems to me that LAX should show up first. That could be intentional, but I can't think of a good reason why that choice would be made.
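One straightforward way to get the intuitive behavior is to rank airports by traffic and reveal them in that order as the zoom level increases. A toy sketch (the passenger figures are rough illustrative numbers, and real map engines also handle label collision, tiling, etc.):

```python
# Hypothetical airport data: annual passengers in millions.
airports = [
    {"code": "LAX", "passengers_m": 75},
    {"code": "SAN", "passengers_m": 24},
    {"code": "SFO", "passengers_m": 51},
]

def visible_airports(zoom: int, labels_per_level: int = 1) -> list[str]:
    """At zoom level z, show the top z * labels_per_level airports
    by traffic, so busier airports always appear first."""
    ranked = sorted(airports, key=lambda a: a["passengers_m"], reverse=True)
    return [a["code"] for a in ranked[: zoom * labels_per_level]]
```

Under this scheme LAX would always surface before SAN, whatever the zoom.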
I don't think it's an easy problem to solve at all, that's why I quipped about making it an interview problem. :) In an interview, I'm just interested in hearing people talk through trying to solve difficult problems. Getting to a solution is incidental. And it's way more fun when I don't know of a go-to solution, either.
This screams vibe-coded slop. Think about it: if you were to implement zoom-based detail levels, you would have to try hard to introduce a bug on line 3, yet it happens to hit prod.
Yet, this thread is full of people defending this pre-alpha quality thing.
Not only that, but airports blink in and out of existence as you zoom or pan the map around. It can't even decide whether it wants to show a certain airport or not.
This is interesting, I like the thought about "what makes something difficult". Focusing just on that, my guess is that there are significant portions of work that we commonly miss in our evaluations:
1. Knowing how to state the problem. I.e., go from the vague problem of "I don't like this, but I do like that", to the more specific problem of "I desire property A". In math a lot of open problems are already precisely stated, but then the user has to do the work of _understanding_ what the precise statement is.
2. Verifying that the proposed solution actually is a full solution.
This math problem actually illustrates them both really well to me. I read the post, but I still couldn't do _either_ of the steps above, because there's a ton of background work to be done. Even if I were very familiar with the problem space, verifying the solution requires work -- manually looking at it, writing it up in Coq, something like that. I think this is similar to the saying "it takes 10 years to become an overnight success".
As someone with only passing exposure to serious math, this section was by far the most interesting to me:
> The author assessed the problem as follows.
> [number of mathematicians familiar, number trying, how long an expert would take, how notable, etc]
How reliably can we know these things a priori? Are these mostly guesses? I don't mean to diminish the value of guesses; I'm curious how reliable these kinds of guesses are.
For number of mathematicians familiar with and actively working on the problem, modern mathematics research is incredibly specialized, so it's easy to keep track of who's working on similar problems. You read each other's papers, go to the same conferences etc.
For "how long an expert would take" to solve a problem, for truly open problems I don't think you can usually answer this question with much confidence until the problem has been solved. But once it has been solved, people with experience have a good sense of how long it would have taken them (though most people underestimate how much time they need, since you always run into unanticipated challenges).
Certainly knowing how many/which people are working on a problem you are looking at, and how long it will take you to solve it, are critical skills in being a working researcher. What kind of answer are you looking for? It's hard to quantify. Most people suck at this type of assessment as PhD students and then get better as time goes on.
Paywalled, do we have a way around? I'm trying to avoid archive.ph / archive.today / etc because of the bad behavior, but not sure what the alternatives are.
In any case, it's crazy to claim we've achieved AGI lol; we must have different ideas of what that means. If you give Claude a sufficiently large codebase, it will just start forgetting that pieces of it exist and redoing already completed work. I know this is because of compaction/context, but to me, being able to remember things is an important aspect of a teammate. A couple weeks ago, I was working on some price testing and Claude recommended using Student's t-test, even though purchasing data is non-Gaussian, and that assumption is required for Student's t-test. Sure, it's better than most random people, and it's cool that it knows about Student's t-test, but it's also not going to replace a competent human.
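For what it's worth, when normality is in doubt a distribution-free alternative like a permutation test sidesteps the t-test's assumptions entirely. A minimal sketch (the sample data is made up to mimic skewed purchase amounts):

```python
import random

def permutation_test(a, b, n_permutations=10_000, seed=0):
    """Two-sided permutation test on the difference of means.
    Makes no normality assumption, unlike Student's t-test."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # reassign group labels at random
        pa, pb = pooled[: len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            extreme += 1
    return extreme / n_permutations  # p-value estimate

# Heavily skewed "purchase" samples where t-test assumptions are dubious:
control = [1, 1, 2, 1, 3, 1, 2, 40]
variant = [2, 3, 2, 4, 3, 5, 3, 90]
p = permutation_test(control, variant)
```

It's slower than a t-test, but for an A/B price test with heavy-tailed spend data it's a much safer default.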
You could be right. But reading the comments here it seems it's had 2-3 scandals in the last 4 years, which makes me suspect that more could be brought to light.
> During their simulation of Mallory’s Everest expedition, the data showed that on summit night, the average body temperature difference between the twin in modern down and the twin in complicated layers of silk, wool, and gabardine was a staggering 1.8°C.
The human body self-regulates, and is pretty sensitive to dramatic temperature swings. So, conditioned on the fact that they both survived the adventure, we should expect their temperature differences to be relatively small. This doesn't mean the clothing is great, it means [their body] + [their clothing] is adequate.
Additionally, I'm not a doctor, but 1.8 C is not small compared to normal human variation! Normal body temperature ranges between 36 and 37 C, a "high fever" starts around 39 C [0], and hypothermia is anything below 35 C [1]. The comfortable range of human temperature is about 1 C wide, and the "outside of this is concerning" range is only 4 C wide. 1.8 C is quite big from that perspective.
Right, the 1.8C difference is substantial in terms of human physiology and indicates a diminished level of comfort as the body fights to keep the temperature up.
I also found it funny how they mentioned that modern clothing keeps you warmer longer once you stop moving, then tried to minimize the significance of that. There's a reason "cotton kills" is a cliche. Modern fabrics, windbreaker shells, and engineered layers don't make a huge difference in warm, dry, active conditions - it's when things go sideways that they can be the difference between comfort and fatal hypothermia.
There are times when layering is not the way to go. One of them is heavy activity in extreme cold. Layers can cause moisture to freeze in bad places. Having lived in a place that often got down to -40, I was always most comfortable with a light synthetic shirt under a single winter coat. No complex layers. And waterproofing isn't needed as there isn't any water around.
I have also gone to -46F, and for me a thick wool sweater and wool felt coat make a huge difference. I cannot even wear my wool sweater until it gets down to -20F, otherwise I will burn up :)
My record was -63F/-53C. But it isn't all that bad. There is literally no weather/wind below about -40. No snow. No wind. No clouds. Only strange stuff like ice crystals falling from a clear sky, and snow that squeaks like walking on styrofoam. -35 and windy always felt colder than -50.
I know someone who has three or four different thicknesses of pure lambswool jerseys for wearing while he's cycling, at different air temperatures. It never really gets all that cold down south here at 56°N and frankly I think spending ten minutes dicking about over which jumper you're wearing for optimal performance takes a lot of the fun out of it.
That said, I'm a fat 52-year-old, and I cycle in jeans and a T-shirt, and if I start to feel cold it's a sign I'm not pedalling hard enough and I should get the boot down a bit, burn some calories.
Does it take 10 minutes to choose? Back when I was commuting, I had different kit depending on the temperature, and it wasn't exactly hard.
>50F: Summer gear, and not much of it. I run hot, and there's no need to make it worse.
>20F: Add a thick sweatshirt and gloves
>0F: Add wool socks, long pants and a wool underlayer, a windproof outer shell, glasses, a hat, a thicker windproof layer over my gloves, and sometimes a scarf depending on how short I'd cut my beard.
>-20F: Similar, but with some extra layers over my core, and the scarf is mandatory.
>-40: Similar, more layers.
<-40: I know my limits. I've nearly gotten in serious trouble before when it's too cold out and I didn't plan for extra wind and a cold pocket near the river or having to walk because of a poorly maintained road or whatever. My gear wasn't especially high-tech, and I just called work and emailed my professors to let them know I wasn't going to make it.
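The tiers above are essentially a threshold lookup table. A toy sketch of that decision (the cutoffs mirror the list above, but the exact boundaries and kit summaries are my paraphrase):

```python
import bisect

# Lower-bound temperatures (°F) for each kit tier, coldest first.
THRESHOLDS = [-40, -20, 0, 20, 50]
KITS = [
    "stay home",                      # below -40
    "more layers still",              # -40 to -20
    "extra core layers + scarf",      # -20 to 0
    "wool layers, shell, hat, etc.",  # 0 to 20
    "sweatshirt and gloves",          # 20 to 50
    "summer gear",                    # above 50
]

def kit_for(temp_f: float) -> str:
    """Pick the kit tier whose range contains temp_f."""
    return KITS[bisect.bisect_right(THRESHOLDS, temp_f)]
```

Checking the morning temperature against a table like this is why it was never a 10-minute decision.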
Wind would have me reaching for wind breaking and insulation at higher temperatures.
It wasn't a 10-minute process by any means, though. I'd pull out my phone in the wee hours of the morning, see that it was X temperature on the home screen, and plan accordingly. If he's just selecting between a few jerseys, that should be even easier, right?
I didn't have a lot of choice. I was pretty broke and also couldn't afford to skip 2 months of work or school. Nowadays I'm a bit more careful with my time.
If you start doing longer rides you learn there are general temperature ranges, and the kit that's fine to commute in or ride an hour in traffic with a rucksack is very different from the kit you want on a 6-hour ride in the countryside. I generally have kit for 0-10, 10-15, 15-22, and 22+°C. My 0-10 jersey will boil me alive after an hour cycling in 13°C, but likewise my 10-15°C kit risks hypothermia in 8°C. There's only so much layering you can do with cycling kit before it starts becoming restrictive.
At one point I was stationed at a military base in the north which got to -40, even -50 somewhat regularly in the winter. Part of the orders for extreme cold was "no bicycles". Too many cardio nuts were seen riding in inadequate clothing, especially lack of proper boots. The worry wasn't them getting cold, it was them falling.
A light jacket is all good when you are pumping out the calories, but take a fall and you are now sitting on the ground, unable to move. At -40 you may have only minutes before life-altering cold injuries (lost toes). Add to that the darkness and snowbanks and you might not be found for hours... IF anyone is actually looking for you. Cellphone screens get tricky in serious cold. A person walking to work, which was still not advisable, would at least be wearing clothing warm enough to stand still in the cold.
The radio used to have public service announcements calling for people to keep blankets in their car. Not in the trunk. Within reach of the driver. Get into a wreck, trapped without heat, and that fleece blanket under your seat might save your life.
Much further north. I was working with the Canadians. I saw weather phenomena that I have seen nowhere else, from sun dogs every morning to watching the northern lights and realizing they are actually in the southern sky.
I'm curious: I do cycle in jeans and a t-shirt while in the city. Up to 45 minutes I'm perfectly fine, but if I'm on the saddle for over one hour I really start to miss the chamois. What's your experience with that?
Seconded. Old-school leather saddles are pretty good for riding in street clothes. But they do tend to require a slightly different fit - I never managed to run one with my normal saddle-bar drop - the Brooks really wanted the saddle and bar at the same height and the nose of the saddle pointed up a bit. This was good for ~2 hours or so; I never tried it for longer, since I had a normal road bike with a normal saddle for that.
My old bike had a Brooks saddle and I gave it to someone to use with no real expectation of getting it back, and sure enough I didn't get it back. They're still using it though :-)
I wish I'd swapped out the really nice saddle for a more entry-level one though.
I didn't see more details in the article, but my guess is they were taking and averaging multiple temperature reads across the body. That is, core temp should only be within a narrow range like you say, but fingertip temp will vary much more widely.
All in all I found this to be a very strange article. If you just look at the data, I think a reasonable conclusion is that modern gear is vastly better at its function than old time Mallory gear. It's much lighter and keeps the wearer much warmer than old gear. But the whole tone of the article is about "myth busting" and how there haven't been really that many improvements in gear. I'm just looking at their charts and data and wondering what they're smoking.
Across their boots, legs, and upper body, they're at 6.578 kg/14.5 lbs for the old gear and 6.373 kg/14.0 lbs for the new gear. Yes, the newer gloves and headgear are significantly lighter - 1.132 kg/2.5 lbs vs 0.463 kg/1.0 lbs - and I don't know what they're bundling into "accessories", but the difference is nowhere near what I would have imagined.
Also, I've got some lightweight modern gear from companies like Patagonia, Montbell, Sea to Summit, REI, and others, and if I could get the same performance out of waxed canvas and leather at the same weight I'd ditch those systems in a heartbeat. The nylon is finally ripstop, but it's thinner than ever and tears when you rub your shoulder on a thorny branch.
But I don't think you actually get the same performance at the same weight. You're colder and have to be more careful about stopping and getting hypothermia, but your old gear weighs the same? Then you should get more of it.
Obviously that older gear wasn’t useless, since real people used it to climb the exact same mountains that people climb today.
It’s pretty clear from the text that they have debunked the idea that modern synthetic materials have outstripped older materials in performance. At the start of their project they expected modern gear of similar capabilities to be lighter. What they found was that modern gear’s advantage is primarily that it is simpler to use. Instead of seven carefully-chosen layers of wool and silk, you can wear a single coat. That single coat is also effective over a much larger temperature range than the older clothes.
Really this should not be all that surprising, as the expertise required to pick those layers has been condensed by engineers into the design of the coat. The modern climber no longer needs that same expertise, just money to buy the coat.
This is the same story of specialization that has powered our economic growth for centuries. You and I no longer need to know how to grow vegetables, or shoe a horse, or design a circuit. There might still be advantages to knowing how to write a sonnet or plan a battle, but for the most part we can leave these tasks to specialists who can get better results than we can. Those specialists in turn can leave other tasks to us. Everyone gets more efficient as a result.
> It’s pretty clear from the text that they have debunked the idea that modern synthetic materials have outstripped older materials in performance... That single coat is also effective over a much larger temperature range than the older clothes.
It feels like these two statements are in contradiction.
FWIW, I do a lot of hiking / backpacking / snowboarding in various conditions and "effective over a much larger temperature" is the #1 thing I shop for. If I can have 1 jacket that I wear from the time I get up in the morning until lunch, that's worth more than any other feature. I hate having to stop a hike to strip off a layer and I hate having to find a way to carry my jacket while snowboarding.
As measured in mass needed for a given amount of insulation. They expected the modern materials to achieve the same protection from cold while being lighter. That’s not what they found.
> If I can have 1 jacket that I wear from the time I get up in the morning until lunch, that's worth more than any other feature.
Yes, I suspect that many people think adaptability is even better than raw performance. After all, most of us don’t have a sherpa who can carry our jacket while we snowboard.
It was 1.8 C difference in skin temperature, not core body temperature. As you note, 1.8 C would be massive for core temp.
> Wearable thermometer patches attached to each man’s head, chest, hands, feet, and legs recorded body temperature at five-minute intervals, nonstop, for the entire 10 days of the expedition.
I'll argue that, if it got down to the sharp edge of survival's knife, only the 2-degree warmer twin would come home. 2 degrees C (3 F) is palpably warmer.
That being said, if a 2-degree dip in temp would kill you, you are already praying for Ernest Shackleton's leadership.
Any theories or conclusions in the article, especially with regard to science and medicine, are best ignored, as the article was written by an LLM.
The photographs and the text within quotes are probably the only human things in there. We might go to the source of the data (the brothers' Instagram) for better conclusions, but for me this well is poisoned by slop.
Do you imagine that "nerds" have different bodies than "normal" people? I mean, sure, they're athletic, but they still go to human doctors, not some sort of xenobiologist veterinarians.
They may have started out the same as you or me, but the conditioning and acclimatization they’ve done over their lives certainly makes them more adapted to the activities they’re doing than the average person.
Not to be a stickler (ok I like being a stickler) but temperature delta, especially deltas between degrees celsius, should be given in kelvin. A 1.8K difference makes sense. A 1.8C difference would be 274.8 kelvin!
This is probably the most ridiculous comment I've read in the history of this website.
There is no difference in the amount of energy that a 1 degree Celsius delta and a 1 degree Kelvin delta represent.
The only (and I really mean only) difference is how zero energy is defined. It is not possible to have negative energy, and that zero Celsius represents the freezing point of water is an artifact of convenience, not of absolute definition.
Also, the way Kelvin is defined necessitates that both degrees are identical. If 10 degrees Celsius defined the boiling point of water at 1 atmosphere (or whatever the actual definition is) then a kelvin would be smaller by a factor of 10. And this applies to both negative and positive K values.
A 1.8 degree C different would be 1.8 kelvin. The two degrees have different zero points but one degree Celsius and one degree Kelvin are identical in magnitude.
Celsius is not an absolute scale, but that isn't a problem for deltas: (10C - 5C)=5C, (10K-5K)=5K. Celsius is only problematic when multiplying or dividing. 10C is not twice as hot as 5C.
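The offset-cancellation argument is two lines of code. A minimal sketch (the two skin temperatures are made-up values chosen to give the article's 1.8 °C gap):

```python
def c_to_k(celsius: float) -> float:
    """Convert a temperature reading (not a delta) to kelvin."""
    return celsius + 273.15

t_warm, t_cold = 36.8, 35.0                # skin temps in °C
delta_c = t_warm - t_cold                  # 1.8 °C
delta_k = c_to_k(t_warm) - c_to_k(t_cold)  # the +273.15 offset cancels
```

The subtraction cancels the 273.15 offset, so a delta of 1.8 °C is exactly a delta of 1.8 K; only absolute values differ between the scales.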
Saying something is false and then asking for citations doesn't seem that helpful to me.
To support your argument, take the following example:
Let's take some water at 273.15 kelvin and raise its temperature by 1 kelvin. The water is now at 274.15 kelvin. The difference is 1 kelvin.
If we had the same amount of water at 0 degrees Celsius and raised its temperature by 1 degree Celsius, the water would now be at 1 degree Celsius.
Converting these values leaves us with 273.15 kelvin and 274.15 kelvin respectively.
You can repeat this experiment (ignoring latent heat) for any value in kelvin or Celsius; therefore kelvin and Celsius are interchangeable for temperature comparisons.
To be a stickler, communication requires respect for your audience. The vast majority of everyone understands a 1.8 degree C delta. I would argue that very few people anywhere would understand a temperature delta given in kelvin.
"A 1.8C difference" expands as "A difference of 1.8C" expands as, and here's the ambiguity, either:
"An absolute difference of 1.8C, or 274.8K, measured between A and B"
or
"A relative difference of 1.8C, or 1.8K, is added/subtracted to A/B in order to reach B/A"
I don't think the context-free variant with K will improve understanding or decrease confusion in this discussion context, but I appreciate the pointer about it in general. I'll take a lot more care around it in a future thread about space apparel!
No it doesn't. The absolute difference[1] of 1.8°C is the same as 1.8K; they have the same scale. The subtraction of values cancels out the offset.
A relative difference[2], usually given as percent change, has problems with a unit that has an offset zero like Celsius, but that isn't what anybody is using here. It's more than simple subtraction; you have to divide by the reference value.
You're just confused by terminology. While 1 C is about 274 K, 1 degree Celsius is 1 degree Kelvin.
See, a degree is not an absolute unit of measure like a Celsius or a Kelvin; it's a relative difference between two absolute measurements. When discussing the difference between two separate temperature readings measured in Celsius, degrees Celsius is entirely appropriate.
Think of it like time: there is a difference between meeting at 2:00 and meeting two hours from now.
I realize it’s lazy to just ask, but… 160 pages…