Hacker News | ssl-3's comments

I tried it. Maybe it's easier to speak than hexadecimal is.

But I'm not sure that "How morally the enviable assistances categorize the insistent iodine beyond new time where new systems stalk" has the same memorable quality as "correct horse battery staple" does.
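
The general idea behind "correct horse battery staple" is easy enough to sketch, though: pick a handful of words at random from a big list, and you get about as much entropy as a string of hex while having something you can actually say out loud. A rough illustration in Python -- the wordlist here is a tiny placeholder, not the real thing; an actual generator would use something like the 7,776-word EFF/Diceware list:

    # Rough sketch only -- not the tool being discussed above.
    import math
    import secrets

    WORDS = ["correct", "horse", "battery", "staple", "iodine", "insistent",
             "enviable", "stalk"]  # placeholder; use a real wordlist in practice

    def passphrase(n_words=4, wordlist=WORDS):
        return " ".join(secrets.choice(wordlist) for _ in range(n_words))

    print(passphrase())
    # Entropy comparison: 4 words from a 7,776-word list vs. 13 hex digits.
    print(round(4 * math.log2(7776), 1), "bits for four Diceware words")  # ~51.7
    print(round(13 * math.log2(16), 1), "bits for 13 hex digits")         # 52.0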


It is hard, but Everyday Astronaut had a manually-operated camera with a 2,000mm lens that captured everything from engine start all the way through a reasonably-clear view of SRB separation.

In 4k, at 720fps.

(I didn't bother with watching the NASA feed.)


I wish we'd known this before the launch.

Both my 12yo and I were disappointed by the NASA feed; it was more like the matter-of-fact coverage of 'routine' Shuttle launches of the 1980s than something worthy of this historic mission.


Well, now you know. :)

Always watch Everyday Astronaut's live feeds for rocket launches. It's the primary gig for some of those involved, so they care a lot about making it something that is both informative and superb.


If I understand the question correctly: You want to run a Linux distribution like (say) Debian in a FreeBSD jail? With the Linux kernel and all?

It can make sense if the premise that nobody is steering the ship is accepted as true.

That'd be nice. But if it is even possible, we won't be around to see it happen.

You guys with your dedicated hardware. :)

I did routing duties for my LAN with my primary desktop for about a decade, variously with Linux, OS/2 (anyone remember InJoy?), and FreeBSD -- starting with 486 hardware. Most of that decade was with dial-up.

The first iteration involved keying in ipfwadm commands from, IIRC, Matt Welsh's very fine Running Linux book.

WAN speeds were low; doing routing with my desktop box wasn't a burden for it at all. And household LANs weren't stuffed full of always-on connected devices as they are today; if the Internet dipped out for a few minutes for a reboot, that wasn't a big deal at all.

I stayed away from dedicated hardware until two things happened: I started getting more devices on the LAN, and I saw that Linksys WRT54G boxes were getting properly, maturely hackable.

So around 2004 I bought a WRT54GS (for the extra RAM and flash) and immediately put OpenWRT on it. This led to a long rabbit hole of hacks (just find some GPIO lines and an edge connector for a floppy drive, and zang! ye olde Linksys box now has an SD card slot for some crazy-expensive local storage!).

I goofed around with different consumer router-boxes and custom firmware for a good number of years, and it all worked great. Bufferbloat was a solved problem in my world before the term entered the vernacular.

And I was happy with that kind of thing at home, with old Linksys or Asus boxes doing routing+wifi or sometimes acting as extra access points... until the grade of cheap routers I was playing with started getting relatively slower (because my internet was getting relatively faster) and newer ones were becoming less-hackable (thanks, binary blob wifi drivers).

---

I decided to solve that problem early in 2020. Part of the roadmap involved divorcing the routing from the wifi completely -- to treat the steering of packets and the wireless transmission of data as two completely distinct problems.

I used a cheap Raspberry Pi 4 kit to get this done. The Pi4 just does router/DNS/NTP/etc duties like it's 1996 again. Dedicated access points (currently inexpensive Mikrotik devices) handle all wifi duties.

That still works very well. The Pi4 is fast enough for me with the WAN connections available here (which top out at 400Mbps), even while using SQM CAKE for managing buffers, and power consumption of the whole kit is too low to care about.

The whole OpenWRT stack just plods along using right around 64MB of RAM. VLANs are used to multiply the Ethernet interface into more physical ports (VLANs were used to do this inside the OG WRT54G, too).

It's sleepy, reliable, and performant.

---

And it'll keep being fine until I get a substantially-faster WAN connection. For that, maybe one of the China-sourced N150 boxes with 10Gb SFP+ ports will be appropriate -- after all, OpenWRT runs on almost anything, including AMD64, and the UI is friendly enough.

But there's no need to upgrade the router hardware until that time. Right now, all of my routing problems are still completely solved.


>Part of the roadmap involved divorcing the routing from the wifi completely

This is the move. Lets you upgrade the different parts of the network separately. I have 3 components: an N150 router/fw/DNS/VPN box with 2.5GbE NICs running OPNsense, a cheap but surprisingly good 2.5GbE managed switch, and a cheap WiFi 6, VLAN-tag-capable access point.


Yes, it definitely is the right way to do stuff. It's not arduous and it represents a highly functional and sustainable level of separation.

It wasn't always practical (dedicated, plain PoE access points of unobtrusive shapes were once rather expensive), but these days it's completely approachable and usable.

If I may ask: Why a 2.5GbE switch instead of, say, 10GbE? I know 10GbE over copper is a mess due to the heat generation, but my own perfect vision of an upgrade involves using optics instead.


It's ugly, but yes: https://www.anker.com/products/a8895?variant=45839927509142

That cable has one power input (that is only an input), and two outputs (that are only outputs), and a brainbox in the middle to direct the circus.

If we label the connectors as A, B, and C, then it works like this: A charges B and/or C, and other charging directions are no-op.

The less-complex way is to use a USB A to C cable, if that's appropriate. With these, the A side is always the source and the C side is always the sink.

---

And yeah, it's annoying. I got a cheap lithium car jump starter several years ago with some neat power bank features (like 60W USB PD in/out, on one port). So I plugged it into my phone with USB C at my desk, and discovered that they'd charge each other seemingly randomly. While changing nothing, I'd look over and sometimes the jump starter would be charging the phone, and sometimes the phone would be charging the jump starter. The conglomeration formed a heater, with more steps.

(Back and forth with the same poop, forever.)


Ah, yeah, I remember those. That miiiiight work for my use case...

--- (I remember. The poop.)


The headphones have equivalent performance whether a USB 2 cable is connected, or a USB 3 cable is connected. The headphones themselves are not USB 3 devices; the addition of USB 3 cabling instead of USB 2 cabling would change absolutely nothing about how they work.

So, no: I wouldn't expect the cable for a pair of headphones (of any price) to support USB 3. That represents extra complexity (literally more wires inside) that is totally irrelevant for the product the cable was sold with. (The cables included with >$1k iPhones don't support USB 3, either.)

Meanwhile: Fast charging. All correctly-made USB C cables support at least 3 amps worth of 20 volts, or 60 Watts. This isn't an added-cost feature; it's just what the bare minimum no-emarker-inside specification requires. A 25-cent USB C-to-C cable from Temu either supports 60W of USB PD, or it is broken and defiant of USB-IF's specifications.
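
A rough sketch of that floor, assuming only the standard USB-C current tiers (3 A for any compliant C-to-C cable, 5 A only with an e-marker) and ignoring the newer 48V EPR stuff entirely:

    # Illustrative only: any compliant C-to-C cable must handle 3 A; an
    # e-marked cable may be rated for 5 A. At the 20 V PD level, that works
    # out to 60 W and 100 W respectively.
    def max_cable_power_watts(has_emarker: bool, pd_voltage: float = 20.0) -> float:
        max_current_amps = 5.0 if has_emarker else 3.0
        return pd_voltage * max_current_amps

    print(max_cable_power_watts(has_emarker=False))  # 60.0 -- the bare minimum
    print(max_cable_power_watts(has_emarker=True))   # 100.0 -- needs an e-marker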

---

Now, of course: The cable could be thinner and more flexible and do these same things. That'd probably be preferred, even: Traditional analog headphones often used very deliberately thin cables with interesting construction (like using Litz wire to reduce the amount of internal plastic insulation) to improve the user's freedom of movement, and help prevent mechanical noise from the cables dragging across clothes and such from being telegraphed to the user's ears.

Using practical cabling was something that headphone makers strived to be good at doing. I'm a little bit annoyed to learn that a once-prestigious company like B&W is shipping cables with headphones that are the antithesis of what practical headphone cables should be.

---

But yeah, both USB C cables and the ports on devices could be better marked so we know WTF they do, to limit the amount of presumption required in the real world. So that a person can tell -- at a glance! -- what charging modes a device accepts or provides, or whether it supports video, or whether it is USB 2 or USB 3, or [...].

Prior to USB C, someone familiar with the tech could look at a device or a cable and generally succeed at visually discerning its function, but that's broadly gone with USB C. What we have instead is just an oblong hole that looks like all of the other oblong holes do.

After complaining about this occasionally since the appearance of USB C a decade or so ago, I've come to realize that most people just don't care about this -- at all. Not even a little bit. Even though these things get used by common people every day, the details are completely out of the scope of their thought processes.

It doesn't have to be this way, but it's not going to change: unmarked ports connected to unmarked cables, with unknown shared capabilities, are just how we roll.


The Litz wire point is pretty spot on; traditional headphone manufacturers understood that cable ergonomics mattered. Somewhere in the transition to USB-C, that institutional knowledge just evaporated.

Your last paragraph is depressingly accurate though. I think that's exactly why devices like the Treedix exist: the standards bodies and manufacturers clearly aren't going to fix the marking problem, so now we need test equipment to figure out what our own cables do.


> The Litz wire point is pretty spot on; traditional headphone manufacturers understood that cable ergonomics mattered. Somewhere in the transition to USB-C, that institutional knowledge just evaporated.

"I heard what you guys are planning and I talked to my financial guy. He said I have enough to put a manufactured home on some land in some desolate place like the Dakotas or central Wisconsin, as long as I keep a bit of supplemental income and live a little lower. So I'm going to do that, and take my chances on growing artisanal rutabaga to sell at farmers markets.

I've already packed up the Prius. I just stopped by to wish you kids luck with your new headphone project and tell you that I won't be back."


I saw my friends doing that and stayed out of that game completely until Plextor released their first 8x burner -- the PR-820.

By then, it was all pretty well sorted despite that burner having no underrun protection.

The IBM Ultrastar 9ES drives kept it fed very well on that otherwise quite slow Slackware box.

Burn a CD, compile a kernel, and browse the web while watching some VCD rip of a music video in one corner of the screen? No problem.


Neither.

The batteries, the grid/generator-supplied power supplies, and the telephone switch equipment are all connected in parallel -- as if the entire DC power infrastructure consists of only two wires, and everything involved with it connects only to those two wires.

1. In normal operation, the batteries are kept at a constant state of charge. The switches are powered from the same DC bus that keeps the batteries charged.

2. When the power grid goes down, the batteries slowly discharge and keep things running like nothing ever happened (for hours/days/weeks). There is no switchover for this; it's just the normal state, minus the ability to juice-up the batteries. (Remember: It's just one DC bus.)

3. When the grid comes back up (or the generators kick in), the batteries get recharged. There is no switchover for this either; nothing important even notices. (Still just one DC bus.)

4. If the grid stays up long enough, go to 1. Repeat as the external environment dictates. (And as you might guess, it's still one DC bus and there's also no switchover here. Things just continue to work.)

--

You can play with this at home with a capacitor (which loosely acts like a battery does), an LED+resistor combo (which acts as a load), and a small power supply that is appropriate for the LED+resistor you've chosen (which acts as the AC-DC converting grid input).

Wire all 3 parts up in parallel and the light comes on.

Disconnect the power supply, and the light stays on for a bit -- it successfully runs from power stored in the capacitor.

Reconnect the power supply, and the light comes on and the capacitor ("battery") recharges -- concurrently.

Improve staying power by adding more parallel capacitance. Reduce or eliminate it by reducing or eliminating capacitance. Goof around with it; it's fun. (Just don't wire the capacitor backwards. That's less fun.)
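
If you want a feel for the numbers, here's a back-of-envelope sketch with made-up example values (treating the LED+resistor as a roughly constant-current load, which is close enough for this):

    # Back-of-envelope only; the component values are illustrative.
    C = 4700e-6       # farads -- a big electrolytic capacitor
    V_supply = 5.0    # volts from the little power supply
    V_led_dim = 2.5   # roughly where the LED gets too dim to notice
    I_load = 0.010    # amps through the LED + resistor

    # Q = C * dV, and t = Q / I for an (approximately) constant-current load
    t = C * (V_supply - V_led_dim) / I_load
    print(f"~{t:.1f} seconds of glow")  # ~1.2 s; more capacitance, more glow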

