Can't comment with a unicycle, but I regularly ride bikes, used several electric scooters and onewheels.
The onewheel is genuinely fun to ride and control. It does require a lot more attention than a bike. And I never felt comfortable riding it faster than I could outrun it, which means it was never exactly ideal for long commutes (see my other comment above about safety if interested).
For long distance and commuting, a bike is unbeatable IMHO. Comfortable, safer at speed. Electric or not.
I never saw the point of e-scooters, and don't own any for this reason. I feel more in control and can maneuver and recover better on a onewheel than on an e-scooter, but the e-scooter requires almost zero brain to use, so there's that. E-scooters have much bigger range too, and are less tiring to use.
I wish I could try an EUC. I suspect it has similar characteristics to the "flow" mechanics of the onewheel, which would make it appealing, but I wonder how fast I could comfortably push it before wanting a full-face helmet.
I switched over to an escooter. The no brain thing is right. Press button, go forward. I suspect that's one of the reasons they're super popular.
I used to have a one wheel I commuted on. It was fine. I regularly got it up to full speed and had a good commute for it (mostly flat, mostly bike lanes). At some point you realize they can break 20 in the right conditions. This is a stupid line of thought I suggest no one reading this follow if they get one. I didn't crash, but I could have. Always wear safety gear (I have a cracked, since-replaced helmet to show why).
Bikes are the best, locking is the worst part about them. If they had a bike I didn't have to lock you'd never get me on anything else.
Would also love to do more than a few demo rides on an EUC. From what I've been told by riders who've done both, they're much harder than a one wheel. I've honestly kinda wondered about safety on them. On the one wheel, if I crashed, I normally ended up falling backwards or forwards off it and rolling. On an EUC it seems hard to fall any way besides forwards or backwards without getting your feet tangled in it. Those things are heavy and fast; I'd hate to see what they can do to ankle bones.
I was able to carry the onewheel on any transport, and that's nice, but it's not as convenient to lock compared to a bike. It's also too heavy to carry comfortably for more than 5 minutes. When you want to go into a supermarket for example you're in a worse position compared to a bike.
How do you handle an escooter in that situation?
I've seen the "carry handle" in action on an EUC, and it's pretty convenient for that. Held vertical, it takes next to no space on the ground, so it's not much different from wheeling a trolley.
I normally would stick the one wheel in the bottom of my cart. I also could stick it under my desk at work which helped a bunch. Bikes got locked in a parking garage that was "secured" but would have a bunch of bikes stolen every 3 months.
I don't. I just can't take the escooter to the store. I can really only commute on it, as I can't really do anything with it when I get to a store/bar/whatever. My friends may or may not be able to fit it depending on the apartment size. Now that I'm laid off, I'm probably going to try to sell the scooter.
Those trolley handles look great. My next PEV is probably going to be an EUC of some stripe. I like that there's more than one brand after watching Future Motion get more restrictive and have more problems with their recent releases.
I did absolutely enjoy my time with some onewheels, but I did experience a "sideplant" on a climb, when I likely exceeded either the torque or current limit at a moment I absolutely didn't expect it (and got no warning either). Definitely not fun.
I never had issues when playing with it even at top speed and hitting an obstacle, as the extremely low platform and modest top velocity meant I could always recover by scrambling forward and propelling with the front feet. The real problem happens when not expecting it, like when I was slowly climbing, and even at zero speed the fall can be nasty. I never lowered my attention even for a split second on the onewheel after this.
I never had a chance to try an EUC, but due to how the legs are used for support, I can't see the same as a recover possibility. If the EUC fails, faceplant seems inevitable.
As for the "pushback", the effect on the onewheel is extremely noticeable, and I never felt the need for an additional auditory cue. The problem, as stated by OP, is that you can have failure modes where the motor stops supporting you before the "pushback" (or even the sound) kicks in.
I liked the movie and enjoyed the visual depiction of the communication, although it never felt very credible to me from a logical standpoint.
I would expect two intelligent species intending and willing to communicate to form an increasingly complex communication scheme starting from basic principles.
Of course, this wouldn't have translated into a decent plot by itself.. :/
Please follow the site guidelines. I removed the rate limit from your account on the assumption of good faith (https://news.ycombinator.com/item?id=36719915) and don't want to revert that, but if you keep posting like this we'll end up having to.
Hey, remember a month ago when we had a huge thread with dozens or maybe even hundreds of comments about how interesting and thoughtful the Unabomber's works were? I wonder what all those commenters are up to, and what you might find if you followed some of them around for a few days.
I wonder if they have any overlap with the folder of screenshots I have of users here advocating eugenics, colonialism, or infanticide. I truly can't guess either way.
Pointing the finger at other people doesn't alter whether you're following the rules or not. If you don't want to get rate limited or banned, please follow the rules, regardless of how bad other comments are or seem to you.
If you see a post that ought to have been moderated but hasn't been, the likeliest explanation is that we didn't see it. You can help by flagging it or emailing us at hn@ycombinator.com.
Not sure what to think of this. Is it meant in a sarcastic sense...?
It's not like we're discussing trite software; I was just discussing the "plot device" here, where the decoding of an arbitrary language with complex symbology is a central feature. Doesn't it feel "forced" to you, in the same way action movies show absolutely unrealistic martial arts moves for the sake of entertainment, or the way most "hackers" behave in computer movies?
Wouldn't you agree that starting from the absolute basics would be a much sounder/quicker way to arrive at an unambiguous shared vocabulary, instead of just showing complex blots to another species and expecting them to decode it? It's also especially odd to me that somehow the host civilization is doing the decoding part. Considering contact from a supposedly more advanced civilization, I would almost expect the opposite to be true, where the aliens would likely take most of the burden of establishing communication (and probably already did so by watching/listening).
But I get it.. I still enjoyed the movie (like I can still enjoy action movies or computer movies)...
I get where you're coming from but the main thing in the story is that the aliens' perception of time was very different from that of humans; they could perceive their entire life at any given time. Given that, if the aliens could not imagine that humans only had this limited perception, the aliens would have thought the humans should already have known how their language worked...or something like that, miscommunication due to differences etc.
On my machine, when I last tried the various accelerated terminal emulators, I wasn't convinced. At least under plain X, GL context creation adds extra latency when creating new windows (it might be different if you use a compositor all the time, I guess). In addition, on terminals such as kitty, the startup time of a full new process was really non-negligible, I suspect due to the Python support.
With a tiling window manager, the built-in notebook/tiling functionality is not really useful (the window manager is more flexible and has universal keybindings) so when looking at the time required to pop a full new window in either single or shared instance they were actually behind regular xterm. Resource usage wasn't stellar either (xterm was still better than most lightweight libvt-based terminals). Couldn't feel much of a latency improvement (again, X without compositor).
I'm sure at full throughput the difference is there, but who is looking at pages of output you can't read? I do keep terminals open for days, but my most common use case is open window -> run a small session -> close, and I got annoyed fast.
It depends on your workflow, and on your resolution too. For example, I do most things exclusively inside the terminal. If you are using vim and making use of splits on a 4K 60 Hz (or 1440p 144 Hz) screen, and want to scroll in one split and not the other, you will notice how slow and laggy redraws are. This was especially noticeable on macOS (yay work computers) for me, which led me down the GPU-accelerated terminal rabbit hole. iTerm2 had its Metal renderer, which (at the time) only worked with ligatures disabled, whereas kitty/wez/etc did not have that limitation.
The litmus test I use is how smooth can the terminal emulator run `cmatrix` at fullscreen
I've only had the issue on macos, konsole on my linux box works fine. I've stuck with kitty though cuz it works great on both linux and macos and I love the url opening feature as mentioned here: https://news.ycombinator.com/item?id=35140206
Probably inspired by the performance problems with the Windows Terminal [1] and the accelerated terminal [2] developed by Molly Rocket as an 'answer'? There's a series of videos presenting the PoC [3].
I've been doing a lot of my non-work computing lately on an actual VT420, which tops out processing bytes coming from the serial line (the computer you're logged in to) at 19.2kbps. I could stand for it to be faster, especially with the screen at 132x48. But never in 30+ years have I ever thought a terminal emulator connected to a session running over a pty on the same machine was slow.
I have started to see "terminal" apps that won't run on a real terminal, though. Using UTF-8 regardless of your locale, using 256-color xterm escapes regardless of your TERM setting, being unreadable without 256 colors, etc, and in general not using termcap/terminfo.
because rendering on the CPU is CPU-intensive when there's a lot of stuff scrolling by.
even on an integrated GPU, text rendering is far faster when you use the GPU to render glyphs to a texture then display the texture instead of just displaying the glyphs individually with the CPU.
It's comical being downvoted for this without comment. Having actually analyzed terminal performance, and optimized terminal code, this is based on first hand experience. The vast performance difference between terminals is almost entirely unrelated to rendering the final glyphs.
I'll add it to my (unfortunately far too long) backlog (he says and goes on to write an essay; oh well - for a blog post I'd feel compelled to be more thorough). But the quick and dirty summary:
1. The naive way is to render each change as it occurs. This is fine when the unbottlenecked output changes less than once a frame. This is the normal case for terminals and why people rarely care. It falls apart when you e.g. accidentally cat a huge file to the terminal.
Some numbers with the terminals I have on my system (ignored a whole bunch of xterm/rxvt aliases; e.g. aterm, lxterm etc.): cat of a file of 10MB on my system on a terminal filling half of a 1920x1024 screen on a Linux box running X takes (assume an error margin of at least 10% on these; I saw a lot of variability on repeat runs):
Take this with a big grain of salt - they're a handful of runs on my laptop with other things running, but as a rough indicator of relative speed they're ok.
Sorted in ascending order. These basically fall into two groups in terms of the raw "push glyphs to the display" bit: those using DrawText or CompositeGlyphs calls or similar, and those using GPU libs directly.
Put another way: Everything can be as fast as rxvt(-unicode); everything else is inefficiencies or additional features. That's fine - throughput is very rarely the issue people make it out to be (rendering latency might matter, and I haven't tried measuring that)
Note that calling the rest other than kitty and wezterm not GPU-accelerated is not necessarily entirely true, which confuses the issue further. Some of these likely would be slower if run with an X backend with no acceleration support. I've not tried to verify what gets accelerated on mine. But this is more of a comparison between "written to depend on GL or similar" vs "written to use only the specific OS/display servers native primitives which may or may not use a GPU if available".
2. The first obvious fix is to decouple the reading of the app output from the rendering to screen. Rendering to screen more than once per frame achieves nothing, since the content will be overwritten before it is displayed. As such you want one thread processing the app output, and one thread putting what actually changed within a frame to screen (EDIT: you don't have to multi-thread this; in fact it can be simpler to multiplex "manually", as it saves you locking whatever buffer you use as an intermediary; the important part is the temporal decoupling - reading from the application should happen as fast as possible, while rendering faster than once per frame is pointless).

That involves one big blit to scroll the buffer, unless the old content has scrolled entirely out of view (with the "cat" example it typically will if the rest of the processing is fast), and one loop over a buffer of what should be visible right now on lines that have changed.

The decoupling will achieve more for throughput than any optimisation of the actual rendering, because it means that when you try to maximise throughput, most glyphs never make it onto screen. It's valid to not want this, but if you want every character to be visible for at least one frame, then that is a design choice that will inherently bottleneck the terminal far more than CPU rendering. Note that guaranteeing that is also not achieved just through the naive option of rendering as fast as possible, so most of the slow terminals do not achieve this reliably.
Note that this also tends to "fix" one of the big reasons people take issue with terminal performance anyway: it's rarely that people expect to be able to see it all, because there's no way they could read it. The issue tends to be when the terminal fails to pass on ctrl-c fast enough, or to stop output fast enough once the program terminates, because of buffering. Decouple these loops and skip rendering that can't be seen, and this tends to go away.
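A minimal Python sketch of that temporal decoupling (the simulation and all names here are mine, not from any actual terminal): the reader side feeds a screen buffer as fast as output arrives, while the renderer samples it at most once per frame, so the number of draws is bounded by the frame rate rather than by the output rate.

```python
from collections import deque

FPS = 60
FRAME = 1.0 / FPS

class Screen:
    def __init__(self, rows=50):
        self.lines = deque(maxlen=rows)   # keep only what fits on screen
        self.dirty = False
        self.renders = 0

    def feed(self, chunk):                # reader side: cheap, no drawing
        for line in chunk.splitlines():
            self.lines.append(line)
        self.dirty = True

    def render_if_due(self, now, last):   # renderer side: >= one frame apart
        if self.dirty and now - last >= FRAME:
            self.renders += 1             # a real terminal would blit + draw here
            self.dirty = False
            return now
        return last

screen = Screen()
last = 0.0
t = 0.0
# Simulate 10,000 output chunks arriving over one simulated second
for i in range(10_000):
    t += 1.0 / 10_000
    screen.feed(f"line {i}\n" * 3)
    last = screen.render_if_due(t, last)

# Naive per-change rendering would have drawn 10,000 times; the decoupled
# loop draws at most ~FPS times for one second of output.
print(screen.renders)
```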
3. Second obvious fix is to ensure you cache glyphs. Server side if letting the display server render; on the GPU if you let the GPU render. Terminals are usually monospaced; at most you will need to deal with ligatures if you're being fancy. Some OS/display server provided primitives will always be server-side cached (e.g. DrawText/DrawText16 on X renders server-side fonts). Almost all terminals do this properly on X at least because it's the easiest alternative (DrawText/DrawText16) and when people "upgrade" to fancier rendering they rarely neglect ensuring the glyphs are cached.
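As a toy illustration of the caching idea (the class and names are mine, and the expensive rasterization step is faked with a counter; in a real terminal it would be a font-rendering call or a server-side glyph upload):

```python
class GlyphCache:
    def __init__(self):
        self.cache = {}
        self.rasterizations = 0

    def rasterize(self, ch):
        # Pretend this is the expensive part (font hinting, AA, upload...)
        self.rasterizations += 1
        return f"<bitmap:{ch}>"

    def get(self, ch):
        # Render each distinct glyph once, reuse the cached bitmap after
        if ch not in self.cache:
            self.cache[ch] = self.rasterize(ch)
        return self.cache[ch]

cache = GlyphCache()
text = "hello world, hello terminal" * 1000
bitmaps = [cache.get(c) for c in text]

# 27,000 glyphs drawn, but each distinct character rasterized only once.
print(len(bitmaps), cache.rasterizations)
```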
4. Third fix is you want to batch operations. E.g. the faster X terminals all render whole strips of glyphs in one go. There are several ways of doing that, but on X11 the most "modern" (which may be GPU accelerated on the server side) is to use XRender and CreateGlyphSet etc. followed by one of the CompositeGlyphs, but there are other ways (e.g. DrawText/DrawText16) which can also be accelerated (CompositeGlyphs is more flexible for the client in that the client can pre-render the glyphs as it pleases instead of relying on the server side font support). Pretty much every OS will have abstractions to let you draw a sequence of glyphs that may or may not correspond to fonts.
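A quick sketch of the batching idea (the cell/attribute layout is made up for illustration; a real X11 terminal would hand each run to CompositeGlyphs or an equivalent in a single request):

```python
def batch_runs(cells):
    """cells: list of (char, color) screen cells.
    Coalesce consecutive cells sharing attributes into (color, string) runs,
    so each run becomes one draw call instead of one call per cell."""
    runs = []
    for ch, color in cells:
        if runs and runs[-1][0] == color:
            runs[-1] = (color, runs[-1][1] + ch)
        else:
            runs.append((color, ch))
    return runs

# A hypothetical prompt line: 18 cells, but only two attribute changes
line = [(c, "green") for c in "user@host"] + \
       [(c, "white") for c in ":~$ ls -l"]
print(batch_runs(line))
# Two draw calls instead of eighteen.
```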
There is a valid reason why using e.g. OpenGL directly might be preferable here, and that is that if used conservatively enough it's potentially more portable. That's a perfectly fine reason to use it, albeit at the cost of network transparency for those of us still using X.
So to be clear, I don't object to people using GPUs to render text. I only object to the rationale that it will result in so much faster terminals, because as you can see from the span of throughput numbers, while Kitty and Wezterm don't do too badly, they're also nowhere near the fastest. But that's fine - it doesn't matter, because almost nobody cares about the maximum throughput of a terminal emulator anyway.
You're welcome. It's a bit of a pet peeve of mine that people seem to be optimising the wrong things.
That said, to add another reason why doing the GPU ones may well be worthwhile on modern systems anyway, whether or not one addresses the other performance bits: being able to use shaders to add effects is fun. E.g. I hacked an (embarrassingly bad) shader into Kitty at one point to add a crude glow effect around characters, to make it usable with more translucent backgrounds. Doing that with a CPU-based renderer at a modern resolution would definitely be too slow. I wish these terminals would focus more on exploring what new things GPU-based rendering allows.
In the 80's 'glyph rendering' was usually done right in hardware when generating the video signal though (e.g. the CPU work to render a character was reduced to writing a single byte to memory).
I was specifically thinking of bitmapped machines like the Amiga. Granted, a modern 4K display with 32-bit colour requires roughly three orders of magnitude more memory moves to re-render the whole screen with text than an Amiga (a typical NTSC display would be 640x200 in 2-bit colour for the Workbench), but the ability of the CPU to shuffle memory has gone up by substantially more than that: raw memory bandwidth alone has (already most DDR2 could beat the Amiga by a factor of 1000 in memory bandwidth), and the 68k also had no instruction or data cache, so the amount of memory you could shuffle was further curtailed by the instruction fetching. For larger blocks you could make use of the blitter, but for text glyph rendering the setup costs would be higher than letting the CPU do the job.
> but for text glyph rendering the setup costs would be higher than letting the CPU do the job
Depends on how the glyph rendering is done. Modern GPU glyph/vector renderers like Pathfinder [1] or Slug [2] keep all the data on the GPU side (although I must admit that I haven't looked too deeply into their implementation details).
That part was about the Amiga blitter specifically. The setup cost for small blits and the relatively low speed of the blitter made it pointless for that specific use.
The term GPU is primarily associated with 3D graphics, and most of what GPUs do is designed for that. Hardware acceleration of 2D graphics existed long before 3D hardware acceleration became common for PCs, but wasn’t called GPU, instead it was simply referred to as a graphics card.
The difference is that applying textures to a 3D object is almost never a pixel-perfect operation, in the sense of texture pixels mapping 1:1 to final screen pixels, whereas for text rendering that’s exactly what you want. Either those are different APIs, or you have to take extra care to ensure the 1:1 mapping is achieved.
There are ways to configure the texture blitter to be precisely 1:1. This is written into the GL/Vulkan standards for exactly this reason, and all hardware supports/special cases it. It is how pretty much every GUI subsystem out there handles windowing.
The transforms are specified so you can position things perfectly these days, when aligned with screen pixels.
Think of the compositing of layers of translucent windows used in modern 2d window managers, while dragging them around. Or even scrolling in a browser. Those rely on the GPU for fast compositing.
Even for 3d, think of the screen-space techniques used in games, where it's necessary to draw a scene in layers combined with each other in various interesting logical ways (for shadows, lighting, surface texture, etc), with the pixels of each layer matching up in a reliable way.
It’s a different set of operations for the most part, when you look into it. Drawing a 2D line or blitting a 2D sprite is quite different from texture-shading a 3D polygon. It’s not generic “number crunching”.
Ok but just because operations aren't perfectly identical doesn't mean you can't do it and it certainly doesn't mean it will be slow. I have had great success with SDL_gpu.
In (realtime) rendering the saying goes "bandwidth is everything", and that's exactly what GPUs do really well, moving incredible amounts of data in a very short time.
I agree with you, but I've stuck with wezterm for some time now for its non-GPU-related features. Specifically, the font configuration with fallbacks and configurable font features such as ligatures and glyph variations is nice. I use a tiling window manager and a terminal multiplexer, so I have no use for terminal tabs/splits/panes. I wish there was something as "simple" as alacritty, but with nicer font rendering.
I love wezterm due to its ligature and colourscheme support, and the fact it's very clean and simple compared to, say, Konsole (I also generally use i3 leading to KDE apps not being the prettiest).
> xterm was still better than most lightweight libvt-based terminals
Even worse: although many terminal emulators claim to emulate some "ANSI" terminal or be "VT100 compatible" and so on, most of them aren't at all. Simply run vttest in your terminal of choice and be surprised, especially by how many of them fail at very basic cursor movement tests. One of the few terminal emulators which gets most things right is xterm. It's also one of the very few terminal emulators which even supports exotic graphics capabilities like Sixel/ReGIS/Tek4014. Nobody should underestimate xterm …
> I'm sure at full throughput the difference is there
I am not. It makes next to no sense to me. Maybe if you have a highres screen and dedicated VRAM. Otherwise going through the GPU interfacing ceremony just adds overhead.
Yeah, as I keep saying in these threads, the performance needed to do "fast enough" terminals was reached no later than the 1980s, and while bits per pixel and resolution has increased since then, it has increased slower than CPU speed. It's not the CPU cost of getting pixels on the screen that bottleneck most terminals.
In my experience, there are two archetypes of terminal users:
* Open one window and leave it open forever. Reuse that one for all commands.
* Open a window, run a couple commands, and close it.
For the second group, startup perf is everything, because users hit that multiple times a day. For the first group, not so much.
Some of the other tiling functionality is also more helpful for folks on platforms without window managers as powerful (macOS, Windows).
I am in the second group, kinda - i hit Win+Shift+X (my global key for opening a new terminal) pretty much all the time to enter a few commands. I basically open terminals in a "train of thought"-like fashion, when i think of something that isn't about what i do in one terminal i open another to run/check/etc out. Sometimes i even close those terminals too :-P (when i work on something there might be several terminal windows scattered all over the place in different virtual desktops).
Also i'm using xterm and i always found it very fast, i never thought that i'd like a faster terminal.
i think a very effective workflow is missing from this list: open a long running terminal window but have many tmux panes.
many modern wm's and terminals have multitab and multiwindow features but i invested time only into learning tmux and i can use it anywhere. and of course nohup functionality is builtin by definition.
i have said it before and i can say it again: terminals come and go, tmux stays.
In my old organization, internal emails (same domain, internally sent) were regularly classified as spam if the UA wasn't Outlook. "Clutter" added another circle of hell: not only did you have to explain "check your junk folder", but also "check your clutter folder".
I attributed this to the sheer incompetence of the local admins. The same organization later switched to O365, and the problem remained unchanged.
I doubt that much, if we think in decades. Sheet stock has always been used in conjunction with modular systems such as these, via manual or CNC machining (slots and holes are trivial to do). The advantage of a modular system like this is the ability to build a scaffold with only straight cuts and a few off-the-shelf connectors.
In most cases you need a sheet metal press to form a scaffold out of _just_ sheet metal.
For anything structural (ie: when thickness matters), the price of custom-cut sheet stock is not that competitive anymore.
The price of machine-cut sheet metal has dropped quite a bit in recent years though. I think it's filling a new niche, not really replacing modular extrusions.
This sounds like the process required for most ISO certifications which sums up to: "pay up, do nothing useful" in my experience.
The extra cost of certification is only very _rarely_ useful. I have to laugh at the "bolster cybersecurity rules to ensure more secure hardware and software products".
It shifts the cost to the company that needs/wants CE.
On one hand, it might actually incentivize companies to pay up for OSS maintenance services, since certification requires a _process_, and not just an end product you can copy without any commitment at all. I don't see this working for small devs though (the paperwork will likely exceed the actual extra revenue in all but the largest projects - so why bother?).
This also puts CE at disadvantage where another market can just do that: steal/clone OSS and skip all the certifications. I'm a lot more worried about this point than the rest.
I don't have a youtube account, and will likely never make one. I bypassed the login wall as long as I could, but ever since login was enforced to watch age-restricted videos, I now simply skip the content.
You'd think this would be a non-issue: I'm mostly following retro-content of this kind. But the amount of age-restricted videos I'm hitting is just baffling. I was following summoning salt, and I was able to watch the video before it got age-restricted. Is there any profanity there? No. If you think there is, you should reconsider your moral views.
I've seen creators over-censor their videos for the same reason, to the point of absurdity. As in this one, I've seen things go as far as pixelating 8-bit 8x8 "nudity" in '90s games just to be on the safe side.
There's no question youtube serves as a big audience window for small content creators. Youtube is amazing for discovery due to the immense choice. However I do support and watch videos outside of youtube (and I'm more and more eager to do so).
I encourage all creators to post on youtube with this attitude in mind.
RISC-V architects weighed the pros and cons of having a flags register, and the pros and cons of having overflow exceptions.
They concluded it is best to have neither flags (conditional branches do their own testing, and no flag dependencies need to be tracked, which simplifies superscalar implementations) nor overflow checks (flow-breaking and costly; if you need the check, the cost of a software check is minimal, by design).
I understand for overflow exceptions, but I would have expected the cost of a flags register to be zero or near zero in, for example, an integer adder?
It also doesn't look to me like the cost of a software check is always trivial. It can be for a single operation, but an advantage of an overflow flag is that it allows checking a group of operations as a whole (check/branch just once and abort), which is probably what is practical to do algorithmically. In such a scenario, switching to software checks for each op and/or bounds-checking the inputs sounds far from minimal.
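To make the per-op vs. group trade-off concrete, here's a hedged Python sketch (32-bit wrap-around is simulated, since Python ints don't overflow, and the helper names are mine; the sign-comparison test is the kind of branch sequence the RISC-V approach relies on, while the group-style check does the work wide and tests bounds once at the end):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1

def add32(a, b):
    """Wrapping 32-bit signed add, like a flag-less hardware `add`."""
    return (a + b + 2**31) % 2**32 - 2**31

def add32_checked(a, b):
    """Per-op software check: overflow iff the operands share a sign
    that the result does not. This costs a couple of extra instructions
    and a branch on every single add."""
    r = add32(a, b)
    if (a >= 0) == (b >= 0) and (r >= 0) != (a >= 0):
        raise OverflowError
    return r

def sum_group_checked(xs):
    """Group-style check: run the whole computation in wider arithmetic
    (exact here; e.g. 64-bit for 32-bit data in real code), then test
    the bounds once at the end instead of branching per operation."""
    s = sum(xs)
    if not INT_MIN <= s <= INT_MAX:
        raise OverflowError
    return s

print(add32(INT_MAX, 1))   # silently wraps around to INT_MIN
```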
Flags aren't that simple. In a superscalar microarchitecture, then you have a lot more to track re: who set the flag, as the flag is a target of every instruction that can set it.
Minimal... It depends! I remember that the developers behind GMP (a GNU library for bignums) weren't happy with the performance!
But that's pretty niche..