Hacker News: dosshell's comments

You would also lose 80 repos from the "10000 : 1847" count in that case.

What parts exactly have you built?

All I see are dependencies that are glued together with Claude.

Can you clarify exactly what you have developed?


I built specifically:

The browser-based IDE (editor, project handling, UI)

The circuit simulation layer that connects components to the emulator

The glue between the AVR8 emulator and the virtual peripherals (GPIO, UART, SPI, etc.)

The component interaction system (buttons, LEDs, displays, etc.)

The architecture that lets compiled Arduino sketches run and interact with the simulated hardware

Some parts, like the AVR CPU emulation and the compiler toolchain, obviously come from existing projects, but the goal of Velxio wasn't to re-implement an AVR core from scratch. It was to build a usable environment where all of these pieces work together in the browser.

I'm still having trouble connecting the cables and components properly. I'm looking for a better algorithm. I'm also trying to create a real-world electronics simulator in JavaScript using an engine like CircuitJS1.


What do you mean by "the browser-based IDE (editor ...)"?

You use Monaco Editor for that.


Yes, the editor itself uses Monaco. That dependency is listed in the README.

When I said "browser-based IDE", I meant the environment around it: project handling, compilation with arduino-cli, the UI, the serial monitor, and the integration with the emulator and circuit simulation.

Monaco is just the editing component.


This is a good read! And something I have in the back of my head when debugging spooky bugs.


If it captures variables, it is not possible to manually declare the type. You can, however, hide it in an std::function. But then you probably get some overhead and some heap allocations in real life.


You put the using declaration either as a (private) class member or as a local in the function.


> I have to have TikTok for work

I'm sorry, but what? Your job dictates what apps you have installed on your PRIVATE phone!?


Well, nobody's forced it, but my company publishes content on TikTok that drives customers, and I want to be able to see it myself. You'd be surprised how many CISOs and security workers are on TikTok.

Edit: "experts" > "workers"


Tiktok.com

?


I would assume for advertising/business account. There are things you can only do on the TikTok app that you can't do on the web.


All jobs I've had since the mid-2010s essentially did the same to me by requiring 2FA in certain contexts.


What kind of 2FA? I run OTP on my work laptop. Yes, it's maybe not really a second factor if someone has access to my laptop with LUKS open. But at least I don't expect any automated attack, because it's my own piece of code using an OTP library.


One of the contexts is logging in to the laptop, which would be pretty challenging to facilitate on-device ;)

Sadly, biometric authentication as 2FA is not sufficient for that.


Same here. If someone is accessing my OTP codes from my laptop, I've got bigger problems to worry about.


Only my most recent job is doing this. Before that, the job provided a phone for 2FA, which I didn't use much outside of that.


>> because part of what’s holding Zig back from doing async right is limitations and flaws in LLVM

This was interesting! Do you have a link or something where I can read about it?


Much of the discussion is buried in the various GitHub issues related to async. I found something of a summary in this Reddit comment:

https://www.reddit.com/r/Zig/comments/1d66gtp/comment/l6umbt...


IIRC the LLVM async operation does heap allocations?


Note that there is no Nobel Prize in economic sciences.

There is only a similarly named prize, in the name and memory of Alfred Nobel, which somehow is allowed to be part of the Nobel Prize celebration.

I guess my opinion is in the minority, but I don't like that another prize hijacks the Nobel Prize.


> I can get away with a smaller sized float

When talking about not assuming optimizations...

A 32-bit float is slower than a 64-bit float on reasonably modern x86-64.

The reason is that the 32-bit float is emulated using 64-bit.

Of course, if you have several floats, you need to optimize for the cache.


Um... no. This is 100% completely and totally wrong.

x86-64 requires the hardware to support SSE2, which has native single-precision and double-precision instructions for floating-point (e.g., scalar multiply is MULSS and MULSD, respectively). Both the single precision and the double precision instructions will take the same time, except for DIVSS/DIVSD, where the 32-bit float version is slightly faster (about 2 cycles latency faster, and reciprocal throughput of 3 versus 5 per Agner's tables).

You might be thinking of x87 floating-point units, where all arithmetic is done internally using 80-bit floating-point types. But all x86 chips in like the last 20 years have had SSE units, which are faster anyway. Even in the days when x87 was the main floating-point unit, it wasn't any slower, since all floating-point operations took the same time independent of format. It might be slower if you insisted that code compilation strictly follow IEEE 754 rules, but the solution everybody adopted was to not do that, and that's why things like Java's strictfp or C's FLT_EVAL_METHOD were born. Even in that case, however, 32-bit floats would likely be faster than 64-bit for the simple fact that 32-bit floats can safely be emulated in 80-bit without fear of double rounding, but 64-bit floats cannot.


I agree with you. It should take the same time, now that I think more about it. I remember learning this around 2016, and I did a performance test on Skylake that confirmed it (Windows, VS2015). I think I only tested with addsd/addss. Definitely not x87. But as always, if the result cannot be reproduced... I stand corrected until then.


I tried to reproduce it on Ivy Bridge (Windows, VS2012) and failed (mulss and mulsd) [0]. Single and double precision take the same time. I also found a behavior where the first batch of iterations takes more time regardless of precision. It is possible that this tricked me last time.

[0] https://gist.github.com/dosshell/495680f0f768ae84a106eb054f2...

Sorry for the confusion and spreading false information.


Sure, I clarified this in a sibling comment, but I kind of meant that I will use the slower "money" or "decimal" types by default. Usually those are more accurate and less error-prone, and then if it actually matters I might go back to a floating point or integer-based solution.


I think this is only true if using x87 floating point, which anything computationally intensive is generally avoiding these days in favor of SSE/AVX floats. In the latter case, for a given vector width, the CPU can process twice as many 32-bit floats as 64-bit floats per clock cycle.


Yes, as I wrote, it is only true for a single float value.

SIMD/MIMD will benefit from working on smaller widths. This is not only because they do more work per clock but because memory is slow. Super slow, compared to the CPU. Optimization is a lot about avoiding cache misses.

(But remember that a cache line is 64 bytes, so reading a single value smaller than that takes the same time. So in theory it does not matter when comparing one f32 against one f64.)


This is very interesting! Are there any efforts to move towards this?

Wouldn't it open up a new attack vector where processes could read each other's data?

