Hacker News | CakeEngine's comments

Also in the 80s we (by which I mean other people) downloaded software from the television by sticking an LDR to the screen whilst a dot flashed black and white for the duration of a programme. A program from a programme.

I remember seeing the dot a few times, but it was probably very short-lived.


Is this not something that can be addressed with cameras and (maybe) learnt approaches now? You don't need blind repeatability if you've got good visual monitoring to close the control loop, you just (just!) need good accuracy and low latency from video to motor control.
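A toy sketch of that idea (all names and the gain value are hypothetical): as long as the camera supplies an accurate position estimate each frame, even a crude proportional controller converges on the target with no open-loop repeatability at all:

```c
/* Minimal visual-servoing sketch: each frame the "camera" measures the
 * end-effector position and a proportional controller nudges it toward
 * the target. Only measurement accuracy and latency matter. */
double servo(double pos, double target, double gain, int steps) {
    for (int i = 0; i < steps; i++) {
        double measured = pos;              /* stand-in for the camera estimate */
        pos += gain * (target - measured);  /* motor command for this frame */
    }
    return pos;
}
```

The catch, of course, is the "just": the real version of `measured` is where all the hard computer-vision and latency problems live.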


Why not just throw a SteamVR/Vive laser tracker onto the end of the arm and use that to close the loop? They claim sub-mm precision at room-sized distances, so it should be even better if you had it basically mounted on the base. If you wanted to get fancier you could build it into the end effector w/ one of these? https://tundra-labs.com/products/tl448k6d-vr-system-in-packa...


Another early 90s C learner here, and I also put (and still do!) parentheses around return expressions. But I do:

    return( 1 );

not

    return (1);
...because the bracket belongs to the return, not the expression. It's by analogy with 'if' and 'while': a mental "look out, here comes an expression" warning.


Indeed. There's also UZI - a v7 UNIX for Z80.


I don't think it would be. You'd have to detect them on the data bus and also differentiate between data and instruction accesses which I don't /think/ the Z80 does, at least not easily.


6502 has output pins that let you distinguish between instruction fetch, data read/write, and interrupt/reset vector fetch. So you can bank switch based on all that stuff.

6502 doesn't have a pin for "IO space", but you just pick your MMIO range, e.g. 0xCnnn on the Apple ][: a NAND gate on the two MSBs, feeding a 3-input OR/NOR along with the next two address bits, gives you effectively the same signal (or an OR gate on bits 12 & 13 plus a 3-input AND/NAND on bits 14 & 15 and its output).
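The NAND-plus-NOR arrangement can be sanity-checked in C (a sketch of the gate logic, not the actual Apple ][ decode hardware):

```c
#include <stdint.h>

/* NAND the two MSBs, then NOR that output with the next two address
 * bits; the result goes high only for $C000-$CFFF. Equivalent to
 * (addr & 0xF000) == 0xC000. */
int is_mmio(uint16_t addr) {
    int nand = !(((addr >> 15) & 1) && ((addr >> 14) & 1)); /* low for $C000-$FFFF */
    return !(nand || ((addr >> 13) & 1) || ((addr >> 12) & 1));
}
```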

It was common to use a 6522 VIA's 8-bit output ports to bank switch one or two 4k ranges, allowing each such 4k range to access 256*4k = 1 MB of RAM. You could use a 16x8 bit SRAM to do the same thing for the whole address space.
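The mapping itself is just a shift and a mask; a hypothetical sketch of the single-window case:

```c
#include <stdint.h>

/* An 8-bit VIA output port selects one of 256 pages, so one banked
 * 4 KB window reaches 256 * 4 KB = 1 MB of physical RAM. */
uint32_t phys_addr(uint8_t bank, uint16_t cpu_addr) {
    return ((uint32_t)bank << 12) | (cpu_addr & 0x0FFF);
}
```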


It does have a pin for that (#M1, "machine cycle 1").

Zilog actually designed their peripheral chips to handle the RETI (return from interrupt) opcode specially. On the Z80 itself it does the same thing as a normal RET, but other chips can detect it on the bus and treat it as the signal that their interrupt handler is finished.
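A sketch of what such a peripheral watches for (hypothetical C, not any particular Zilog chip's internals): the two-byte RETI opcode, 0xED 0x4D, appearing on M1 (opcode fetch) cycles:

```c
#include <stdint.h>

typedef struct { int saw_ed; } RetiWatch;

/* Call once per bus cycle with the data byte and the M1 state.
 * Returns 1 on the cycle that completes a RETI fetch. */
int reti_fetched(RetiWatch *w, uint8_t data, int m1) {
    if (!m1) return 0;                               /* only opcode fetches count */
    if (w->saw_ed && data == 0x4D) { w->saw_ed = 0; return 1; }
    w->saw_ed = (data == 0xED);
    return 0;
}
```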

I also recall reading about some hobby project to add a PC-relative addressing mode to the 8080 or Z80. A redundant opcode like "MOV A,A" would be used as a prefix, which caused the external hardware to add the program counter to the immediate operand of the following instruction. Can't find it right now.


M1 plus the address lines was used to trap on certain addresses by expansion devices for the Spectrum such as Interface 1 or the +D. Then they’d use a line on the expansion bus to disable the standard ROM and substitute their own ROM / static RAM until some other trigger, such as an OUT to a port, would page the ROM back in.

I guess it’s a short step to looking up memory addresses against a bitmap for validation. I couldn’t tell from the video if this is what he did. Splitting the addresses into pages would let you save memory or add more levels, and I guess you could implement a TLB similarly.
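The bitmap idea is cheap: one valid bit per 4 KB page is only 16 bits for the Z80's 64 KB address space (a hypothetical sketch, not necessarily what the video does):

```c
#include <stdint.h>

/* The top 4 address bits index one valid bit per 4 KB page; a trap
 * would fire when an access lands in a page whose bit is clear. */
int page_valid(uint16_t bitmap, uint16_t addr) {
    return (bitmap >> (addr >> 12)) & 1;
}
```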


This is only a problem if there's a clear mapping from detectable genetic feature to an expressed macro-feature like intelligence.

My (admittedly limited) understanding is that each detectable genetic feature has a whole panoply of effects, some of which won't be apparent at birth. Selecting for intelligence through specific genes is likely also to be selecting for weak bones, reduced longevity, or other unpredictable side-effects.

Maybe one day it will be possible but there's a chasm between here and there which can only be crossed by extensive testing on real people. Is that even crossable?


This has been out of the box for a while already. You can literally have your embryo commercially screened for intelligence, height, and other complex traits today.


Has it been out of the box long enough to validate that the adult versions of the embryos exhibit the traits that they're supposed to? Is it a noticeable intelligence boost? For example, a standard deviation higher IQ as measured by Raven's Progressive Matrices, compared to siblings from un-screened embryos? I read enough about biotech and biohacking that I feel like I would have already come across reports if this really works, but maybe it's very recent.

I read Gwern's "Embryo Selection for Intelligence" a few years ago:

https://www.gwern.net/Embryo-selection

Near future possibilities seemed pretty limited based on that review, unless the reasoning was incorrect:

As median embryo count in IVF hovers around 5, the total gain from selection is small, and much of the gain is wasted by losses in the IVF process (the best embryo doesn’t survive storage, the second-best fails to implant, and so on). One of the key problems is that polygenic scores are the sum of many individual small genes’ effects and form a normal distribution, which is tightly clustered around a mean. A polygenic score is attempting to predict the net effect of thousands of genes which almost all cancel out, so even accurate identification of many relevant genes still yields an apparently unimpressive predictive power. The fact that traits are normally distributed also creates difficulties for selection: the further into the tail one wants to go, the larger the sample required to reach the next step—to put it another way, if you have 10 samples, it’s easy (a 1 in 10 probability) that your next random sample will be the largest sample yet, but if you have 100 samples, now the probability of an improvement is the much harder 1 in 100, and if you have 1000, it’s only 1 in 1000; and worse, if you luck out and there’s an improvement, the improvement is ever tinier. After taking into account existing PGSes, previously reported IVF process losses, costs, and so on, the implication is that it is moderately profitable and can increase traits perhaps 0.1SD, rising somewhat over the next decade as PGSes continue to improve, but never exceeding, say, 0.5SD.


Just saw your quoted text, and I do think the reasoning is incorrect.

What it gets right is that there are some serious practical limitations. The most important are around the availability of embryos, financial costs, and diminishing returns.

What it gets wrong is modeling the implementation as an optimization tool as opposed to a screening tool.

If you have a pool of 10 embryos, with a trait on a normal distribution (eg IQ), you can screen the bottom half out. By doing so, the average IQ of the pool goes from 100 (the population mean) to about 112 (the mean of the upper half of a normal distribution with SD 15).

People want genetic children, but if, for example, a wife is infertile, eggs can be purchased for ~$2.5k.[1]

Using today's technology, you could buy 100 eggs, screen for the top 10% (>120), and the average embryo in the pool would now be at the 95th percentile for IQ (e.g. 125, +1.66 standard deviations above the mean).

The next technology needed to knock this wide open would be the cloning or duplication of human eggs from a single source. IVF egg extraction yields only 5-10 eggs per cycle. If eggs could be multiplied in vitro, money would be the only constraint.

[1] https://www.cryosinternational.com/en-us/us-shop/client/how-...


For polygenic traits, no, it hasn't been out of the box long enough for the embryos to mature.

However, we have been screening for single-gene mutations that impact IQ since the 1960s, and they are well validated. For example, Trisomy 21 (Down syndrome) has a well-validated impact of about 30 IQ points. A 16p11.2 abnormality has a well-studied impact of 16-25 IQ points, depending on the variant.

If you had a dumb-as-rocks polygenic test that screened for just these factors, you would see a noticeable difference compared to a control.


How many generations does it take before the test fails due to Goodhart's law?


Folk are getting hung up on the keyboard layout and poor buttons, which are bad, but they're only the start of the problems with these machines.

It has two separate asynchronous screens, two sets of printed instructions (plus t&cs), more instructions on both screens, then two keypads (with the same symbols, although they're not interoperable). There is no consistent self-explanatory language for talking about the various buttons, knobs, slots and problems, and how the user's attention should flow between them. If use of this machine wasn't mandated, nobody would ever, ever touch it.


The instructions are also terrible. The text is quite small, and is in a grubby box which is hard to read, especially for those with poor sight. The signposting of the distinction between the three sets of instructions (coins vs credit/debit vs contactless) is very unclear. There are T&Cs right at the top of the machine in tiny letters that are downright impossible for people with poor sight - for those with good sight, that’s not a problem, we ignore them anyway, but how does someone with poor sight know they don’t need to read them? What are “controlled hours”?

It’s a total shitshow without even starting on the tech. Why is there an on-off button? Why is there a double flag button? Why is there a rotating button? Why is the wheelchair button not mentioned anywhere in the instructions?


And this particular model is so slow. Every button press takes the screen 1-2 seconds to react, that's assuming you're looking at the right one.

Quite often the card reader isn't working and it takes about 90s to time out; always relaxing when there's a queue forming.

