The only relevant question is:
"Will the investigator use ... information ... obtained through ... manipulations of those individuals or their environment for research purposes?"
which one could glibly read as "I'm just sending an email, what's wrong with that? That's not manipulating their environment".
But I feel that reading is wrong.
https://grants.nih.gov/policy/humansubjects/hs-decision.htm would seem to agree that it's non-exempt (i.e. potentially problematic) human-subjects research if "there will be an interaction with subjects for the collection of ... data (including ... observation of behaviour)" and the study doesn't fit one of the well-worn exempt paths (survey, public observation only, academic setting, subject agrees to the study), each of which comes with additional criteria.
Agreed: sending an email is certainly manipulating their environment when the action taken (or not taken) as a result has the potential for harm. Imagine an extreme example, an emailed death threat: that is an undeniable harm, which means email has that potential, so the IRB should have conducted a more thorough review.
Besides, all we have to do is look at the outcome: outrage on the part of the targeted organization, and a ban by that organization that will prevent the researcher's institution from conducting certain types of research.
That this human-level harm was the actual outcome means the experiment was, de facto, an experiment on human subjects.
Let's assume that the baby has a perfect ordered ranking of blocks, 3 > 2 > 1, but that the experimenter doesn't know what it is.
There are three scenarios for what the new, third block is: 1, 2 or 3.
If it's 2 or 3, then the block rejected in the first round has a score of 1, and so we'd expect the baby to switch to the new block; only when the new block is 1 (a third of the cases) would we expect it to stick with the rejected one. So the baby should switch to the new block about two thirds of the time, as opposed to 50% of the time if we assume it sees the blocks as equal.
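To make that concrete, here's a minimal sketch enumerating the three cases, assuming the strict 3 > 2 > 1 ranking and that the baby always picks the higher-ranked block on offer:

```python
# Enumerate which block is the new one (C), assuming a strict ranking 3 > 2 > 1
# and that the baby always picks the higher-ranked block it is offered.
ranks = {1, 2, 3}
chose_new = 0
for new_block in ranks:
    shown = ranks - {new_block}   # the two blocks offered in the first round
    rejected = min(shown)         # baby keeps the better one, rejects the worse
    if new_block > rejected:      # second round: new block vs. previously rejected block
        chose_new += 1
print(f"baby picks the new block in {chose_new} of 3 scenarios")  # -> 2 of 3
```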
"However, in the critical test trial that followed, 16 of 21 infants (76.2%) chose the new block (block C; Fig. 1)"
I can't work out the p-value vs. 66% compared to 50%, though...
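For what it's worth, here's a rough sketch of how one might compute those p-values with an exact binomial test, using the 16-of-21 figure quoted above and treating 50% and 2/3 as the two null hypotheses (assumes scipy >= 1.7 is available for binomtest):

```python
# Rough p-value check for "16 of 21 infants chose the new block", against a
# 50% null (no preference) and a 2/3 null (strict ranking, as derived above).
from scipy.stats import binomtest

chose_new, total = 16, 21
for label, null_p in [("no preference", 0.5), ("strict ranking", 2 / 3)]:
    res = binomtest(chose_new, total, null_p, alternative="greater")
    print(f"{label} (p0={null_p:.2f}): one-sided p = {res.pvalue:.3f}")
```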
They aren't switching, though. They are given a choice between A & B; then, if they choose A, they are given a choice between B & C. The choice between B & C should be 67/33, but instead it's around 75/25, and the authors claim this is because, in not choosing B, the baby decided they liked B less. The evidence for this claim is that if an adult makes the first choice for the baby, then the second choice is 50/50.
https://arxiv.org/ftp/arxiv/papers/1910/1910.05224.pdf talks at length about it -- the result is known to be wildly invalid for gas giants, as they have the temperatures and pressures to provide viable chemical pathways to phosphine.
Yeah - the argument in the paper is a lot more nuanced than it's being represented in the media, where it's being flattened to the point of uselessness.
"Without a winking smiley or other blatant display of humor, it is utterly impossible to parody a Creationist in such a way that someone won't mistake for the genuine article." -- Nathan Poe
Depends on how you define the near future. There's a bunch of reasons to be skeptical that we'll fully digitize a human connectome anytime soon.
First, the fruit fly brain is very small. You can image the entire thing at the microscopic level with a single image. The human brain is massive by comparison. Getting a coherent image that traces an axon from the tip of the frontal lobe to the back of the occipital lobe is going to be a huge challenge.
Second the fruit-fly brain has 25,000 neurons while the human brain has more than 10,000,000,000. There's 6 orders of magnitude difference there.
Third, it's highly likely that glia (non-neurons) in the brain play a major role in neural computation so we'll have to image those too. Humans have way more glia than most other animals.
Lastly the connectivity of neurons in the human brain is very high. Getting those little connections right is key in all this as we aren't just going for the neurons but the connections between them.
If you can handle 25k neurons today, that's roughly 19 binary orders of magnitude (doublings) to get to 10 billion, or less than 38 years of Moore's-law-style doubling, calling it 2 years rather than 18 months per doubling. So not that scary. There are different kinds of doubling, and who knows if it will keep on keeping on.
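A back-of-the-envelope check of that doubling arithmetic (a sketch; the 25k and 10 billion figures are just the ones used above):

```python
# Doublings needed to go from ~25k neurons to ~10 billion, and the
# elapsed time at ~2 years per doubling.
import math

fly_neurons = 25_000
human_neurons = 10_000_000_000

doublings = math.log2(human_neurons / fly_neurons)
print(f"doublings needed: {doublings:.1f}")             # ~18.6
print(f"years at 2 per doubling: {2 * doublings:.0f}")  # ~37
```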
>Second the fruit-fly brain has 25,000 neurons while the human brain has more than 10,000,000,000. There's 6 orders of magnitude difference there.
>Lastly the connectivity of neurons in the human brain is very high.
yep, the human brain has ~100B neurons with ~10^4 connections per neuron, while the fly brain has ~10^3 connections/neuron. So we need to emulate ~10^15 connections of the human brain. GPT-3 has 175B weights.
Right, but GPT-2 had 1.5 billion parameters in 2019, so GPT-3 was a ~100x increase. 1e15 may not be that far off, especially since these operations are relatively straightforward to run in parallel.
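Rough arithmetic behind that comparison (a sketch using the round figures quoted in the two comments above):

```python
# Scale comparison: total human synapses implied by the figures above
# versus GPT-3's parameter count.
human_neurons = 100e9          # ~100 billion neurons
connections_per_neuron = 1e4   # ~10^4 synapses per neuron
gpt3_weights = 175e9           # 175 billion parameters

total_connections = human_neurons * connections_per_neuron  # ~1e15
print(f"human connections: ~{total_connections:.0e}")
print(f"x GPT-3 weights:   ~{total_connections / gpt3_weights:,.0f}")  # ~5,700
```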
Not really. The big question mark in my mind is how important the surrounding biochemistry really is to the brain's function. We already know from the study of mental illnesses that altering the biochemistry can significantly affect a person's state of consciousness and mental abilities. What is less clear is what happens if you remove the biochemistry completely. Does the brain even still function? Does a person go insane? Totally unknown.
If you turn a computer off and then try to transfer the software it was running, you will get some data, but whatever was in the volatile RAM won't get transferred. It's very likely the mind has such data.
Is there much evidence for this RAM-type hypothesis? Many animals hibernate but still have their personality and memories intact; similarly, we sleep and can undergo long periods of unconsciousness without us or others perceiving a significant change.
Even drastic measures like electrical shocks or chemicals (up to a point) tend to have temporary rather than permanent effects. That evidence seems to imply that most of what we consider as 'us' is the more permanent physical neuron connections rather than the transient chemical/electrical states.
It's /all/ transient chemical states on a sufficiently long timescale. And the 'critical' timescale varies across different physical systems. (Also, you get severe brain damage after a few minutes without oxygen - that seems pretty damn transient to me.)
Right, but we generally understand the mechanism there. Without oxygen, the ATP-driven ion pumps within animal cells cannot function. They can no longer maintain the right ion gradients across cell membranes, and the cells are destroyed by osmotic pressure.
It's a challenge for reading the neurons' connectivity, for sure, but I don't think it is evidence that there is more to 'us' than our physical neuron connectivity graph.
Well, you've still got the graph of neuron connections, but there are also weights on those connections. (In artificial neural networks, the graph is the architecture, and the weights are pretty much everything...) Per the article:
"Ion channel proteins change shape in response to the electric field across the membrane, opening or closing pores; at the synapse shape-changing proteins respond to electrical changes to trigger the bursting open of synaptic vesicles to release the neurotransmitters, which themselves bind to protein receptors to transmit their signal, and complicated sequences of protein shape changes underlie the signalling networks that strengthen and weaken synaptic responses to make memory, remodelling the connections between neurons."
Can the weight of a connection be surmised after oxygen deprivation? Or are the chemical changes that happen under oxygen deprivation irreversible (from an information theoretic viewpoint)?
As much as a photograph of a sandwich is convincing evidence that a sandwich can be uploaded. That is a picture of the connections in a fly's brain, not an uploaded fly's brain, or a simulated fly's brain.
In two dimensions, a vehicle moving North-South and a vehicle moving East-West have to cross into each other's path at some point, and 2.5D solutions (bridges and flyovers) help but require significant investment in resources that simply isn't possible at every junction.
In three dimensions, this simply isn't the case; we can separate different directions of traffic by height, and provision of dedicated corridors for changing route is merely a matter of making regulation rather than infrastructure.
There's a reason why we've had autopilot on planes for significantly longer than on cars.
The flip side is twofold - to take advantage of said autopilot, you need a pilot’s license, which is significantly harder and more expensive to get than a car license.
Aircraft are also expensive (and come with expensive operating costs like airframe examinations): their failure mode is a more-or-less controlled "falling out of the sky", so you want the most resilient parts you can get, to skew towards the "more controlled" end of the spectrum.
There’s also a whole network of human traffic controllers who work 24x7 to accommodate our existing air traffic; more would be required.
* What are the use cases where this [the Internet Computer and/or Motoko] shines?
* Can you expand on Orthogonal Persistence -- is this a "per actor" persistence, or from the references to blockchain is it some sort of shared state between actors?
* How does the internet look/feel/work different once this exists?
1. Compared to other forms of DECENTRALIZED compute, IC + Motoko shines in making web apps with a user experience comparable to centralized providers (AWS, GCP, Azure). Example: you can build a simple React web social network and expect quick reads (in milliseconds) and writes (1-3 seconds). Note I did not say "blocks" or "finalization" - I'm deliberately trying not to have a leaky abstraction from the POV of the app developer.
Compared to CENTRALIZED compute, you can create "open internet services", like an open version of TikTok where control of the app and its features can (in simple terms) be done by voting. Example: https://www.youtube.com/watch?v=_MkRszZw7hU
More broadly, one of the things that is surprising (it was to me when I joined, and it still is) is how much of the cloud stack that developers or compute providers create is not necessary when you have protocol-based compute. As a developer, some things like firewalls and databases seem less important in enforcing the security and scalability of my apps. I realize as I type this that it can sound a bit naive AND esoteric... and I think the only way to really show this is by action and folks just playing with it. Few words will really convey this as much as people playing with it directly, I get that.
2. Our compute model is "canisters", which (if you are familiar with the actor model) are actors that contain both their code and their state. Actors communicate by sending messages to each other (as the actor model implies). Example: I could create a Twitter canister. Orthogonal persistence means its state can grow without me spinning up DBs or worrying about replication and consistency.
Once the decision has been made that Node 6 is no longer supported, it is then possible to refactor the code so that it uses appropriate modern idioms, such that the external behaviour is unchanged for Node 6+ but will no longer be parsable by Node 5.
In this specific case, it is both a refactor and a breaking change.