If someone wants to take Ozempic for cosmetic reasons, that's their business. I am almost certain you personally indulge in riskier activities than using Ozempic, or... modafinil? You know people still use research chemicals, testosterone, and modafinil, right?
> You can debate what policies are the most fair without calling trans women "men."
You're correct - man/woman are gender identities, male/female are biological facts. The more accurate version of that statement (which, btw, is not mine, I am just repeating what the complaints are) is:
Transition changes biology. We don't yet have the technology to fully reverse the effects of male puberty, so there can be reasonable debate about trans women who transitioned after puberty, but early transitioners have no meaningful advantage. Their bodies, in an athletic context, are female.
This is also true for many cisgender intersex women with XY chromosomes. Someone with androgen insensitivity can have XY chromosomes, yet be capable of giving birth. Drawing the line at having a Y chromosome makes no sense.
Quite a biased source, no? This doesn't provide evidence that these differences are biological. Boys are much more likely to exercise than girls due to social norms: https://pmc.ncbi.nlm.nih.gov/articles/PMC10478357/.
> Their bodies, in an athletic context, are female.
I'm sorry, but this is not true. "Puberty blockers" do not completely suppress the effects of male genetics. They only attempt to block certain hormonal effects.
It is not possible to completely block the effects of having male genes by simple hormone modulation.
> Someone with androgen insensitivity can have XY chromosomes, yet be capable of giving birth
We do not determine eligibility for sports classes based on ability to give birth for good reason. It's not a proxy for the genetic athletic differences being addressed by these classes.
Individuals with androgen insensitivity typically cannot give birth. This is an extremely rare possibility, not a typical feature of the condition.
ChatGPT did not make the treatment. ChatGPT summarized literature explaining how to make similar treatments, and then specialized tools were used to make the treatment. It is perfectly reasonable that ChatGPT can summarize literature and not make compliant 100 page legal documents.
The principle behind personalized mRNA vaccines is simple enough, and it's perfectly plausible that someone with money and lab access can create an effective treatment not offered through conventional means. It would be plausible even for a human patient right now, with mRNA vaccines still hung up in clinical trials. For a dog? Of course.
> It is perfectly reasonable that ChatGPT can summarize literature and not make compliant 100 page legal documents.
No it's not. Yes, ChatGPT can help summarize literature, but it can also definitely do the heavy lifting when it comes to writing red tape.
I know some people here, after spending too much time reading Ayn Rand, believe that governments are the greatest evil but come on. Red tape is not some ARC-AGI level of challenge.
The LLM did not design the drug. The LLM summarized some papers on how to design similar drugs, and then a dozen specialized tools were used in an established pipeline to design the drug. You people need to read the article and read the background before writing nonsense based on your assumptions.
Funny how you keep skipping the unbelievable part of the story in your replies: why would he spend three months hand-typing a document when an LLM can definitely produce at least 80% of it in one shot?
True, and even more true in the case that you barely understand what you're doing. That's a feature rather than a bug of this sort of paperwork; the person who's simply pestered ChatGPT until it says "great idea" won't cross that threshold at all, whereas this guy [and the bioinformatics processing chain and experts in the loop he found] crossed it in his spare time. If it was just the "two hours a night typing" as quoted in the article, LLMs can do it in no time.
"ChatGPT better at finding expert advice than filling in compliance forms" and even "getting workable results from latest-generation open-source bioinformatics tools is possible for smart laymen with minimal background reading; learning enough to prove they aren't dangerous only takes marginally longer" don't sound nearly as bad as "layman asks ChatGPT to cure his dog's cancer, only hard bit is writing enough words to convince gatekeepers" as rendered by news coverage of this (and not really elaborated on more by TFA). A rendering which really should trigger people's bullshit filters.
Other fields crossed the "computers can find potential solutions easily" threshold a lot earlier (any idiot can put dimensions into pretty dumb civil engineering tools and get answers that are probably correct; don't ask me how I know!) and actually have higher barriers (no, even if you actually learn the relevant physics as well, you will still need to pass some elements of your home design via someone with the right professional liability insurance linked to their experience and formal qualifications).
You don't understand how the technology in question works, and you're just making shit up because you don't want to admit to being wrong.
What are you alleging here anyways? That all the scientists quoted and photographed in the article discussing their part in making the vaccine are in on the game? That the Australian made the story up wholesale? Come on.
> You don't understand how the technology in question work
See my comment history. I do know very well how language models work. Thank you.
What I don't know is why you're claiming to disagree that the story reported here is bullshit (the story being almost literally “ChatGPT did the heavy lifting and the only reason we can't have nice things is because red tape is blocking humanity”: “he used AI to teach himself about how a personalized vaccine could work, designed much of the process himself. […] The red tape was actually harder than the vaccine creation”), when you know very well it's bullshit, because your comments describe what has most likely happened (ChatGPT did nothing much besides saying what could work and pointing towards which scientists to seek help from).
Again, literally no one questions the fact that mRNA-based medicine has incredible potential. The bullshit here is not about the medicine: it's about red tape being the only bottleneck in a fantasyland where AI solves all the hard challenges.
It's not a radical medical breakthrough, it's applying a technique already documented in the literature and years into human clinical trials. The LLM is just doing literature summary and planning. The most notable AI innovations here are in protein folding and binder prediction.
The LLM didn't oneshot the mRNA treatment, it merely suggested the idea. Most of the steps in the process were done with specialized tools. And no novel treatments were invented wholesale, it's more applying a documented process with existing open-source tools that's just too personalized and expensive to be offered by any vet.
Why do you find it plausible that a man would complain about having painstakingly hand-typed a 100-page document over three months when he claims he can use an LLM in a way pretty much no one has before him?
It's several orders of magnitude easier to get an LLM to fill in some kind of red tape than to use it in the way he claims to have used it.
He is not using an LLM in some new and exciting way. The process of making a personalized mRNA vaccine looks something like this:
1. Collect and sequence patient's normal and tumor genomes
2. Predict immunogenic neoantigens from genome
3. Generate optimized mRNA sequence from neoantigens
4. Create vaccine from sequence
modulo some variations, which I wrote off the top of my head because I understand this technology.
Steps 1 and 4 are done by contracted labs. Steps 2 and 3 are doable through open-source computational tools and a little engineering. What does ChatGPT do here? ChatGPT explains the process, finds labs that will do 1 and 4 for pay, finds published algorithms and data for steps 2 and 3. It's barely more complicated than what ChatGPT would do to help a student with their homework.
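To make the point concrete, steps 2 and 3 really are "a little engineering" once the predictions exist. Here is a toy sketch of step 3 only, turning a predicted neoantigen peptide into a coding mRNA sequence. Everything in it is illustrative: the codon choices are arbitrary (a real pipeline would use codon-usage tables, UTR and cap design, and dedicated tools), and the peptide is made up.

```python
# Toy sketch of step 3: neoantigen peptide -> mRNA coding sequence.
# One codon per residue, picked purely for illustration; real codon
# optimization weighs usage frequencies, GC content, and structure.

CODON = {
    "M": "AUG", "S": "AGC", "L": "CUG", "F": "UUC",
    "K": "AAG", "V": "GUG", "A": "GCC", "*": "UGA",
}

def peptide_to_mrna(neoantigen: str) -> str:
    """Naive coding sequence: start codon + peptide codons + stop."""
    return "".join(CODON[aa] for aa in "M" + neoantigen + "*")

if __name__ == "__main__":
    # A hypothetical 5-mer neoantigen, as predicted in step 2
    print(peptide_to_mrna("SLFKV"))  # -> AUGAGCCUGUUCAAGGUGUGA
```

The hard, novel science lives in the published prediction models and the wet-lab work of steps 1 and 4, not in this glue.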
Legal documents, on the other hand? Have you ever tried to get an LLM to do your taxes? It's not easy.
> Legal documents, on the other hand? Have you ever tried to get an LLM to do your taxes? It's not easy.
Taxes are numerate, which is where LLMs fuck up.
Legal documents are structured texts, which is where LLMs shine. Should you blindly trust the outcome? Fuck no, but a good first pass is trivially achievable if you set the right parameters and make sure it's relevant to the right jurisdiction.