
And now we start relying on machine hallucinations for people's health. I don't see any way this could go wrong.


Asking it “is” questions and not “how” questions helps immensely. For example:

“Is C17orf69 related to cholesterol”

Vs

“How is C17orf69 related to cholesterol”

“How” questions by their nature can be lies. I've been using ChatGPT daily for medical questions, but then I verify _everything_ it says via sources. It feels like 1998 Google in a way.

A dozen questions can save me tons of time. It's like my drunk college friend who knows a lot but likes to boast that they know everything: great for brainstorming, not great for fact checking.

I would absolutely love to try this model.


> Asking it “is” questions and not “how” questions helps immensely

Judging by your example, 'prompt engineering' is becoming an increasingly important skill, but I feel that everyone is winging it.

Is there any agreed-upon scholarly 'handbook' that synthesizes academia's 'lessons learned' on this topic?


Good tip. I have been playing with ChatGPT, and learned to append "...with list of external references" to every question I want a real answer to.


The key to getting good results with Large Language Models is to combine them with another tool. For example, a human with a search engine produces more accurate results than a human alone. In this spirit, I chained GPT-3.5 to a search engine: I asked GPT-3.5 to return a list of search terms for the answer it gave, then fed those terms to Google to get a list of citations. I call this "Citations Needed":

https://twitter.com/john_lam/status/1614778632794443776
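The chain described above is straightforward to sketch. This is a minimal, hypothetical outline of the idea, not the actual code behind the tweet: `ask_llm` and `search_web` are placeholder stubs standing in for a GPT-3.5 call and a search-engine API.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder for a GPT-3.5 API call; returns canned text here
    # so the sketch runs without any API keys.
    if prompt.startswith("List 3 web search queries"):
        return "centered PCA\ndimensionality reduction\nPCA mean subtraction"
    return "PCA projects data onto the directions of maximal variance..."

def search_web(query: str) -> list[str]:
    # Placeholder for a Google/Bing API call; returns a fake result URL.
    return ["https://example.com/search?q=" + query.replace(" ", "+")]

def answer_with_citations(question: str) -> tuple[str, list[str]]:
    # Step 1: get an answer from the model.
    answer = ask_llm(question)
    # Step 2: ask the model for search terms to verify its own answer.
    queries = ask_llm(
        f"List 3 web search queries to verify this answer:\n{answer}"
    ).splitlines()
    # Step 3: feed those terms to a real search engine for citations.
    citations = [url for q in queries for url in search_web(q)]
    return answer, citations

answer, citations = answer_with_citations("What is PCA?")
```

The point of the design is that the search engine, not the model, supplies the citations, so every reference is at least a real URL you can check.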


It makes up references, too.


Wait, wait, what? I have not run into this, but maybe it's the questions and topics I've been asking . . .

Excellent! Now I have a way to identify ChatGPT research papers!


Last weekend, I was stuck on a data analysis problem: how to do dimensionality reduction with lots of censored data. There are a few ideas in the literature, but not many, so in a fit of desperation, I asked ChatGPT. After a few false starts, it told me about an algorithm called CenPCA that did exactly what I wanted. It cited a paper; it told me about the author. It even recommended Python and R implementations and showed how to use them.

The only catch? None of this exists! I couldn’t find the paper, couldn’t find the guy who allegedly wrote it, and couldn’t find the software (neither could pip or conda). When I asked for more details about the algorithm, it gave me something else meant for handling missing data instead.

On reflection, this is exactly what should have happened. I would have been thrilled to find something like that on StackOverflow or somebody’s blog, so it’s a great continuation of the prompt. It’s just not real!


I have had experiences like this several times too, so it is interesting to see an example from someone else.

Probing it about CenPCA, I got these two definitions:

"CenPCA is an abbreviation for Centralized Principal Component Analysis. It is a dimensionality reduction technique used in data analysis for compressing high-dimensional data into a lower-dimensional representation. The centralization step of CenPCA involves centering the data, making the first principal component aligned with the mean of the data. This makes it possible to capture the overall structure of the data and to highlight the most important patterns in the data set. CenPCA has been used in various applications such as finance, image processing, and bioinformatics to analyze large and complex datasets."

"CenPCA refers to "Centered PCA", which is a variant of Principal Component Analysis (PCA), a widely used method in machine learning and data analysis. The key difference between regular PCA and centered PCA is that in centered PCA, the data is first centered around the mean before performing the PCA, while in regular PCA, the mean is not subtracted from the data. This has implications in the interpretation of the results, as centered PCA can sometimes provide more meaningful principal components."

I would think there has to be something to this and it is just confusing exact titles, although I have also had it make up Python code that is simply nonsense.
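For what it's worth, the "centered PCA" in the second definition does describe a real distinction: ordinary PCA is usually defined with the mean subtracted first, and skipping that step changes the answer. A small numpy sketch of why centering matters (the data and names here are my own, not from any "CenPCA" package):

```python
import numpy as np

rng = np.random.default_rng(0)
# Data far from the origin, with most of its spread along the x axis.
X = rng.normal(size=(200, 2)) * [3.0, 1.0] + [10.0, 10.0]

def top_component(data):
    # Top "principal" direction: eigenvector of the second-moment
    # matrix with the largest eigenvalue. No mean subtraction here;
    # the caller decides whether to center.
    vals, vecs = np.linalg.eigh(data.T @ data / len(data))
    return vecs[:, np.argmax(vals)]

# Without centering, the offset dominates: the top direction points
# roughly toward the mean of the data, about 45 degrees here.
v_raw = top_component(X)

# With centering (standard PCA), the top direction follows the actual
# spread of the data, roughly the x axis here.
v_cen = top_component(X - X.mean(axis=0))
```

So a "centered vs. uncentered PCA" writeup could genuinely exist, which is exactly what makes the invented paper titles so plausible.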

Another example: during a language philosophy discussion, it recommended this paper:

"Language Evolution: The Hard Problems" by Morten H. Christiansen and Simon Kirby is a paper that explores some of the challenges and limitations of current theories of language evolution."

It was actually just a book called "Language Evolution" by Morten H. Christiansen and Simon Kirby, but failing to find a paper with that exact title made me conclude it made the paper up.

Despite massive initial frustration over things like this, I am still tending toward becoming addicted to it. It seems you just have to be careful when probing the outer edges of its knowledge, so you don't get led down these nonsense paths.


ChatGPT is a multi-dimensional model. The paper exists, just not in the dimension you are currently in. I suggest you ask ChatGPT about whether changing dimensions would help you, or not.


Trust but verify, as we do in real life.


When a technology repeatedly fails to be trustworthy, stop trusting at all.


When humans repeatedly fail to be trustworthy, stop trusting at all.


Well, yes. That's a major reason for why we stop being friends, or change jobs, or fire employees, or find someone else to vote for.


Let me give you an example of what I meant in the parent comment. I asked ChatGPT if there is any research area where Clifford algebras and expander graphs are related in any manner. It mentioned something about error-correcting codes. I verified it, and sure enough there was a MathOverflow post about it.


When we're all in "almost fully autonomous" cars and have to "trust but verify," it'll be really fun. You'll be at the wheel 0.1% of the time, lose most of your skills because you'll almost never need them, and only have to take charge in the worst edge cases.

Now apply that to other topics...



