
For the longest time I had been using GPT-5 Pro and Deep Research. Then I tried Gemini's 2.5 Pro Deep Research. And boy oh boy is Gemini superior. Gemini's results go deep, are thoughtful, and make sense. GPT-5's results feel like it vomited a lot of text that looks interesting on the surface but has no real depth.

I don't know what has happened. Is GPT-5's Deep Research badly prompted? Or is Gemini's extensive search across hundreds of sources giving it the edge?



> I tried Gemini's 2.5 Pro Deep Research.

I’ve been using `Gemini 2.5 Pro Deep Research` extensively.

(To be clear, I’m referring to the Deep Research feature at gemini.google.com/deepresearch, which I access through my `Gemini AI Pro` subscription on one.google.com/ai.)

I’m interested in how this compares with the newer `2.5 Pro Deep Think` offering that runs on the Gemini AI Ultra tier.

For quick look-ups (i.e., non-deep-research queries), I’ve found xAI’s Grok-4-Fast (available at x.com/i/grok) to be exceptionally fast, precise, and reliable.

Because the $250-per-month price for Gemini’s deep-research tier is hard to justify right now, I’ve started experimenting with Parallel AI’s `Deep Research` task (platform.parallel.ai/play/deep-research) using the `ultra8x` processor (see docs.parallel.ai/task-api/guides/choose-a-processor). So far, the results look promising.
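
For reference, here is roughly what a Task API run with that processor looks like. I’m reconstructing this from memory of the docs linked above, so treat the endpoint path, auth header, and field names as assumptions rather than gospel:

    # Rough sketch of a Parallel Task API call; the endpoint, auth header,
    # and field names are assumptions based on docs.parallel.ai -- verify
    # against the current docs before relying on this.
    import os
    import requests

    resp = requests.post(
        "https://api.parallel.ai/v1/tasks/runs",                 # assumed endpoint
        headers={"x-api-key": os.environ["PARALLEL_API_KEY"]},   # assumed auth header
        json={
            "input": "Survey the current state of long-context retrieval benchmarks.",
            "processor": "ultra8x",                              # processor tier mentioned above
        },
        timeout=60,
    )
    resp.raise_for_status()
    run = resp.json()
    print(run.get("run_id"), run.get("status"))  # runs are async; poll for the finished report

As I understand it, the processor choice mostly trades cost against how much searching and synthesis the run is allowed to do, which is why `ultra8x` is the one to try when you actually want depth.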


I don't know about Gemini Pro super-duper-whatever, but the freely available Gemini is as sycophantic as ChatGPT: it always congratulates you for being able to ask a question.

And worse, with every answer it offers to elaborate on related topics. To maintain engagement, I suppose.


The ChatGPT API offers a verbosity toggle, which is likely a magic string they prefix the prompt with, similar to the "juice" parameter that controls reasoning effort.
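
Whatever it is under the hood, both knobs are exposed as explicit request fields on the GPT-5 API. A rough sketch with the OpenAI Python SDK (field names as I recall them from the docs; verify against the SDK version you have installed):

    # Verbosity and reasoning-effort controls on the Responses API.
    # Field names are per the public docs as I remember them -- double-check.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.responses.create(
        model="gpt-5",
        input="Summarize the trade-offs between BM25 and dense retrieval.",
        text={"verbosity": "low"},       # terse answers instead of a wall of text
        reasoning={"effort": "medium"},  # the "juice"/effort knob
    )
    print(response.output_text)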



