1. The “uncanny” look of AI‑generated photos
Many users feel that even the best models produce images that look too crisp or too perfect, giving a technically “photorealistic” yet subtly unsettling impression.
“They look too crisp. No proper shadows, everything looks crystal clear.” – elorant
“I can’t quite put my finger on it… they just feel off to me.” – Deukhoofd
2. Depth‑of‑field, texture, and resolution artifacts
A recurring complaint is that AI models either blur the background incorrectly or keep foreground textures unnaturally sharp, especially at high resolutions.
“The blur isn’t correct… the depth of field is really wrong even if it conforms to ‘subject crisp, background blurred’.” – derefr
“At the moment there is no model that can go for 4k without problems; you will always get high‑frequency artifacts.” – BoredPositron
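The “high‑frequency artifacts” complaint refers to spurious fine‑grained detail in the image spectrum. As a rough illustration (not anything the commenters used), one can measure how much of an image’s spectral energy sits at high spatial frequencies via a 2‑D FFT; the function name and the 0.25 cutoff below are illustrative assumptions:

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above `cutoff` of the normalized
    frequency radius. `img` is a 2-D grayscale array; the cutoff is an
    arbitrary illustrative threshold, not a standard."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized radial distance from the spectrum's centre (the DC term).
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(spectrum[r > cutoff].sum() / spectrum.sum())

# White noise spreads energy across all frequencies, while a smooth
# gradient concentrates it near DC - so the ratios differ sharply.
rng = np.random.default_rng(0)
noisy = rng.standard_normal((256, 256))
smooth = np.linspace(0.0, 1.0, 256)[None, :].repeat(256, axis=0)
print(high_freq_energy_ratio(noisy) > high_freq_energy_ratio(smooth))
```

A metric like this only flags unusual spectral distributions; it cannot by itself distinguish legitimate texture from generation artifacts.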
3. Model comparison and prompt‑adherence debate
Users constantly compare new releases (Nano Banana Pro, Z‑Image, Flux, Qwen‑Image‑2, etc.) on realism, prompt accuracy, and editing capability, often arguing over which is truly “photorealistic.”
“For me the only model that can really generate realistic images is Nano Banana Pro.” – GaggiX
“The complex prompt following ability is seriously impressive here.” – cubefox
4. Open‑source vs. API, licensing, and industry dynamics
The discussion also covers how companies release weights, the “open‑source” debate, and the broader competitive landscape of large‑scale image models.
“They announced it and it was available via API, but then they released the weights a few weeks after with an Apache 2.0 license.” – vunderba
“The best models right now come from companies with considerable resources/funding.” – waldarbeiter
These four themes capture the main concerns and viewpoints running through the thread.