Three dominant themes in the discussion
| Theme | Key points | Representative quotes |
|---|---|---|
| 1. The “uncanny” realism problem | Users note that AI‑generated photos feel off: too crisp, wrong depth of field, lighting glitches, and subtle artifacts that trigger nausea or a sense of the uncanny valley. | “The text rendering is quite impressive, but is it just me or do all these generated ‘realistic’ images have a distinctly uncanny feel to it.” – Deukhoofd<br>“The lighting is wrong, that's what's telling to me. They look too crisp.” – elorant<br>“I had to stop. I don’t feel good staring at it now.” – finnjohnsen2 |
| 2. Prompting & model differences | Users compare models (Nano Banana Pro, Qwen‑Image‑2.0, Flux, Z‑Image) and discuss how specific prompts (e.g., “shallow depth of field, bokeh, DSLR”) can produce more realistic or artistic results, and how depth‑of‑field handling varies between models. | “I had no problems getting images with blurry background with the appropriate prompts.” – Mashimo<br>“The blur isn’t correct though. The depth of field is really wrong even if it conforms to ‘subject crisp, background blurred’.” – afro88<br>“The complex prompt following ability and editing is seriously impressive here.” – cubefox |
| 3. Open‑source vs closed‑model hype | The conversation touches on the trend of companies releasing polished demos while withholding weights or keeping models proprietary, and the frustration with “coming‑soon” open‑source claims. | “Another closed model dressed up as ‘coming soon’ open source.” – singularfutur<br>“Unfortunately no open weights it seems.” – dsrtslnd23<br>“They have image and video models that are nowhere near SOTA on prompt adherence or image editing but pretty good on the artistic side.” – wongarsu |
Together, these three themes capture the main concerns and observations shared by participants in the discussion.