My main concern with people making fun of such cases is that the deficiencies of “AI” become harder to find and detect, even though they’re still obviously present.
Whenever someone publishes a demonstration of a system’s limitations, the company behind it gets a test case it can use to patch things over. The next time we - the reasonable people arguing that these cybernetic hallucinations aren’t AI yet and are dangerous - try to use that example, we’ll just get the reply “oh yeah, but they’ve fixed it”. Even people in IT often don’t understand what they’re dealing with, so people outside IT may have even more difficulty…
As for me - I just boycott this rubbish. I’ve never tried any LLM and don’t plan to, unless it’s used to work with language rather than knowledge.
I often skip meetings without an agenda. If they don’t care enough to prepare a reasonable invitation, I don’t care to join. I also skip meetings that exist only to announce things. Announcements should go to my inbox, so I can read them when I’m ready, not when it happens to suit the organizer.