Does AI have a built-in “laziness” response? For example, many times (with both ChatGPT and Microsoft Copilot), I’ll ask a question about finding the top-rated, most-reviewed, or highest-ranked items in some subject, and it will just give me some websites and suggest I look it up myself. After hollering at it, the robot eventually digs up the information I asked for, but it takes several responses from me (and usually only after I show anger in my word choices) before it gives a satisfactory answer.
Then just today I used Copilot to ask what years The Dukes of Hazzard aired at 9pm on Friday nights on CBS. Its first response was that this was its time slot for the 1980 and 1981 TV seasons. I then asked it about the 79/80 TV season. It once again said that The Dukes of Hazzard aired at 9pm on CBS during that season (which I believe is accurate from my recollection, but it raises the question: why didn’t it include that season in its answer to my first request?).
I asked when it was on during the 78/79 season. Same answer.
Then I asked it a trick question: when did it air during the 77/78 season? It gave the same Friday 9pm CBS answer. I then responded, “Your response doesn’t make any sense, do you know why?” It replied as follows: “I see what happened there. I mistakenly gave you the same airing time for the 77/78 season as the others. However, The Dukes of Hazzard premiered in January 1979, so it wasn’t on during the 77/78 season.”
So if the AI can find the correct info, why would it not get it right the first time? It almost seems programmed to respond like a lazy human who is annoyed and just trying to make you go away without doing the legwork to provide accurate answers. Any thoughts on this? It has happened many times over the last year or two when I’ve used both ChatGPT and Copilot.