What is the risk of asking chatbots for overly short answers?


Chatbot AI (photo source: Hakinmhan, dreamstime.com)
A study by a French startup of widely used chatbots concludes that asking a chatbot for more concise answers increases the risk that the agent will "invent" things, compared with asking detailed questions without requesting a short answer. Why does conciseness hurt accuracy?
Giskard, a French AI testing company that develops a comprehensive evaluation system for artificial intelligence models, has presented a study evaluating several language models, including ChatGPT, Gemini, Llama, DeepSeek, Grok, Claude, and Mistral.
One of the conclusions, according to Giskard's data, is that the most commonly used artificial intelligence models are not necessarily the most reliable. Some hallucinate too much. In AI, "hallucinations" are fabricated, incorrect, or misleading answers generated by a language model.
The troubling part is that these answers, interspersed among correct ones, seem credible but are not based on real data or verified sources.
Giskard researchers say that requests for short answers, especially to highly ambiguous questions, can degrade the factual accuracy of an AI model.
"Our data show that simple changes to system instructions dramatically influence a model's tendency to hallucinate," the researchers wrote. "This finding has important implications for deployment, since many applications prioritize concise outputs to reduce data usage, improve speed, and cut costs," the French company added, as quoted by TechCrunch.
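To make the idea of a "system instruction" concrete, here is a minimal sketch of the kind of prompt change the researchers describe: the same user question sent with and without a brevity instruction. This is not Giskard's test harness; the message format follows the common chat-API convention, and the instruction wording is an illustrative assumption.

```python
def build_messages(question: str, concise: bool) -> list[dict]:
    """Assemble a chat-style prompt, with or without a brevity instruction.

    The 'concise' system instruction below is the kind of setting the study
    links to more hallucinations; the exact wording is hypothetical.
    """
    system = (
        "Answer in one short sentence."
        if concise
        else "Answer thoroughly, cover relevant caveats, and say you are "
             "unsure when the question is ambiguous."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The same question under both regimes: only the system instruction differs.
question = "Did humans ever live at the same time as dinosaurs?"
concise_prompt = build_messages(question, concise=True)
detailed_prompt = build_messages(question, concise=False)
```

Either list would then be sent to a chat model; the study's point is that the one-line difference in the system message measurably changes how often the answer is invented.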
One explanation is that well-argued answers, especially to questions where several angles should be explored, require longer explanations. If you ask for a short answer from the start, the model has no room to cover all the points, and the chances of error are higher.
Hallucinations are a problem that is difficult to eliminate in these applications. Even the most advanced models sometimes "invent" things because of their probabilistic nature. Moreover, the newest models hallucinate more than their predecessors, which makes their answers harder to verify.