This discussion focuses on Large Language Models (LLMs) and their tendency to hallucinate sources and misrepresent the information they cite. Users express frustration with models that return inaccurate or non-existent URLs and argue that LLMs need stronger verification capabilities. Commenters expect that genuinely advanced models, especially those marketed as having academic levels of sophistication, should verify and comprehend their sources rather than invent them. Practical advice is also shared on how to phrase questions to get better responses from LLMs.
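
As a minimal illustration of the kind of verification commenters ask for, the sketch below (a hypothetical example using only Python's standard library, not anything proposed in the discussion) checks whether a model-cited URL resolves at all. A passing check only shows that the page exists; it says nothing about whether the page actually supports the model's claim.

```python
import urllib.request
import urllib.error

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers an HTTP request without an error status.

    Uses a HEAD request to avoid downloading the page body; note that
    some servers reject HEAD even for pages that exist.
    """
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 400
    except (urllib.error.URLError, ValueError):
        # URLError covers DNS failures and HTTP error codes (4xx/5xx);
        # ValueError covers malformed URLs, a common hallucination artifact.
        return False

# Example: filter a list of model-cited URLs down to ones that at least exist.
cited = ["https://example.com/", "https://example.com/made-up-paper-2023"]
live = [u for u in cited if url_resolves(u)]
print(live)
```

A check like this could run as a post-processing step on a model's output, flagging dead links before they reach the user, though it cannot catch the subtler failure the discussion raises: a real URL attached to a claim it does not support.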