LLM-Check: Investigating Detection of Hallucinations in Large Language Models