Best AI for Literature Reviews? Only ONE Passed the Test

Andy Stapleton June 5, 2025


About

I'm Here To Help You Navigate Academia & AI Like a Pro

I break down the insider secrets of academia so you can work smarter, publish faster, and stay ahead, with the best AI tools by your side.

💡 Want expert feedback on your AI tool for researchers? I've tested 100+ academic AI tools and work directly with AI & EdTech companies to refine usability, output quality, and market positioning. If you're building an AI tool for academics, I can help you make it truly indispensable.

📩 Interested in AI tool consulting? Apply here: https://forms.gle/kfXz6Xe1Mj9uRkSKA

🔔 Subscribe for weekly videos on:
✔️ PhD & research advice
✔️ AI tools for academia
✔️ Productivity & writing tips
✔️ Insider insights on academic success

Video Description

In this video, I explore the reliability of three literature review AI tools by putting them through a real test: generating a full literature review and then fact-checking every single reference they provided.

▼ ▽ Sign up for my FREE newsletter
Join 21,000+ email subscribers receiving free tools and academic tips directly from me: https://academiainsider.com/newsletter/

▼ ▽ MY TOP SELLING COURSE
▶ Become a Master Academic Writer With AI using my course: https://academy.academiainsider.com/courses/ai-writing-course

I wasn't just looking at how fast or how detailed the outputs were; I wanted to know whether the sources these tools cited actually existed. For researchers, PhD students, and anyone beginning academic work, reference accuracy isn't optional, it's foundational. With the growing popularity of tools that promise to simplify the AI systematic literature review process, I felt it was important to go beyond the marketing and put these tools through a serious academic test.

I used the same structured prompt for all three tools and assessed how well each met academic standards. My goal was to evaluate their performance not just in producing a readable review, but in handling the critical aspects of academic integrity, particularly how often they hallucinate references. The hallucination rate (how often a tool makes up or misrepresents sources) is something every researcher needs to understand before trusting literature review writing AI.

This matters because the ability to trust your AI assistant directly affects the quality of your thesis or paper. If you're using AI for PhD literature review tasks, you need to know whether the tools you rely on give you accurate, verifiable citations or lead you down a path filled with errors. In my opinion, tools that assist with literature review search AI workflows can be incredibly helpful, but only when paired with human oversight.

AI can accelerate the process of discovering key research themes, mapping debates in a field, and generating initial drafts. But blind trust in the results is risky. Watch this video to see which tool came out on top, which one I'd avoid, and what you should look for when using AI to support your academic research.

▼ ▽ TIMESTAMPS
00:00 Intro
00:23 Manus AI
02:23 Manus Results
04:24 Genspark
05:29 Genspark Results
06:43 Gemini
08:55 Gemini Results
11:15 Overall Score
12:11 Outro

▼ ▽ Socials for shorts and reels
Instagram: https://www.instagram.com/drandystapleton/
TikTok: https://www.tiktok.com/@drandystapleton
