A new study reveals that while high trust in generative AI risks uncritical acceptance, students with stronger critical thinking skills engage in more reflective and collaborative AI use, underscoring the need for education that fosters both AI literacy and evaluative skills.
Educators and researchers are increasingly concerned that undergraduate students may become over-reliant on generative AI in academic problem-solving, undermining critical thinking and independent learning skills. A recent study conducted at an Asian university with more than 800 undergraduate participants explores this issue in depth, investigating how individual factors such as trust and AI literacy intersect with students' critical thinking abilities to shape their reliance behaviours when using generative AI tools.
The study challenges traditional outcome-focused approaches to evaluating reliance on AI, in which reliance is judged simply by results: whether users accept correct AI outputs, blindly follow incorrect ones, or ignore accurate suggestions. Instead, it adopts a more nuanced, strategy-graded perspective that emphasises the cognitive processes underlying how students engage with AI during problem-solving. This perspective resonates with the collaborative and iterative nature of problem-based learning (PBL), where the process of inquiry and critical engagement matters as much as the final solution.
Using carefully designed surveys administered before and after a group problem-solving task that required extensive use of generative AI, the researchers measured students' trust in AI, AI literacy, and critical thinking disposition and skills, alongside their reliance behaviours, categorised as reflective, cautious, collaborative, or thoughtless use of AI. Reflective and cautious reliance both involve critically evaluating AI outputs and treating them with scepticism and care; collaborative use involves integrating AI outputs with peer feedback; thoughtless use indicates passive acceptance of AI-generated content.
Key findings reveal that students with stronger critical thinking skills and dispositions are significantly more likely to engage in reflective, cautious, and collaborative use of generative AI, behaviours deemed desirable for effective human-AI collaboration in problem-solving. Conversely, greater trust in AI correlates strongly with increased thoughtless use, implying a risk of uncritical acceptance. Importantly, however, critical thinking mediates this relationship, offsetting the tendency of trust to slide into thoughtless reliance and promoting more deliberate engagement with AI instead.
AI literacy was found to strengthen critical thinking skills and dispositions, which in turn fostered more desirable reliance behaviours. While AI literacy alone had a limited direct effect on reliance, its substantial indirect influence through critical thinking indicates that knowledge and competence in AI must be paired with evaluative skills to optimise AI use. This interplay suggests that cultivating both AI literacy and critical thinking is crucial to preparing students to use AI tools judiciously.
These findings align with wider concerns voiced in public discourse and other studies. For example, UK media and educational commentators have highlighted anxiety over the impact of generative AI on students’ critical thinking, creativity, and academic integrity. Research from institutions such as MIT suggests that heavy reliance on AI tools like ChatGPT may weaken neural engagement associated with executive control and creative thinking, reinforcing the need to safeguard cognitive skills in AI-integrated learning environments.
The implications for educators are significant. There is a need to design curricula and learning activities that explicitly scaffold critical thinking in the context of AI usage. Strategies may include integrating iterative AI interaction tasks that require students to critically assess, reflect on, and collaboratively refine AI outputs rather than accept them uncritically. Educators might incorporate Socratic questioning techniques to challenge assumptions and promote deeper inquiry. Furthermore, fostering AI literacy alongside critical thinking can empower students to navigate AI technologies ethically and effectively, mitigating risks of plagiarism, bias amplification, and loss of autonomy.
The study also acknowledges its limitations: the authors caution that the findings come from a single task within a specific university context and draw predominantly on high-performing students, so broader and longitudinal research is needed to understand how reliance behaviours evolve over time and across educational disciplines.
In summary, this research advances the understanding of human-AI collaboration in education by emphasising that reliance on generative AI is not inherently problematic. Rather, it is the quality of cognitive engagement, rooted in critical thinking and informed by AI literacy, that determines whether AI use supports meaningful learning and problem-solving. By fostering these skills, educators can help students harness AI’s potential responsibly, positioning the UK as a leader in cultivating an innovative and ethically aware AI-enhanced education landscape.
Source: Noah Wire Services