Artificial intelligence in education

From AviationSafetyX Wiki

Artificial intelligence in education (AIEd)[1] is the application of AI in educational settings. The field combines elements of generative AI, data-driven decision-making, AI ethics, data privacy and AI literacy.[2] An educator might learn to use these AI systems as tools to generate code,[3] text or rich media, or to optimize their digital content production.[4] A governmental body, by contrast, might see AI as an ideological project that normalizes centralized power and decision-making,[5] while public schools and higher education contend with increasing privatization.[6] These competing definitions of AIEd are contested and can cause confusion about what exactly is being discussed.[7]

Background

Artificial intelligence could be defined as "systems which display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals".[8] These systems might be software-based or embedded in hardware.[9] They can rely on machine learning or rule-based algorithms.[10]

There is no single lens through which to understand AI in education (AIEd), but the genealogy of education and AI,[11] with its promises and problematics,[12] may assist with seeing the bigger picture. The Dartmouth workshop is considered a founding event for AI.[13] At least two paradigms have emerged from this workshop: first, the tutoring/transmission paradigm, where AIEd systems serve as a conduit for personalizing learning; second, the coordination paradigm, where AIEd supports a cohort's knowledge construction, and this mass is socialized into new systems of thought. Alternatively, there is the leadership model, where individuals take agency and make choices about their learning (with or without AI).[14][15] AIEd could be viewed as the ultimate disruption, replacing academics and their scholarly prestige,[16] or as an opportunity to consider together what makes humans different from machines.

Emerging perspectives

This complex social, cultural, and material assemblage should be seen in its geopolitical context. AI systems are likely to be shaped by different policy or economic imperatives, which will influence the construction, legitimation and use of this assemblage in educational settings.[17][18] Those who see AI as a conduit for knowledge transmission or construction are comfortable with the idea of machines "reasoning" or having "hallucinations". Sceptics, by contrast, recognize the cultivated "closed-off imaginative spaces" that big tech has captured, and notice how big tech's discourse limits critical thought and discussion of these computational systems.[19] Resistors often take a principled stance and refuse to accept the many metaphors of "artificial intelligence" used to disguise working practices that are exploitative and extractive.[20]

The AI in education community

The AI in education community has grown rapidly in the global north, driven by venture capital, big tech, and open educationalists.[21] While some believe AI will improve "access to expertise"[22] and revolutionize learning through natural language processing,[23] others focus on enhancing LLM reasoning.[24][25]

In the global south, critics argue that AI's data processing and monitoring reinforce neoliberal approaches to education rather than addressing colonialism and inequality.[26][27]

Algorithms' effects on education

AI companies that focus on education are currently preoccupied with generative artificial intelligence (GAI), although data science and data analytics are other popular educational themes. At present, there is little scientific consensus on what AI is or how to classify and sub-categorize it.[28][29] This has not hampered the growth of AI in education systems, which gather data and then optimise models.

AI offers scholars and students automatic assessment and feedback, predictions, instant machine translation, on-demand proof-reading and copy editing, and intelligent tutoring or virtual assistants.[21] The "generative-AI supply chain"[30] brings conversational coherence to the classroom and automates the production of content.[31] Through categorisation, summaries and dialogue, AI "intelligence" or "authority" is reinforced by anthropomorphism and the Eliza effect.

Framing education

Educational technology can be a powerful and effective assistant in a suitable setting, and computer companies are constantly updating their technology products. Some educationalists have suggested that AI might automate procedural knowledge and expertise[32] or even match or surpass human capacities on cognitive tasks. They advocate for the integration of AI across the curriculum and the development of AI literacy,[33] with higher education institutions finding an opportunity to create a path for themselves and their students by drafting guidelines for incorporating AI into the curriculum.[34] Others are more skeptical, since AI faces an ethical challenge:[35] "fabricated responses" or "inaccurate information", politely referred to as "hallucinations",[32] are generated and presented as fact. Some remain curious about society's tendency to put its faith in engineering achievements, and about the systems of power and privilege[36] that lead towards deterministic thinking.[37] Others see copyright infringement[38][39][30] or the introduction of harm, division and other social impacts, and advocate resistance to AI.[40] Evidence is mounting that AI-written assessments are undetectable, which poses serious questions about the academic integrity of university assessments.[41]

Tokens, text and hallucinations

Large language models (LLMs) take text as input data and generate output text.[42] Coherent sentences are parroted[43] from billions of words and lines of code that have been web-scraped by AI companies or researchers. LLMs often depend on a huge text corpus that is extracted, sometimes without permission.[44] LLMs are feats of engineering that see text as tokens. The relationships between the tokens allow LLMs to predict the next word, and then the next, generating a sentence that has the appearance of thought and interactivity. This massive dataset creates a statistical reasoning machine[45] that performs pattern recognition.[46] The LLM examines the relationships between tokens, generates probable outputs in response to a prompt, and completes a defined task, such as translating, editing, or writing. The output presented is a smoothed collection of words[47] that is normalized and predictable. Translation, summarization, information retrieval, and conversational interaction are some of the complex language tasks that machines are now expected to handle.[48]
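
The next-word mechanism described above can be illustrated with a deliberately tiny sketch. This is not how production LLMs work (they use neural networks trained on billions of tokens, not raw co-occurrence counts); it only shows the underlying idea that the next token is predicted from statistical relationships in a corpus. The toy corpus and function names here are invented for illustration.

```python
from collections import defaultdict
import random

# A toy corpus, already split into "tokens" (words and punctuation).
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# Record which tokens follow each token; repeats make a token more probable.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6):
    """Repeatedly pick a statistically probable next token."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # token never seen mid-sentence; stop generating
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. a plausible-looking but meaningless sentence
```

The output is fluent-looking word soup: every transition is locally probable, yet nothing is "meant". Scaling this idea up, with neural networks instead of counts, is roughly what gives LLM output its appearance of thought.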

However, the text corpora that LLMs draw on can be problematic, as outputs will reflect the stereotypes or biases of the people or cultures whose content has been digitized.[49] Confident but incorrect outputs are termed "hallucinations".[50] These plausible errors are not malfunctions but a consequence of the engineering decisions that inform the large language model.[51] "Guardrails" offer to act as validators of LLM output, preventing these errors and safeguarding accuracy.[52] The "hallucination" metaphor contributes to the misconception that AI is conscious; perhaps "AI mirages" is a better alternative.[53] There are no fixes[54][55] for AI mirages, the "factually incorrect or nonsensical information that seems plausible".[56]
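
The guardrail idea mentioned above, checking model output before it reaches a user, can be sketched in a few lines. Everything below (the function name, the rules, the example strings) is a hypothetical illustration of the pattern, not the API of any real guardrail library; real frameworks chain many such validators and may re-prompt the model on failure.

```python
# Hypothetical sketch: run simple validation rules over model output
# before it is shown to a user. Rules and names are illustrative only.

def validate_output(text: str, banned_phrases: list[str], max_len: int = 500):
    """Return (ok, reason) after applying simple post-hoc checks."""
    if not text.strip():
        return False, "empty response"
    if len(text) > max_len:
        return False, "response too long"
    lowered = text.lower()
    for phrase in banned_phrases:
        if phrase.lower() in lowered:
            return False, f"banned phrase found: {phrase}"
    return True, "ok"

ok, reason = validate_output("Photosynthesis converts light into energy.",
                             banned_phrases=["as an ai model"])
```

Note what such checks can and cannot do: they can catch empty, over-long or rule-breaking text, but rule-based validation has no access to truth, so a fluent mirage passes straight through.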

Socio-technical imaginaries

The benefits of multilingualism, grammatically correct sentences, or statistically probable texts on any topic or domain are clear to those who can afford software as a service (SaaS). In edtech there is a recurrent theme that "emerging technologies"[57] will transform education,[58] whether radio, TV, personal computers, the internet, interactive whiteboards, social media, mobile phones or tablets. New technologies generate a socio-technical imaginary (STI) that offers society a shared narrative[59] and a collective vision for the future.[60] Improvements in natural language processing and computational linguistics have reinforced the assumptions that underlie this "emerging technology" STI. AI is not an emerging technology but an "arrival technology".[61] AI appears to understand instructions and can generate human-like responses,[62] behaving as a companion for many in a lonely and alienated world.[63] It also creates a "jagged technology frontier",[64] where AI is both very good and terribly bad at very similar tasks.[61]

Public goods vs venture capital

At first glance, artificial intelligence in education offers pertinent technical solutions to future education needs.[21] AI champions envision a future where machine learning and artificial intelligence are applied to writing, personalization, feedback or course development. The growing popularity of AI is especially apparent to the many who have invested in higher education over the past decade.[21] Critical skeptics, on the other hand, are wary of rhetoric that presents technology as a solution. They point out that in public services like education, human and algorithmic decision systems should be approached with caution.[65] Postdigital scholars and sociologists are more cautious about any techno-solutions, and have warned about the dangers of building public systems around alchemy,[65] stochastic parrots or cognitive capitalism.[66] They argue that LLMs carry multiple costs, including dangerous biases, the potential for deception, and environmental costs.[43] The AI-curious are aware of how cognitive activity has become commodified; they see how education has been transformed into a "knowledge business" where items are traded, bought, or sold.[67] African hyperscalers, venture capital and vice-chancellors[68] are punting the Fourth Industrial Revolution, with billions earmarked for South African data centres[69] such as Teraco Data Environments, Vantage Data Centres,[28] Africa Data Centres[31] and NTT/Dimension Data,[23] while carefully avoiding accusations of monopoly practices.[70]

AI-resilient graduates

AI has co-existed comfortably between academia and industry for years.[71] The terrain is shifting: AI research in the global north currently commands the computing power, large datasets, and highly skilled researchers, and power is shifting away from students and academics toward corporations and venture capitalists.[72] Graduates from universities in dominant cultures, where there are high levels of digitisation, need to become AI-resilient. Graduates from the majority world also need to value their own process of knowledge construction, resist the lure of normalisation, see AI for what it is, another form of enclosure, and start blogging. Graduates from both the global north and the majority world need to be able to critique AI output, become familiar with the processes of technical change,[73] and let their own studies and intellectual life guide their working futures.[31]

Prominent commentators on artificial intelligence in education


| Critical Sceptics | Curious Practitioners | Acknowledged Experts | Committed Champions |
| --- | --- | --- | --- |
| Ben Williamson[74] | Lance Eaton[75] | Stephen Downes | David Wiley[76] |
| Helen Beetham[77] | Anna Mills[78] | Rose Luckin | Sal Khan |
| Audrey Watters | Bryan Alexander | Vukosi Marivate[79] | Anthony Seldon |
| Neil Selwyn[80][81] | | | |

With AI tools becoming more commonplace in schools, universities and other educational settings, discussion is growing over the benefits, risks and possible longer-term consequences of reorganising education around AI. Stances range from enthusiastic proponents of the widespread adoption of AI in education to more critical commentators.

Trust in AI educational technology

At present, many teachers remain skeptical of AI for two main reasons: a lack of knowledge and understanding of AI, and misunderstandings about what it can do. AI can only score written work, whereas teachers can often understand what students want to express beyond the text. As a result, some teachers lack trust in, and hold negative attitudes towards, AI edtech.[82]

References

  1. Losing the imitation game. (2023-04-09). Retrieved 2025-01-02 from Jennifer++.
  2. 'Luckily, we love tedious work'. Helen Beetham. (2023-08-24). Retrieved 2024-12-26 from imperfect offerings.
  3. Ali Alkhatib: Defining AI. Retrieved 2024-12-26 from ali-alkhatib.com.
  4. Edtech Pandemic Shock: New EI research launched on COVID-19 education commercialisation. (2020-07-10). Retrieved 2024-12-08 from Education International.
  5. An update on AI at JRF. Joseph Rowntree Foundation. (2025-02-03). Retrieved 2025-03-02 from www.jrf.org.uk.
  6. A definition of AI: main capabilities and scientific disciplines. (2018-12-18). Retrieved from ec.europa.eu.
  7. The corruption risks of artificial intelligence. (2022). Retrieved from transparency.org.
  8. Histories of Artificial Intelligence: A Genealogy of Power. David Thompson. (2019-12-09). Retrieved 2025-02-16 from www.hps.cam.ac.uk.
  9. AI, slavery, surveillance, and capitalism. Edward Ongweso Jr. (2024-11-04). Retrieved 2024-11-06 from The Tech Bubble.
  10. Sano-Franchini, Jennifer, Megan McIntyre, and Maggie Fernandes. "Refusing GenAI in Writing Studies: A Quickstart Guide." Refusing Generative AI in Writing Studies. Nov. 2024. refusinggenai.wordpress.com.
  11. The Near-term Impact of Generative AI on Education, in One Sentence. David Wiley. (2023-10-17). Retrieved 2024-08-29 from Opencontent.
  12. AI And The Democratization Of Knowledge. Mark Pittman. Retrieved 2025-03-20 from Forbes.
  13. What is AI? Can you make a clear distinction between AI and non-AI systems?. (2024-03-06). Retrieved 2024-06-28 from oecd.ai.
  14. What price your 'AI-ready' graduates?. (2024-08-07). Retrieved 2025-03-20.
  15. Revolutionizing Education: The Impact of AI on Learning and Teaching. (2023-11-08). Retrieved 2024-09-30 from pearson.com.
  16. Resisting Deterministic Thinking. (2023-04-05).
  17. Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst. CNBC. (2024-03-06).
  18. McQuillan, Dan (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press.
  19. Artificial Intelligence (AI) vs. Difference. Law Office of Lainey Feingold. (2024-08-20). Retrieved 2024-08-21.
  20. When Should We Trust AI? Magic-8-Ball Thinking and AI Hallucinations. Nielsen Norman Group. Retrieved 2024-08-21.
  21. ChatGPT Isn't 'Hallucinating.' It's Bullshitting. (2023-04-06).
  22. Inside Guardrails AI: A New Framework for Safety, Control and Validation of LLM Applications. Jesus Rodriguez. Medium. (2023-09-14).
  23. Tech experts are starting to doubt that ChatGPT and A.I. 'hallucinations' will ever go away: 'This isn't fixable'.
  24. Google CEO Sundar Pichai says 'hallucination problems' still plague A.I. tech and he doesn't know why.
  25. A definition of emerging technologies for education. (2008-11-18).
  26. Don't Believe Every AI You See. Retrieved 2024-08-20 from New America.
  27. Rossi, P.G., Capolla, L.M., Giannandrea, L. et al. Review of Tiziana Terranova (2022). After the Internet: Digital Networks between Capital and the Common. Postdigital Science and Education (2024).
  28. The commodification of education and the (generative) AI-induced scam-like culture.
  29. Five new universities coming to South Africa. BusinessTech.
  30. Riley, B., & Bruno, P. (2024). Education hazards of generative AI. https://www.cognitiveresonance.net/resources.html
  31. Competition Commission slapping Microsoft with monopoly complaint in South Africa. MyBroadband.
  32. Postman, N. (1998, March). Five things we need to know about technological change [Conference presentation]. Denver, Colo., United States. https://www.cs.ucdavis.edu/~rogaway/classes/188/materials/postman.pdf
  33. artificial intelligence – Page 2. Retrieved 2024-09-09 from code acts in education.
  34. AI + Education = Simplified. Lance Eaton. Retrieved 2024-09-09 from aiedusimplified.substack.com.
  35. artificial intelligence – improving learning. Retrieved 2024-09-09 from opencontent.org.
  36. imperfect offerings. Helen Beetham. Retrieved 2024-09-09 from helenbeetham.substack.com.
  37. AI Text Generators: Sources to Stimulate Discussion Among Teachers.
  38. Prof Vukosi Marivate - Staff Profile. University of Pretoria. Retrieved 2025-02-03.