Artificial intelligence in education

Artificial intelligence in education (AIEd) is the use of technologies such as machine learning and intelligent agents to create learning environments that can be customized to individual needs. AI has transformed traditional learning through generative AI chatbots, self-assessment tools, and structured online tutoring applications. Quick access to knowledge and guidance is one of the primary reasons for its adoption and for the potential solutions it could provide. While often considered the future of learning, it raises concerns and challenges over ethics, bad practice, and misinformation. This article reflects on how AIEd began, the changing landscape of education through AI adoption, and the ethical concerns and challenges the education sector currently faces.

Historical foundations

AIEd can be traced back to the 1960s, when educators and researchers began exploring the possibilities of computers as learning aids. Computer-based instruction systems used programmed instruction to give students interactive learning experiences; one example is PLATO, developed at the University of Illinois for its students. During the 1970s and 1980s, intelligent tutoring systems (ITS) were adopted in classroom teaching. An ITS adapted instruction and materials to a student's performance, providing a customized approach to learning. After the 2000s, advances in natural language processing broadened AI usage, as students searching for information could interact in natural language. More recently, generative AI applications such as ChatGPT and Perplexity AI have been widely adopted for learning worldwide.

Background

Artificial intelligence could be defined as "systems which display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals". These systems might be software-based or embedded in hardware. They can rely on machine learning or rule-based algorithms.

There is no single lens through which to understand AI in education (AIEd), but the genealogy of education and AI, with its promises and its problems, may help in seeing the bigger picture. The Dartmouth workshop is considered a founding event for AI, and at least two paradigms have emerged from it. First, the tutoring/transmission paradigm, in which AIEd systems are a conduit for personalizing learning. Second, the coordination paradigm, in which AIEd supports a cohort's knowledge construction and that cohort is socialized into new systems of thought. Alternatively there is the leadership model, where individuals take agency and make choices about their learning (with or without AI). AIEd could be viewed as the ultimate disruption, replacing academics and their scholarly prestige, or as an opportunity to consider together what makes humans different from machines.

Emerging perspectives

This complex social, cultural, and material assemblage should be seen in its geopolitical context. AI systems are likely to be shaped by different policy or economic imperatives, which will influence the construction, legitimation, and use of this assemblage in educational settings. Those who see AI as a conduit for knowledge transmission or construction are comfortable with the idea of machines reasoning or having hallucinations. Sceptics, by contrast, recognize the cultivated "closed-off imaginative spaces" that big tech has captured, and notice how big tech's discourse limits critical thought and discussion about these computational systems. Resistors often take a principled stance and refuse to accept the many metaphors of "artificial intelligence" used to disguise working practices that are exploitative and extractive.

The AI in education community

The AI in education community has grown rapidly in the global north, driven by venture capital, big tech, and open educationalists. While some believe AI will improve "access to expertise" and revolutionize learning through natural language processing, others focus on enhancing LLM reasoning.

In the global south, critics argue that AI's data processing and monitoring reinforce neoliberal approaches to education rather than addressing colonialism and inequality.

Applications

Applications of AIEd span a wide range of tools that teachers as well as students can use for learning outcomes. From primary classrooms to training facilities, AI has changed the way people learn through innovative and engaging delivery techniques.

AI-based tutoring systems

Intelligent tutors, or intelligent tutoring systems (ITS), such as the SCHOLAR system of the 1970s, supported reciprocal questioning between teacher and student. An ITS typically integrated four models: the student model (information about the student's abilities), the teacher model (strategies and guidance based on analysis of the student's performance), the domain model (the knowledge to be taught), and the diagnosis model (evaluation of the student against the domain model). Although ITS improved proficiency in some studies, others reported negative results and claimed they were less effective than human tutoring.
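The four-model decomposition described above can be illustrated with a minimal sketch. All names, topics, and mastery scores here are invented for illustration; this is not how SCHOLAR or any real ITS was implemented.

```python
# Illustrative sketch of the classic four-model ITS decomposition.
# Domain model: the knowledge to be taught (topics and their skills).
domain_model = {"fractions": ["add", "simplify"], "decimals": ["round"]}

# Student model: estimated mastery (0.0-1.0) of each skill.
student_model = {"add": 0.9, "simplify": 0.4, "round": 0.2}

def diagnose(student, domain, threshold=0.6):
    """Diagnosis model: compare student mastery against the domain."""
    gaps = []
    for topic, skills in domain.items():
        for skill in skills:
            if student.get(skill, 0.0) < threshold:
                gaps.append((topic, skill))
    return gaps

def teach(gaps):
    """Teacher model: turn diagnosed gaps into guidance."""
    return [f"Review {topic}: practice '{skill}'" for topic, skill in gaps]

for step in teach(diagnose(student_model, domain_model)):
    print(step)
```

The point of the sketch is the separation of concerns: the diagnosis model only compares the student model against the domain model, and the teacher model only decides what to do about the result.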

Custom learning platforms

Personalized AI platforms are tailored to individuals based on their strengths and weaknesses. They use algorithms to model students' patterns and habits and, on that basis, make recommendations to improve performance. Platforms such as LinkedIn and Duolingo are among the popular companies currently providing such services. However, there is a fair share of criticism: system-based learning platforms may isolate learners, student-teacher interaction may fade, and bias in the training data might lead to misinformation.

Automated grading system

Automated assessment of student work saves educators time and provides immediate feedback. Such systems combine different rubrics to grade performance. They need human oversight, as scoring may be biased.
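A weighted-rubric combination like the one described can be sketched as follows. The criteria, weights, and scales are hypothetical, chosen only to show how per-criterion scores fold into one grade:

```python
# Illustrative weighted rubric: each criterion has a weight and a max score.
rubric = {
    "thesis":   {"weight": 0.3, "max": 5},
    "evidence": {"weight": 0.5, "max": 5},
    "grammar":  {"weight": 0.2, "max": 5},
}

def grade(scores, rubric):
    """Combine per-criterion scores into a single grade out of 100."""
    total = 0.0
    for criterion, spec in rubric.items():
        # Normalize each score to [0, 1], then apply its weight.
        total += spec["weight"] * scores[criterion] / spec["max"]
    return round(100 * total, 1)

print(grade({"thesis": 4, "evidence": 3, "grammar": 5}, rubric))
```

The need for oversight is visible even in this toy: the choice of weights silently encodes what the grader values, which is exactly where scoring bias can enter.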

Generative AI

AI tools such as ChatGPT and Grok fall under the category of generative AI: they produce results through conversational interaction and are effective at using search algorithms to give precise answers to the user. However, there are risks of over-reliance and of violating academic integrity.

Ethical concerns

With the advancement and adoption of AI come ethical challenges, and proactive measures need to be taken to ensure equity and fairness for educators and institutions.

Accessibility

Equal access to AI is one area of concern, as many low-income and rural communities may be deprived of these platforms, which might widen the gap in access to education. Global efforts should be made to improve accessibility and to train educators in underprivileged areas.

Bias and fairness

AI agents might be trained on biased data shaped by company-driven agendas, which can lead to misinformation being passed on as knowledge. Policies and checks are needed to curb such biased practices.

Data privacy

Data privacy is an ethical concern, as most results are based on trained data that can be misused for various purposes. Compliance laws should ensure transparency and keep data privacy intact.

Perspectives

Educator Perspectives

Educators and school administrators have found AI to improve the efficiency of their work by a large margin, while some of the workforce is concerned about over-reliance. Professional development is key to integrating AI effectively and to ensuring current jobs are not replaced.

Student Perspectives

Students are receptive to features such as personalized feedback and self-paced learning, but reliability, privacy, and fairness remain concerns.

Algorithms effects on education

AI companies that focus on education are currently preoccupied with generative artificial intelligence (GAI), although data science and data analytics are another popular educational theme. At present, there is little scientific consensus on what AI is or how to classify and sub-categorize it. This has not hampered the growth of AI in education systems, which gather data and then optimise models.

AI offers scholars and students automatic assessment and feedback, predictions, instant machine translation, on-demand proofreading and copy editing, and intelligent tutoring or virtual assistants. The "generative-AI supply chain" brings conversational coherence to the classroom and automates the production of content. Through categorisation, summaries, and dialogue, AI "intelligence" or "authority" is reinforced by anthropomorphism and the Eliza effect.

Framing education

Educational technology can be a powerful and effective assistant in a suitable setting, and computer companies are constantly updating their products. Some educationalists have suggested that AI might automate procedural knowledge and expertise, or even match or surpass human capacities on cognitive tasks. They advocate the integration of AI across the curriculum and the development of AI literacy, with higher-education institutions seeing an opportunity to chart a path for themselves and their students by creating guidelines for incorporating AI into the curriculum. Others are more skeptical, as AI faces an ethical challenge in which "fabricated responses" or "inaccurate information", politely referred to as "hallucinations", are generated and presented as fact. Some remain curious about society's tendency to put its faith in engineering achievements, and about the systems of power and privilege that lead towards deterministic thinking. Still others see copyright infringement or the introduction of harm, division, and other social impacts, and advocate resistance to AI. Evidence is mounting that AI-written assessments are undetectable, which poses serious questions about the academic integrity of university assessments.

Tokens, text and hallucinations

Large language models (LLMs) take text as input data and then generate output text. Coherent sentences are parroted from billions of words of text and code that have been web-scraped by AI companies or researchers; LLMs are often dependent on a huge text corpus that is extracted, sometimes without permission. LLMs are feats of engineering that see text as tokens. The relationships between the tokens allow an LLM to predict the next word, and then the next, generating a meaningful sentence that has an appearance of thought and interactivity. This massive dataset creates a statistical reasoning machine that performs pattern recognition: the LLM examines the relationships between tokens, generates probable outputs in response to a prompt, and completes a defined task such as translating, editing, or writing. The output presented is a smoothed collection of words that is normalized and predictable. Translation, summarization, information retrieval, and conversational interaction are some of the complex language tasks such machines are expected to handle.
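The next-word prediction described above can be illustrated, in drastically simplified form, by a bigram model built from a toy corpus. Real LLMs use neural networks over subword tokens rather than raw counts, and train on billions of words, but the core idea of "predict the next token from relationships between tokens" is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on billions of web-scraped tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: a bigram table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token, or None if unseen."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

# Greedy generation: repeatedly append the most likely next token.
text = ["the"]
for _ in range(4):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

Even this toy shows why output is "normalized and predictable": the generator can only ever emit the statistically most probable continuation of what it has already seen.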

However, the text corpora that LLMs draw on can be problematic, as outputs will reflect the stereotypes or biases of the people or cultures whose content has been digitized. Confident but incorrect outputs are termed "hallucinations". These plausible errors are not malfunctions but a consequence of the engineering decisions that inform the large language model. "Guardrails" are offered as validators of LLM output, to prevent these errors and safeguard accuracy. The "hallucination" metaphor contributes to the misconception that AI is conscious; "AI mirages" may be a better alternative. There are no fixes for AI mirages, the "factually incorrect or nonsensical information that seems plausible".

Socio-technical imaginaries

The benefits of multilingualism, grammatically correct sentences, or statistically probable texts written about any topic or domain are clear to those who can afford software as a service (SaaS). In edtech there is a recurrent theme that "emerging technologies" will transform education, whether radio, TV, personal computers, the internet, interactive whiteboards, social media, mobile phones, or tablets. New technologies generate a socio-technical imaginary (STI) that offers society a shared narrative and a collective vision for the future. Improvements in natural language processing and computational linguistics have reinforced the assumptions that underlie this "emerging technology" STI. AI, however, is not an emerging technology but an "arrival technology": it appears to understand instructions and can generate human-like responses, behaving as a companion for many in a lonely and alienated world, while also creating a "jagged technology frontier" where AI is both very good and terribly bad at very similar tasks.

Public goods vs venture capital

At first glance, artificial intelligence in education offers pertinent technical solutions to future education needs. AI champions envision a future where machine learning and artificial intelligence are applied in writing, personalization, feedback, or course development. The growing popularity of AI is especially apparent to the many who have invested in higher education over the past decade. Critical skeptics, on the other hand, are wary of rhetoric that presents technology as a solution; they point out that in public services like education, human and algorithmic decision systems should be approached with caution. Postdigital scholars and sociologists are more cautious still about techno-solutions, and have warned about the dangers of building public systems around alchemy, stochastic parrots, or cognitive capitalism. They argue that multiple costs accompany LLMs, including dangerous biases, the potential for deception, and environmental costs. The AI-curious are aware of how cognitive activity has become commodified: they see how education has been transformed into a "knowledge business" where items are traded, bought, or sold. African hyperscalers, venture capital, and vice-chancellors are punting the Fourth Industrial Revolution, with the prospect of billions earmarked for South African data centres such as Teraco Data Environments, Vantage Data Centre, Africa Data Centres, and NTT/Dimension Data, while carefully avoiding accusations of monopoly practices.

AI resilient graduates

AI has co-existed comfortably between academia and industry for years, but the terrain is shifting: AI research in the global north currently commands the computing power, large datasets, and highly skilled researchers, and power is shifting away from students and academics toward corporations and venture capitalists. Graduates from universities in dominant, highly digitized cultures need to become AI-resilient. Graduates from the majority world also need to value their own processes of knowledge construction, resist the lure of normalisation, see AI for what it is, another form of enclosure, and start blogging.[opinion] Graduates from both the global north and the majority world need to be able to critique AI output, become familiar with the processes of technical change, and let their own studies and intellectual life guide their working futures.

Prominent commentators

With the use of AI tools becoming more commonplace in schools, universities and other educational settings, discussion is growing over the benefits and risks (as well as the possible longer-term consequences) of reorganising education around AI. A range of stances are emerging—ranging from enthusiastic proponents of the widespread adoption of AI in education through to more critical commentators.

Trust in AI educational technology

At present, teachers remain skeptical about AI for two main reasons: a lack of knowledge and understanding of AI, and misconceptions about it. AI can only score what is written, whereas teachers can sometimes understand what students intend to express beyond the text. As a result, some teachers lack trust in, and hold negative attitudes towards, AI edtech.

Challenges and criticism

The main challenges concern over-reliance on the technology, which could diminish creativity, critical thinking, and problem-solving abilities, especially if students skip traditional methods. Algorithmic errors and hallucinations are among the common flaws of today's AI agents, which can make them unreliable and untrustworthy. The increasing use of artificial intelligence tools by students for academic tasks has raised concerns about the potential adverse effects of widespread reliance on these tools on learning and the development of critical thinking skills. Reliance on generative artificial intelligence, for example, is linked with reduced academic self-esteem and performance and heightened learned helplessness, raising concerns about its unintended effects. One study also found that use of generative AI for academic tasks was lower among students scoring high on the conscientiousness trait, suggesting that self-disciplined and goal-oriented individuals were less inclined to rely on AI tools in their academic work. These findings underscore concerns raised in prior studies regarding academic integrity in the context of AI use in academic settings.

Uses material from the Wikipedia article Artificial intelligence in education, released under the CC BY-SA 4.0 license.