A Global Perspective on AI in Education

Matthew Hutson
Jan 10, 2024

In November, I traveled to Doha for the eleventh WISE (World Innovation Summit for Education) Summit, organized by Qatar Foundation (which paid for my travel). This year’s event focused on artificial intelligence in education. I moderated the closing plenary panel, “AI for the Common Good.” The panelists were:

HE Buthaina bint Ali al-Jabr al-Nuaimi, Qatar's Minister of Education and Higher Education; HE Kolinda Grabar-Kitarović, former President of Croatia; Jörg Dräger; Lihui Zhang of Caixin Media; and Mona Diab.

We discussed chatbot tutors, computational literacy, misinformation, and the future of work. The following transcript is edited for brevity and clarity:

Matthew Hutson: Your Excellency, Qatar has a national AI strategy, and one of the pillars is a focus on education and training. What are some of Qatar’s initiatives to combine artificial intelligence with education?

HE Buthaina bint Ali al-Jabr al-Nuaimi: We’re working on two levels. The first level is learning about AI. Qatar is one of the first adopters of a national AI curriculum in the K-12 sector. On the other level, which is utilizing AI in education, we have implemented an e-learning strategy. In addition, we’re encouraging collaboration between higher-education institutes and industry.

MH: Your Excellency, has your work with international organizations led to any insights on how countries might collaborate on either the development or the regulation of AI?

HE Kolinda Grabar-Kitarović: Yes, it certainly has. Bringing together government officials, policymakers, experts, academia, and representatives from industry is key. In my opinion, the biggest problem is the lack of regulation of AI at the global level. There's potential for misuse, in criminal and other activities, but there are benefits as well. I'll mention the IOC in particular. We have a working group on how we can use AI to enhance the performance of athletes and coaches. We can use artificial intelligence even for judging and refereeing. Artificial intelligence can also be used to provide security.

MH: Jörg, you’ve talked about personalized education using AI, and how it’s needed not only for white-collar jobs. Who else can benefit?

Jörg Dräger: Everybody talks about personalization: measuring competencies, picking the right learning materials and methods, then collecting assessment data and feeding it back into the learning cycle. We tend to focus on the ten thousand doctors, and we seem to forget the millions of helpers who could build on the informal competencies they have and fill important gaps in the labor market.

MH: Earlier we heard Aidan White present the Paris Charter for AI in Journalism. He said one of the responsibilities of journalists was to prevent the spread of misinformation. Are there any initiatives in China to address this challenge?

Lihui Zhang: In AI development, China has some advantages and disadvantages. We have limited access to US AI, but we have a huge number of engineers and a very big market with lots of data and algorithms. Generative AI is trained on collective data, so it cannot avoid misinformation. Professional media can play a very important role in alignment with the facts, through fact-checking.

MH: Mona, given your work with language models, are there ways that AI can help Arabic speakers access information from other languages?

Mona Diab: The potential is huge. Given translation technology, we can actually translate a lot of material into Arabic.

MH: Your Excellency, you were an English and literature teacher. If you were a teacher again, would your methods change?

HE Buthaina bint Ali al-Jabr al-Nuaimi: Definitely. AI has provided us with the opportunity to reduce administrative work. I think the pedagogy definitely will change. It used to be the teacher was the source of all knowledge. But now we need to teach students how to utilize the knowledge that exists.

MH: Your Excellency, how can European countries compete with the US and China on AI?

HE Kolinda Grabar-Kitarović: When it comes to regulation and thinking about the future of AI, the EU is one of the leaders in the world. The EU is trying to balance the commercial interests of companies with a human-centered approach. The profit from globalization was mostly privatized, but the costs were socialized. And AI could increase the gap between societies and within societies. Those are some of the concerns that the EU is thinking about.

MH: Jörg, should AI change not only how we teach, but also what we teach?

Jörg Dräger: The education system has to teach computational literacy. It has to teach how to work with the machines so that we know where to trust and where not to trust, where dangers are, where opportunities are. And it’s clear the repetitive tasks will be diminished in the workforce, and creative and socially interactive tasks will increase. So I think the core question for the education system is, Can we teach today how we will work and live tomorrow? If you look at the workforce, it’s team-based, it’s collaborative. But we teach individually, we test individually, we do the routine tasks that we can easily test instead of doing the things we need in the future.

HE Kolinda Grabar-Kitarović: Right now, we demand that all students be good at everything. I think it's a waste of time for everyone. We need programs to help both talented and disadvantaged students develop their strengths and interests.

Jörg Dräger: Just one quick thought about the how and the what. One of the core competencies we need in the future is resilience. You don't teach a class in resilience. You build it into how you learn. And so the how and the what merge. Many higher-order competencies are hidden in how we learn: resilience, conflict management, intercultural communication, and so on. I think technology allows students to acquire higher-order skills without even really noticing that they're being taught.

MH: Your Excellency, in our earlier conversation you said one of the most important aspects of education was developing the student as a whole person, including their emotional intelligence. Can AI help with that?

HE Buthaina bint Ali al-Jabr al-Nuaimi: The connection between the teacher and the student is really critical for engagement and motivation. Are there platforms to teach emotional intelligence? Yes, but I think the teacher’s role is critical.

MH: Caixin Media has used AI to generate news content. What are some of the opportunities and challenges there?

Lihui Zhang: For now, we use AI only to generate news from the exchange markets. But there are a lot of large language models, and we are testing them extensively.

MH: Mona, you have taught responsible thinking to your students. Tell us about that.

Mona Diab: It’s not sufficient to think about algorithms and data and models. We need to think about societal impact. Whether it’s fair, whether it’s accountable. Another dimension is research conduct. And the last aspect is diversity and inclusion. I also want to highlight that responsible thinking is a muscle that we need to keep training and practicing.

MH: As we try to build AI responsibly, different areas of the world and individuals within one country or one region have different values. How can we build AI that is respectful of all of these different perspectives?

Mona Diab: One person’s bias is another person’s norm. Building technologies that account for different perspectives is a big challenge. It’s actually the right time for people from different disciplines, especially this audience, to join forces. Because believe it or not, you actually have this voice that could be changing this landscape.

MH: I want to give everyone one last word: Your thoughts on what makes you either optimistic or pessimistic or both about the future of artificial intelligence.

HE Buthaina bint Ali al-Jabr al-Nuaimi: Personalized education is a game changer, especially for children with disabilities, to help every child flourish and unlock their potential.

HE Kolinda Grabar-Kitarović: What I am afraid of is social distancing. With social media we have seen an increase in mental-health problems, suicide rates, and so on. And just imagine when you start communicating with an AI program, a bot. How do we continue to be human?

Jörg Dräger: I’m optimistic, but only if we take on responsibility as a society. If we just let industry take over, I’m quite pessimistic. Taking responsibility means being algorithmically literate, enforcing diversity of products, enforcing transparency, and having proper regulation.

Lihui Zhang: AI is just a tool. The people behind AI decide what is good, what is wrong.

Mona Diab: I think it’s incumbent upon us as technologists to demystify this technology. There’s a lot of enablement, empowerment, and enhancement associated with this technology. Let’s try to harness that.

Originally published at https://www.psychologytoday.com.
