A.I.: a practical guide for IB teachers

Monday 7 August 2023

IB’s recommendations (as of 7th Aug 2023)

It’s important to read all of this page from the IB and to summarise for yourself the key message(s).
Some key points from the opening paragraph are: “The IB will not ban the use of AI software … it is an ineffective way to deal with innovation” and “we expect all our schools to discuss the various types of academic misconduct with their students”.

What follows are my personal reflections (not the IB’s) on the impact of A.I. for us, as IB teachers.

Teaching Effective use of A.I.

The ability of A.I. (as with all new technologies) to further enhance human abilities and capacities is exciting, and to be embraced (in my view). Using it effectively is likely to be a very valuable skill now, and in the future. I absolutely feel that part of my responsibility is to help students develop these skills, for mathematics in particular, in ways that are genuinely educationally beneficial to their learning. I would define this roughly as:

“aids / enhances the cognitive development of the learner, or the outcomes achieved for the same level of cognitive development (their productivity / creativity)”

I found this video, which I shared with my students, very useful (and time-efficient). A summary table of my key takeaways from the video is below. Thank you, Goda! I’d be interested to read other people’s top tips and links to concise summaries (subscribers can “comment” below).

This article, “Homework Apocalypse”, asks three interesting questions:

  • Is asking AI to provide a draft of an outline cheating?
  • Requesting help with a sentence that someone is stuck on?
  • Is asking for a list of references or an explainer about a topic cheating?

. . . and/or are these all “educationally beneficial”?

The End of Homework?

Ethan Mollick, in the “Homework Apocalypse” article above, notes: “One study of eleven years of college courses found that when students did their homework in 2008, it improved test grades for 86% of them, but only helped 45% of students in 2017. Why? Because over half of students were looking up homework answers on the Internet by 2017, so they never got the benefits of homework.”

I particularly liked the four-point reflection below (see his article for the details) on “homework for this fall” (emphasis in bold is my modification). The author concludes that the final point, 4, is the best bet long-term:

  1. Back to in-class essays. This is useful for tests, for classes where learning to write is important, and as a stop-gap measure. On the downside, it does not give students the advantages of AI for learning.
  2. Keep outside-of-class essays, and forbid AI use. This will be a challenge, as detection is a problem, as is defining what “AI use” is.
  3. Keep outside-of-class essays, and encourage AI use. I made AI required in all my classes, and it could be used in any assignment, as long as the use and prompts were disclosed.
  4. Embrace flipped classrooms (instruction is done by watching videos/AI tutors/readings outside of class; class time is for activities and active learning). This requires structural change. Still, in the long term this is likely the best approach.

For mathematics homework, I’m not really worried (I’d be interested in readers’ thoughts; see the “comment” section below). I feel it’s quick, and clear, to test/ascertain in lesson time who does and doesn’t understand the mathematics. It’s not like English or History, where perhaps a student wrote a good essay because they put in lots of work, research etc., but where it will take 30 minutes to an hour to test whether they can do the same under in-class (exam) conditions. [I don’t know if this is a valid point; History/English/Economics teachers?]
For mathematics, in 10 minutes a teacher can write an easy, a medium and a hard question on the board and, via mini-whiteboards, individual work on touch-screen laptops (all viewable at once on my computer via VPN), OneNote, or walking round the class looking at paper work, assess whose understanding is, and isn’t, “exam ready”, and from their working gain an insight into their misunderstandings and misconceptions.

A.I. and I.A. ('Exploration') and External Assessment (P1, 2, 3) skills

What follows is not the IB’s view, only some personal reflections.

For the mathematics internal assessment, the Exploration, one way in which A.I. seems it could be very useful is in “generating ideas for topics/aims, and inspiration when stuck”.

In the feedback I ask our IB students to complete at the end of the course, the exploration is often cited as the part of the course they enjoyed most (or one of the most) / found the most satisfying. Lots of hard work and frustration, but then a deep sense of satisfaction from having first-hand experience of “working like a mathematician / maths for science / maths business analyst” etc. and bringing together their learning to solve a problem that is personally important / of keen interest to them. Educationally, some form of “IA experience” that captures this seems important.

However, I can see marks rising, thanks to A.I.’s ability to “enhance” human thinking and creativity. This perhaps carries the risk of a disconnect between I.A. results and exam results, i.e. a trend of I.A. > external exam.
. . . but if A.I. can realise its “personal tutor” promise then maybe, as is the aim of technology assisting human work, both will rise? (That would be fantastic to see.)

A.I. as one-to-one tutor

Is its potential to improve students’ ability to answer ‘P1 and P2 exam questions’ the same as its effectiveness in aiding the production of a good ‘Exploration’?

Can it accurately ‘read/interpret’ a student’s answer?

Revision Village is a very good resource and one of a variety we recommend to our students for self-study.
In the few experiments I’ve done using Revision Village’s A.I., I found it didn’t “tailor” its response to what I entered, i.e. it doesn’t seem to “interpret students’ answers” and identify their specific “misconception/misunderstanding”, which a teacher can do on looking at a student’s working.
Example question below.

In this example Newton A.I. didn’t:

  • Correctly recognise the method I used, which was SOH CAH TOA, not the sine rule.
  • Recognise alternative methods: the sine and cosine rules can both be used (particularly given that other angles had to be worked out in parts (a) and (b) above), but Newton’s feedback not only missed the alternative method, it incorrectly stated that the sine rule could not be used.

This was a general pattern across all my feedback (11 questions) from Revision Village’s Newton A.I.: it gave me “common reasons for wrong answers” for each part of a question I got wrong (which is certainly useful, and additional functionality that wasn’t previously available), but it didn’t demonstrate understanding, or reading, of the answer I’d entered.
It’s great that Revision Village and other self-study sites, e.g. Khan Academy, are integrating A.I. aimed at providing more personalised feedback for students on their misconceptions. With time, it should only improve (costs allowing: see the “Pricing” section below).

Pricing

Thanks to Mathematics: Analysis and Approaches author Tim Garry for sharing this New York Times article: “In Classrooms, Teachers Put A.I. Tutoring Bots to the Test” (NYTimes subscription required). If I’ve read the article correctly, pricing for a Khan Academy subscription with A.I. (Khanmigo) looks quite high, at $70/student ($10 + $60):
"Districts like Newark that use Khan Academy’s online lessons, analytics and other school services — which do not include Khanmigo — pay an annual fee of $10 per student.
Participating districts that want to pilot test Khanmigo for the upcoming school year will pay an additional fee of $60 per student, the nonprofit said, noting that computing costs for the A.I. models were “significant”."
It would be great to hear, in the “comments” below, from anyone involved in the Khanmigo trials/pilots.

ChatGPT-4 costs $24/month (Aug 2023).
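For a rough comparison (my own arithmetic using only the figures above, and assuming the ChatGPT price is per user, per month), the short sketch below puts both options on an annual, per-person basis:

```python
# Rough annual cost comparison using only the figures quoted above.
# Assumption (mine, not from the article): the ChatGPT price is per user, per month.
khanmigo_per_student = 10 + 60   # $10 Khan Academy fee + $60 Khanmigo pilot fee, per year
chatgpt_per_user = 24 * 12       # $24/month subscription over a full year

print(f"Khanmigo: ${khanmigo_per_student} per student per year")   # $70
print(f"ChatGPT:  ${chatgpt_per_user} per user per year")          # $288
```

The two aren’t directly comparable (Khanmigo’s pricing is for districts and sits on top of Khan Academy’s other services), but it gives a rough sense of scale.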

Detecting the use of A.I.

Give students an exam/practice question requiring the same technique/knowledge as used in their ‘Exploration’ (IA). Mark and discuss it with them.
In mathematics (as, I imagine, in more technical subjects like Computer Science, and perhaps the sciences), I think this is probably easier to do. If we suspect a student doesn’t fully understand some of the mathematics used in their exploration, giving them a similar question to solve in front of you, in class (or at break, after school, etc.), is fairly easy to do and instantly revealing. For example, if we’re not sure they fully understood the eigenvalues, eigenvectors and phase portraits used in their IA, give them a similar question from this site, a textbook, an IB exam, etc., and ask them to solve it. For final clarification, ask them to talk through how they approached the problem and their solution.

This article warns that "there is no way to detect A.I." (currently?) for the reasons below (does anyone have a different view or counter-arguments to this?):

"THERE IS NO WAY TO DETECT THE OUTPUT OF GPT-4. A couple rounds of prompting remove the ability of any detection system to identify AI writing.

And, even worse, detectors have high false positive rates, accusing people (and especially non-native English speakers) of using AI when they are not.

You cannot ask an AI to detect AI writing either - it will just make up an answer. Unless you are doing in-class assignments, there is no accurate way of detecting whether work is human-created."

I tried the last one: "ask an AI to detect AI writing" and it claimed it had authored a passage from our history teacher's textbook [written a few years before A.I. was even available to the public :) ].

Different governments’ regulations on A.I. (EU example below)

With governments across the world likely to be issuing recommendations and making laws on A.I., I guess IB schools, depending on country/zone, will need to keep track of how these evolve.
Zoe Badcock (IB environmental systems) shared this link recently on proposed EU laws to regulate A.I. (June 2023): https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
Some overview points from Reuters.

Energy consumption: ChatGPT

The energy needs of ChatGPT searches are also interesting, in terms of “ethical” use of A.I., i.e. an awareness that there is a cost associated with each query (even if the user is not necessarily the one paying it).

This article (sign up for a free account to access it) seemed to provide a reasonable estimate (that will, no doubt, need updating with time): “ChatGPT’s energy use per query. How much electricity does ChatGPT use…” by Kasper Groes Albin Ludvigsen, Aug 2023, Towards Data Science.

SUMMARY

  • ChatGPT: between 0.0017 and 0.0026 kWh of electricity per question/query.
  • Two different methods were used to obtain these estimates, and both gave similar results.
  • This is lower than the estimates for the LLMs BLOOM and GPT-J: 0.0039 and 0.196 kWh per query respectively.
  • Running a standard 40 W light bulb for 1 hour uses the same electricity as roughly 15 to 24 ChatGPT queries (a quick check of this arithmetic follows below).
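The light-bulb comparison is easy to check from the per-query figures above; the minimal sketch below is my own arithmetic, not taken from the article:

```python
# Check the "40 W bulb for 1 hour = 15 to 24 ChatGPT queries" comparison
# using only the per-query estimates quoted above.
bulb_energy_kwh = 0.040 * 1             # a 40 W bulb is 0.040 kW; running for 1 hour uses 0.04 kWh

for kwh_per_query in (0.0017, 0.0026):  # ChatGPT estimates from the article
    queries = bulb_energy_kwh / kwh_per_query
    print(f"{kwh_per_query} kWh/query -> about {queries:.0f} queries")
# Prints about 24 and about 15 queries, i.e. the "15 to 24" range quoted above.
```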

Article 2: “ChatGPT’s Electricity Consumption” by Kasper Groes Albin Ludvigsen, Towards Data Science.

ChatGPT may have consumed as much electricity as 175,000 people in January 2023 (590 million visits)