Problems with surveys

Thursday 7 June 2018

So one of the things I have always enjoyed about teaching this subject is that it is a continuous journey of discovery. There are always new things to explore and learn about mathematics and ways to communicate it. I don't suppose I have ever been the first to cross paths with one of those ideas, but that didn't make my discovery of them any less enjoyable. I often use the analogy of travel: explorers discovered territories hundreds of years ago, but that doesn't mean I don't want to go there and discover them for myself. Anyway, all of this is my usual disclaimer leading up to telling you about my issue of the moment that is ready for an outing. The point being that I know I am only the latest in line to write about it, but that doesn't mean it isn't worth sharing what I am thinking. After all, I am bothered by an ongoing issue, and perhaps the more people share their issues, the more likely general understanding is to improve. Maybe you won't share my views and maybe you can educate me further - please don't hesitate to do so in the comments. In any case, I think the following is ripe for discussion with maths studies and ToK students and should become even more so when the new syllabus comes in.

Example 1 - The G2 form

Like many of you, I hope, I have completed the G2 form, which is a review of the exam papers that students sat this year in May. I think it is excellent to have the opportunity to do that, and I understand that there is still some disappointment that more teachers don't do it. In a relatively small organisation, our feedback will get read, and this is terrific.

I would word the survey slightly differently myself, but then I imagine that most people would, so I am not suggesting that I would be right to do so. What prompted me to write this post was the question about whether or not the exam papers were accessible/appropriate to people with special educational needs or disabilities, and to people of different genders, races and cultures. I actually tried to avoid answering this question but then realised that it was compulsory.

Again, I am not complaining about the inclusion of an important question for our times, which has come out of an important desire/need. My thought was simply that I don't feel qualified to answer the question properly. I think the related issues are a minefield that leaves people treading very carefully, mostly because they are afraid of unwittingly writing questions that don't meet those accessibility requirements. It is the unwitting bit to which I am referring. Whilst I read and think about as much as I can, I can only answer that question to the best of my knowledge, and I recognise that my knowledge on the topic is limited. Note - this point is less about the G2 survey and more about the general principle of asking compulsory questions of people who may not have the expertise they need to answer them fairly.

In this case, I answered as honestly as I could that I did not feel there were any issues that needed to be pointed out. With my untrained eye, I might have missed something important. Imagine, though, that a majority of people are like me (not saying they are, just imagining); then the majority view will be that there weren't issues, and the more informed minority will be overruled. Again, I am not expressing any serious concern about this particular issue, because the survey gave the opportunity to elaborate with words if anyone felt there were issues, and the organisation is small enough to pay attention to those remarks. That said, if the situation played out as imagined, then they would still be able to correctly state that a majority felt there were no issues.
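To make that worry concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration: suppose 100 of 1000 respondents are trained to spot accessibility issues and flag one 80% of the time, while the untrained 900 (like me) only notice one 5% of the time.

```python
import random

random.seed(1)

# Invented numbers: 100 'informed' respondents flag an issue 80% of
# the time; 900 'untrained' respondents notice one only 5% of the time.
def flags_issue(informed):
    p_flag = 0.80 if informed else 0.05
    return random.random() < p_flag

responses = [flags_issue(informed=(i < 100)) for i in range(1000)]
rate = sum(responses) / len(responses)
print(f"{rate:.0%} of respondents reported an issue")
# Roughly 12% - so the headline reads 'a large majority saw no issues',
# even though most of the informed minority flagged something.
```

Under these made-up assumptions, the majority verdict of 'no issues' is technically true and still manages to bury the informed view.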


Example 2 - Percentage perception

So, here I am drawn to think about this Percentage Perception activity. The activity throws up lots of issues and you should follow the link to learn more if you have not already used it. The whole basis of that survey was asking people 'What percentage of the population in your country do you think are Muslim?' The exercise goes on to expose and consider misconceptions. The issue for me here, though, is that if I am asked that question, unless I can draw on actual knowledge to answer it, I really ought to answer with something like 'I couldn't possibly say'. I mean, why do I think I might actually be able to answer that question? I wonder how things would change if people were asked to give a confidence score with their response. I wonder what would happen if people were given the option to answer 'I genuinely wouldn't know'. What if people were asked to give a maximum and/or a minimum? I think all of these approaches would have helped the survey designer get more interesting information.
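As a rough sketch of what a couple of those alternatives might look like in practice, suppose each answer carried an estimate and a confidence score between 0 and 1, with an explicit opt-out. The data and the weighting scheme below are entirely invented for illustration:

```python
# Invented data: each answer is (estimate_percent, confidence_0_to_1),
# and None stands for the 'I genuinely wouldn't know' option.
responses = [(25, 0.2), (10, 0.9), (30, 0.1), None, (8, 0.8), None, (40, 0.1)]

answered = [r for r in responses if r is not None]
raw_mean = sum(est for est, _ in answered) / len(answered)

# One possible use of the confidence scores: weight each estimate by
# how sure the respondent was.
total_conf = sum(conf for _, conf in answered)
weighted_mean = sum(est * conf for est, conf in answered) / total_conf

opt_out = (len(responses) - len(answered)) / len(responses)
print(f"raw mean of guesses:  {raw_mean:.1f}%")       # 22.6% - wild guesses count fully
print(f"confidence-weighted:  {weighted_mean:.1f}%")  # 13.0% - sure answers dominate
print(f"admitted not knowing: {opt_out:.0%}")         # 29% opted out entirely
```

In this made-up example, the headline average nearly doubles once the low-confidence guesses are counted at full weight, which is exactly the kind of extra information I would want the survey to surface.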

I wonder how many people answer survey questions from uninformed positions, and how significant it is when the results of those surveys are used to justify positions and decisions.


Example 3 - End of year surveys

So I am a teacher and a parent at my school. At the end of the year we always get a chance to give feedback from both perspectives. Students are given the chance too. Again - this is great and how it should be. That said, I have an increasing problem with questions that ask us to put an answer on a scale.

Strongly disagree - Disagree - Agree - Strongly agree

First up is the discussion about the missing middle option. I have had lots of useful debates about this and know that there is much to read on the topic, centring on the merits of pushing people off the fence and the downside of leaving the safety of the fence for the weak-minded. Well, I can see arguments here, but I feel that the great danger of pushing people off the fence is an inaccurate inflation of people's views, which is potentially misleading and polarising.
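A small sketch of that danger, again with invented numbers: suppose 40 out of 100 respondents are genuinely neutral, and the four-point scale forces each of them to pick a side, modelled here as a coin flip.

```python
from collections import Counter
import random

random.seed(2)

# Invented underlying opinions on a -2..2 scale: 40 of the 100
# respondents are genuinely neutral (view == 0).
true_views = [-2] * 5 + [-1] * 15 + [0] * 40 + [1] * 25 + [2] * 15

def forced_choice(view):
    # Four-point scale: the neutrals must pick a side, here at random.
    if view == 0:
        view = random.choice([-1, 1])
    return {-2: "Strongly disagree", -1: "Disagree",
            1: "Agree", 2: "Strongly agree"}[view]

print(Counter(forced_choice(v) for v in true_views))
# 40% of respondents had no real opinion, but the published chart now
# shows everyone firmly on one side of the fence or the other.
```

The fence-sitters don't disappear; they just get relabelled, and the results look more decided than the people behind them ever were.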

Next up is the notion of how respondents are expected to synthesise all the factors that go into answering the question into one of those four options. I wonder how much compromise is made there. I wonder how much useful truth comes out. This is related to the points above about how well a survey response actually reflects the truth.

Lastly - and on a slightly different track - I feel these kinds of surveys would be worth a lot more if people were simply allowed to write sentences in answer to questions. Of course that is untidy for statisticians, but survey makers have to go back to the main goals of the survey in the first place. In the context of these surveys, I would suggest that the goal is to find out what respondents were happy and unhappy with (with actual examples) and to give them the chance to suggest solutions/alternatives.

Another such example is the end of year student survey about the IB programme. When the results are shown as pie charts of the percentage of respondents in each category, I find myself wondering whether the exercise helps us converge on the truth, or whether the compounding compromises along the way mean that we diverge from information we can actually do something with.

In this sense, I would defer to the assertion that words are of more use than numbers, and to the notion that not everything that counts can be counted, and not everything that can be counted counts.


In summary

Perhaps unhelpfully, I have thrown out more questions than answers here, but I feel they are important questions. With data being the new 'oil' and playing such a leading role in the current studies course and the new applications course, these are questions that have to be asked and dealt with. As surveys are designed, we have to keep referring back to what the point of the survey is, and make sure that the way we ask questions allows respondents to help us converge on the truth, not diverge from it.

