
The Great Onlining – What we have learned about Remote Learning

While questions of technological access for teachers and students have been important, as Morrow (2007) has pointed out, the majority of students have formal access to school but lack epistemological access: access to adequate knowledge. When schools were closed and the country went into lock-down shortly afterwards, the twin tiers of our unequal education system became terrifyingly visible. State schools and universities by and large went on early holiday, and little remote education was possible given the huge deficit in access to suitable devices, data or even electricity. As the lock-down continues, there seems to be little wriggle room around onlining education rather than keeping the schools closed and making up time later in the year by cancelling further holidays. More privileged state and private schools, on the other hand, went online with varying degrees of success and were able to keep the school calendar intact. While there were challenges in getting devices and/or data to all teachers and students, much of the focus was on technical questions around which platforms to use to teach online rather than on access alone, which was a sine qua non. I am not convinced that all families had sufficient technological capital to cope with teaching and learning online. Parents and children often had to share devices and data. Internet connectivity was often slow and sites crashed, especially in the beginning, when the large tech companies had to rapidly roll out resources to service the massive increases in demand. Many students had access to personal devices, but quickly discovered that an iPad or smartphone, while fine for completing tasks in the hybrid environment at school, was inadequate for the range of tasks expected of them when learning went online.

I honestly have no solution to the problem of giving adequate technological access to all students and teachers. This requires massive infrastructure investment by the government. While service providers have zero-rated certain educational sites, much of the country lacks the tech resources necessary to support online education, and data costs are prohibitive. The teachers in the #ZAEdu Twitter community have undertaken a number of initiatives to try to get educational platforms zero-rated and to set up advice and assistance for teachers. Government and industry have been lobbied to take steps to enable greater access. This is vital work. But we also need to start to pivot towards thinking not just about how we get teachers and students online, but about how we teach online in such a way that we can do something about reversing the trajectory towards increasing inequality. In other words, we need to start moving beyond questions of formal access (to schooling, or internet infrastructure) to questions of epistemological access. If we do this right, we might be able to salvage something from the fire. It seems to me that those teachers who were able to teach remotely over this last three-week period, or are starting now, have a huge role to play in reflecting on effective pedagogies, so that questions around the effectiveness of remote teaching and learning do not get ignored.

At the beginning of the year when teachers drew up their year plans, the assumption would very much have been that teaching would be conducted face-to-face in the classroom. As these plans melt away, and teachers re-draw their plans, it becomes increasingly important to think very deeply about instructional design and pedagogy. If government and industry listen to teachers and start enabling massive online access in South Africa, we as teachers need to be sure that we have figured things out, so that the digital divide does not just shift from being about physical access to being about access to quality teaching online.

Those of you who follow my blog will know that I use Legitimation Code Theory (LCT) as a lens for my research, and for understanding my own pedagogic practice. The LCT Research Group at Wits University, led by Prof. Lee Rusznyak, this week held an online discussion around what LCT has to tell us about remote learning generally, and how we should be re-tailoring our curricula in the light of the Great Onlining. I would like to share some of this thinking, because I believe it is a useful intervention in the conversation right now. Having a theoretical perspective is important because it allows for a common language and tool-set for thinking about the problem at hand. LCT is particularly powerful in this respect because it looks at knowledge and knowing itself, and allows us to use a common approach to thinking about how the nature of knowledge and of knowing impacts on education at every level, and across all fields.

But before getting into theory, I would like to frame this in terms of what I see to be the problem. As an overall comment, it is obvious that remote teaching, classroom teaching and specially designed online courses are three different animals. Each of these modes of pedagogical delivery has different sets of affordances and constraints. Put simply, there are clearly things you can do online that you can’t do in a classroom, and vice versa! Understanding these affordances and constraints is something that is largely dependent upon the subject matter being taught, and the context of each situation. One cannot make sweeping claims; the devil really is in the details.

Here’s an example.

One of the hardest things for me as a teacher of computer coding to large, generalist middle school classes has been finding a way to help students debug their code. The initial teaching transferred quite nicely online via videos of live-coding on the interface. I used Screencast-O-Matic to record my screen, with an inset of my talking head in one corner. The ability of students to pause and rewind and see my screen up close may even have been more effective than the same thing in the classroom via an interactive whiteboard. But pause and rewind is not the same as being able to ask a question, or to hear questions from peers that you hadn’t thought of. So I also held check-in meetings on Teams in which I was available to answer questions and share my screen in response to queries. I recorded these so that students working asynchronously could also view this form of content delivery. But accessing student code to help them debug it was problematic. Most of my students use iPads and find the coding itself difficult on these devices. In class they use desktops. Sharing screens is awkward in Teams, and helping students debug mistakes via email or the chat stream is definitely not the same as being able to see a student’s screen and help them notice where the error lies.

On the other hand, my colleague, who teaches programming to specialist, smaller high school classes has found it relatively easy to tackle these issues. Her students all have laptops rather than iPads, and see her every day rather than once a week! She has found that screen-sharing is much easier with a handful of students and makes debugging code easier! The devil is in the detail! Context is everything. Sweeping generalisations are not really that helpful. Nevertheless, we have to start somewhere.

What MOOCs have to tell us about remote teaching!

Massive online learning initiatives have a relatively long history now, and what they show us quite clearly, I think, is that the standard format of short videos delivering content knowledge with quizzes checking understanding works fairly well. Lectures can be enlivened with graphics and visuals far more engaging than the normal chalkboard. Knowledge can certainly be presented comprehensibly, but possibly only if the students already have a good basic knowledge to build upon. The drop-out rate for non-postgraduate students on MOOCs is massive. Modular, stackable instructional design may work well if you have a knowledge base to stack upon, but may not work well for those seeking to build that base of knowledge. It seems to me that MOOCs are pretty good at extending knowledge, but not that effective at building knowledge. Those who believe that massive online content delivery to high school students can be built on expert teacher videos streamed across cell-phones are not thinking about the pedagogical work that needs to go with this. I have taken quite a few MOOCs over the years, of varying quality. To my mind what appears absolutely crucial is the digital presence of the teacher. The presence of the lecturer and/or Teaching Assistants is what really makes or breaks a MOOC. The use of Google Hangouts, social media chats or more formal check-in times allows teachers to help students navigate the content effectively. To build knowledge, students need to make connections and links between ideas, between abstractions and real-world examples, between simple building blocks and larger theoretical approaches. Students need help doing this, and unless the teacher invests a huge amount of pedagogical work in being present digitally, MOOCs tend to fall short. As a general rule, though, there appears to be no reason why online delivery of content knowledge cannot be done effectively, albeit with caveats around difficulties in linking ideas.

What Classrooms have to tell us about remote teaching!

I think we also need to understand what classrooms do well, and where they fall short. Classroom interactions allow teachers to monitor students much more effectively than online platforms do. A teacher can literally see where students are engaged, and when they goof off, can monitor their progress on a task first-hand and intervene with much greater flexibility. They can not only respond to questions, but can also often sense when a student wants to ask a question, or when something has not been understood. Teachers can gauge when to wait for a student response, when to step in and answer the question, and when to rephrase it. In other words, classrooms are pretty good at affording the reading of social cues, which is much more difficult, if not impossible, online.

However, delivery of content knowledge is often compromised face-to-face by any number of factors. It is difficult for a teacher to compete with the graphic capacity of digital media, or to repeat content endlessly. There comes a point where the lesson ends, and it cannot be rewound or paused. If a teacher makes mistakes in presenting material, the moment can be lost. Videos can be more carefully scripted, rehearsed, edited or re-shot. While teachers can compensate for these mis-firings and interruptions, students often leave a class uncertain about what they have heard, with no chance of a replay. There is a case to be made that classrooms are better at the social bits than the knowledge bits.

The following sections look at three key concepts within LCT. If you are feeling brave, or are familiar with LCT from previous blogs, then I would suggest reading what follows closely. If you are feeling less up to the task of difficult or new theory, I would suggest skipping the explanations and reading only the bits in italics.

What LCT has to tell us about remote learning!

I believe that the devil is always in the detail, and in the particular context, but I think it would be fair to say that the observations above about the forces at work online and in the classroom largely hold, or at least set out parameters that are useful in thinking about how to approach remote teaching. We need to recognize that the major strengths of online instruction revolve around effective delivery of content knowledge, but that social relations are severely constrained, while the major strengths of the classroom lie in affording social relations, but that content delivery may sometimes be constrained. Perhaps this is why research suggests that while classroom teaching beats online teaching, hybrid delivery equates with face-to-face in efficacy. In my normal practice I upload videos of all my lesson content which students can consult if they are absent, or if they need to: offering the best of both worlds?

LCT has a number of dimensions and each of them informs educational practice in useful ways, and I believe helps us to navigate the maze that is remote learning.

Specialization

A key concept within LCT, an approach developed by Karl Maton (2013) from the work of Pierre Bourdieu on knowers and Basil Bernstein on knowledge, is specialization: the idea that different fields have different codes which represent what makes for legitimate knowledge in that field. Maton argues that all knowledge involves both knowledge (epistemic relations) and knowers (social relations), but that different fields place the emphasis differently.

  • In Science, for example, the knowledge is foregrounded. Who you are is relatively unimportant, having the knowledge is what legitimizes you as an expert in the field. It is a Knowledge Code.
  • In the field of English Literature, however, the rules of the game are different. What you know is far less important than having the right gaze, being the right kind of knower, having the right feel, the right eye for it makes you a legitimate knower. It is a Knower Code.
  • Relativist Codes are where neither epistemic nor social relations are foregrounded: personal opinion is what counts, in everyday discourse, for example.
  • Elite Codes are where both the knowledge and the ways of knowing are crucial: Music or Architecture.

From what we said about the affordances and constraints of online and classroom spaces above it would seem that different fields will experience code matches and code clashes when moving into an online space. Knowledge Code fields might find online environments offer more matches because knowledge is what is foregrounded. Knower code fields may find more obstacles because social relations are constrained. The devil will always be in the detail, and I am not saying that resourceful teachers will not be able to navigate these difficulties successfully and inventively. But they seem to me to represent underlying forces which inform online practice.

Teachers might be well-advised, then, when planning for what might be an extended period of online teaching, to choose those parts of their syllabus which sit better with knowledge transmission than with knower building. An English teacher may plan to use Zoom meetings to read a text with their class, but then find that half the class has not been able to get online, and the whole experience may become a nightmare. Perhaps it might be better to choose the more knowledge-heavy parts of the syllabus, which can be tackled asynchronously if needs be.

Semantics

Another key concept within LCT is the dimension of semantics. The key idea here is that meaning can be analysed in terms of semantic gravity (how abstract and generalized, or how concrete and contextualized, an idea is) and semantic density (how complex or simple an idea is). Research in LCT seems to suggest that cumulative knowledge building is predicated on movements over time between the abstract and complex and the concrete and simple, called semantic waves. These waves could describe the course of a lesson, a semester course plan, a worksheet or a student essay. Teachers need to make sure that these connections are being made regularly and in both directions.

For example, when a teacher explains a concept they will unpack the idea by explaining it in everyday language, giving concrete examples, using metaphors so that students can understand it. This movement between abstract and complex and concrete and simple represents what is called a down escalator. Essentially the teacher is mediating difficult concepts and helping students understand the concept by reformulating it in language and ways that are easier to understand. But cumulative knowledge building depends upon ideas being connected and understood as part of a larger whole. And upon students being able to take their raw understandings and repackage them in more academic language and understandings, in other words making up escalators.

Successful curricula describe semantic waves connecting the theoretical with the practical, the abstract with the concrete, the complex with the simple. Common semantic profiles are shown here. Often understanding remains at a theoretical level (a high semantic flatline) or at a simple and practical level (a low semantic flatline). Successive down escalators represent knowledge that is understood but remains segmental, not connected into new understandings built by students over time.

Any curriculum design would clearly aim at building semantic waves over time: connecting and consolidating knowledge, grounding theory in practice, grounding abstract ideas in concrete examples to further understanding. But online coursework may not offer affordances for this kind of cumulative knowledge building. While short videos unpacking single ideas are certainly do-able, the kind of pedagogical work necessary to sustain extensive cumulative knowledge building is heavily constrained online.

As Rusznyak has pointed out in the discussions around this held online at Wits, it may well be necessary to re-plan curriculum design to maximise the affordances of that portion of the year spent on remote teaching, and do the connecting of the semantic waves later in the year when classes resume in person. Alternatively some subjects may find it best to describe high or low semantic flatlines, and build semantic range later in class.

Since some portions of the syllabus might lend themselves better to different semantic profiles, teachers need to think carefully about how best to sequence and pace their syllabi.

Autonomy

The last dimension to be unpacked by LCT scholars has been that of autonomy. I don’t want to go into too much detail here, because the discussion so far has been quite dense. But essentially autonomy looks at the extent to which practices come from inside or outside a field (strong or weak positional autonomy) and the purposes to which they are put (strong or weak relational autonomy). For example, in a Science class the students might be doing Maths (which falls outside the field of Science), but it is being turned to the purpose of doing Science. Or in a Maths class the teacher might be talking about cricket (outside Maths) but using it for the purpose of understanding a parabola (for the purpose of Maths).

Several codes are described:

  • the sovereign code – for example doing Maths for the sake of Maths
  • the exotic code – content clearly outside the syllabus for purposes that have nothing to do with the curriculum
  • the introjected code – in which non-curriculum content is turned to the purpose of doing the curriculum
  • the projected code – in which curriculum content is turned to other purposes, e.g. for the world of work

LCT research has indicated that good educational practice involves tours through different codes: for example, using exotic material for sovereign purposes (introjecting), or projecting sovereign content by showing how it is useful for other purposes.

A concern has been raised that as teachers race to put material online for remote instruction, material will be positioned far too much in the sovereign code. This represents an all work and no play approach which has proven the kiss of death to much of the drill-and-kill type of digital content that has been produced for educational consumption. Teachers need to make sure that their online offerings retain the same kinds of introjection and projection that they employ in their normal classrooms.

Conclusion

This blog post has presented a great deal of dense theory, but I hope that the theory has been turned to the purpose of illuminating the kinds of instructional design decisions that teachers will need to make, and the kinds of things they need to be thinking of, as we move away from thinking only about the technology of teaching and towards thinking more about how our pedagogical decisions can give students greater epistemological access.

Bibliography

Maton, K. 2013. Knowledge and Knowers: Towards a Realist Sociology of Education. London: Routledge.

Morrow, W. 2007. Learning to teach in South Africa. Cape Town: HSRC Press.


The Great Onlining – From Digital Natives to Digital Aliens – Reflections after Week Two!

After two weeks of remote teaching, I have to say that mental exhaustion is starting to set in. I can only imagine how challenging it is for students as well. In last week’s blog I highlighted the problem of reaching students online who might not be able to be reached, or might not want to be reached. Technological problems aside, the very constraints of online platforms may make it more difficult for students to focus, find relevant instructions and resources or manage their time effectively enough to be able to complete much work.

Marc Prensky popularised the idea of the Digital Native, one who appears to have a natural, in-born disposition for digital applications. Prensky defined this as a set of dispositions stemming from age alone: anyone born after a certain date was somehow imbued with technology in their bloodstream, so to speak. The rest of us, born before this date, were digital immigrants: we would have to learn how to use technology through pain and sweat. This idea has been thoroughly debunked. Anyone who has ever taught children ICTs will attest to this. Children are not born with the habits, behaviours and dispositions neatly in place to make them natural-born users of technology. And many older people take to technology like a duck to water. Nevertheless the concept of digital nativity, of dispositions, a gaze which predisposes the person towards digital use, does seem to hold some merit. We all know people who seem to get it naturally, and others who will probably never cope with anything digital. Perhaps digital nativity is an acquired, cultivated or trained gaze: a way of looking at things which makes some people better at dealing with the new technologies than others. This disposition is not dependent upon age, but describes a spectrum from digital nativity to digital alienation.

When teaching online this becomes absolutely crucial because the medium of delivery is so dependent upon the technology. In my experience with hybrid classrooms, any class follows a law of thirds, although the exact proportions change from year to year, class to class and lesson to lesson. Students have different digital dispositions. One third I shall call the Digital Natives, with apologies to Marc Prensky. This group is quite capable of working independently online. They can find and follow instructions, manage the resources left by the teacher and ask questions where needed to complete tasks totally online. They don’t really need a teacher to tell them what to do; they have a capacity and disposition for discovery and an ability to figure things out quite quickly on a digital platform. This group tends to submit assignments on time without prompting, often well before the due date.

A second third, the Digital Immigrants, need instructions to be given in-the-flesh, so to speak. They struggle to locate resources or instructions online, but can cope with whole-class instructions. If a teacher tells them what to do, and where to look, they can then work on their own. This group needs someone to foreground what they need to notice. But once this is done, they are happy to work on the task, although they do ask more questions and need more scaffolding generally. A quick online check-in meeting may be all they need to get working.

A third group, the Digital Aliens, struggle online, but also need any instructions given to the whole class to be repeated individually. Something said to the group only seems to be processed effectively when repeated once they are ready to process the information. This group may not respond well to instructions given in a group check-in meeting, for example. They need to be taken aside individually and carefully guided through every single step. This is extremely difficult on an online platform. You really need a one-on-one meeting. This can be done in class more easily whilst circulating, but for a student struggling with the technology anyway, setting up an individual tutoring session can be well nigh impossible.

If this perception is correct, it has important implications for remote (and online) instructional design. It suggests that students from each of these groups really need different strategies. In a face-to-face classroom teachers are able to manage these differences much more seamlessly, although it is never easy. Online, differentiating teaching is much more difficult. In the last two weeks I think I have started to get the hang of managing the Digital Natives and Immigrants. By posting instructional videos online ahead of a class, the Digital Natives have a head start. Then I have check-in meetings at scheduled times where I can answer questions, share my screen and show students how to do things. I record these as well, as some students seem to need the question-and-answer to make sense of it all. What is extremely difficult is trying to reach the Digital Aliens, most of whom do not check in during scheduled times, or probably even watch the videos. Often reaching this group involves long, tortuous emails in which I try to make sense of the difficulties they are experiencing and coax them onto the platform.

Sometimes this results in a eureka moment, but often it results in radio silence. I have sent out a number of emails in the last week which basically said something like: send me what you’ve got so I can have a look. Many of these remain unanswered, but I live in hope that week three will bring my break-through moment with the Digital Aliens!


Making Semantic Waves with Robots

Semantic waves have emerged as an explicit pedagogical approach in Computer Science education. For example, the National Centre for Computing Education in the UK has released a Pedagogy Quick Read on semantic waves. In this blog post I would like to look at how I have been using semantic waves in my robotics classes. Semantic waves track the relative abstraction and complexity of ideas within a lesson. Much of educational practice is geared towards helping students understand relatively complex, abstract ideas in terms they can understand: making them simpler and putting them in context. We also need to help students take their understandings and express them in ways which are more complex and abstract, more academic. When students are trying to code robots to perform particular tasks they need to be able to move between the abstract and complex and the simple and concrete. Making this process explicit can help students understand what they need to do, and helps teachers understand what to do to scaffold students’ understandings.

Semantics is a dimension within Legitimation Code Theory (Maton, 2014), and looks at the relative level of contextualization (semantic gravity) and condensation (semantic density) of knowledge. Knowledge can be viewed as either highly de-contextualized and abstract (weak semantic gravity, SG-) or as strongly contextualized and concrete (strong semantic gravity, SG+). It can also be seen as highly complex, with meaning heavily condensed (strong semantic density, SD+), or as simple (weak semantic density, SD-). By tracking this movement between the abstract and complex (SG- SD+) and the concrete and simple (SG+ SD-) we can see how meaning is changing over time within a classroom. This can help teachers see when they need to help students either strengthen or weaken their semantic gravity or density.

Commonly, meaning either remains at a fairly abstract/complex level – the high semantic flatline (A) – or tends to remain at a low semantic flatline (B). What we want to see is a much wider semantic range, with movement up and down the semantic axis (C). The ability to link theory and practice is what is being aimed at.

When choosing which platform to use for teaching robotics, there were a number of considerations. The financial cost of introducing robotics into our grade 8 and 9 year groups was one, but the decision to plump for physical computing was partly driven by the need to strengthen and weaken semantic gravity. Computing is often seen by students as highly abstract and complex. In particular, they struggle with transferring programming solutions from one context to another; in other words, they struggle with applying programming concepts to different contexts, or the weakening of semantic gravity. This presupposes the ability to understand those programming concepts in the first place, requiring the strengthening of semantic gravity. The decision to frame coding within a robotics context was taken because we felt it would help strengthen semantic gravity and increase semantic range by allowing students to “see” the results of their code in more tangible ways. We felt it would help if students could test their code in a very practical way, and that they would understand programming principles better if they could see the results of their coding.

We decided to use the BBC micro:bit chip as a platform for robotics because it has a huge range of resources available and an online programming platform offering both text-based and block-based coding, giving us options for strengthening and weakening semantic gravity and density. The online coding platform allows students to use block-based or text-based coding simply by toggling between Blocks and JavaScript. It also has a visualizer which displays the results of the program on the chip. The program below rolls a six-sided die. Students can test the program online by clicking Button A on the visualizer. They can then download the program onto the actual chip and test it.
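In case the screenshot of the MakeCode program does not display, the logic of the dice roller can be sketched as a Python stand-in (the real version is written in MakeCode Blocks/JavaScript, and shows the result on the micro:bit’s LED display inside an on-button-pressed handler; the function name here is purely illustrative):

```python
import random

def roll_die(sides=6):
    """Return one face of a fair die. On the micro:bit, the equivalent
    program shows this number on the LED display when Button A is pressed."""
    return random.randint(1, sides)  # randint's bounds are inclusive
```

On the actual chip the same logic sits inside the button handler and calls the display; the die roll itself is this one line.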

In terms of robotics, students program the chip, which can then be inserted into a robot to drive it.

An AlphaBot2 with a BBC micro:bit chip


In designing the syllabus for robotics in grades 8 and 9 we were also concerned with creating opportunities to strengthen and weaken semantic gravity and density. In the semantic profiles shown above, semantic gravity and density were tracked in unison (between abstract/complex and concrete/simple). But Legitimation Code Theory offers a more nuanced picture of semantics. If we set out the axes of semantic gravity and density on a Cartesian plane, as shown in the diagram, each quadrant represents a semantic code, as follows:

The Rhizomatic Code: Meaning here is abstract (SG-) and complex (SD+). This is the world of abstract, complex theorizing. In many ways this is where students need to operate when coding a more complex program. They need to be able to decide which variables or functions to use in their code, whether to use “for loops” or nested loops. Decisions are largely abstract and complex.

The Rarefied Code: Meaning is abstract (SG-), but simple (SD-). This quadrant is where the concepts used may be fairly abstract, but are simple. So, for example, a single variable is used rather than a variable inside a function call.

The Prosaic Code: Meaning is concrete (SG+) and simple (SD-). In terms of programming, instructions may be straightforward and operational, such as move forward for 5 seconds.

The Worldly Code: Here meaning is concrete (SG+) but complex (SD+). In other words, although practical, tasks are complex. A great deal of professional programming takes place at this level.

By the way, the word code here refers to the rules which legitimate practice rather than to computer programming. In a rhizomatic code, theory is valued, and practice is not. In a Worldly code, on the other hand, practice is valued above theory.

What becomes apparent from this is that teachers need to lead students on journeys between these codes to help make programming accessible. In other words, semantic waves need to be created so that abstract and complex problems can be broken down into do-able, more concrete or simple tasks, and then reassembled into larger projects. To manage this, a series of tasks was created, designed to introduce several programming concepts such as loops, variables and functions. Tasks were also ordered by complexity. A basic pattern was to have a concrete, simple task (prosaic code), followed by the introduction of an abstraction (e.g. a loop) while keeping the task simple (rarefied code), then the introduction of more complexity (worldly code), such as setting distance or speed through a variable. Concluding tasks would put together more than one principle, together with complexity (rhizomatic code).

Here are some examples of each code in the unit on robotics. Each of these solutions makes the robot move in a square. All of them are technically correct, but some are more concise (condensed) or applicable across multiple contexts (decontextualized).

 

[Table: example solutions in each of the four codes – Rarefied, Rhizomatic, Prosaic and Worldly]

Whereas in the prosaic code quadrant the task has been completed using forward and turn moves alone, abstraction has been introduced in the rarefied code by controlling the distance travelled through variables for distance and speed. This allows a change in the values assigned to the variables to alter the size of the square – re-contextualizing the problem. In the worldly code, we have condensed the movements within a repeat loop; semantic density has been increased. In the rhizomatic code, a function has been created using the repeat loop (increased condensation) and variables (increased abstraction). Semantic density has been strengthened, and semantic gravity weakened.
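To make the contrast concrete, here is a sketch of what the four solutions might look like, rendered here in Python rather than the block-based robot language the students actually used. The `forward` and `turn` commands are hypothetical stand-ins for the robot's movement blocks, and record their calls so the sketch can run without hardware:

```python
trace = []  # stands in for a real robot by recording each command

def forward(speed, secs):
    trace.append(("forward", speed, secs))

def turn(degrees):
    trace.append(("turn", degrees))

# Prosaic code (SG+, SD-): concrete, step-by-step instructions only.
def square_prosaic():
    forward(50, 2); turn(90)
    forward(50, 2); turn(90)
    forward(50, 2); turn(90)
    forward(50, 2); turn(90)

# Rarefied code (SG-, SD-): abstraction through variables, but still simple.
def square_rarefied():
    speed, secs = 50, 2
    forward(speed, secs); turn(90)
    forward(speed, secs); turn(90)
    forward(speed, secs); turn(90)
    forward(speed, secs); turn(90)

# Worldly code (SG+, SD+): the movements condensed inside a repeat loop.
def square_worldly():
    for _ in range(4):
        forward(50, 2)
        turn(90)

# Rhizomatic code (SG-, SD+): a function combining loop and variables,
# reusable for a square of any size or speed.
def square_rhizomatic(speed, secs):
    for _ in range(4):
        forward(speed, secs)
        turn(90)

square_rhizomatic(50, 2)
```

All four produce the same square; what changes is how condensed and how abstract the description of that square is.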

Students had been introduced to the use of loops, variables and functions in earlier tasks, but were given the prosaic code shown in the table above as a starter code, and asked to adjust speed and time to make the robot move in a square formation by testing it on the robots, further strengthening semantic gravity. They were then asked to try to use a loop, a variable, and a function to improve on the code. What became evident in the tasks submitted was that some students were able to incorporate variables or loops, and a few were able to incorporate functions, but only a minority could accomplish all three. Some students got stuck at various points in the prosaic, rarefied or worldly code. In a busy and productive classroom, students were encouraged to ask for help, and a “Live Code” session was held demonstrating the use of variables, loops and functions together to make a different shape.

What was plain to me was the need to find pedagogical approaches that strengthen semantic density or weaken semantic gravity in a more deliberate fashion. Reading code and tracing through it to see what it does, and testing it out, really helps to strengthen semantic gravity, but complexifying and abstracting out is far trickier to achieve. Live Code, in which the teacher models solutions and thought processes, takes some of the class all the way, but leaves others behind. Mopping up the rest one-on-one is a bit hit-and-miss with a large class.

Bibliography

Maton, K. (2014). Knowledge and Knowers: Towards a realist sociology of education. London, UK: Routledge/Taylor & Francis Group.

 

Using Semantic Waves to Decolonize Literature Studies

One of the big questions in the teaching of literature surrounds what is considered part of the canon and what is excluded, or put more simply, what literature should we be teaching in our schools? In South Africa, as in many places, this question is politically charged. Amidst calls to decolonize education, questions around what literary works to include or exclude are crucial. English teachers often tread a fine line between works considered “universal”, Shakespeare and the like, and the work of local writers, included for “political correctness” or because the chosen writers correspond to canonical notions of what deserves inclusion. Now I am not going to argue that English teachers should immediately renounce Shakespeare or “the greats” in the name of decolonizing the English curriculum. This smacks of impoverishing students and robbing them of access to powerful ideas and sensibilities. But teachers clearly need criteria for deciding what should be included.

It occurs to me that Semantic Waving might help explain why some poems, for example, are considered canonical and others are not, and provide a set of criteria by which works can be assessed without having to ask whether the work merely mirrors the sensibilities of canonical works, or can stand alone despite not playing by the established rules.

Semantic Waves follow from the work of Karl Maton (2013, 2014), whose Legitimation Code Theory looks essentially at how knowledge in different fields is legitimated. What makes for legitimate knowledge in Science, say, or in Music? What are the rules of the game? Three dimensions have been developed within LCT research.

  • Specialisation – the degree to which the knowledge itself, or the knower and the knowing, is foregrounded within a field. In some fields, like Science, it does not matter who you are; what matters is what you know – the knowledge is all. In English literary studies, by contrast, having the right gaze, knowing how to approach literature, is far more important than what you know. Being the right kind of knower is what really counts.
  • Semantics – the degree to which meaning is condensed [complex vs simple] or contextualised [abstract vs concrete]. Bridging the gap between academic theory and prosaic, everyday knowledge lies at the heart of education. Students need to understand abstract, complex ideas in terms they can understand, and teachers mediate difficult material for their students using metaphors and everyday examples. Students then need to re-frame their understandings in more academic language.
  • Autonomy – the degree to which knowledge stands alone, knowledge for knowledge’s sake, or is hitched to other wagons, for example the world of work. Building knowledge depends so much upon being able to transfer knowledge and skills across contexts and fields. For example, Maths knowledge needs to be applied in a Science class, and skills learned in English class about how to write and communicate effectively have applications in History or Business Studies classes. Should what is taught flow from the internal logic of a field, or be driven by the needs of the workplace?

Semantic Waves are used to show how Semantic Gravity (SG) and Semantic Density (SD) change over time, in the course of a lesson or a piece of writing. Semantic gravity is described as weak (SG-) when ideas are abstract, divorced from particular contexts, and strong (SG+) when they are concrete and strongly context-bound. Semantic density is described as strong (SD+) when ideas are complex and theoretical, and weak (SD-) when ideas are simple.

For example, if you look at the figure below, line A describes a high semantic flatline: the discourse remains at an abstract (SG-) and complex, theoretical (SD+) level, and there is little to help make the material being taught accessible to students. Line B, by contrast, describes a low semantic flatline: ideas are simple (SD-) and prosaic (SG+); students are not teasing out themes and principles, and practice is not being linked to theory. Line C describes a more marked movement up and down the semantic range: theory and practice are linked. Research in different fields (Matruglio, Maton, & Martin, 2013) strongly suggests that good educational practice – effective lessons, high-achieving essays – has semantic profiles with larger semantic ranges, in which there is continual movement between the abstract and complex and the concrete and simple over time.

By looking at the semantic profile of a lesson or a student essay, for example, we can tell a great deal about how effective it is. Effective writing does not depend on narrative alone, but is able to draw out themes or ideas and link them across paragraphs. Ideas are introduced and developed, fleshed out with examples, with anecdote and metaphors, with data or facts and those ideas are developed by being linked to other ideas. Effective writing involves the creation of semantic waves.

It seems to me that the semantic profiles of literary works can tell us a great deal about their place in the canonical hierarchy. I have chosen to look at poetry, because it is shorter and perhaps easier to analyse quickly. In order to code the relative semantic gravity and density of a piece of writing, I have used the following device. At the level of strongest semantic gravity and weakest semantic density are those parts of a text in which meaning is literal: everyday words are used with their everyday meanings. Moving up the spectrum as gravity weakens and density strengthens, we find parts of the text where figurative language is used: figures of speech appear, and words are used more ambiguously to condense more meaning. Moving up the scale again, poems often reveal themes and ideas; the imagery and figurative use of language reveals understandings about whatever it is that the poet is writing about, and we start to talk about what the poet means. The highest level of abstraction and condensation of meaning sits above the apparent meaning of the poem and its thematic concerns: here the poet expresses ideas about the nature of poetry itself. This metapoetic level is often missed by those who do not have the gaze (the knower code) that predisposes them to look out for it.

 

The Obviously Canonical

Let us start by looking at a poem that is firmly part of the canon, a major poem by a major poet, probably taught to every student ever at some stage of their career anywhere in the world where English is taught.

Shakespeare’s Sonnet 130.

My mistress’ eyes are nothing like the sun;
Coral is far more red than her lips’ red:
If snow be white, why then her breasts are dun;
If hairs be wires, black wires grow on her head.
I have seen roses damask’d, red and white,
But no such roses see I in her cheeks;
And in some perfumes is there more delight
Than in the breath that from my mistress reeks.
I love to hear her speak, yet well I know
That music hath a far more pleasing sound:
I grant I never saw a goddess go,
My mistress, when she walks, treads on the ground:
And yet, by heaven, I think my love as rare
As any she belied with false compare.

In this poem we see rapid movements between the strong semantic gravity of descriptions of the literal mistress’ attributes – her eyes, her lips, her breasts, her hair and so on – and the relatively weaker gravity of an idealised mistress typical of the Petrarchan convention. It becomes apparent that Shakespeare is critiquing the hyperbole of the Petrarchan convention; he is critiquing a trope. Each idealised hyperbole is contrasted with a real mistress, warts and all. In terms of a semantic profile, real human qualities (SG+SD-) are being contrasted with an idealised convention (SG-SD+). At the level of thematics, perhaps Shakespeare is satirising false comparison and inflated hyperbole. But the poem is firmly metapoetic. It is not actually a literal mistress being described; it is Shakespeare’s muse. Shakespeare is effectively making a claim that his poetry will rest not on the inflated hyperbole of the Petrarchan convention, but on expressing real emotions, real feelings.

The semantic profile therefore reflects both a full semantic range encompassing literal meanings, figurative language, thematic treatments and a metapoetic level for those with the cultivated gaze to see it.

It seems to me that the fullness and frequency of the waving helps explain why this particular poem’s place in the canon is uncontested. There is a face validity, I believe, to the suggestion that because the poem works at all levels of the semantic spectrum, it deserves its place.

But does this hold for poems clearly not destined for a place in the canon, or poems whose place is more contested? Only an extensive analysis of many, many works could properly establish this, but, trying not to cherry-pick, here are two further examples.

The Obviously Non-Canonical

A Love Song

Let me sing you a love song
About what I feel in my heart;
Butterflies can’t find nectar
Whenever we’re apart.

You’re a flower in bloom.
In the dark, in the gloom,
It’s you who brightens my day.
How many ways do I need you?
Every day, every way, come 
what may.

This poem, by contrast, operates almost entirely at a literal level, with some forays into figurative language, deploying metaphors in a somewhat random way. I could not really discern a theme being drawn beyond an expression of love. The poem certainly does not address the nature of love beyond asserting that it is a feeling and that the poet has it.

That the poem remains at a quite concrete, simple level, does not draw out any theme, and has nothing discernible to say about the nature of poetry itself helps, I believe, to explain its obscurity.

This seems to confirm that canonical status might depend on the amplitude and frequency of the semantic profile. Poetry which works only at a literal level and deploys figurative language without advancing more nuanced or abstract meaning is not likely to be admitted to the canon.

The  Contested Canon?

But what of poetry that is more contested? The following poem is charming and engaging and was a hot favourite in 1934, but does not regularly appear in student anthologies.

This Is Just To Say

by William Carlos Williams
I have eaten
the plums
that were in
the icebox
and which
you were probably
saving
for breakfast
Forgive me
they were delicious
so sweet
and so cold
In this poem the words appear simple and concrete. A man is apologizing for eating the plums his wife was saving for breakfast. Beyond the engaging enjambement and alliterative use of the “s” sound suggesting a sensuality in the eating, there is no clear theme to draw out. Perhaps the plums are not meant to be taken literally, perhaps they are forbidden fruit and the poet is apologizing for some infidelity. The poem has a gentle tone and speaks to marital intimacy and conflict, perhaps, but if so it is alluded to thinly. The poem appears to remain at a fairly literal level.
And yet if one considers the form of the poem, a fridge note, one could argue that what William Carlos Williams is “just saying” is that ordinary realia, ordinary life, can be elevated to the status of poetry and is a fit subject for Art. This is a decidedly metapoetic twist to what appears a simple poem, and if read that way, might be enough to elevate the poem into the canonical pantheon.

The  Decolonized Canon?

I have argued so far that full semantic profiles appear to be present in works considered for the canon, and absent in those that do not belong. The only way to test this hypothesis would be to analyse large numbers of poems from school anthologies, but I am going to assume for the moment that my argument holds, and will turn to consider a few poems that might not share the same concerns as poets in the Anglo-Saxon tradition, but might be more comfortably identified as fitting a decolonized canon. Does semantic waving appear to work in the same way?

I have chosen two poems by South African poet Don Mattera, whose protest poetry is often included in anthologies of Anti-Apartheid poetry. His work clearly belongs within the canon of protest poetry, but does it share features in common with Shakespeare and Carlos Williams?

I feel a poem

Thumping deep, deep
I feel a poem inside
wriggling within the membrane
of my soul;
tiny fists beating,
beating against my being
trying to break the navel cord,
crying, crying out
to be born on paper

Thumping
deep, so deeply
I feel a poem,
inside

In this poem the metapoetic is foregrounded. A poem being created is compared to the birth of a baby: the literal description of a fetus struggling to be born is compared to the pangs of creative birth in writing poetry. But something else is going on at a thematic level. The poem is not a literal poem only; the poem being born is the poem of a people yearning for freedom. The poet is feeling the stirrings of the national revolution. The semantic profile would be very similar to the Shakespearean sonnet we started with. This is a poem I think most English teachers would be very comfortable including, because it fits so well with the Anglo-Saxon canonical model in which metapoetry is the highest pursuit of the poet.

The second poem, however, sits somewhat differently.

Sobukwe

On his death

It was our suffering
and our tears
that nourished and kept him alive
their law that killed him

Let no dirges be sung
no shrines be raised
to burden his memory
sages such as he
need no tombstones
to speak their fame

Lay him down on a high mountain
that he may look
on the land he loved
the nation for which he died

Men feared the fire of his soul

The sensibilities here are somewhat different. The poem takes the form of an epitaph and contrasts strongly a reputation that was nourished by the suffering of the popular struggle and extinguished by Apartheid law. The meaning remains fairly literal, even at the thematic level. Clear sides are drawn between “our” suffering and “their” laws, but this is not explored. The poem is clearly meant to stand by itself; its meaning does not need to be elaborated. This is a paean to a dead hero, and in its simplicity it has a power and is deeply moving. The final line asserts a power beyond the grave, suggesting a flame not extinguished, and a call to solidarity of “us” versus “them”.

And yet the simple addition of the words “on his death” as a prelude to the poem itself suggests that the poem needs to be read as a formal statement: the poem itself will be the shrine to Sobukwe’s memory, where literal shrines would “burden” it. Just as with the William Carlos Williams poem, its metapoetic impact reveals itself slowly as you mull the poem over.

 

Conclusion

Five poems alone cannot really warrant a conclusion, but I would argue that these quick glosses do indicate that semantic waves are a useful tool for analysing poetry, that powerful poetry seems to depend upon a broad semantic range, and that this is not a culturally bound observation. Teachers choosing works for inclusion in the decolonized curriculum need not fear that the inclusion of local poetry weakens the importance of the canon, but care needs to be taken to include work which is thematically and metapoetically broad.

Bibliography

Maton, Karl. 2013. “Making Semantic Waves: A Key to Cumulative Knowledge-Building.” Linguistics and Education 24 (1): 8–22. https://doi.org/10.1016/j.linged.2012.11.005.

Matruglio, Erika, Karl Maton, and J.R. R. Martin. 2013. “Time Travel: The Role of Temporality in Enabling Semantic Waves in Secondary School Teaching.” Linguistics and Education 24 (1): 38–49. https://doi.org/10.1016/j.linged.2012.11.007.

Maton, Karl. 2014. Knowledge and Knowers: Towards a Realist Sociology of Education. London: Routledge.

 

Towards a Taxonomy of Educational Games using Bernstein as a Guide

Games and gaming have increasingly become a part of the educational landscape, in both analog and digital formats. Teachers are keen to find out if they can use games in their classrooms to improve student learning and performance. It is often easy to demonstrate an uptake in engagement, but less easy to justify the time spent on a game if educational benefit cannot be quantified. Taxonomies of games are largely based on their genre or features, the degree to which chance is present, or the complexity of the rules. This is great if you are trying to classify games, but not very helpful if your interest lies in their pedagogical value. One approach has been to try to map the affordances of game genres to educational concepts derived from Bloom’s taxonomy of educational objectives, Gagne’s five categories of learning outcomes, and Jonassen’s typology of problem solving (O’Brien et al., 2010). This approach is promising, but suffers, I think, from a surfeit of base concepts. By trying to account for too much, we end up with the kind of diagram beloved of these post-post times, so complex that it differs little from anecdote, and illuminates nothing.

I would like to suggest instead that a fruitful avenue might start with the work of Basil Bernstein (2004). Bernstein’s sociology of education has offered many researchers insight into the problems they were researching and a shared language which can illuminate different concerns, at different scales from the macro socio-political level to the individual lesson. By bringing this language to an analysis of types of games in education it seems to me we might be able to leverage a common language to understand better what it is in a game that might bring use value to the educational setting. I am not going to go into a lengthy summary of Bernstein’s work, which is often dense and difficult to navigate. Bernstein was basically interested in the ways in which education reproduced inequality in society, the rules and processes by which middle class students are advantaged, and working class students disadvantaged. A key tool of analysis for Bernstein was to see pedagogic practice in terms of two concepts: classification and framing.

Classification refers to the content of pedagogic discourse, the boundaries and degree of insulation between discourses. This answers the question of what knowledge is considered valid and legitimate. For example, in a Science class there is a strong sense of a body of knowledge that constitutes Science and doing Science. Even within different Science classes, some teachers may organize around tightly drawn boundaries of what constitutes doing Science, while others may operate around learning Science through problem-based approaches. A Social Studies class may have less of a sense of what might constitute legitimate knowledge in the field: there is more cross-disciplinary work being done, and the boundaries of the field are less tightly drawn. A class might quite legitimately be engaged in gender studies or in studying ancient history. Classification, in other words, can be strong or relatively weak. Some schools organize work around themes rather than distinct subject areas. Problem-based learning probably represents the weakest classification of all.

Framing refers to the “how” of pedagogical practice, and sets out how control operates within a classroom, the ways in which the curriculum is sequenced, paced and evaluated. Strong framing reflects very much a teacher-centred approach, while weak framing is where students have greater control over what and how they are learning.

Both classification and framing are described as strong (+) or weak (-), and allowed Bernstein to identify two codes: collection codes, which result in the acquisition of specialised knowledge, and integrated codes, in which the boundaries between subjects are weaker, as are the boundaries between everyday knowledge and school knowledge. By visualizing these continua of weak to strong as a Cartesian plane – as below – we can start to identify recognizable pedagogical modes and ways of describing shifts in pedagogical practice over time. While teachers tend to favour one style or another, effective teaching relies upon the ability to shift between pedagogical modes according to the needs of the moment.

Figure 1: Pedagogies analysed with classification and framing (adapted from Jónsdóttir & Macdonald, 2013, in Marsh et al., 2017)

As Maton and Howard (2018) have shown, integrative knowledge building is dependent on movement between fields of knowledge – what they term Autonomy Tours. I have summarised what is meant by autonomy tours in a previous blog, but what the research indicates is that successful lessons involve more than just sticking to the subject or topic being studied. Effective teaching involves turning everyday knowledge, and knowledge from other parts of the curriculum, to the purpose at hand. A Science teacher will often need to use Maths knowledge in her lesson; a History teacher might use Geography; and all teachers tend to use knowledge from students’ everyday experience to unpack and understand the concepts being built upon in their discipline. To teach effectively, teachers need to take tours through content both inside and outside their field and turn it to the purpose of teaching the topic at hand. In this way knowledge across the curriculum becomes more integrated.

It seems to me that in a similar way, effective teaching depends upon Pedagogical Tours, movements between pedagogical modes. There are times when it is appropriate for students to explore a topic on their own or with minimal guidance, but it is also appropriate for much more teacher-directed activities at other times. Movements between student-centred and teacher-centred pedagogies are necessary for learning to take place. It might well be that teachers are more comfortable in one or other pedagogical mode, but it is hard to see how effective learning can take place without movements between modes.

How are we to understand the role played by educational games?

I would argue that educational games can similarly be described through the lens of classification and framing.

Classification here would refer to the relative insulation of the game content. Some games have highly specialised content, while others have more integrated or open content. A game of Maths Blaster, for example, is clearly focused on mathematical concepts and skills, despite a space-age theme: the content of the game displays strong classification (C+). On the other hand, a game of I Spy With My Little Eye incorporates content from everyday life around the players, and has very weak classification (C-). All games have relatively stronger or weaker classification along a continuum. Chess, for example, although it has warlike pieces and is nominally a game of conflict, is clearly more integrated in terms of general cognitive skills than a tactical wargame, which has more specialized military content.

Framing here would refer to the locus of control. Some games are tightly controlled through the operation of the rules, or software: progress and sequencing are determined by the rules of the game, and players have little opportunity to choose their own path. For example, in a game of tic-tac-toe, possible moves are heavily circumscribed. Players can only ever place a nought or a cross, and there are only nine possible starting positions. The framing here is strong (F+). On the other hand, in a role play game, although the Games Master may have circumscribed the action by setting out a particular setting or scenario, players are generally free to try anything within their imaginations. The framing here is much weaker (F-). In between, of course, lies a continuum of games with relatively stronger or weaker framing. Chess, for example, has more pieces and more possible moves than tic-tac-toe, although the framing is still strong because players cannot deviate from a set of possible board positions or legal moves. A tactical wargame might have weaker framing because there are more pieces, more freedom to move in any direction, and fewer restrictions on what a player may choose to do.

If we put the two together on a Cartesian plane, we can start to plot different games as follows:

 

Clearly we might differ in where we position any particular game on this matrix, and these are just a few examples of both analog and digital games. By using classification and framing, it seems to me that we can see the affordances of games for educational purposes more easily, without being clouded by their features, genre and so on. By superimposing the two diagrams we might begin to identify possible code matches and code clashes between educational games chosen for use in a classroom and pedagogical styles. A code match is where the classification and framing of both pedagogical style and game match each other, and a code clash is where this match is absent.
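The idea of plotting games on the matrix can be sketched computationally. In this hypothetical Python sketch, the classification and framing scores and the placements of particular games are my own rough illustrations, not measurements:

```python
# Illustrative placements only: classification (C) and framing (F) scored on
# a -1 (weak) to +1 (strong) scale. The scores are rough assumptions.
games = {
    "Maths Blaster":    ( 0.8,  0.7),  # specialised maths content, rule-driven
    "tic-tac-toe":      (-0.3,  0.9),  # little subject content, tightly framed
    "tactical wargame": ( 0.6,  0.2),  # specialised military content
    "The Oregon Trail": ( 0.5, -0.2),  # historical content, some player freedom
    "Minecraft":        (-0.7, -0.6),  # open content, player-controlled
    "role play game":   (-0.4, -0.8),  # content and control handed to players
}

def code(c, f):
    """Label a game's quadrant using Bernstein's +/- notation."""
    return "C{}F{}".format("+" if c > 0 else "-", "+" if f > 0 else "-")

for name, (c, f) in games.items():
    print(f"{name}: {code(c, f)}")
```

Comparing a game’s quadrant label with the quadrant of a teacher’s preferred pedagogical mode is then a quick way of spotting a potential code match or code clash.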

 

 


What exactly does this tell us, though, beyond a common-sense understanding that teachers who value a great deal of control over the pacing and sequencing of their teaching are unlikely to use a role play game in their classroom, because it surrenders so much control to their students? Or that a teacher who values insulated academic boundaries is more likely to explore History through a game like The Oregon Trail than by creating an alternate world in Minecraft, because there is simply more historical content in the former and learning is more tangential in the latter? This may seem obvious, but many teachers are genuinely confused by the range of material available to them, are easily swayed by sales reps, and misunderstand the affordances of the games they select for use.

What this taxonomy does offer, I believe, is a clear way into looking at those very affordances to be able to understand better the choices that teachers make. I think it also represents a useful research tool for looking at games in education generally and being able to relate it to pedagogical choices.

 

Bibliography

Bernstein, Basil. 2004. The Structuring of Pedagogic Discourse. Vol. 23. Routledge.

Marsh, J., Kumpulainen, K., Nisha, B., Velicu, A., Blum-Ross, A., Hyatt, D., Jónsdóttir, S.R., Levy, R., Little, S., Marusteru, G., Ólafsdóttir, M.E., Sandvik, K., Scott, F., Thestrup, K., Arnseth, H.C., Dýrfjörð, K., Jornet, A., Kjartansdóttir, S.H., Pahl, K., Pétursdóttir, S. and Thorsteinsson, G. (2017). Makerspaces in the Early Years: A Literature Review. University of Sheffield: MakEY Project.

Maton, K. and Howard, S. K. (2018) Taking autonomy tours: A key to integrative knowledge-building, LCT Centre Occasional Paper 1 (June): 1–35.

O’Brien, D., Lawless, K. A., & Schrader, P. G. (2010). A Taxonomy of Educational Games. In Baek, Y. (Ed.), Gaming for Classroom-Based Learning: Digital Role Playing as a Motivator of Study. (pp. 1-23).


 

Computational Thinking – The Ideal Knower?

The debate around the concept of Computational Thinking often revolves around a central distinction between those who see Computational Thinking as a fundamental skill useful beyond the field of computer science alone and applicable as a general problem solving tool (Wing, 2006), and those who warn that this view may make exaggerated claims (Guzdial, 2011; Denning, 2017). To my mind, the most useful way to look at Computational Thinking is to see it as first and foremost part of the extended knowledge practices of computer scientists and assess the transfer of knowledge and skills as a separate issue. After all, there is transfer of knowledge and disposition across all fields of human knowledge. Academia builds strong silos, but knowledge is often advanced by those who step outside their silos.

Karl Maton (2014), building on the work of Basil Bernstein and Pierre Bourdieu, argues that all knowledge is made up of both knowledge and knower structures. Uncovering the ways in which these knowledge/knower structures legitimate knowledge claims helps uncover the largely hidden codes of academic success.

We can describe knowledge (epistemic relations) along a continuum from weak to strong. Weak epistemic relations indicate fields where knowledge itself is relatively unimportant in legitimating knowledge claims; where epistemic relations are strong, knowledge is crucial. Equally, we can describe knowing (social relations) along a continuum from weak to strong. Weak social relations indicate fields where who you are as a knower is relatively unimportant in legitimating knowledge claims; strong social relations indicate fields where the dispositions and gaze of the knower define legitimacy in the field. If we set epistemic and social relations out on a Cartesian plane, as in the diagram, we can identify different knowledge/knower codes.

Some fields emphasise one or the other. For example, knowledge claims in Science are legitimated mostly by the knowledge content – it represents a knowledge code. Who is doing the knowing, their ways of seeing and knowing, is largely, though not completely, irrelevant. By contrast, in the field of film criticism an encyclopedic knowledge of world cinema alone does not guarantee legitimacy; far more important is how the critic approaches film, how they structure and validate their arguments. Here the knower is emphasised – a knower code – and having a cultivated gaze is crucial; the knowledge itself is almost irrelevant. Where both are crucial to legitimating knowledge and knowing we have an elite code, as in Music. Where neither is important we have a relativist code: what you know and who you are is largely irrelevant, and all perspectives tend to carry equal weight.
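The four quadrants can be captured in a trivial sketch. The mapping below simply restates the plane described above, with the examples from the text as comments:

```python
def legitimation_code(er, sr):
    """Map epistemic relations (er) and social relations (sr), each '+' or '-',
    to one of the four codes on Maton's plane."""
    codes = {
        ("+", "-"): "knowledge code",   # e.g. Science
        ("-", "+"): "knower code",      # e.g. film criticism
        ("+", "+"): "elite code",       # e.g. Music
        ("-", "-"): "relativist code",  # anything goes
    }
    return codes[(er, sr)]

print(legitimation_code("+", "-"))  # the code the text assigns to Science
```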

It seems to me that viewing all knowledge from this knowledge/knower perspective helps to illuminate much of the debate around Computational Thinking. CT is usually defined as a set of procedures as follows:

  1. Problem reformulation – reframing a problem so that it becomes solvable and familiar.
  2. Recursion – constructing a system incrementally on preceding information.
  3. Decomposition – breaking the problem down into manageable bites.
  4. Abstraction – modelling the salient features of a complex system.
  5. Systemic testing – taking purposeful actions to derive solutions (Shute et al., 2017).
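These procedures can be glimpsed together in a classic algorithm. As a toy illustration (my own sketch, not drawn from the sources cited), merge sort reformulates, decomposes, recurses and abstracts, and lends itself to systematic testing:

```python
def merge_sort(items):
    # Abstraction: we only assume the items are comparable, nothing else.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    # Decomposition: split the problem into manageable halves.
    # Recursion: each half is solved by the same procedure, building
    # the solution incrementally on preceding results.
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Problem reformulation: "sort a list" becomes the easier, familiar
    # problem of merging two already-sorted lists.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Systematic testing: purposeful checks of the solution.
assert merge_sort([3, 1, 2]) == [1, 2, 3]
assert merge_sort([]) == []
```

The example is deliberately mundane: nothing in it is unique to computer science except the habit of making each of these moves explicit.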

What is clear is that this describes a set of dispositions, ways of approaching problems, ways of seeing, rather than the set of knowledge structures that make up legitimate knowledge in computer science. If you look at the syllabus of a typical computer science degree programme, you will get a fair idea of what needs to be studied. It largely revolves around the analysis of algorithms and programming design to enable data handling, software design and, increasingly, machine learning. The definition of CT does not describe the knowledge, but rather the knower structures of computer science. It sets out what one might consider the characteristics of the ideal knower. It describes how an ideal computer scientist looks at their field, in much the same way as the Scientific Method describes how an ideal scientist approaches theirs.

The clear value of the notion of CT rests, therefore, in laying bare what constitutes legitimate knowing in the field of computer science. It reveals the rules of the game quite explicitly. Because computer science is founded on well-developed knowledge structures, it represents a knowledge code in Maton’s matrix: who you are is far less important than what you know. If you are able to master the mathematical knowledge and understand the algorithms necessary for producing computational models of the world, that is quite sufficient to make you a computer scientist. But, as Maton points out, all knowledge has both knowledge and knower structures, and for many students these knower structures are occluded. Curricula often make explicit the knowledge content requirements, but leave unsaid the subliminal characteristics that make up the ideal knower in the field.

If it is correct to say that CT defines the ideal knower’s dispositions – ways of being, seeing and doing – then computer science is fortunate in having these dispositions set out explicitly, offering clear pedagogical guidelines.

Bibliography

Denning, Peter J. 2017. “Remaining Trouble Spots with Computational Thinking.” Communications of the ACM 60 (6): 33–39. https://doi.org/10.1145/2998438.

Guzdial, M. 2011. “A Definition of Computational Thinking from Jeannette Wing.” Computing Education Research Blog. 2011. https://computinged.wordpress.com/2011/03/22/a-definition-of-computational-thinking-from-jeanette-wing/.

Maton, Karl. 2014. Knowledge and Knowers: Towards a Realist Sociology of Education. London: Routledge.

Papert, Seymour. 1980. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Shute, Valerie J., Chen Sun, and Jodi Asbell-Clarke. 2017. “Demystifying Computational Thinking.” Educational Research Review 22 (September): 142–58. https://doi.org/10.1016/j.edurev.2017.09.003.

Wing, Jeannette. 2006. “Computational Thinking.” Communications of the ACM 49 (3): 33–35. https://doi.org/10.1145/1118178.1118215.


Computational Thinking – a new modality of thought or just what coders do?

I want to pose a question for consideration. There is a great deal of debate and disagreement over what Computational Thinking means. For some it describes how computer scientists go about what they do, akin perhaps to the scientific method for scientists (Wolfram, 2002), and is applicable only to computer scientists. For others it is a skill set that has implications beyond the field of computer science, a set of generalizable skills of benefit to all (Wing, 2006). A third view is that it represents something of a new mode of thought capable of unique explanations (Papert, 1980) and knowledge building. In this sense it goes beyond a set of procedures, like the scientific method, and might represent a mode of thought distinct from the paradigmatic (argumentative) and narrative modes of thought proposed by Bruner (1986).

The paradigmatic mode represents knowledge founded on abstract understanding or conceptions of the world. For example, I could explain why an apple fell to the ground by referencing the theory of gravity. This is largely the language and understanding of Science. The narrative mode of thought represents an understanding of the world founded in human interactions. I might explain why an apple fell by referencing a sequence of events in which my elbow knocked it off the table and I was not deft enough to catch it. Of course there is a continuum along which both modalities of thought intersect and interweave. So my question is whether computational thinking represents a separate mode of thought in its own right, or simply new combinations of paradigmatic and narrative modes. If I were to model a world of apples, elbows and tables, my understanding of why apples fall might be based on a more complete understanding of how apples behave under different circumstances. The use of computational models allows for new ways of understanding the world, new ways of gaining understanding and knowledge. Chaos Theory, for example, emerged out of computational model building: paradigmatic formulations of the world followed from computational modelling, rather than the other way round.

When we create a computational model of a weather system and run our algorithms through computers with slightly different inputs to make a hurricane path forecast, for example, or use machine learning algorithms to predict heart disease more accurately, are we deploying a new kind of thought which is somewhat different from both paradigmatic and narrative modes?
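The ensemble idea – the same model run many times with slightly perturbed inputs – can be sketched in a few lines. The "dynamics" below are invented purely for illustration (a drifting random walk standing in for a storm track), not a real forecasting model:

```python
import random

# Toy ensemble forecast: perturb the starting condition slightly on each
# run, simulate forward, and summarise the spread of outcomes.
def toy_track(x0, steps=24, drift=1.0, noise=0.3, seed=None):
    """Invented stand-in for a storm-track model: drift plus random shocks."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += drift + rng.gauss(0, noise)  # deterministic drift + "weather" noise
    return x

def ensemble_forecast(x0=0.0, runs=200, perturbation=0.1):
    # Each ensemble member starts from a slightly different initial state.
    finals = [toy_track(x0 + random.gauss(0, perturbation), seed=i)
              for i in range(runs)]
    mean = sum(finals) / len(finals)
    spread = max(finals) - min(finals)
    return mean, spread

mean, spread = ensemble_forecast()
# The ensemble mean is the central forecast; the spread quantifies uncertainty.
```

The epistemic point is that the "answer" here is not a derivation but a distribution of simulated outcomes – knowledge produced by running the model, not by stating a law.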

The need to ask this question rests, perhaps, on the rapid development of Machine Learning and how it threatens to disrupt our world. Machine Learning has brought us to a point where we might soon be farming out most of our thinking to intelligent machines. And while probabilistic approaches to artificial intelligence allow human beings to trace back what the machine has done with our algorithms, neural networks, with their black-box approaches, represent thinking that is to a large extent opaque to us. It seems entirely possible, then, that in the not too distant future machines will be delivering to us knowledge of the world, and we will not be able to explain the thinking behind it.

The idea of Computational Thinking (CT) has a history, and it is interesting to unpack some of it. The term was coined by Seymour Papert (1980) and popularised by Jeannette Wing (2006), and there is general consensus that it refers to the thinking skills employed by computer scientists when they are doing computer programming, derived from the cognitive processes involved in designing an algorithm for getting “an information-processing agent” (Cuny et al., 2010) to find a solution to a problem. For some, information-processing agents should refer only to machines, but for others they could include human beings performing computational tasks. Differences over how applicable CT is beyond computer science hinge on these nuances of understanding. I have often heard it said that getting students to design an algorithm for making a cup of tea represents CT, and that if students were to study designing algorithms through learning to code they would therefore be improving their general problem-solving skills. These claims are difficult to assess, but they are important because if CT applies only to the context of computer science, then its place in the curriculum occupies something of a niche, important though it might be. If, however, as claimed, it leads to benefits in general problem-solving skills, there is a solid case to be made for getting all students to learn programming. Equally, the case for exposing all students to some coding might rest on other claims unrelated to the transfer of CT to other domains.
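For what it is worth, the proverbial tea algorithm can be written out as explicit steps for an "information-processing agent" – the function and step names below are my own illustration. Whether producing something like this counts as CT, or only as CT when the agent is a computational model, is exactly what is in dispute:

```python
# The proverbial tea-making algorithm as an explicit sequence of steps,
# with conditions and a failure case. Names are illustrative only.
def make_tea(kettle_full, have_teabag, have_cup):
    steps = []
    if not kettle_full:
        steps.append("fill kettle")
    steps.append("boil kettle")
    if not have_cup:
        steps.append("fetch cup")
    if not have_teabag:
        return steps + ["abort: no teabag"]  # handle the failure case
    steps += ["put teabag in cup", "pour water", "steep", "remove teabag"]
    return steps

assert "boil kettle" in make_tea(False, True, True)
```

Denning's objection, discussed below, is that a recipe like this is merely a series of steps; it only becomes algorithmic thinking proper when the steps control a computational model.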

Let’s start by looking at the claims made by the coding-for-all lobby. Wing (2006) argued that CT skills have transferable benefits outside of computer science itself because they entail five cognitive processes, namely:

  1. Problem reformulation – reframing a problem so that it becomes solvable and familiar.
  2. Recursion – constructing a system incrementally on preceding information.
  3. Decomposition – breaking the problem down into manageable bites.
  4. Abstraction – modelling the salient features of a complex system.
  5. Systemic testing – taking purposeful actions to derive solutions (Shute et al., 2017).

Wing’s claim has received a great deal of attention and has become the bedrock of the Computer Science for All movement – the idea that all children should be exposed to CT by teaching them to code, both because such skills will become increasingly important in an increasingly digital world, and because they equip students for the kinds of problem solving that matter more and more. It is debatable, though, whether these cognitive processes are unique to computational thought. Abstraction and decomposition, in particular, might seem to be thinking processes shared by any number of activities. Wing’s thesis that computational thinking is generalizable to all other fields could perhaps be stated in the reverse direction: perhaps general cognitive processes are generalizable to computation? This point is not trivial, but it does not necessarily threaten the thesis that learning to code or create algorithms is excellent for developing good problem-solving skills applicable to other fields.

The question of the transfer of skills gained in one context to another is, however, fraught with difficulty. Generally speaking, it seems to me that knowledge and skills are gained within the framework of a particular discipline, and that the application of knowledge and skills in other contexts is always problematic to some extent. There is a close relationship between knowledge itself and what we call thinking skills. It is hard to imagine, for example, anyone possessing dispositions and thinking skills in History or Mathematics without possessing knowledge in those disciplines. As Karl Maton (2014) has pointed out, all knowledge has both knowledge and knower structures: there is the stuff that is known and the gaze of the knower. In different fields, knowledge structures or knower structures may have greater or lesser relative importance, but one cannot distill out something which is pure knowledge, or pure knowing. The question of the transfer of skills from one context to another, from one field to another, is therefore not a simple one. Of course we do achieve this feat. At some point in my life I learned basic numeracy skills, presumably within the context of elementary arithmetic classes, and I have been able to apply this basic knowledge and skill set to other contexts, for example computer programming. But I am not so sure that the thinking dispositions I gained while studying History at university, and my appreciation for the narrative mode of explanation, are altogether much use when thinking about Computational Thinking and what I ought to be doing as a teacher of ICT skills. I am painfully aware that there are limits to the general applicability of the enquiry and data analysis skills that I learned when training to become an historian. I did not train to become a computer scientist, and therefore I am very wary of commenting on how transferable skills in computational thinking might be to contexts outside the field.

But I do believe we should be wary of claims of this sort. Peter Denning (2017) has argued that the idea that all people can benefit from CT – from thinking like computer scientists – is a vague and unsubstantiated claim. For Denning, the design of algorithms (algorithmic thinking) rests not merely on setting out any series of steps, but speaks specifically to the design of steps controlling a computational model. It is context-bound.

My understanding from this is that the case for teaching everyone to code cannot rest solely on an argument that CT transfers benefits; that case has yet to be proven. This does not mean that teaching coding to all is not a good thing. I believe that learning to code represents a rigorous discipline which is good for the mind, that it has value because we are living in a world where computer programs are increasingly important, and that the problem solving coding involves is itself beneficial. All in all I think the case for teaching coding to all is extremely cogent.

I also have a sneaking suspicion that the question I posed in my opening remarks is going to be raised more and more frequently as artificial intelligence is more widely deployed, and if so, having a population trained to some level of competence in computational thinking is probably a really good idea.

Bibliography

Bruner, Jerome. 1986. Actual Minds, Possible Worlds. Cambridge, MA: Harvard University Press.

Cuny, Jan, Larry Snyder, and Jeannette Wing. 2010. “Demystifying Computational Thinking for Non-Computer Scientists.” Work in progress.

Curzon, Paul, Tim Bell, Jane Waite, and Mark Dorling. 2019. “Computational Thinking.” In The Cambridge Handbook of Computing Education Research, edited by S.A. Fincher and A.V. Robins, 513–46. Cambridge: Cambridge University Press. https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/57010/Curzon Computational thinking 2019 Accepted.pdf?sequence=2&isAllowed=y.

Denning, Peter J. 2017. “Remaining Trouble Spots with Computational Thinking.” Communications of the ACM 60 (6): 33–39. https://doi.org/10.1145/2998438.

Guzdial, M. 2011. “A Definition of Computational Thinking from Jeannette Wing.” Computing Education Research Blog. 2011. https://computinged.wordpress.com/2011/03/22/a-definition-of-computational-thinking-from-jeanette-wing/.

Maton, Karl. 2014. Knowledge and Knowers: Towards a Realist Sociology of Education. London: Routledge.

Papert, Seymour. 1980. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Shute, Valerie J., Chen Sun, and Jodi Asbell-Clarke. 2017. “Demystifying Computational Thinking.” Educational Research Review 22 (September): 142–58. https://doi.org/10.1016/j.edurev.2017.09.003.

Wing, Jeannette. 2006. “Computational Thinking.” Communications of the ACM 49 (3): 33–35. https://doi.org/10.1145/1118178.1118215.

Wolfram, Stephen. 2002. A New Kind of Science. Wolfram Media. https://www.wolframscience.com/nks/
