
Category Archives: Legitimation Code Theory

Computational Thinking – The Ideal Knower?

The debate around the concept of Computational Thinking often revolves around a central distinction between those who see Computational Thinking as a fundamental skill, useful beyond the field of computer science alone and applicable as a general problem-solving tool (Wing, 2006), and those who warn that this view may make exaggerated claims (Guzdial, 2011; Denning, 2017). To my mind, the most useful way to look at Computational Thinking is to see it first and foremost as part of the extended knowledge practices of computer scientists, and to assess the transfer of knowledge and skills as a separate issue. After all, there is transfer of knowledge and disposition across all fields of human knowledge. Academia builds strong silos, but knowledge is often advanced by those who step outside them.

Karl Maton (2014), building on the work of Basil Bernstein and Pierre Bourdieu, argues that all knowledge is made up of both knowledge and knower structures. Uncovering the ways in which these knowledge/knower structures legitimate knowledge claims helps reveal the largely hidden codes of academic success.

We can describe knowledge (epistemic relations) along a continuum from weak to strong. Weak epistemic relations indicate fields where knowledge itself is relatively unimportant in legitimating knowledge claims; where epistemic relations are strong, knowledge is crucial. Equally, we can describe knowing (social relations) along a continuum from weak to strong. Weak social relations indicate fields where who you are as a knower is relatively unimportant in legitimating knowledge claims. Strong social relations, by contrast, indicate fields where the dispositions and gaze of the knower define legitimacy in the field. Setting epistemic and social relations out on a Cartesian plane, as in the diagram, allows us to identify four distinct knowledge/knower codes.
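For readers who want to picture the plane, here is a minimal matplotlib sketch of the specialization plane as I read it from Maton (2014). The quadrant placements follow the discussion below, but the layout and labels are my own reconstruction, not Maton's figure:

    import matplotlib.pyplot as plt

    # A reconstruction (mine) of Maton's specialization plane:
    # epistemic relations (ER) on the vertical axis,
    # social relations (SR) on the horizontal axis.
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.axhline(0, color="black")
    ax.axvline(0, color="black")
    ax.set_xlim(-1, 1)
    ax.set_ylim(-1, 1)
    ax.set_xlabel("social relations (SR): weak to strong")
    ax.set_ylabel("epistemic relations (ER): weak to strong")
    ax.set_xticks([])
    ax.set_yticks([])
    ax.text(-0.5, 0.5, "knowledge code\n(ER+, SR-)", ha="center")   # e.g. the sciences
    ax.text(0.5, 0.5, "elite code\n(ER+, SR+)", ha="center")        # e.g. music
    ax.text(0.5, -0.5, "knower code\n(ER-, SR+)", ha="center")      # e.g. film criticism
    ax.text(-0.5, -0.5, "relativist code\n(ER-, SR-)", ha="center")
    ax.set_title("Specialization codes (after Maton, 2014)")
    plt.show()

Each quadrant corresponds to one of the four codes discussed next.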

Some fields emphasise one or the other. Knowledge in science, for example, is legitimated mostly by its knowledge content – it represents a knowledge code. Who is doing the knowing, their ways of seeing and knowing, is largely, though not completely, irrelevant. By contrast, in the field of film criticism an encyclopedic knowledge of world cinema alone does not guarantee legitimacy. Far more important is how the critic approaches film, how they structure and validate their arguments. Here the knower is emphasised – a knower code – and having a cultivated gaze is crucial, while the knowledge itself is almost irrelevant. Where both are crucial to legitimating knowledge and knowing we have an elite code, as in music. Where neither is important we have a relativist code: what you know and who you are are largely irrelevant, and all perspectives tend to carry equal weight.

It seems to me that viewing all knowledge from this knowledge/knower perspective helps to illuminate much of the debate around Computational Thinking. CT is usually defined as a set of procedures, as follows (a toy illustration in code follows the list):

  1. Problem reformulation – reframing a problem so that it becomes solvable and familiar.
  2. Recursion – constructing a system incrementally on preceding information.
  3. Decomposition – breaking the problem down into manageable bites.
  4. Abstraction – modelling the salient features of a complex system.
  5. Systemic testing – taking purposeful actions to derive solutions. (Shute et al., 2017)
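To make these dispositions concrete, here is a toy Python sketch of my own – the problem and names are invented, and no claim is made that this is how Shute et al. operationalise CT. It totals the marks in a nested gradebook, and the comments flag where each of the five moves shows up:

    # 1. Problem reformulation: "total a nested list" is reframed as
    #    "total the first item plus the total of the rest".
    # 3. Decomposition: the list is broken into items and sub-lists.
    # 4. Abstraction: the function ignores what the numbers mean;
    #    any nested list of numbers will do.
    def nested_total(items):
        total = 0
        for item in items:
            if isinstance(item, list):
                total += nested_total(item)   # 2. Recursion: the function calls itself
            else:
                total += item
        return total

    # 5. Systemic testing: purposeful checks, including edge cases.
    assert nested_total([]) == 0
    assert nested_total([1, 2, 3]) == 6
    assert nested_total([1, [2, [3]], 4]) == 10
    print("all tests passed")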

What is clear is that this list describes a set of dispositions, ways of approaching problems, ways of seeing, rather than the set of knowledge structures that make up legitimate knowledge in computer science. If you look at the syllabus of a typical computer science degree programme, you will get a fair idea of the 'what' that needs to be studied. It largely revolves around the analysis of algorithms and program design to enable data handling, software design and, increasingly, machine learning. The definition of CT does not describe the knowledge, but rather the knower structures of computer science. It sets out what one might consider the characteristics of the ideal knower. It describes how an ideal computer scientist looks at their field, in much the same way as the scientific method describes how an ideal scientist approaches theirs.

The clear value of the notion of CT rests, therefore, in laying bare what constitutes legitimate knowing in the field of computer science. It reveals the rules of the game quite explicitly. Because computer science is founded on well-developed knowledge structures, it represents a knowledge code in Maton's matrix. Who you are is far less important than what you know. If you are able to master the mathematical knowledge and understand the algorithms necessary for producing computational models of the world, that is quite sufficient to make you a computer scientist. But, as Maton points out, all knowledge has both knowledge and knower structures. For many students these knower structures are occluded. Curricula often make explicit the knowledge content requirements, but leave unsaid the subliminal characteristics that make up the ideal knower in the field.

If it is correct to say that CT defines the ideal knower's dispositions – ways of being, seeing and doing – then computer science is fortunate in having these dispositions set out explicitly, offering clear pedagogical guidelines.

Bibliography

Denning, Peter J. 2017. “Remaining Trouble Spots with Computational Thinking.” Communications of the ACM 60 (6): 33–39. https://doi.org/10.1145/2998438.

Guzdial, M. 2011. “A Definition of Computational Thinking from Jeannette Wing.” Computing Education Research Blog. 2011. https://computinged.wordpress.com/2011/03/22/a-definition-of-computational-thinking-from-jeanette-wing/.

Maton, Karl. 2014. Knowledge and Knowers: Towards a Realist Sociology of Education. London: Routledge.

Papert, Seymour. 1980. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Shute, Valerie J., Chen Sun, and Jodi Asbell-Clarke. 2017. “Demystifying Computational Thinking.” Educational Research Review 22 (September): 142–58. https://doi.org/10.1016/j.edurev.2017.09.003.

Wing, Jeannette. 2006. “Computational Thinking.” Communications of the ACM 49 (3): 33–35. https://doi.org/10.1145/1118178.1118215.

 

Computational Thinking – a new modality of thought or just what coders do?

I want to pose a question for consideration. There is a great deal of debate and disagreement over what Computational Thinking means. For some it describes how computer scientists go about what they do, akin perhaps to the scientific method for scientists (Wolfram, 2002), and is applicable only to computer scientists. For others it is a skill set that has implications beyond the field of computer science, a set of generalizable skills of benefit to all (Wing, 2006). A third view is that it represents something of a new mode of thought capable of unique explanations (Papert, 1980) and knowledge building. In this sense it goes beyond a set of procedures, like the scientific method, and might represent a mode of thought distinct from the paradigmatic (argumentative) and narrative modes of thought proposed by Bruner (1986).

The paradigmatic mode represents knowledge founded on abstract understanding or conceptions of the world. For example, I could explain why an apple fell to the ground by referencing the theory of gravity. This is largely the language and understanding of science. The narrative mode of thought represents an understanding of the world founded in human interactions. I might explain why an apple fell by referencing a sequence of events in which my elbow knocked it off the table and I was not deft enough to catch it. Of course, there is a continuum along which both modalities of thought intersect and interweave. So my question is whether computational thinking represents a separate mode of thought in its own right, or simply new combinations of paradigmatic and narrative modes. If I were to model a world of apples, elbows and tables, my understanding of why apples fall might be based on a more complete understanding of how apples behave under different circumstances. The use of computational models allows for new ways of understanding the world, new ways of gaining understanding and knowledge. Chaos theory, for example, emerged out of computational model building: paradigmatic formulations of the world followed from computational modelling, rather than the other way round.

When we create a computational model of a weather system and run our algorithms through computers with slightly different inputs to make a hurricane path forecast, for example, or use machine learning algorithms to predict heart disease more accurately, are we deploying a new kind of thought which is somewhat different from both paradigmatic and narrative modes?

The need to ask this question rests, perhaps, on the rapid development of Machine Learning and how it threatens to disrupt our world. Machine Learning has brought us to a point where we might soon be farming out most of our thinking to intelligent machines. And while probabilistic approaches to artificial intelligence allow human beings to trace back what the machine has done with our algorithms, neural networks, with their black-box approach, represent thinking that is to a large extent opaque to us. It seems entirely possible, then, that in the not-too-distant future machines will be delivering knowledge of the world to us, and we will not be able to explain the thinking behind it.

The idea of Computational Thinking (CT) has a history, and it is interesting to unpack some of it. The term was coined by Seymour Papert (1980) and popularised by Jeannette Wing (2006), and there is general consensus that it refers to the thinking skills employed by computer scientists when they are programming, derived from the cognitive processes involved in designing an algorithm for getting "an information-processing agent" (Cuny et al., 2010) to find a solution to a problem. For some, information-processing agents should refer only to machines, but for others they could include human beings performing computational tasks. Differences over how applicable CT is beyond computer science hinge on these nuances of understanding. I have often heard it said that getting students to design an algorithm for making a cup of tea represents CT, and that if students were to study designing algorithms through learning to code they would therefore be improving their general problem-solving skills. These claims are difficult to assess, but they are important, because if CT applies only to the context of computer science, then its place in the curriculum occupies something of a niche, important though it might be. If, however, as claimed, it leads to benefits in general problem-solving skills, there is a solid case to be made for getting all students to learn programming. Equally, the case for exposing all students to some coding might rest on other claims unrelated to the transfer of CT to other domains.
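For what it is worth, the cup-of-tea exercise usually amounts to something like the following sketch – my own rendering in Python, with invented steps. Denning's point, taken up below, is that a recipe of this kind only becomes algorithmic thinking when the steps control a computational model:

    # A "make a cup of tea" recipe rendered as code (an invented illustration).
    # The steps are stand-ins for physical actions, which is precisely what
    # makes this a recipe rather than control of a computational model.
    def step(description):
        print(description)

    def make_tea(milk=False, sugars=0):
        step("boil the kettle")
        step("put a teabag in the cup")
        step("pour in the boiling water and steep for 3 minutes")
        step("remove the teabag")
        if milk:                     # a conditional step
            step("add milk")
        for _ in range(sugars):      # an iterated step
            step("add a spoon of sugar")
        return "cup of tea"

    make_tea(milk=True, sugars=2)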

Let's start by looking at the claims made by the coding-for-all lobby. Wing (2006) argued that CT skills have transferable benefits outside of computer science itself because they entail five cognitive processes, namely:

  1. Problem reformulation – reframing a problem so that it becomes solvable and familiar.
  2. Recursion – constructing a system incrementally on preceding information.
  3. Decomposition – breaking the problem down into manageable bites.
  4. Abstraction – modelling the salient features of a complex system.
  5. Systemic testing – taking purposeful actions to derive solutions. (Shute et al., 2017)

Wing's claim has received a great deal of attention and has become the bedrock of the Computer Science for All movement: the idea that all children should be exposed to CT by teaching them to code, both because such skills will become increasingly important in an increasingly digital world, and because they equip students for kinds of problem solving that matter more and more. It is debatable, though, whether these cognitive processes are unique to computational thought. Abstraction and decomposition, in particular, would seem to be thinking processes shared by any number of activities. Wing's thesis that computational thinking is generalizable to all other fields could perhaps be stated in the reverse direction: perhaps general cognitive processes are generalizable to computation? This point is not trivial, but it still might not threaten the thesis that learning to code or create algorithms is excellent for developing good problem-solving skills applicable to other fields.

The question of the transfer of skills gained in one context to another is, however, fraught with difficulty. Generally speaking, it seems to me that knowledge and skills are gained within the framework of a particular discipline, and that the application of knowledge and skills in other contexts is always problematic to some extent. There is a close relationship between knowledge itself and what we call thinking skills. It is hard to imagine, for example, anyone possessing dispositions and thinking skills in History or Mathematics without possessing knowledge in those disciplines. As Karl Maton (2014) has pointed out, all knowledge has both knowledge and knower structures. There is the stuff that is known and the gaze of the knower. In different fields, knowledge structures or knower structures may have greater or lesser relative importance, but one cannot distill out something which is pure knowledge, or pure knowing. The question of the transfer of skills from one context to another, from one field to another, is therefore not a simple one. Of course we do achieve this feat. At some point in my life I learned basic numeracy skills, presumably in elementary arithmetic classes, and I have been able to apply this basic knowledge and skill set in other contexts, for example computer programming. But I am not so sure that the thinking dispositions I gained while studying History at university, and my appreciation for the narrative mode of explanation, are of much use when thinking about Computational Thinking and what I ought to be doing as a teacher of ICT skills. I am painfully aware that there are limits to the general applicability of the enquiry and data analysis skills I learned when training to become an historian. I did not train to become a computer scientist, and therefore I am very wary of commenting on how transferable computational thinking skills might be to contexts outside the field. But I do believe we should be wary of claims of this sort. Peter Denning (2017) has argued that the idea that all people can benefit from CT, from thinking like computer scientists, is a vague and unsubstantiated claim. For Denning, the design of algorithms (algorithmic thinking) rests not on merely setting out any series of steps, but speaks specifically to the design of steps controlling a computational model. It is context-bound.

My understanding from this is that the case for teaching everyone to code cannot rest solely on an argument that CT transfers benefits; that case has yet to be proven. This does not mean that teaching coding to all is not a good thing. I believe that learning to code is a rigorous discipline that is good for the mind, that it matters because we live in a world where computer programs are increasingly important, and that the problem solving it involves is valuable in itself. All in all, I think the case for teaching coding to all is extremely cogent.

I also have a sneaking suspicion that the question I posed in my opening remarks is going to be raised more and more frequently as artificial intelligence is more widely applied, and if so, having a population trained to some level of competence in computational thinking is probably a very good idea.

Bibliography

Bruner, J. (1986). Actual Minds, Possible Worlds. Cambridge, Mass: Harvard University Press.

Cuny, Jan, Larry Snyder, and Jeannette Wing. 2010. "Demystifying Computational Thinking for Non-Computer Scientists." Unpublished manuscript, work in progress.

Curzon, Paul, Tim Bell, Jane Waite, and Mark Dorling. 2019. "Computational Thinking." In The Cambridge Handbook of Computing Education Research, edited by S.A. Fincher and A.V. Robins, 513–46. Cambridge: Cambridge University Press. https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/57010/Curzon Computational thinking 2019 Accepted.pdf?sequence=2&isAllowed=y.

Denning, Peter J. 2017. “Remaining Trouble Spots with Computational Thinking.” Communications of the ACM 60 (6): 33–39. https://doi.org/10.1145/2998438.

Guzdial, M. 2011. “A Definition of Computational Thinking from Jeannette Wing.” Computing Education Research Blog. 2011. https://computinged.wordpress.com/2011/03/22/a-definition-of-computational-thinking-from-jeanette-wing/.

Maton, Karl. 2014. Knowledge and Knowers: Towards a Realist Sociology of Education. London: Routledge.

Papert, Seymour. 1980. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Shute, Valerie J., Chen Sun, and Jodi Asbell-Clarke. 2017. “Demystifying Computational Thinking.” Educational Research Review 22 (September): 142–58. https://doi.org/10.1016/j.edurev.2017.09.003.

Wing, Jeannette. 2006. “Computational Thinking.” Communications of the ACM 49 (3): 33–35. https://doi.org/10.1145/1118178.1118215.

Wolfram, Stephen. 2002. A New Kind of Science. Champaign, IL: Wolfram Media. https://www.wolframscience.com/nks/.

 

Decolonizing Computer Education – What People’s Education has to teach us.

Education in South Africa is in turmoil. In many ways our post-Apartheid educational dispensation has failed to address the problems it inherited. The big question of how to grant greater access and equity through education – perhaps best summed up by the slogan Decolonizing Education – was not settled after the fall of Apartheid. Our education system is still unequal, and still largely divided along racial lines.

When I was training to become a teacher in the 1980s, the big question was some version of a Liberation Before Education or Liberation Through Education debate. Education is clearly a key component in social empowerment and social justice. The 1976 riots sprang from deep-seated unhappiness with a separate and unequal education system which taught white students blind obedience, and black students subservience. People's Education for People's Power emerged as a movement in response to a call by the Soweto Parents' Crisis Committee in 1985 and a series of conferences and publications issued by the National Education Crisis Committee (NECC) in 1986. Two subject committees were active in advancing content for a People's English and a People's History curriculum. The focus was on a reformed curriculum reflecting the agency and needs of ordinary South Africans. While greater local (i.e. African) content and focus was a key component of the vision, the really crucial concern was with turning knowledge into an agent for greater power and control. The history syllabus, for example, was concerned not just with the study of South African history, but with a history-from-below approach. Using E.H. Carr's seminal text What is History? as a basis, the NECC pushed for a rigorous and critical skill set which would allow students to use their own, their local and national histories as a lens for developing social agency and power. People's English, likewise, sought to use English as a means of critiquing power and empowering agency – "to think and speak in non-racial, non-sexist and non-elitist ways" (Gardiner, n.d., p. 9). The need to develop an alternative educational vision for a post-Apartheid South Africa was clear and urgent.

The South African History Archive – Images of Defiance

My first teaching job was at a People's Education pilot project school called Phambili in Durban, in the early 1990s, in the period leading up to the first democratic elections in 1994. Phambili, a flagship of People's Education, had two aims: to intervene in the educational crisis caused by the massive exclusion from the schooling system of students who had protested against Apartheid education in the decade and a half after the 1976 uprising, and to pilot new democratic forms of school governance and curriculum. The school was, however, bedevilled by mismanagement and corruption by some "struggle" dignitaries. It managed to continue thanks to a dedicated staff and board, but faced a severe lack of funding and persistent attacks from both the Apartheid state and corrupt opposition politicians who wanted to secure the building for their own personal gain. Phambili refused to go away, and when I joined the staff in 1991 it was struggling to resurrect itself. I was employed as an English and History teacher, and in both these faculties we tried to pilot People's Education curricula. The English faculty invited student representatives to join our meetings, and this proved an incredibly enriching experience. As a Matric teacher there was not much I could do to change the setworks studied, but I Africanised the unseen and comprehension passages chosen. In my first week of teaching I was challenged by my Matric 10B class on the whole question of why we studied Shakespeare. I hummed and hawed a bit, said a few things about universal human values and the need to study the canon, but I could see the class was unconvinced. Luckily for me, that weekend an article appeared in the Sunday newspapers about Chris Hani, leader of the Communist Party and liberation hero, in which he said how much he admired Shakespeare and had studied him in the guerrilla camps in Tanzania. I cut it out and pinned it to my door. Not only did opposition to studying Shakespeare disappear, but my classroom was renamed Chris Hani Base Camp. I had clearly passed some kind of test.

The way we came to theorise what People's English looked like at Phambili was founded on our notion of agency. I was not aware of the works of Mikhail Bakhtin at that time, but the sense of the need to give our students access to the literacies and knowledges of power, while at the same time developing the power of their own voice, was central. This deeply dialogic notion foregrounds the agency of student voices while recognising that hegemonic literacies and discourses need to be mastered.

The History faculty used the NECC-published textbook What is History? as its central text, and I believe we built a strong sense of history from below as a critical tool for confronting power. "Poor historians make poor revolutionaries" could have been our mantra. Things came to a head at a History teachers' conference at the University of Natal, where plans for a new History syllabus were unveiled which directly conflicted with our notion of People's Education. The syllabus seemed to us to be triumphalist. History was to become the story of the ANC's rise to power, much as history under the National Party had been subservient to political propaganda. We agitated from the floor, and were eventually granted an audience with John Pampallis. When the new curriculum was eventually unveiled, after the elections in 1994, very little remained of People's Education.

And that, I think, is the problem. Subsequent revisions of the curriculum incorporated an extreme version of Outcomes-Based Education, a somewhat reactionary and behaviourist educational philosophy, opaque and technicist, which viewed education as the mastery of discrete skills. Although much has clearly changed for the better, our education system remains a two-tiered system, replicating inequality and stifling agency. The sense of liberation through education that People's Education engendered has all but disappeared, and the focus is now on South Africa's failing matriculation pass rate and its position at the bottom of the international league tables. The current cry to decolonize education can only be seen as an indictment of the failure to implement an education system that meaningfully addresses the inequalities of the past. We need a return to the People's Education agenda.

So, what would a People's Computer Education curriculum look like? Computing represents a literacy of power, increasingly so as our lives become dominated by digital technologies. I would argue that computing education needs to empower students and promote agency, both by giving students access to these voices of power and by developing the power of students' own voices, their ability to express their creativity and ideas through digital media. Robert Reich (1992) argued that the new information economy is reproduced by a two-tiered education system that produces a labour force of data capturers on the one hand, and a managerial class of information/symbol manipulators on the other. As computer educators we need to ensure that we are giving all our students access to the skills and dispositions which will enable them to be digital masters rather than merely hewers of wood and carriers of water in the new digital economy. If we teach spreadsheets, it should not just be about the how; it also needs to be about the why, preparing students for entrepreneurship and creativity. If we teach coding, it should not be just so that students can write some code; it needs to encompass a vision of a humanity that can rise above the challenge of Artificial Intelligence, that has a purpose and a dream, that has a destiny.

I realise that this formulation is hopelessly Romantic, but I am an optimist: I believe we need to teach hope and inspire our students to be the masters of their own lives. Ultimately, computing from below is the story of a new humanism, one that rejects a society that is mechanical and technocratic and sees technology as an extension of the human will to survive and thrive. People's Computing needs to teach students to see a society in which they can use digital technologies to advance their lives and build a world that is non-classist, non-sexist and non-racist.

Bibliography

Gardiner, M. (n.d.). Transforming Itself: People's Education for People's Power and Society in South Africa. Accessed at https://www.sahistory.org.za/sites/default/files/archive-files2/remar87.5.pdf.

Reich, R. (1992). The Work of Nations: Preparing Ourselves for 21st Century Capitalism. New York: Vintage Books.

The South African History Archive. http://www.saha.org.za/imagesofdefinace/10_fighting_years_1976_1986_peoples_education_for_peoples_power.htm

 

Developing Tools to Help Students Construct Meaning in Computer Skills

As a teacher of computing applications I have found that the area my students struggle with most is creating and using spreadsheet formulae and database queries. That is to say, they struggle most where they have to apply mathematical formulae, which are by nature abstract, to a concrete task such as applying a 10% discount if certain conditions pertain. The ability to move seamlessly between abstract and concrete is not something all students possess. Piaget described the movement between concrete and formal operational thinking as a maturational process, with children only becoming capable of abstract thought at around 12 years of age, and abstract thinking is thought to go on developing into adulthood as individuals gain more experience with it. This suggests that students need extensive scaffolding to help abstract thinking skills develop. It is also clear that concepts are difficult to generalise across different contexts.
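To make the kind of task concrete (my own example, with invented cell references and amounts): a student might be asked to discount the amount in B2 by 10% whenever it exceeds R500, which in a spreadsheet becomes =IF(B2>500;B2*0.9;B2). That single line asks a novice to co-ordinate the context (a discount policy), the logic (a condition and a multiplication) and the syntax (the IF function, cell references and argument separators) all at once.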

I have looked at Semantic Wave Theory previously on this blog (e.g. Maton, 2014). It is a framework drawn from Legitimation Code Theory which shows how the movement between the abstract, highly condensed and the concrete, contextualised and simple can be used as a tool to show how meaning is being unpacked and re-packed within the classroom. Researchers have shown how successful teaching and learning depends on creating repeated movements between the two, describing semantic profiles.

The diagram above illustrates various semantic profiles, which will be instantly recognisable to any teacher. The high semantic flatline occurs when all discourse in the classroom remains at a general, abstract, very theoretical level. The low semantic flatline occurs when discourse remains simple and practical. Clearly what is needed over time is movement between abstract and concrete, complex and simple: a wave-like graph with greater semantic range. Teachers need to help students understand complex, abstract ideas by unpacking the concepts using concrete examples, personal experience and metaphors. Students also need to learn how to repackage their understanding in more abstract academic language in their own words, and teachers need to scaffold this process carefully.

Understanding semantic waves helps us understand how best to scaffold spreadsheet formulae and database queries, by finding strategies to strengthen and weaken what are called semantic gravity and semantic density – in other words, to scaffold movement up and down the semantic wave. To do this requires an understanding of the relative strengths of semantic gravity and density in various computing applications. I have to say that this is in itself not an easy task. It seems to me that what appears to be a concrete, practical task for an experienced practitioner often appears abstract and complex to the novice. This is perhaps just another way of saying that as we get used to traversing the gap between abstract and concrete we get better at doing it, and cease to notice it or struggle with it. We operationalise abstract formulae without a second thought, and it seems like a simple, concrete task to us. We need to try to see it from the perspective of the novice. The novice needs to bring together an understanding of what the computer needs to do expressed in plain language, the mathematical or logical language of the problem, and the syntax of the application or programming language. This process needs very careful scaffolding and support.

I have recently come across a cognitive tool called the Abstraction Transition Taxonomy (Cutts et al., 2012). The illustration below comes from the paper cited and demonstrates one way of visualising the processes involved in coding a computer program, or indeed an Excel spreadsheet.

This design process helps bridge the gap between understanding a problem and its solution, and translating that into a working program which then needs to be debugged and checked to see if it does what it is supposed to do. The key stage is the storyboarding in the middle. I like to think of the steps shown above as comprising the following stages:

  • Plain Language: Think about the problem and work through a solution in your mind
  • Maths/Logic: Build any mathematical or logical operators into your solution
  • Application Syntax: Implement your solution using the particular syntax of the app or programming language you are using.

For example:

  • If a class has collected the most money in the school, they get the day off school.
  • If money collected = most money, then day = day off, else day = normal school
  • =IF(B2=MAX(B$2:B$8);"Day Off";"Normal school") [in an Excel spreadsheet; B2 holds this class's total and B2:B8 the totals for all classes – the cell references are illustrative]

It is tempting to see each of these levels (plain language, maths/logic, app syntax) as discrete strengths of semantic gravity, moving from plain language (strong semantic gravity) to maths/logic (weak semantic gravity) and then back to app syntax (strong semantic gravity). This would describe a wave much like the graph shown below. This is a useful way to conceive of the shifts in levels of abstraction while using a computer to solve a problem.

Over the years teaching spreadsheets, databases and coding, I have come to develop a routine for modelling how to go about using computers to solve problems which follows the three steps enumerated above. It is summarised as the ELS method (a rendering of the same example in Python follows the list):

  • State the problem and solution in plain English
  • Plug in any mathematical or Logical operators
  • Enter it using the particular Syntax of whatever application you are using
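To show that the same three moves apply beyond spreadsheets, here is the Day Off example worked through in Python – a sketch of my own, with invented class names and amounts, not part of the ELS method itself:

    # English: the class that has collected the most money gets the day off.
    # Logic:   if collected == most collected then "Day Off" else "Normal school".
    # Syntax:  the same solution expressed in Python's particular syntax.
    totals = {"8A": 950, "8B": 1200, "8C": 700}   # invented figures

    most = max(totals.values())
    for cls, collected in totals.items():
        day = "Day Off" if collected == most else "Normal school"
        print(cls, day)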

The ELS routine helps students, I think, by giving them a process to follow, and helps them move up and down the semantic range, but my grade 8s and 9s still struggle to apply it.

Although the three-step process helps build in a movement up and down the semantic range, it is not enough. Each step represents a semantic range in its own right, for the novice at any rate. When stating a problem's solution in plain language, one needs to hold in mind both the contextual parameters of the problem and an ideational abstraction of the solution. When working through the mathematical and logical expression of the solution, one needs to jump continually back to the context of the problem and forth to the emerging formula. Translating this formula into the particular syntax of the application also requires rapid jumps back and forth along the spectrum between weak and strong semantic gravity. Although the curve above may well describe the overall movement of meaning in the task, it seems to me to be made up of rapid oscillations back and forth between two states, abstract and concrete – a kind of quantum wave, if you like – as the student superimposes an abstract solution on top of a concrete problem. I believe it is this which makes it particularly difficult for novice programmers and spreadsheet/database creators to navigate the coding of a solution. More experienced programmers handle these shifts with ease.

How and Why Questions help move up and down the semantic range

When using the ELS method in a whole class situation I model the mental process of thinking through the task very closely, drawing on student contributions. But getting students to work in pairs is also very necessary as it forces them to voice their mental processes and this helps strengthen and weaken semantic gravity. If you are explaining why you think the formula should be this rather than that, you are effectively making jumps up and down the semantic range because you are dealing with why questions, which tend to raise the level of abstraction, and with how questions which help concretise your solution. When you try something and it doesn’t work, having to discuss possible reasons with a peer helps do the same.

Bibliography

Cutts, Q., et al. (2012). The abstraction transition taxonomy: developing desired learning outcomes through the lens of situated cognition. In Proceedings of the Ninth Annual International Conference on International Computing Education Research, 63–70. ACM. https://doi.org/10.1145/2361276.2361290.

Maton, Karl. (2014). A TALL order?: Legitimation Code Theory for academic language and learning. Journal of Academic Language and Learning. 8. 34-48.

 

 

Meaning Making in Computer Education

One of the difficulties in looking at the knowledge practices of teachers of middle and high school computing is the diverse nature of educational practices around the world. In some contexts the curriculum resembles Computer Science at a tertiary level, with an emphasis on computer programming and the theory of hardware, software and networking. In other contexts, however, the emphasis is on computing applications. In South Africa, for example, students can take Information Technology as a matriculation subject, in which programming is studied, or Computer Applications Technology, with an emphasis on Office applications. At middle school levels the emphasis is often on basic computer literacy. Coding and robotics are, however, often taught alongside basic computer literacy and information literacy.

Waite et al. (2019) have argued that Legitimation Code Theory (LCT), in particular the idea that effective knowledge-building practices involve the formation of semantic waves, provides a framework for assessing the effectiveness of practices in the teaching of computing, by providing a common language for describing diverse practices. I have described Semantic Wave Theory before on this blog, but here is a brief summary.

Karl Maton (2014) describes semantic waves in terms of how teachers try to bridge the gap between high-stakes reading and high-stakes writing, where ideas are highly abstract and context-independent (weak semantic gravity) and highly complex and condensed (strong semantic density). In the classroom these highly abstract and complex ideas are introduced in the form of texts, and students are expected to reproduce them in their own words in the form of essays and examination answers. To get them there, teachers need to help students by giving concepts greater context (strengthening semantic gravity) and making them simpler (weakening semantic density). They do this by using examples, metaphors and personal experience. If you map the changes in semantic gravity and density over time, you can describe waves. The ability to make links between the abstract and the concrete, between theory and practice, between complex and simple ideas, is what makes for effective teaching and learning.

Waite et al. (2019) show how a semantic analysis of an unplugged computer programming task describes just such semantic waves and makes for a successful lesson plan. They also suggest that using semantic waves to analyse lesson plans, and actual lessons, is a way of assessing the effectiveness of lessons teaching computer programming of different kinds. Many teachers use online coding platforms, like Codecademy or Code Combat. In this article I would like to look at a semantic wave analysis of a Code Combat course on web development to see what it reveals about the platform's strengths and weaknesses as a pedagogical tool. Code Combat is structured as a series of courses covering a computer science syllabus, teaching JavaScript or Python programming and some HTML and CSS. Each course is divided into a series of levels, and each level introduces key concepts such as loops, conditional statements and so on, using quests and tasks performed by an avatar. Students enter code in a command-line interface and can run it to test success. The platform provides hints and text-completion prompts to help scaffold activities.

Students generally enjoy the platform, and take pleasure in grappling with problems and succeeding at each task. I use it in my grade 8 and 9 computer skills classes. In this analysis I looked at the 13 levels that make up the Web Development 1 course, which introduces HTML tags and CSS properties. I looked at semantic gravity alone, with SG- (weak semantic gravity) representing highly abstract ideas and SG+ (strong semantic gravity) representing highly concrete ideas, and used three degrees to indicate strength and weakness (SG--- to SG+++).

I used the following translation device for rough-coding the level of semantic gravity, looking at the instructions in each level. The purpose of a translation device is to help translate the theory into what it looks like in practice: what does weak semantic gravity look like when using HTML, and what does strong semantic gravity look like? (A sketch of how such a coding can be turned into a wave plot follows the table.)

  • SG--- : Over-arching concepts – tags are used to mark up text
  • SG-- : Coding concepts – tags do different things, e.g. <h1> regulates the size of a heading
  • SG- : Properties of concepts – tags have properties, e.g. <img> has source, alignment, width
  • SG+ : Examples of concepts – students must decide which tag to enter
  • SG++ : Examples of properties – students must edit a property, e.g. change <img src="" align="left"> to right alignment
  • SG+++ : Data entry – typing in text
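As an aside, here is a minimal Python sketch of my own showing how a translation device like this can be operationalised: each level's instruction type is mapped to a score and plotted over time to give a semantic profile. The level codings below are invented for illustration; they are not my actual data from the thirteen levels.

    import matplotlib.pyplot as plt

    # Translation device as a lookup table: instruction type -> SG score.
    SG_SCALE = {
        "over-arching concept": -3,   # SG---
        "coding concept":       -2,   # SG--
        "property of concept":  -1,   # SG-
        "example of concept":    1,   # SG+
        "example of property":   2,   # SG++
        "data entry":            3,   # SG+++
    }

    # Invented instruction types for a handful of levels (illustrative only).
    levels = ["coding concept", "example of concept", "data entry",
              "property of concept", "example of property", "data entry",
              "coding concept", "example of concept"]
    scores = [SG_SCALE[kind] for kind in levels]

    plt.plot(range(1, len(scores) + 1), scores, marker="o")
    plt.gca().invert_yaxis()   # weak SG (abstract) at the top, as in semantic profiles
    plt.xlabel("level")
    plt.ylabel("semantic gravity (SG- at top, SG+ at bottom)")
    plt.title("Illustrative semantic profile (invented data)")
    plt.show()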

The coding of the thirteen levels was done using only the text presented in the platform; I did not look at the graphics. I would argue that the graphics display tends to scaffold all activities by strengthening semantic gravity and helping students visualise what they are coding. The semantic waves formed over time looked as follows:

What we can see is a non-continuous wave which loosely describes the movement between abstract and concrete. Each unit followed a pattern of introducing a particular concept and then giving students a chance to practise enacting it; the next level would then introduce a new concept, and so on. In some levels students are able to go further and partially exercise their developing understanding of the concepts by choosing which tags to use, rather than merely enacting the one just explained. This movement from weak to strong semantic gravity has been described as a down escalator, and is common in teaching practice. Teachers are generally good at explaining concepts so that students understand them. Less common in classroom practice, and less common here, is the full movement of the wave, in which students take concrete examples and are able to express the underlying concepts and display their own understanding effectively. In programming terms this would translate into being able to use concepts learned in novel situations to develop unique solutions – in other words, to move from a concrete problem to a conceptual enactment of it by designing an algorithm or writing code.

What the semantic wave analysis seems to indicate is that the units in this course do a good job of explaining the programming concepts, but not a good enough job of giving students a chance to explore and display their understanding in new contexts. As a teacher, I have to say that this is what struck me immediately. The platform could do some things better than I could: it allowed students to work at their own pace, gave instant feedback, and was certainly more engaging, with graphics and a game-like interface. But it was not able to set more open-ended tasks, or give students a chance to explain their own understanding of the concepts. The course ends with a "design your own poster" exercise which potentially does this, but each level lacks a degree of this full movement through the semantic wave.

This weakness appears to be hard-coded in, and would require teachers using the platform to mediate as a way of creating fuller semantic waves. Given that students were working at their own pace, my own solution was to use mentors in every class. It was the job of the mentor – anyone who had already completed the course – to help peers who were struggling to complete levels by explaining what needed doing. The mentors at least were then consolidating their knowledge and understanding by explaining it to others, and mentees were benefiting from having the problem re-phrased or re-contextualised.

I would argue that semantic wave analyses like this one could help inform better instructional design decisions. It might appear as if I am being critical of Code Combat, but I believe other platforms of a similar kind suffer the same weaknesses. This platform is in fact better than most at using constructivist learning principles, asking students to design their own solutions, but more could clearly be done to create full semantic waves.

Bibliography

Maton, Karl. (2014). A TALL order?: Legitimation Code Theory for academic language and learning. Journal of Academic Language and Learning. 8. 34-48.

Waite, J., Maton, K., Curzon, P., & Tuttiett, L. (2019). Unplugged Computing and Semantic Waves: Analysing Crazy Characters. Proceedings of UKICER2019 Conference (United Kingdom and Ireland Computing Education Research).

 
 