
EduTech Africa 2019 – Coda

Last week I attended the EduTech Africa 2019 Conference in Johannesburg and would like to wrap up my thoughts on the conference with a few observations. Now that the dust has settled, the thing that sticks out most in my mind is the clear recognition of the rise of Computer Science as a K-12 academic discipline. The government’s commitment to rolling out IT as a subject, and the focus on coding across all age groups, has established a clear sense that Computational Thinking and Computer Science belong in the core curriculum in all schools. The recent announcement that the PISA Assessments, which offer international benchmarks in Maths & Science, will now include Computational Thinking and Computer Science is confirmation of this. The big question, then, is how we get there. Most of the talks I attended addressed, in some form or other, the issue of how best to teach Computer Science. Passionate teachers shared their best practice, and their failures. So the coda to my reflections on the conference is really to address that question: is there a best method to teach Computer Science?

NS Prabhu (1990), in answering the question of whether there is a best method of teaching, concluded that the key factor in teaching success lies with the teacher’s sense of plausibility: the teacher’s belief that what they are doing makes sense, and the passion they bring to it. There is clearly a great deal of plausibility around the teaching of Computer Science at the moment. Obstacles are being treated as opportunities, and there is a very real sense that inventiveness and creativity can overcome the constraints of budget and lack of training.

The clear consensus amongst teachers seems to be that physical computing forms the best approach. Most presentations highlighted the use of coding in conjunction with 3D printing and robotics. My very first exposure to teaching computing was with Seymour Papert’s (1980) Logo system. I did not have the turtles, using only the computer interface, but I tried to make it more concrete by using physical cards with shapes students had to emulate. Computer Science is a very abstract subject and needs to be concretised for students as much as possible. The cost of all the kit needed to do this, however, is prohibitive.

I recently came across the micro:bit, which uses a web-based platform for coding. The code created is downloaded as compiled hex to the micro:bit, which executes it. But crucially the platform also has a web-based simulator, which executes the code in the editing window. The micro:bit controllers are themselves fairly cheap, but having a simulator means that more students can code at any one time: a class needs fewer physical devices. I have not yet been able to test the real thing, but it seems to me a perfect fit for the kinds of physical computing tasks I would wish to introduce. It uses a block-coding interface, but you can toggle to JavaScript or Python, making it ideal for transitioning from block-based coding to the text-based fare students will need higher up the school. You can also design 3D-printed parts for interesting projects.
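For a flavour of what the text-based side looks like, here is a minimal MicroPython sketch of the sort of thing I have in mind. This is device code: it runs on the micro:bit itself or in the web simulator, not on a desktop Python, and the behaviour shown is my own illustrative choice rather than a course example.

```python
# MicroPython for the BBC micro:bit:
# scroll a greeting, then show a happy face while button A is held.
from microbit import display, button_a, Image, sleep

display.scroll("Hello")

while True:
    if button_a.is_pressed():
        display.show(Image.HAPPY)
    else:
        display.clear()
    sleep(100)  # pause 100 ms between polls
```

The same program can be built in the block editor and toggled into Python or JavaScript, which is exactly the block-to-text transition discussed above.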

But I digress; back to best methods. Another strong thread in the conference was computing for problem solving. I have to say that I am a little dubious about the claim that Computational Thinking leads to better problem solving generally. I believe it leads to better problem solving in computational contexts, but transfer of skills from one context to another is always problematic in my view. Nevertheless, I do believe that students should be given real-world problems to solve as far as possible, and Computer Science teachers are leading the way in envisioning how coding could form a central plank in cross-disciplinary problem solving exercises. There was a great deal of talk at the conference about the need for teachers to “come out of their silos.” There is certainly no need for CS teachers to set projects divorced from the real world, or to set problems narrowly about computers.

The final method that was raised at the conference was unplugged computing, an approach which involves modelling algorithmic thinking without a computer. For example, students might be asked to write code to control a classmate acting as a robot performing a certain task. A talk by a primary school teacher on coding in the junior years had us all playing rock, paper, scissors. I’ve forgotten why, but it was great fun!

 

In the end, my take-away from the conference was to think about the best approaches for my own classes, and most particularly how to integrate all three of these approaches better. To my mind this is the best sort of take-away!

 

Bibliography

Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books. https://dl.acm.org/citation.cfm?id=1095592.

Prabhu, N.S. (1990). There Is No Best Method – Why? TESOL Quarterly, 24(2), 161–176.
 

Computational Thinking – a new modality of thought or just what coders do?

I want to pose a question for consideration. There is a great deal of debate and disagreement over what Computational Thinking means. For some it describes how computer scientists go about what they do, akin perhaps to the scientific method for scientists (Wolfram, 2002), and is applicable only to computer scientists. For others it is a skill set that has implications beyond the field of computer science, a set of generalizable skills of benefit to all (Wing, 2006). A third view is that it represents something of a new mode of thought capable of unique explanations (Papert, 1980) and knowledge building. In this sense it goes beyond a set of procedures, like the scientific method, and might represent a mode of thought distinct from the paradigmatic (argumentative) and narrative modes of thought proposed by Bruner (1986).

The paradigmatic mode represents knowledge founded on abstract understanding or conceptions of the world. For example, I could explain why an apple fell to the ground by referencing the theory of gravity. This is largely the language and understanding of Science. The narrative mode of thought represents an understanding of the world founded in human interactions. I might explain why an apple fell by referencing a sequence of events in which my elbow knocked it off the table and I was not deft enough to catch it. Of course there is a continuum along which both modalities of thought intersect and interweave. So my question is whether computational thinking represents a separate mode of thought in its own right, or simply new combinations of the paradigmatic and narrative modes. If I were to model a world of apples, elbows and tables, my understanding of why apples fall might be based on a more complete understanding of how apples behave under different circumstances. The use of computational models allows for new ways of understanding the world, new ways of gaining understanding and knowledge. Chaos Theory, for example, emerged out of computational model building: paradigmatic formulations of the world followed from computational modelling, rather than the other way round.
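The point about paradigmatic knowledge following from the model can be made concrete with the logistic map, one of the simple computational models out of which Chaos Theory grew. A minimal sketch in Python (the parameter value r = 4.0 is the standard chaotic regime; the example is mine, not drawn from the works cited):

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n)
# Running the model reveals sensitive dependence on initial
# conditions: two trajectories that start almost identically
# become completely decorrelated after a few dozen iterations.

def trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.20000)
b = trajectory(0.20001)  # a perturbation of one part in twenty thousand

print(abs(a[1] - b[1]))  # still tiny after one step
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # large by step 30
```

The chaotic behaviour here was discovered by running computations like this one; the paradigmatic, mathematical formulation came afterwards, which is exactly the direction of travel described above.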

When we create a computational model of a weather system and run our algorithms through computers with slightly different inputs to make a hurricane path forecast, for example, or use machine learning algorithms to predict heart disease more accurately, are we deploying a new kind of thought which is somewhat different from both paradigmatic and narrative modes?

The need to ask this question rests, perhaps, on the rapid development of Machine Learning and how it threatens to disrupt our world. Machine Learning has brought us to a point where we might soon be farming out most of our thinking to intelligent machines. And while probabilistic approaches to artificial intelligence allow human beings to trace back what the machine has done with our algorithms, neural networks, with their black-box approaches, represent thinking that is to a large extent opaque to us. It seems entirely possible, then, that in the not too distant future machines will be delivering to us knowledge of the world, and we will not be able to explain the thinking behind it.

The idea of Computational Thinking (CT) has a history, and it is interesting to unpack some of it. The term was coined by Seymour Papert (1980) and popularised by Jeannette Wing (2006), and there is general consensus that it refers to the thinking skills employed by computer scientists when they are doing computer programming, derived from the cognitive processes involved when you are designing an algorithm for getting “an information-processing agent” (Cuny et al., 2010) to find a solution to a problem. For some, information-processing agents should refer only to machines, but for others it could include human beings when they are performing computational tasks. Differences over how applicable CT is beyond computer science hinge on these nuances of understanding. I have often heard it said that getting students to design an algorithm for making a cup of tea represents CT, and that if students were to study designing algorithms through learning to code they would therefore be improving their general problem solving skills. These claims are difficult to assess, but they are important, because if CT applies only to the context of computer science, then its place in the curriculum occupies something of a niche, important though it might be. If, however, as claimed, it leads to benefits in general problem solving skills, there is a solid case to be made for getting all students to learn programming. Equally, the case for exposing all students to some coding might rest on other claims unrelated to the transfer of CT to other domains.

Let’s start by looking at the claims made by the Coding for All lobby. Wing (2006) argued that CT skills have transferable benefits outside of computer science itself because they entail five cognitive processes, namely:

  1. Problem reformulation – reframing a problem so that it becomes solvable and familiar.
  2. Recursion – constructing a system incrementally on the basis of preceding information.
  3. Decomposition – breaking the problem down into manageable parts.
  4. Abstraction – modelling the salient features of a complex system.
  5. Systematic testing – taking purposeful actions to derive solutions. (Shute et al., 2017)
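Two of these processes, decomposition and recursion, are easy to see in code. A small illustrative example (my own, not drawn from Wing or Shute et al.): totalling a nested list by breaking the problem into smaller instances of itself.

```python
# Decomposition: the total of a nested list is the sum of the
# totals of its parts. Recursion: each part is either a number
# (the base case) or a smaller nested list (the same problem again).

def nested_sum(items):
    total = 0
    for item in items:
        if isinstance(item, list):
            total += nested_sum(item)  # solve the smaller subproblem
        else:
            total += item
    return total

print(nested_sum([1, [2, 3], [4, [5, 6]]]))  # → 21
```

Whether the habit of decomposing problems like this transfers beyond programming is, of course, precisely the question at issue.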

Wing’s claim has received a great deal of attention and has become the bedrock of the Computer Science for All movement: the idea that all children should be exposed to CT by teaching them to code, both because such skills will become ever more important in an increasingly digital world, and because they equip students for kinds of problem solving that matter more and more. It is debatable, though, whether these cognitive processes are unique to computational thought. Abstraction and decomposition, in particular, seem to be thinking processes shared by any number of activities. Wing’s thesis that computational thinking is generalizable to all other fields could perhaps be stated in the reverse direction: perhaps general cognitive processes are generalizable to computation? This point is not trivial, but it still might not threaten the thesis that learning to code or create algorithms is excellent for developing good problem solving skills applicable to other fields.

The question of the transfer of skills gained in one context to another is, however, fraught with difficulty. Generally speaking it seems to me that knowledge and skills are gained within the framework of a particular discipline, and that the application of knowledge and skills in other contexts is always problematic to some extent. There is a close relationship between knowledge itself and what we call thinking skills. It is hard to imagine, for example, anyone possessing dispositions and thinking skills in History or Mathematics without possessing knowledge in those disciplines. As Karl Maton (2014) has pointed out, all knowledge has both knowledge and knowing structures. There is the stuff that is known and the gaze of the knower. In different fields, knowledge structures or knower structures may have greater or lesser relative importance, but one cannot distill out something which is pure knowledge, or pure knowing. Therefore the question of the transfer of skills from one context to another, from one field to another, is not a simple one. Of course we do achieve this feat. At some point in my life I learned basic numeracy skills, within the context of elementary arithmetic classes presumably, and I have been able to apply this basic knowledge and skill set to other contexts, for example computer programming. But I am not so sure that the thinking dispositions I gained while studying History at University, and my appreciation for the narrative mode of explanation are altogether much use when thinking about Computational Thinking and what I ought to be doing as a teacher of ICT skills. I am painfully aware that there are limits to the general applicability of the enquiry and data analysis skills that I learned when training to become an historian. I did not train to become a computer scientist, and therefore I am very wary of commenting on how transferable skills in computational thinking might be to contexts outside the field. 
But I do believe we should be wary of claims of this sort. Peter Denning (2017) has argued that the idea that all people can benefit from CT, from thinking like computer scientists, is a vague and unsubstantiated claim. For Denning, the design of algorithms (algorithmic thinking) rests not on merely setting out any series of steps, but speaks specifically to the design of steps controlling a computational model. It is context bound.

My understanding from this is that the case for teaching everyone to code cannot rest solely on the argument that CT’s benefits transfer to other domains. That case has yet to be proven. This does not mean that teaching coding to all is not a good thing. I believe that learning to code represents a rigorous discipline which is good for the mind, has benefits because we are living in a world where computer programs are increasingly important, and involves problem solving, which also benefits the mind. All in all I think the case for teaching coding to all is extremely cogent.

I also have this sneaking suspicion that the question I posed in my opening remarks is going to be raised more and more frequently as artificial intelligence gets applied, and if so, having a population trained in some level of competence with computational thinking is probably a really good idea.

Bibliography

Bruner, J. (1986). Actual Minds, Possible Worlds. Cambridge, Mass: Harvard University Press.

Cuny, Jan, Snyder, Larry, and Wing, Jeannette. 2010. “Demystifying Computational Thinking for Non-Computer Scientists.” Work in progress.

Curzon, Paul, Tim Bell, Jane Waite, and Mark Dorling. 2019. “Computational Thinking.” In The Cambridge Handbook of Computing Education Research, edited by S.A. Fincher and A.V. Robins, 513–46. Cambridge: Cambridge University Press. https://qmro.qmul.ac.uk/xmlui/bitstream/handle/123456789/57010/Curzon%20Computational%20thinking%202019%20Accepted.pdf?sequence=2&isAllowed=y.

Denning, Peter J. 2017. “Remaining Trouble Spots with Computational Thinking.” Communications of the ACM 60 (6): 33–39. https://doi.org/10.1145/2998438.

Guzdial, M. 2011. “A Definition of Computational Thinking from Jeannette Wing.” Computing Education Research Blog. 2011. https://computinged.wordpress.com/2011/03/22/a-definition-of-computational-thinking-from-jeanette-wing/.

Maton, K. (2014). Knowledge and Knowers: Towards a realist sociology of education. London, UK: Routledge/Taylor & Francis Group.

Papert, Seymour. 1980. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Shute, Valerie J., Chen Sun, and Jodi Asbell-Clarke. 2017. “Demystifying Computational Thinking.” Educational Research Review 22 (September): 142–58. https://doi.org/10.1016/j.edurev.2017.09.003.

Wing, Jeannette. 2006. “Computational Thinking.” Communications of the ACM 49 (3): 33–35. https://doi.org/10.1145/1118178.1118215.

Wolfram, Stephen. 2002. A New Kind of Science, Wolfram Media, Inc. https://www.wolframscience.com/nks/

 

Developing Tools to Help Students Construct Meaning in Computer Skills

As a teacher of computing applications I have found that the area my students struggle with most is creating and using spreadsheet formulae and database queries. That is to say, they struggle most where they have to apply mathematical formulae, which are by nature abstract, to a concrete task such as applying a 10% discount if certain conditions pertain. The ability to move seamlessly between abstract and concrete is not something all students possess. Piaget described the movement between concrete and formal operational thinking as a maturational process, with children only becoming capable of abstract thought at around 12 years of age. It is also thought that abstract thinking develops into adulthood as individuals gain more experience with it. This suggests that students need extensive scaffolding to help abstract thinking skills develop. It is also clear that concepts are difficult to generalise across different contexts.

I have looked at Semantic Wave Theory previously on this blog (e.g. Maton, 2014), a framework drawn from Legitimation Code Theory, which shows how movement between the abstract, highly condensed and the concrete, contextualised and simple can be used as a tool to show how meaning is being unpacked and re-packed within the classroom. Researchers have shown how successful teaching and learning depends on creating repeated movements between the two, describing semantic profiles.

The diagram above illustrates various semantic profiles, which will be instantly recognisable to any teacher. The high semantic flatline occurs when all discourse in the classroom remains at a general, abstract, very theoretical level. The low semantic flatline occurs when discourse is simple and practical. Clearly what is needed over time is movement between abstract and concrete, complex and simple: a wave-like graph with greater semantic range. Teachers need to help students understand complex, abstract ideas by unpacking the concepts using concrete examples, personal experience and metaphors. Students also need to learn how to repackage their understanding in more abstract academic language in their own words, and teachers need to carefully scaffold this process.

Understanding semantic waves helps us work out how best to scaffold spreadsheet formulae and database queries: it suggests strategies for strengthening and weakening what are called semantic gravity and density, in other words for scaffolding movement up and down the semantic wave. Doing this requires an understanding of the relative strengths of semantic gravity and density in various computing applications, which is in itself not an easy task. It seems to me that what appears to be a concrete, practical task for an experienced practitioner often appears abstract and complex to the novice. This is perhaps just another way of saying that as we get used to traversing the gap between abstract and concrete we get better at doing it, and cease to notice it, or struggle with it. We operationalise abstract formulae without a second thought and it seems like a simple, concrete task to us. We need to try to see it from the perspective of the novice. The novice needs to bring together an understanding of what the computer needs to do expressed in plain language, the mathematical or logical language of the problem, and the syntax of the application or programming language. This process needs very careful scaffolding and support.

I have very recently come across a cognitive tool called the Abstraction Transition Taxonomy (Cutts et al., 2012). The illustration below comes from the paper cited and demonstrates one way of visualising the processes involved in coding a computer program, or indeed an Excel spreadsheet.

This design process helps bridge the gap between understanding a problem and its solution and translating that into a working program, which then needs to be debugged and checked to see if it does what it is supposed to do. The key stage is the storyboarding in the middle. I like to think of the steps shown above as following these stages:

  • Plain Language: Think about the problem and work through a solution in your mind
  • Maths/Logic: Build in any mathematical or logical operators into your solution
  • Application Syntax: Implement your solution using the particular syntax of the app or programming language you are using.

For example:

  • If a class has collected the most money in the school, they get the day off school.
  • If money collected = most money, then day = day off, else day = normal school
  • =IF(cell=MAX(range);"Day Off";"normal school") [in an Excel spreadsheet]
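The same three-level movement, plain language to maths/logic to syntax, looks like this if the target is a general-purpose language rather than a spreadsheet. A hypothetical Python rendering (the function and variable names are mine, for illustration):

```python
# Plain language: if a class collected the most money in the school,
#                 it gets the day off; otherwise it is a normal day.
# Maths/logic:    collected == max(all amounts)  =>  "Day Off"
# Syntax:         the same rule written in Python's particular notation.

def day_status(collected, all_amounts):
    return "Day Off" if collected == max(all_amounts) else "normal school"

amounts = [740, 1210, 980]
print(day_status(1210, amounts))  # → Day Off
print(day_status(980, amounts))   # → normal school
```

Only the final line of each version is application syntax; the first two stages are identical whichever tool the student ends up using, which is the point of separating them.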

It is tempting to see each of these levels (plain language, maths/logic, app syntax) as discrete strengths of semantic gravity, moving from plain language (strong semantic gravity) to maths/logic (weak semantic gravity) and then back to app syntax (strong semantic gravity). This would describe a wave much like the graph shown below. This is a useful way to conceive of the shifts in levels of abstraction while using a computer to solve a problem.

Over the years teaching spreadsheets, databases and coding, I have come to develop a routine of modelling how to go about using computers to solve problems which follows the three steps enumerated above. It is summarised as the ELS method:

  • State the problem and solution in plain English
  • Plug in any mathematical or Logical operators
  • Enter it using the particular Syntax of whatever application you are using

This helps students, I think, by giving them a process to follow and helps move up and down the semantic range, but my grade 8s and 9s still struggle to apply it.

Although the three-step process helps build in a movement up and down the semantic range, it is not enough. Each step represents a semantic range in its own right, for the novice at any rate. When stating a problem’s solution in plain language, one needs to hold in mind the contextual parameters of the problem and an ideational abstraction of the solution. When working through the mathematical and logical expression of the solution, one needs to jump continually back to the context of the problem and forth to the emerging formula. Translating this formula into the particular syntax of the application also requires rapid jumps back and forth along the spectrum between weak and strong semantic gravity. Although the curve above may well describe the overall movement of meaning in the task, it seems to me to be made up of rapid oscillations between two states, abstract and concrete, a kind of quantum wave, if you like, as the student superimposes an abstract solution on top of a concrete problem. I believe it is this which makes it particularly difficult for novice programmers and spreadsheet/database creators to navigate the coding of a solution. More experienced programmers handle these shifts with ease.

How and Why Questions help move up and down the semantic range

When using the ELS method in a whole class situation I model the mental process of thinking through the task very closely, drawing on student contributions. But getting students to work in pairs is also very necessary as it forces them to voice their mental processes and this helps strengthen and weaken semantic gravity. If you are explaining why you think the formula should be this rather than that, you are effectively making jumps up and down the semantic range because you are dealing with why questions, which tend to raise the level of abstraction, and with how questions which help concretise your solution. When you try something and it doesn’t work, having to discuss possible reasons with a peer helps do the same.

Bibliography

Cutts, Q., et al. (2012). The Abstraction Transition Taxonomy: Developing Desired Learning Outcomes through the Lens of Situated Cognition. In Proceedings of the Ninth Annual International Conference on International Computing Education Research, pp. 63–70. ACM. https://doi.org/10.1145/2361276.2361290.

Maton, Karl. (2014). A TALL order?: Legitimation Code Theory for academic language and learning. Journal of Academic Language and Learning. 8. 34-48.

 

 

Meaning Making in Computer Education

One of the difficulties in looking at the knowledge practices of teachers of middle and high school computing is the diverse nature of educational practices around the world. In some contexts the curriculum resembles Computer Science at a tertiary level, with an emphasis on computer programming and the theory of hardware, software and networking. In other contexts, however, the emphasis is on computing applications. In South Africa, for example, students can take Information Technology as a matriculation subject, in which programming is studied, or Computer Applications Technology, with an emphasis on Office applications. At middle school levels the emphasis is often on basic computer literacy. Coding and robotics are, however, often taught alongside basic computer literacy and information literacy.

Waite et al. (2019) have argued that Legitimation Code Theory (LCT), in particular the idea that effective knowledge-building practices involve the formation of semantic waves, provides a framework for assessing the effectiveness of practices in the teaching of computing by providing a common language for describing diverse practices. I have described Semantic Wave Theory before on this blog, but here is a brief summary.

Karl Maton (2014) has described semantic waves as the way teachers try to bridge the gap between high-stakes reading and high-stakes writing, where ideas are highly abstract and context-independent (Weak Semantic Gravity) and highly complex and condensed (Strong Semantic Density). In the classroom these highly abstract and complex ideas are introduced in the form of texts, and students are expected to reproduce them in their own words in the form of essays and examination answers. In order to do this, teachers need to help students by giving concepts greater context (Strong Semantic Gravity) and making them simpler (Weak Semantic Density). They do this by using examples, metaphors and personal experience. If you map the changes in semantic gravity and density over time, you can describe waves. The ability to make links between the abstract and the concrete, between theory and practice, between complex and simple ideas, is what makes for effective teaching and learning.

Waite et al. (2019) show how a semantic analysis of an unplugged computer programming task describes just such semantic waves and makes for a successful lesson plan. They also suggest that using semantic waves to analyse lesson plans, and actual lessons, is a way of assessing the effectiveness of lessons teaching computer programming of different kinds. Many teachers use online coding platforms, like Codecademy or Code Combat. In this article I would like to look at a semantic wave analysis of a Code Combat course on web development to see what it reveals about the platform’s strengths and weaknesses as a pedagogical tool. Code Combat is structured as a series of courses covering a computer science syllabus, teaching JavaScript or Python programming and some HTML and CSS. Each course is divided into a series of levels, and each level introduces key concepts such as loops and conditional statements, using quests and tasks performed by an avatar. Students enter code in a command-line interface and can run it to test for success. The platform provides hints and text-completion prompts to help scaffold activities.

Students generally enjoy the platform, and take pleasure in grappling with problems and succeeding at each task. I use it in my grade 8 & 9 computer skills classes. In this analysis I looked at the 13 levels that make up the Web Development 1 course, which introduces HTML tags and CSS properties. I looked at Semantic Gravity alone: SG– (Weak Semantic Gravity) representing highly abstract ideas and SG+ (Strong Semantic Gravity) representing highly concrete ideas. I used three degrees to indicate strength and weakness (SG – – – to SG + + +).

I used the following translation device for rough coding the level of semantic gravity, and looked at the instructions in each level. The purpose of a translation device is to help translate the theory into what it looks like in practice. What does Weak Semantic Gravity look like when using HTML, what does Strong Semantic Gravity look like?

SG – – –   Over-arching concepts: tags are used to mark up text
SG – –     Coding concepts: tags do different things, e.g. <h1> regulates the size of a heading
SG –       Properties of concepts: tags have properties, e.g. <img> has source, alignment, width
SG +       Examples of concepts: students must decide which tag to enter
SG + +     Examples of properties: students must edit a property, e.g. change <img src=”” align=”left”> to right alignment
SG + + +   Data entry: typing in text

The coding of the thirteen levels was done using only the text used in the platform. I did not look at the graphics. I would argue that the graphics display tends to scaffold all activities by strengthening semantic gravity and helping students visualise what they are coding. The semantic waves formed over time looked as follows:

What we can see is a non-continuous wave which loosely describes the movement between abstract and concrete. Each unit followed a pattern of introducing a particular concept and giving students a chance to practise enacting it. The next level would then introduce a new concept, and so on. In some levels students are able to partially exercise their developing understanding of the concepts by choosing which tags to use rather than merely enacting the one explained. This movement from weak to strong semantic gravity has been described as a down escalator, and is common in teaching practice. Teachers are generally good at explaining concepts so that students understand them. What is less common in classroom practice, and less common here, is the full movement of the wave, in which students take concrete examples, express the underlying concepts and display their own understanding effectively. In programming terms this would translate into being able to use concepts learned in novel situations to develop unique solutions: in other words, to move from a concrete problem to a conceptual enactment of it by designing an algorithm or writing code.

What the semantic wave analysis seems to indicate is that the units in this course do a good job of explaining the programming concepts, but not a good enough job of giving students a chance to explore and display their understanding in new contexts. As a teacher, I have to say that this is what struck me immediately. The platform could do some things better than I could: it allowed students to work at their own pace, gave instant feedback, and was certainly more engaging with its graphics and game-like interface. But it was not able to set more open-ended tasks, or give students a chance to explain their own understanding of the concepts. The course ends with a “design your own poster” exercise which potentially does this, but each level lacks a degree of this full movement through the semantic wave.

This weakness appears to be hard-coded in, and would require teachers using the platform to mediate as a way of creating fuller semantic waves. Given that students are working at their own pace, my own solution was to use mentors in every class. It was the job of the mentor, anyone who had already completed the course, to help peers who were struggling to complete levels by explaining what needed doing. The mentors at least were then consolidating their knowledge and understanding by explaining it to others, and mentees were benefiting from having the problem re-phrased or re-contextualized.

I would argue that semantic wave analyses like this one could help inform better instructional design decisions. It might appear as if I am being critical of Code Combat, but I believe that other platforms of this kind suffer the same weaknesses. This platform is, in fact, better than most at applying constructivist learning principles by asking students to design their own solutions, but more could clearly be done to create full semantic waves.

Bibliography

Maton, K. (2014). A TALL order? Legitimation Code Theory for academic language and learning. Journal of Academic Language and Learning, 8, 34-48.

Waite, J., Maton, K., Curzon, P., & Tuttiett, L. (2019). Unplugged computing and semantic waves: Analysing Crazy Characters. In Proceedings of the UKICER 2019 Conference (United Kingdom and Ireland Computing Education Research).


EduTech Africa 2018 – Moving Beyond the Technology to Make a Difference

Over the last decade or so the focus of the ed tech conferences I have attended has shifted steadily away from the technology itself and towards what we can do to transform education. In the early years it was as if ed tech enthusiasts were magpies, dazzled by every shiny new tool. Some of that sense of wonder still exists, of course, and it is healthy: we need to be alive to new possibilities as technology evolves. But over the years we have become more discriminating, learning which tools actually work in our classrooms and not to attempt too much at once. The focus shifted towards pedagogy, towards how to use the tools effectively, and behind this was always some thought about the significance of technology’s impact on education. Common refrains have been the development of 21st Century Skills, personalised learning, the movement from teacher-centred to student-centred approaches, problem-based learning, which technologies will disrupt education, and learning based on the burgeoning field of neuroscience. The overall sense has been one of promise: that technology can make teaching and learning more effective, and that education will become transformative, liberating humanity from a model grounded in the factory system and the mechanised reproduction of knowledge and skills.


This year’s conference was no different in content, although the technologies have changed somewhat. The focus has shifted towards Artificial Intelligence, robotics and coding, especially how to involve women in STEM and how to infuse computational thinking across the curriculum. However, this is the first time the sense I had was not one of advocacy but of militancy. Speakers from the world of work were united and adamant in their condemnation of schooling itself, stridently asserting a clear preference for extra-curricular learning and the futility of academic qualifications. Employers, we were told, prefer people who can solve problems. If any learning is required it can be delivered just in time, at the point of need, online via MOOCs. Tertiary qualifications should be modular and stackable, acquired over time as they are needed to solve real-world problems. Educators endorsed this stance, stressing personalised learning, the use of Artificial Intelligence, and even real-time feedback from brain activity. The sense was one of an urgent need for a curriculum based on problem solving rather than subject disciplines: if you need some Maths to solve a problem you can get it online; you don’t need to study Maths divorced from real-world imperatives.


The very idea of tertiary institutions is clearly under massive assault, and it cannot be long before they come for secondary schools as well. What scares me about this is not that I don’t agree that learning should be problem-based at some level, or that degree programmes should not be using MOOCs and blended models to achieve greater modularity and be more student-driven. What scares me is what we lose by doing that. My fears are based on two premises.


Firstly, I believe that knowledge should be pursued for knowledge’s sake, not only for the needs of the world of work. Of course our education should prepare us for employment or entrepreneurship; to argue that it shouldn’t is folly. But knowledge has its own trajectory and logic. Mathematics, for example, is a body of knowledge bounded by rules and procedures, a coherent system that cannot simply be broken up into bite-sized chunks. Can one quickly study calculus, without first studying basic algebra, just because you need calculus to solve a problem? Historical knowledge is not just a matter of quickly reading up on Ancient Sumeria on Wikipedia; it is founded on a system of evidentiary inquiry within a narrative mode of explanation. I worry that just-in-time knowledge will lack a solid enough base. If we erode the autonomy of the universities and do away with academic research, what happens to knowledge? It will become shallow and facile.


Secondly, I believe that the discovery model of learning is deeply flawed. Of course, left to our own devices and following our curiosity, we can discover much; it is a fundamental learning principle. But it is not very efficient. There is no earthly reason why teaching should be ditched. Being told something by someone else is as fundamental a learning principle as discovering it for yourself: it is an effect of socialised learning, for we learn from each other. Teaching is an ancient and noble profession, and there seems no reason to abandon it now. The scholar’s dilemma is that you are unlikely to discover anything unless you know it is there, and this requires guides and mentors. The world we live in is complex and vast, and we need a working knowledge of a great deal of it. Without extensive teaching, it is difficult to see how we could acquire the knowledge we need.


I would argue that we need a broad-based liberal education, focusing on critical thinking and problem solving, which gives us a grounding in Mathematics, the Sciences, the Arts and the Humanities. After that stage, a first degree, say, the best approach could well be just-in-time content delivered online.


Just because technology can disrupt education doesn’t mean it should. Teachers have been very conservative in their adoption of new technologies, and I think this is a good thing. Education and knowledge are simply too important to change willy-nilly. We need to be certain that we are not destroying our evolutionary advantage, our ability to think, simply because we can.


EduTech Africa 2018 – Day 2 of Just-in-time Learning



On the second day of the conference the focus seemed to shift from what schools should be doing to the nature of learning itself. Dr Maria Calderon took us on a whistle-stop tour of what neuroscience has to tell us about learning. Key to understanding this is the surprising role played by emotion in mediating learning experiences: if the amygdala is too excited, learning is blocked. Ian Russell then stressed the importance of changing the way learning happens in schools so that it reflects how the world now works and students are better prepared for the world of work. Learning needs to be flexible and delivered just in time. Employers are interested in your skills, not your qualifications; the days of students earning a degree and then entering the world of work are gone. Mark Lester amplified this idea by stressing how tertiary learning is increasingly blended and modular. Life-long learning is the new norm.

Dr Neelam Parmar presented us with a model for weaving together technology and pedagogy. Choices around technology and pedagogy are driven by decisions around curriculum and finding a match between schools and the world of work. She left us with an image of the accelerated use of AI in schools: robots in China that monitor student attention and nudge them awake when they fall asleep.

It is in many ways an image which encapsulates the future and its possibilities. Technology can deliver more personalised learning and seamless tracking of educational achievement, much of it online. Students of all ages can learn what they need to learn just in time, building their own curriculum around the task or challenge at hand. And yet there is a danger: that we will lose the ability to discern what is important to learn. The dilemma of self-directed study is that you can’t know what you need to learn until you have learned it.

There is a strong movement away from traditional school disciplines towards problem-based learning, and I believe this is a mistake. Knowledge is coherent because it is bounded by a field. If it becomes nothing more than fodder for solving problems we lose something very valuable: the pursuit of knowledge for knowledge’s sake. Something happens when you do history for its own sake rather than just to prepare for a career in politics, or maths for its own sake rather than just for engineering. Without that, you lose a certain perspective; you lose knowledge itself. Knowledge is not just something you gain in order to live. It is something almost tangible that enriches our lives, because it throws up surprising perspectives and unleashes powerful forces of change.

The conference this year carried a strong sense that the teacher is increasingly irrelevant, and I’m not convinced that wide-awake robots are the best solution. I think the teacher will be with us for quite a while yet!



EduTech Africa 2018 – Day 1 Just-in-time Teaching


The first day of the conference started with an impassioned plea from Sameer Rawjee to make schools places where possible futures can be prototyped, rather than relying on the reproduction of the present. He envisioned a future in which the role of technology is to make our lives easier and liberate humanity, and argued that schools should be places where this vision of a future in which humanity has a place and can thrive is fostered and explored. This set the tone for a conference where coding, robotics and Artificial Intelligence were foregrounded, and where the role of technology was to transform pedagogical practice, empowering flexible, lifelong learning focused on developing skills, attitudes and dispositions in tune with a changing world.

Chris Rodgers spoke next on robotics and the importance of makerspaces in fostering learning and problem solving as a basis for integrating and reorganising the curriculum. When solving problems, students arrive at a diversity of solutions, drawing on what they need to know when they need to know it. Teaching becomes a series of just-in-time interventions, reflecting the way the world works.

In the breakaway sessions this theme was amplified. The role of the teacher has to change. Learning needs to become more flexible, and with this change comes the need for relevant knowledge on demand: a move from a push model to a pull model, if you like. The classroom of 2030 will have to reflect this or we will have failed our students.
