This article first appeared in Data Science Briefings, the DataMiningApps newsletter. Subscribe now for free if you want to be the first to receive our feature articles, or follow us @DataMiningApps. Do you also wish to contribute to Data Science Briefings? Shoot us an e-mail over at email@example.com and let’s get in touch!
In a now classic 1964 experiment, Harvard professor Robert Rosenthal had several classrooms of students fill in a standard IQ test – but presented it as a 'special test from Harvard' and explained to teachers that the test could predict which students would soon experience a dramatic growth in their IQ. He then secretly chose 1/5th of the students in every class completely at random, and told the educators that these specific students were about to see an improvement in their IQ. He discovered that the expectations a teacher held did noticeably affect the students: the students who were expected to see an increase in their IQ did indeed end up showing faster IQ growth than the other students.
This clever experiment showed that educators' expectations can indeed positively affect students' performance. Educators even behaved differently towards the students of whom they (unknowingly wrongly) held raised expectations: they nodded more, smiled more and gave them more time to answer questions. This phenomenon is now called the Pygmalion Effect, and it has been confirmed in other studies: educators' expectations can predict student achievement several years out.
If even something as simple as expectations can influence learning outcomes this strongly, then holding the right expectations of the students who need it most can significantly improve their chances of success. Besides expectations, there are several other shortcomings one can observe in the traditional learning process at educational institutions. First these shortcomings are discussed, then learning analytics is introduced and explained. Next the opportunities and possibilities of learning analytics are discussed, and finally some of the challenges in implementing a learning analytics solution are touched upon.
A first shortcoming is that, most of the time, figuring out 'how students learn' is a black-box process from the educator's point of view, as they can't observe students while they are learning. How can an educator know whether students are engaged or making progress when they can't even see whether students are learning at all? Another shortcoming is that when a student has trouble understanding a course, it depends on her own initiative to go to the educator's office hours or seek external help. As a consequence, self-selection takes place among those seeking help. At fixed intervals, and eventually at the end of a course, some evaluation takes place in the form of an essay or an oral or written exam. But these traditional forms of evaluation fall short: they look back on what students have managed to learn during the course, instead of giving them feedback so they can further develop and better master the material. Sometimes an educator wants to change her teaching methods or alter course material. But what and how to change is a decision often based on gut feeling and implicit feedback, such as students showing lowered attendance towards the end of a course. And as every student has a different learning style, a change that works well for one group of students might have an adverse effect on the engagement of another.
Learning analytics paves the way in solving some of these shortcomings. Learning analytics is defined as the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs. The goal of learning analytics for students is to change their behavior so they can learn better, and for institutions to inform change in their systems to also enable better learning. So the key question of learning analytics becomes: How can opportunities for online learning be optimized?
Learning analytics is implemented by combining all available data on students: on the one hand from Management Information Systems (MIS), which are already in place for administrative and reporting purposes, and on the other hand from newer Virtual Learning Environments (VLEs), which bring learning materials to students via the web. This data is then used to build analytic models that map and track the behavior of every student and inform several different decisions.
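As a minimal sketch of this combination step, records from an MIS and a VLE might be joined on a shared student identifier. All field names and values below are hypothetical assumptions for illustration, not a reference to any real system:

```python
# Hypothetical records: the MIS holds administrative data, the VLE holds
# online learning activity. All field names here are illustrative assumptions.
mis_records = [
    {"student_id": 1, "name": "Ada", "enrolled_courses": 5},
    {"student_id": 2, "name": "Ben", "enrolled_courses": 4},
]
vle_records = [
    {"student_id": 1, "logins_last_month": 22, "avg_quiz_score": 0.81},
    {"student_id": 2, "logins_last_month": 3, "avg_quiz_score": 0.44},
]

def combine_student_data(mis, vle):
    """Join MIS and VLE records on student_id into one profile per student."""
    by_id = {r["student_id"]: dict(r) for r in mis}
    for r in vle:
        by_id.setdefault(r["student_id"], {}).update(r)
    return list(by_id.values())

profiles = combine_student_data(mis_records, vle_records)
```

In practice this join would run over database tables rather than in-memory lists, but the principle is the same: one unified profile per student is the input to any analytic model.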
But what can learning analytics be used for? Analytic models can predict which students are at risk of failing a course or dropping out. The knowledge they uncover can be visualized in the form of learning dashboards, a tool for educators. By using these dashboards effectively, educators or staff can intervene themselves instead of counting on students to come ask for help when they feel the need. Instead of leaving the initiative to the student, analytic models enable institutions to be proactive. By using data from the VLE specifically, learning analytics opens up the otherwise black-box learning process: checking how students' learning goes between evaluations, and following up on students' progression towards overall course targets. When wanting to change course materials, educators can augment their gut feeling with insights from analytics to make an informed decision. Finally, analyzing how VLEs are used should make it possible to build in motivational software components that draw students in and get them more engaged.
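To make the at-risk idea concrete, here is a deliberately simple, hypothetical scoring rule that could feed an educator dashboard. The thresholds and feature names are assumptions for illustration only; a real system would train a statistical model on historical student outcomes instead of hard-coding cut-offs:

```python
def at_risk(profile, min_logins=5, min_score=0.5):
    """Flag a student as at risk when both VLE engagement and quiz
    performance fall below (assumed, illustrative) thresholds.
    A production model would be trained on historical outcomes instead."""
    return (profile["logins_last_month"] < min_logins
            and profile["avg_quiz_score"] < min_score)

# Hypothetical student profiles, as combined from MIS and VLE data.
students = [
    {"name": "Ada", "logins_last_month": 22, "avg_quiz_score": 0.81},
    {"name": "Ben", "logins_last_month": 3, "avg_quiz_score": 0.44},
]
flagged = [s["name"] for s in students if at_risk(s)]
```

The value of the dashboard lies in surfacing such flags early enough for an educator to intervene, well before the end-of-course evaluation.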
The opportunities provided by learning analytics sound good in theory. But as in any analytics project, some challenges need to be faced first. Institutional readiness is an important factor in the success of implementing learning analytics: an institution should have a certain digital maturity before diving in. Most of the time the data is plentiful, but the people needed to transform that data into useful information are scarce. If learning analytics is not seen by the institution as a regular, ongoing responsibility, but is performed on a one-off basis, then adoption scale and the long-term sustainability of the learning analytics services become crucial issues. The importance of visualization in learning analytics cannot be overstated, for two reasons. The first is that educators and students need to be able to work with the insights provided by the analytical models regardless of their data literacy, whether it's a history or an engineering course. The second is that knowledge from learning analytics is intended to change students' behavior. Changing behavior is extremely hard, so these visualizations need to be easily understandable and inspire change. This goal of behavioral change also requires delivering a transparent process to the end user: a lot of confusion arises when students are told they are on a path to failure without knowing why they are at risk, or when educators don't know why a certain recommendation to change their course material pops up.
Last but not least, as with any analytics application domain, there should be buy-in from and a strong connection with domain experts when introducing a learning analytics solution. In this case, the domain experts are pedagogues and people who develop educational programs. They need to be involved in introducing a learning analytics project from start to finish.
Overcoming these challenges unlocks the value of learning analytics. The flipped classroom model can then be embraced, where everything that can be done online is done online, and the classroom is reserved for those things for which face-to-face interaction is crucial. The Pygmalion Effect can be turned to the educators' advantage, as expectations become something educators can deliberately leverage to help their students succeed. Educators' interaction with students can become more personalized, offering guidance tailored to each student's learning style.