We have multiple web-services (wiki, LMS, forums, etc.). They store a huge amount of information about students' attendance and group activities. Are there any papers that describe the correlation between grades and the information from these web-services?
The Department of Education did a meta-analysis of the literature on online learning in 2009. One of their most notable conclusions was that it was probably the EXTRA time spent by students in hybrid courses that lent itself to higher levels of achievement rather than the medium of delivery per se. The reference is:
Means B, Toyama Y, Murphy R, Bakia M, Jones K. "Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies" US Department of Education. Available from: http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED505824
Although I cannot suggest a paper describing the correlation between these two, the programmes I am involved in give specific weightage to whatever a student does on the course website, individually or as a group member. The reason behind it is our approach of comprehensive and continuous assessment for each student, covering whatever they do in terms of learning activities, contributions to the OER, attendance (virtual and face-to-face), assignments, or any team work.
To really experience how grades and e-learning attendance correlate, why don't you try seeing how the lectures work on https://www.coursera.org? You can also take some courses and do your research at the same time :)
This theme is exciting. Grading involves criteria, and it is important to know what determines the final grade of a student in a specific e-learning course. What does "attendance" mean in this context? Signing in to the LMS? Doing the activities? Thanks for sharing related papers.
I can confirm that time on task has a highly significant effect on grades. This is also true for e-learning attendance in my studies: more edits in wikis lead to higher grades. Of course, this is only a correlation, not causality!
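The edits-vs-grades relationship above can be checked with a plain Pearson correlation. A minimal sketch with purely synthetic numbers standing in for exported wiki logs (the data and effect sizes here are invented for illustration, not from any study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for real logs: wiki edit counts per student and
# final grades that partly depend on those counts, plus noise.
edits = rng.poisson(lam=20, size=50)
grades = 60 + 1.2 * edits + rng.normal(scale=8, size=50)

# Pearson correlation coefficient between edit counts and grades.
r = np.corrcoef(edits, grades)[0, 1]
print(f"r = {r:.2f}")
```

A positive `r` here says nothing about direction of causation: it is equally consistent with edits improving grades and with strong students simply editing more.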
In my experience (empirical observation and limited-range statistics on e- and hybrid courses), exposure to e-learning is somewhat correlated with better results. The eternal question is: how do we get the poor performers to be better students? Because the motivated and already high-performing students are usually the ones doing the e-stuff, rather than the ones who need the extra work.
I did one paper in 2005, but my (qualitative) experience is that measuring the contribution of "transfer of learning" from the e-activities and content, "W5H2 questions about learning situations", and the "ability to reflect in a context-sensitive (W5H2) environment" involves too many variables. Thus, a reliable pattern linking grade achievement and learning behaviour may be very hard to draw while bringing these variables into quantitative consideration. Different instances of the same activity logs behave differently, and no generalization is possible (our log data is inherently erroneous with respect to session time, time-online counts, etc., as against actual activity with the content). I have not published any article on this experimentation; I published only one success story, arguing for the importance of logs. In that work I found that MCQ test results are environment-independent (SMS, Moodle, and paper) in a course that combined f2f lectures with Moodle activities. But your question asks about coming to a decision about a grade, which depends on what types of tests are taken, etc. So there is a more fundamental assessment question beneath the technology question: assessment design is the big question.
While the point has been made by a number of respondents, it is so important that I will make it again. There are so many factors impacting grades that singling out one, just because you have a lot of data for it, seems a little superficial. Your web-services can give you time, key strokes, etc., but will be silent regarding the quality of the interactions. For example: were the students making a significant contribution to the activity, or merely asking a banal question so that they would be recorded as 'contributing'? Perhaps their contribution, while long and sophisticated, was in fact a 'red herring'. There was some work done in the seventies and eighties on face-to-face attendance and results. Most of it proved inconclusive, so I doubt that similar quests in the online environment would prove any more reliable. By the way, I think 'participation' or 'activity' might be a better term to use than 'attendance'.
Thank you all for your comments! This was my first question on ResearchGate, and I never imagined it would be this effective.
The case is that at my university we have established not only an LMS but also a distributed social environment. I know that some LMSs support wiki and forum-based activities, but in my case the LMS was installed long after the forum and wiki services, where we had already held some activities.
This year we tried to find a correlation with expert grades (not only course grades; we also graded students on their personal qualities). We found that there is correlation at different levels between these expert grades and the data extracted from the wiki, forums, and LMS. We extracted about 50 to 60 different measures and tried to build a simple linear regression between the data and the expert grades, and the correlation was about 80%.
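The regression step described above can be sketched in a few lines. This is a minimal illustration with synthetic data (the measure values, weights, and student count are invented; real inputs would be the counts exported from the wiki, forums, and LMS), fitting ordinary least squares and reporting the multiple correlation between predicted and actual expert grades:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for measures extracted from wiki/forum/LMS logs
# (e.g. edit counts, posts, logins). Real data would replace this matrix.
n_students, n_measures = 100, 5
X = rng.normal(size=(n_students, n_measures))

# Simulate expert grades that partly depend on the measures, plus noise.
true_w = np.array([0.8, 0.5, 0.3, 0.0, 0.2])
y = X @ true_w + rng.normal(scale=0.5, size=n_students)

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((n_students, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Multiple correlation R: correlation of fitted vs. actual grades,
# analogous to the "about 80%" figure reported for the real data.
y_hat = A @ coef
r = np.corrcoef(y, y_hat)[0, 1]
print(f"multiple correlation R = {r:.2f}")
```

With 50-60 measures and a modest number of students, an in-sample R of 80% can be optimistic, so holding out a test set (or cross-validating) before trusting the figure would be a sensible extra step.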
So I was interested in new experience from abroad on this topic, from different people. Thank you all!
Try the Interaction Analysis Indicators that I have built for my research. See my papers and if you can't find any of them, contact me. I think the most appropriate one for you is the one I presented in UM2007
My own experience with e-learning was when I enrolled in a fully online Masters degree program in 2004-2007. A lot of emphasis was placed on how often a student contributed to discussion topics and how tangible their contribution was, particularly when backed up by some literature or published work. However, this would be capped at a certain percentage (I think 40%), while the rest of the marks would come from the written term papers and the group activity leading to these papers. The criteria the instructors used to weigh one's contribution against the 40% were still not clear.
I think you can find a good correlation between participation/attendance and grades in most online activities for students in a hybrid or completely online course. However, the causality might run the other way around: high-achieving students are also the ones most involved in these different activities.
We have analyzed several hundred chat conversations from a hybrid learning situation and built classifiers to determine which features of each participant's utterances best predict his or her grade (assigned independently by tutors). For our conversations, participation has been the most important criterion, weighing more than the semantic content of the utterances, the argumentation acts, centrality in social networks, etc.
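The feature-ranking idea above can be illustrated with a tiny classifier. This is only a sketch under loose assumptions: the feature names mirror the ones mentioned in the post but the data is synthetic, and a plain logistic regression trained by gradient descent stands in for whatever classifiers the actual study used. Ranking features by absolute learned weight then gives a rough importance ordering:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-participant features from chat logs; names are
# illustrative, echoing the criteria mentioned in the discussion.
features = ["participation", "semantic_content", "argumentation", "centrality"]

n = 300
X = rng.normal(size=(n, 4))
# Simulate "good grade" labels dominated by participation.
logits = 2.0 * X[:, 0] + 0.6 * X[:, 1] + 0.4 * X[:, 2] + 0.2 * X[:, 3]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Plain logistic regression via gradient descent (no external ML library).
w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= lr * (X.T @ (p - y) / n)
    b -= lr * np.mean(p - y)

# Rank features by absolute weight: larger |w| means more influence
# on the predicted grade (valid here because features share one scale).
ranking = sorted(zip(features, np.abs(w)), key=lambda t: -t[1])
print("feature ranking:", [name for name, _ in ranking])
```

On features with comparable scales, the weight magnitudes are directly comparable; with raw counts of very different ranges, one would standardize the columns first.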