What different methods do universities use to help students articulate and evidence individual achievement of attributes? I am looking for methods that provide evidence of individual output, not generic input statements, please.
In what area? I am aware of protocols used in mathematics to get at students' self-assessment of their skills, and there is some research in physics on the same area. But if you want it in literature or social science, those protocols might not translate.
Of course, self-assessment in general is a well-established topic in educational science (theory as well as practice). Thus you will find a host of literature on this topic, general as well as specific, if you have a quick look via Google Scholar.
The more specific your needs, the more you will have to develop yourself, based on the general approach of your choice, e.g. the classical 5-point (Likert) scaling method. Sure, if you are lucky, you will find on Google Scholar or related online resources exactly the kind of test that you need, which you may then have to adapt to local circumstances. Beware, however, that modifying a published test may invalidate it in your current context.
One of the books that I would recommend starting with is the well-known reader by Roberts (2006) on self-, peer and group assessment in e-learning; the basic principles and techniques won't be very different for traditional off-line learning.
If this all sounds too much like do-it-yourself R&D, I might perhaps offer to co-develop with you a customized self-assessment scheme using Excel, with a little bit of Excel-VBA to implement my own standardized assessment formulae. I guess it would only require slightly adapting a self-assessment scheme I recently developed for a short course in scientific-technical writing. The principles behind this approach are easy to understand, and the Excel sheets take all the dirty work off your hands ;-)
There are many advanced online courses that provide smart self-assessment tools for students. I like the Aunt Minnie radiology courses, http://www.auntminniecourses.co.uk/, and Touch Surgery, https://www.touchsurgery.com/.
Could you kindly write some more about the self-assessment formula in scientific-technical writing you have been developing? Are you developing the assessment tool in the natural sciences or the humanities?
For students to know the quality of their achievements and attributes, teachers have to play a very important part. Only teachers who, at the initial stage, carefully observe each student individually can trace his development and progress, his curriculum, his behaviour, and his day-to-day performance and study habits. Observing such areas may certainly help teachers assess individual development and progress, which in turn may show the student the path of his progressive line of study.
I have been working for 12 years in the domain of software development as well as business computer science. The scientific-technical writing course, however, was part of the first-semester courses for freshmen in the field of mechatronics. The students didn't know anything about simple report writing, let alone the writing of longer essays, literature studies, research reports or bachelor theses. Thus their task was to study a short guideline about report writing, to attend a 4-hour lecture about it, and then to write, in teams of 2-3 students, a (fictitious) report long enough to show the application of all the important guidelines, especially the handling of report structure as well as referencing.
In order to judge how well the teams did, I made up a list of about 20 formal criteria covering all the guidelines. Each report was evaluated using those 20 criteria (see attachment). The result was a team score indicating the quality of the report on an ordinal scale from 0 to 1.
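To give a rough idea of such a team-score calculation, here is a minimal sketch in Python (my actual implementation is in Excel-VBA, and its exact weighting is not reproduced here), assuming each of the roughly 20 criteria is rated on a five-point scale and then normalized to the interval from 0 to 1:

```python
# Illustrative sketch only: a team score as the normalized mean of
# per-criterion ratings. Assumes ~20 criteria rated 1..5; the actual
# Excel/VBA weighting scheme may differ.

def team_score(ratings, scale_min=1, scale_max=5):
    """Map per-criterion ratings (e.g. twenty 1..5 values)
    onto a single score in [0, 1]."""
    if not ratings:
        raise ValueError("at least one criterion rating is required")
    span = scale_max - scale_min
    normalized = [(r - scale_min) / span for r in ratings]
    return sum(normalized) / len(normalized)

# Example: a report rated on five of the ~20 formal criteria
print(team_score([4, 3, 5, 2, 4]))  # -> 0.65
```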
Because I needed a separate score for each student, not just for his or her team, I concocted two questionnaires, both with the same list of 20 formal quality criteria, but with two different questions to be answered (on a five-point scale) by each student individually. One question aimed at ascertaining each student's productivity (something like "did you work on this aspect of the report, and if so, how much?"). The other question aimed at ascertaining how much influence each student effectively had on the end product of the team (something like "how much of the work that you really did got accepted by the whole team and found its way into the end product?").
I used the scores on those two questionnaires to derive an individual score, starting from the team score and adjusting it for each team member; it works like a sort of bonus-malus factor. The form of this formula is too complex to write out here, but you will find all the details, if needed, in the conference papers and other material I've uploaded on ResearchGate. For your interest, though, I have appended the criteria listing and the calculation of the team score; the calculation of each student's score on each of the two questionnaires runs in a similar way, and the result is always a score between 0 and 1. Thus each student's end score results from a special way of merging three scores, one of which is my own judgment (as the assessor) and the other two are pure self-assessments. All of it is done on a five-point scale, as usual in applied research.
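The real merging formula is, as said, only in the papers, but a simplified stand-in may illustrate the general shape: the two self-assessment scores (productivity and influence), each normalized to the range 0 to 1, yield a bonus-malus factor that adjusts the assessor's team score per student. The adjustment rule below is a placeholder for illustration, not the published formula:

```python
# Simplified stand-in for the bonus-malus merging described above;
# the published formula is more complex. All three inputs lie in [0, 1]:
#   team:         the assessor's team score
#   productivity: self-assessed amount of work done
#   influence:    self-assessed share of one's work accepted into the end product

def individual_score(team, productivity, influence):
    """Adjust the team score per student with a bonus-malus factor
    derived from the two self-assessments (placeholder rule)."""
    self_assessment = (productivity + influence) / 2
    # A factor > 1 rewards above-average contribution, < 1 penalizes it;
    # 0.5 is taken as the neutral midpoint of the self-assessment range.
    bonus_malus = 1 + (self_assessment - 0.5)
    return max(0.0, min(1.0, team * bonus_malus))

# Example: team scored 0.65; a very active and a less active member
print(individual_score(0.65, 0.9, 0.8))  # -> ~0.88 (bonus)
print(individual_score(0.65, 0.3, 0.4))  # -> ~0.55 (malus)
```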
If you need more explanations we should communicate by e-mail, not here.
MALS+ is a useful tool for assessing learners' perceptions of themselves as learners.
'Whilst MALS+ is straightforward to administer and score, the subsequent analysis of data obtained and interpretation of its possible significance is a much more complex process. This revised manual therefore provides more extensive information about ways of analysing MALS+ scores together with examples of possible research and clinical uses. This edition also includes two questionnaires (one for Post 16) and an acetate scoring sheet.'
The formula is interesting! I'm at the starting point of research on student writing and group tutoring of bachelor theses in nursing science. I find your construction of the question areas very relevant. Thank you for the material; I will read it through and keep in touch as my project proceeds towards launch.
@Ani: As an afterthought I have to say that the 20 criteria/questions on the questionnaire are still very coarse: they only point to aspects or locations in a report which are relevant, important and frequently misunderstood. They don't yet tell the student what precisely went wrong in his or her case (if anything went wrong at all).
Thus I used the actual results of the first group of about 15 students to collect a much longer list of about 80 very specific violations of the technical writing guidelines and recommendations. That's indeed a long list!
I gave this list to the other two groups of the same size. And behold: suddenly the results were much better, even after adjusting (i.e., lowering) my assessment tolerance, since these groups had a definite advantage over the first group, which had served as the source of the longer list.
What does that tell us? It reconfirms a well-known truth about guidelines: they are less useful during application than many people think or hope, because (most) guidelines only tell the newcomer in a general way what to do; they don't usually list all the ways in which the application of a specific guideline can go wrong, for whatever reason.
Furthermore, in the case of rather simple guidelines (from my point of view), it tells me that many students have difficulty "decoding" the meaning inherent in the heavily "encoded" guidelines: the way back from abstract to concrete is usually much more difficult than the original forward way from concrete to abstract. In the latter case the expert leaves out certain "irrelevant" details; in the former case the novice has to intuit them again, which is exactly what we want him or her to learn in the first place.