How to check construct validity for a self-assessment instrument?
Construct validity, including convergent and discriminant validity, is covered in the following YouTube videos, which use SPSS, AMOS, and SmartPLS. You can search for similar videos if you are using other software to evaluate construct validity.
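If you would rather compute the usual convergent-validity numbers yourself than read them off SmartPLS output, here is a minimal Python sketch of AVE and composite reliability from standardized loadings (the loading values are made up for illustration); discriminant validity is then often checked by comparing the square root of each construct's AVE against its correlations with the other constructs (Fornell-Larcker):

```python
import numpy as np

def ave_and_cr(loadings):
    """Average Variance Extracted and Composite Reliability from
    standardized factor loadings (assumes uncorrelated error terms)."""
    lam = np.asarray(loadings, dtype=float)
    ave = np.mean(lam ** 2)  # AVE: mean squared standardized loading
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + np.sum(1.0 - lam ** 2))
    return ave, cr

# Hypothetical standardized loadings for one construct
ave, cr = ave_and_cr([0.72, 0.81, 0.68, 0.75])
print(f"AVE = {ave:.3f}, CR = {cr:.3f}")  # convergent validity: AVE > .50, CR > .70
```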
I think Eduard has the most innovative suggestion: it lets the researcher decide, based on his/her knowledge of the student population, the overall theoretical framework and research design, the subject studied, etc., the best methodology for studying the construct validity of the self-assessments being used.
I quite agree with Eduard's and McCombs's suggestions on this issue. The researcher should decide based on his/her knowledge of the population under study, the theoretical framework, etc. The construct should agree with existing theories underlying the issue. In addition, the research design, i.e., quantitative or qualitative, will inform the approaches to adopt when assessing the construct validity of a self-assessment instrument.
To Eduard's and McCombs's contributions, I would add presenting to judges the instrument's table of specifications, which defines the dimensions and indicators to be observed, along with the instrument itself. The idea is to request expert judgment on the validity of the dimensions, the indicators, and their relationship to the instrument and its items.
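To make that expert-judgment step concrete, here is a minimal Python sketch of an item-level content validity index (I-CVI); the ratings matrix is invented and assumes a 4-point relevance scale:

```python
import numpy as np

# Hypothetical ratings: rows = items, columns = expert judges,
# values on a 4-point relevance scale (1 = not relevant ... 4 = highly relevant)
ratings = np.array([
    [4, 3, 4, 4, 3],
    [2, 3, 2, 3, 2],
    [4, 4, 3, 4, 4],
])

# I-CVI: proportion of judges rating the item relevant (3 or 4)
i_cvi = (ratings >= 3).mean(axis=1)
print("I-CVI per item:", i_cvi)    # items below about .78 are usually revised
print("S-CVI/Ave:", i_cvi.mean())  # scale-level average
```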
I've looked over what others have provided and will hopefully add to what you now have.
In the broadest sense, you can approach self-report scale development and validation in two ways: exploratory or confirmatory.
If the scale you are developing is based on well-established/grounded theory, then you're best to go with a confirmatory study using a Structural Equation Modelling (SEM) approach such as Confirmatory Factor Analysis (CFA). Basically, you create your scale items in reference to the theoretical construct you wish to measure. You would want to have your scale looked over by a panel of "experts" to check face/content validity, maybe run a pilot study, and then deploy your scale. You can analyse the data using AMOS or SAS; AMOS has a nice visual GUI, but SAS handles coding better (that's my opinion).
In CFA you set your model (theory) up in advance (a priori) and try to "fit" the data to your predefined model. Your data either fits or it doesn't; there is some room for debate on that, but not much, so the stakes are high if you come up with nothing. If your data fits, you have established evidence that your theoretical construct has quantifiable psychometric properties, and you are also providing evidence for the grounded theory. This is interesting work if you are passionate about scale design and Structural Equation Modelling.
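If you want a free alternative to AMOS or SAS for the CFA step, here is a minimal sketch in Python using the semopy package; the data file, item names, and one-factor model are all hypothetical, just to show the shape of the workflow:

```python
import pandas as pd
import semopy  # open-source SEM library (pip install semopy)

# Hypothetical data: one row per respondent, one column per scale item
data = pd.read_csv("responses.csv")  # columns: item1 ... item4

# The measurement model is specified in advance (a priori), before seeing the data
model_desc = """
SelfAssess =~ item1 + item2 + item3 + item4
"""

model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())           # parameter estimates (loadings, SEs, p-values)
print(semopy.calc_stats(model))  # fit indices: chi-square, CFI, TLI, RMSEA, ...
```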
If you are not starting from grounded theory, you should approach your scale design using Exploratory Factor Analysis (EFA), which is a lot easier than CFA, but you are not likely to establish any evidence for construct validity. If you find something worthwhile in your EFA study, then you would run further studies (at least one) to formulate your grounded theory, and then run a CFA.
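For the exploratory route, a minimal Python sketch using the factor_analyzer package (the file name and factor count are hypothetical; in practice you would choose the number of factors from a scree plot or parallel analysis):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

data = pd.read_csv("pilot_responses.csv")  # hypothetical item-level data

# First check the correlation matrix is factorable at all
chi2, p = calculate_bartlett_sphericity(data)
kmo_per_item, kmo_total = calculate_kmo(data)
print(f"Bartlett p = {p:.4f}, KMO = {kmo_total:.2f}")  # want p < .05 and KMO > .60

# Extract factors with an oblique rotation (psychological factors usually correlate)
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(data)
print(efa.loadings_)  # pattern matrix: which items load on which factor
```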
Just a word of caution before you start planning: you CAN'T run both CFA and EFA in the same study. To do so is not rigorous scholarship and is, in my opinion, unethical.
On my profile you can find the references to some of the scales I have worked on. Have a look at Sun's Educational Stress Scale for Adolescents. There are several articles that describe the scale's development from the start through multinational/multilingual normative studies. It is a good example of EFA and CFA for scale development on the same scale in different studies, starting from a less theoretically grounded construct.
You can also look at my Attitudes Towards Acculturative Behaviour Scale for an example of scale development from grounded theory using CFA. I have provided a link to a dissertation with this scale that you could use as a guide or example of what's involved, with plenty of detail.
I hope this helps; please also read what my colleagues above have provided, as it will give you the best insight into what is involved and how committed you will need to be. My golden rule is that if there is already a scale that measures what you want, use that first! There's no need to reinvent the wheel.
For me, as long as the construct is based on a considerable body of theory and is defined operationally, construct validity is already supported to a great extent. You can also have some experts or specialists read the instrument and compare your items against your definition to make sure they reflect it.
We used a self-assessment instrument to validate an objective competence test, but discovered that the correlation between the two measurement methods was essentially nonexistent. This is described in the literature (if you are interested, I can provide some references), and I think this is an important approach to validating self-assessment instruments.
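For anyone wanting to run this kind of cross-method check themselves, it comes down to correlating the two scores; a minimal Python sketch with hypothetical file and column names:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("scores.csv")  # hypothetical: one row per participant

# Convergent evidence: the two methods should correlate if they tap the same construct
r, p = stats.pearsonr(df["self_assessment"], df["objective_test"])
print(f"r = {r:.2f}, p = {p:.4f}")  # a near-zero r is a validity warning sign
```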
If the instrument has a conceptual framework that clearly defines the dimensions measured, one of the most widely used methods is factor analysis. I have used it with different self-evaluation instruments and succeeded in identifying some or all of the dimensions proposed for the instrument.
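One practical step before interpreting those dimensions is deciding how many factors to extract. A minimal numpy sketch of Horn's parallel analysis, which compares observed eigenvalues against a random-data baseline (the file name is hypothetical):

```python
import numpy as np
import pandas as pd

data = pd.read_csv("self_eval_responses.csv")  # hypothetical item-level data
n, p = data.shape

# Eigenvalues of the observed correlation matrix, largest first
obs_eig = np.sort(np.linalg.eigvalsh(data.corr().values))[::-1]

# Average eigenvalues from random normal data of the same shape
rng = np.random.default_rng(0)
rand_eig = np.zeros(p)
for _ in range(100):
    r = rng.standard_normal((n, p))
    rand_eig += np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
rand_eig /= 100

# Retain only factors whose observed eigenvalue beats the random baseline
print("Suggested number of factors:", int(np.sum(obs_eig > rand_eig)))
```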
I think Jason Dixon gave a very detailed response, in addition to the others, to your question, Muhammed. My only concern is whether you are designing an entirely new instrument or using/adapting an existing one. If it is an existing one, I think you can go with CFA. But if it is a new tool you're developing, then you can do both EFA and CFA. I share Jason's concern about doing EFA and CFA with the same sample, so if you have a large sample size, you can divide it into two sub-samples and use one for the EFA and the other for the CFA.
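A minimal sketch of that split-sample idea in Python, using scikit-learn's train_test_split just to randomize the halves (the file name is hypothetical):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("full_sample.csv")  # hypothetical item-level responses

# Random split so the EFA and CFA never see the same respondents
efa_half, cfa_half = train_test_split(data, test_size=0.5, random_state=42)
print(len(efa_half), len(cfa_half))

# 1) run EFA on efa_half to discover the factor structure
# 2) specify the CFA from that structure and fit it on cfa_half only
```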