If it's a truly ipsative measure (where you have multiple scales, each assessed by forced-choice items that pit one scale against another, as in some measures of values), then you are facing some statistical challenges. Basic item analysis statistics, such as the percentage of respondents choosing a given option and its correlation with the remaining items, should be little affected, but I'm not sure about the item-whole correlations. Some better statistician than I needs to weigh in on that.
However, I've noticed that many people (including experts) seem to be confused over this issue, so *maybe* I can be of some help. The simple fact that you are using a forced-choice response format does NOT mean that your test is ipsative. In many cases, we're looking at a single scale (e.g., social interest or locus of control of reinforcement). In such cases there is nothing special to worry about: use the standard item-analytic techniques you would have employed if it were a yes/no, true/false, or Likert-type format.
The problem with ipsative multiscale inventories (the Edwards Personal Preference Schedule is the classic example) is that the scales are not independent of one another. Each response favoring one scale constrains the range of possible scores on other scales. This doesn't prevent you from running correlations, but it does raise doubts as to the meaning of those correlations. More important, it prevents you from running multivariate procedures that involve all of the scales - literally renders them impossible. (I think some tests, such as the Minnesota Importance Questionnaire, build in "dummy" scales to fix that.)
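The score dependence can be made concrete with a small simulation (a sketch with made-up numbers, not tied to any particular inventory): because each forced-choice item awards its point to exactly one scale, every respondent's scale scores sum to the same constant, so the covariance matrix of the scales is singular, which is exactly what defeats multivariate procedures that must invert it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_items, n_scales = 200, 30, 5

# Each forced-choice item awards its point to exactly one scale.
scores = np.zeros((n_people, n_scales))
for _ in range(n_items):
    winner = rng.integers(0, n_scales, size=n_people)
    scores[np.arange(n_people), winner] += 1

# Every respondent's scale scores sum to the number of items...
assert np.all(scores.sum(axis=1) == n_items)

# ...so the scales satisfy a perfect linear constraint and their
# covariance matrix is rank-deficient (singular).
cov = np.cov(scores, rowvar=False)
rank = np.linalg.matrix_rank(cov, tol=1e-8)
print(rank)  # n_scales - 1, not n_scales
```

Any procedure that needs the inverse of that covariance matrix (e.g., multiple regression or MANOVA using all the scales at once) will fail, which is why some inventories add a "dummy" scale to absorb the constraint.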
Thank you so much for your valuable inputs. The scale has multiple dimensions, and each response categorizes respondents into groups, e.g., introvert or extrovert.
I will read about the Edwards Personal Preference Schedule.
I had some input from some experts to do a tetrachoric correlation, but I was lost because the data can't be arranged in a 2×2 contingency table. I recently got another suggestion to compute a bivariate correlation, which I have yet to work on.
Start by doing the Kolmogorov-Smirnov test for all items, to see whether you should apply parametric or non-parametric tests. Then do an exploratory factor analysis (with Promax or Varimax rotation, depending on whether the data are parametric or not). First do it without forcing the solution. If you obtain the expected factors, everything is fine; otherwise you can try to force the solution. The reliability analysis gives you Cronbach's alpha and Cronbach's alpha if an item is deleted, as well as the inter-item correlations if an item is deleted. If everything is positive, do the confirmatory factor analysis. You may first separate the items by the expected factors; better results are obtained that way, but, in my opinion, it isn't the right thing to do.
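For what it's worth, the normality screen suggested above can also be run outside SPSS; here is a sketch using scipy (the exponential data are made up just to show a clearly non-normal case). One caveat: estimating the mean and SD from the same sample makes the plain KS p-value conservative; the Lilliefors variant corrects for that.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
item_scores = rng.exponential(scale=1.0, size=1000)  # stand-in for real item data

# One-sample Kolmogorov-Smirnov test against a normal distribution
# fitted with the sample's own mean and standard deviation.
stat, p = stats.kstest(item_scores, "norm",
                       args=(item_scores.mean(), item_scores.std(ddof=1)))
print(f"KS statistic = {stat:.3f}, p = {p:.4g}")
if p < 0.05:
    print("non-normal: lean toward non-parametric methods and Promax rotation")
```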
If the "ipsative" scales are all dichotomous (such as introverted-extraverted) you should not have any major problems. The issue arises when all the scales are dependent on one another. My advice is to treat each dichotomy as a single scale rather than as two. Then follow the usual procedures: nothing extra-fancy. For each item you might calculate:
(1) The percentage of respondents answering in each direction
(2) The correlation of the score on the item (say, +1 for the extraverted option) with total score on the scale.
Ideally, (2) would be done after removing the item in question. Doing this one by one is very tedious, though, so some folks don't bother. If you are working in SPSS or a similar program, however, these statistics are readily available when running a reliability analysis (such as Cronbach's alpha).
Again: the main point is that you don't really have two separate scales for extraversion and introversion, even if you are going to use the results to classify people into one crude category or another. You have one overall measure of how extraverted (vs. introverted) they are.
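The two item statistics above, plus Cronbach's alpha, can also be computed outside SPSS. A minimal sketch, assuming the hypothetical 0/1 coding (1 = the extraverted option) and made-up demo data:

```python
import numpy as np

def item_analysis(responses):
    """responses: respondents x items matrix; 1 = extraverted option, 0 = introverted."""
    n, k = responses.shape
    pct = responses.mean(axis=0)  # (1) proportion endorsing the extraverted option
    total = responses.sum(axis=1)
    # (2) corrected item-total correlation: the item is removed from the total
    rit = np.array([np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
                    for j in range(k)])
    # Cronbach's alpha for the single bipolar scale
    alpha = k / (k - 1) * (1 - responses.var(axis=0, ddof=1).sum()
                           / total.var(ddof=1))
    return pct, rit, alpha

# hypothetical demo data: one latent trait drives all 10 items
rng = np.random.default_rng(1)
trait = rng.normal(size=200)
data = (trait[:, None] + rng.normal(size=(200, 10)) > 0).astype(int)
pct, rit, alpha = item_analysis(data)
print(pct.round(2), rit.round(2), round(alpha, 2))
```

Subtracting the item from the total before correlating is what SPSS reports as the "Corrected Item-Total Correlation" in its reliability output, so this avoids the tedious one-by-one deletion mentioned above.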
I understand your recommendation. And, as you say, something simple should be done.
I use SPSS and AMOS. As it is a new scale, I think it is best to use SPSS: load the responses for all the items and carry out exploratory factor analysis (EFA) to check whether the items are distributed as expected. If so, we can do confirmatory factor analysis (CFA) with AMOS. From there we can define which items belong to each dimension and calculate the internal reliability, discriminant validity, concurrent validity, and so on.