No matter what kind of measure you are building, you no doubt have in mind some intended use(s) of the scores from that measure.
Gathering data and demonstrating that the scores have technical adequacy for your intended uses (both score reliability and score validity) is always a good idea. This may well involve at least some aspects of item analysis.
If your items/questions/stimuli are nominal, as in the example you give, you likely won't be computing things such as item-total correlations in the traditional sense. But for determining temporal stability, you would like to see that the answers respondents give on one occasion agree closely with those they would give, say, three weeks later. That could be quantified as a measure of association (e.g., Cramér's V), as percent agreement, or via Cohen's kappa, computed for each item.
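To make that concrete, here is a minimal sketch of per-item test-retest agreement, assuming the two administrations live in hypothetical DataFrames `time1` and `time2` (one column per item, one row per respondent, aligned by respondent):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

def item_stability(time1: pd.DataFrame, time2: pd.DataFrame) -> pd.DataFrame:
    """Percent agreement, Cohen's kappa, and Cramér's V for each item."""
    rows = []
    for item in time1.columns:
        a, b = time1[item], time2[item]
        agreement = np.mean(a == b)          # raw percent agreement
        kappa = cohen_kappa_score(a, b)      # chance-corrected agreement
        table = pd.crosstab(a, b)            # contingency table of time1 vs. time2 responses
        chi2 = chi2_contingency(table)[0]
        n = table.to_numpy().sum()
        # Cramér's V from the chi-square statistic; degenerate items
        # (only one observed category) would need special handling.
        v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
        rows.append({"item": item, "pct_agree": agreement,
                     "kappa": kappa, "cramers_v": v})
    return pd.DataFrame(rows)
```

For a stable measure you would hope to see kappa and agreement values close to 1 on each item.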
I'm not sure what you mean by "item analysis." If you mean looking at the items to make sure they are appropriate for whatever purpose you have, then yes. If you mean Item Response Theory more formally, that is traditionally done with binary observed items (and assumed dimensions for the latent constructs), but there are also versions for variables with more than two categories (as well as ordinal ones). More information about what you want to do with the scores from these items would be needed to advise further.
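As one example of such a polytomous version, here is a toy sketch of Bock's nominal response model, an IRT model for unordered categories. The slope and intercept values are made up purely for illustration (the first category's parameters are fixed at zero for identification):

```python
import numpy as np

def nominal_response_probs(theta, slopes, intercepts):
    """P(X = k | theta) for one item: softmax over a_k * theta + c_k."""
    theta = np.atleast_1d(theta)[:, None]                     # shape (n, 1)
    z = np.asarray(slopes) * theta + np.asarray(intercepts)   # shape (n, K)
    z -= z.max(axis=1, keepdims=True)                         # numerical stability
    expz = np.exp(z)
    return expz / expz.sum(axis=1, keepdims=True)             # rows sum to 1

# Example: category probabilities for a 3-category item at several
# latent-trait values (all parameter values hypothetical).
probs = nominal_response_probs(theta=[-1.0, 0.0, 1.0],
                               slopes=[0.0, 0.8, 1.6],
                               intercepts=[0.0, 0.2, -0.5])
```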
It seems to me that, no matter the data type, you need to understand how your items are behaving, and item analysis will give you that. It need not involve IRT, which carries a number of assumptions that may not be appropriate for your model.
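A minimal classical (non-IRT) item-analysis sketch, assuming a hypothetical DataFrame `scores` with one row per respondent and one numeric column per scored item (e.g., 0/1 for incorrect/correct):

```python
import pandas as pd

def classical_item_analysis(scores: pd.DataFrame) -> pd.DataFrame:
    total = scores.sum(axis=1)
    out = []
    for item in scores.columns:
        rest = total - scores[item]                  # total excluding the item itself
        out.append({
            "item": item,
            "difficulty": scores[item].mean(),       # mean score (p-value for 0/1 items)
            "item_rest_r": scores[item].corr(rest),  # corrected item-total correlation
        })
    return pd.DataFrame(out)
```

Items with very extreme difficulty or low (or negative) corrected item-total correlations are the usual candidates for revision.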
Item analysis is a mandatory procedure in tool construction; it is like laying the foundation before building the superstructure. No tool will be valid without this step.