Establishing construct validity with a small sample size is challenging but possible. You might seek expert validation to scrutinise the tool's content and applicability. Employ triangulation to cross-validate findings from different sources. Longitudinal tracking can demonstrate temporal consistency. Limited statistical methods, such as confirmatory factor analysis, may be viable but should be interpreted cautiously.
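If it helps, here is a minimal Python sketch of one common way to quantify that expert-validation step, the content validity index (I-CVI and S-CVI/Ave). The expert ratings below are entirely hypothetical.

```python
import numpy as np

# Hypothetical relevance ratings from 5 experts on a 4-point scale
# (1 = not relevant ... 4 = highly relevant), one row per item.
ratings = np.array([
    [4, 3, 4, 4, 3],   # item 1
    [4, 4, 4, 3, 4],   # item 2
    [2, 3, 4, 2, 3],   # item 3
])

# Item-level content validity index (I-CVI): proportion of experts
# rating the item 3 or 4.
i_cvi = (ratings >= 3).mean(axis=1)

# Scale-level CVI (S-CVI/Ave): mean of the item-level indices.
s_cvi_ave = i_cvi.mean()

for i, v in enumerate(i_cvi, 1):
    print(f"Item {i}: I-CVI = {v:.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```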
Mixed methods combine approaches such as the qualitative and the quantitative. As far as I know, concerns about sample-based validity apply mainly to the quantitative approach; in qualitative research you do not have to worry about it in the same way. You may look at my book on qualitative research.
First, decide what you mean by construct validity, paying particular attention to Messick. See Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology, 5, 1-25.
Secondly, you might dispense with construct validity altogether. See Borsboom, D., Cramer, A. O., Kievit, R. A., Scholten, A. Z., & Franić, S. (2009). The end of construct validity. In The Concept of Validity: Revisions, New Directions and Applications. IAP Information Age Publishing.
Third, check the methodologies and methods for similar instruments.
Pragmatics:
A) Correlate with other validated/reliable instruments (a sketch follows this list).
B) Determine initial sensitivity and specificity so they can be checked against later outcomes.
C) Plan a future study with other concurrent instruments.
D) Conduct cognitive interviews.
E) Seek expert review.
F) Though you cannot run EFA/CFA, you can run other statistical tests.
G) Establish limitations.
H) Examine limitations and measurement error.
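For items A and B, here is a minimal Python sketch using made-up scores and an arbitrary provisional cut-off: it correlates the new tool with an established instrument and computes sensitivity/specificity against a reference classification.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores: the new instrument and an established, validated one
# administered to the same (small) sample.
new_scores = np.array([12, 18, 9, 22, 15, 11, 20, 14, 17, 10])
established = np.array([30, 41, 25, 48, 36, 27, 44, 33, 39, 26])

r, p = pearsonr(new_scores, established)
print(f"Correlation with established instrument: r = {r:.2f}, p = {p:.3f}")

# Sensitivity/specificity against a reference classification (e.g., a
# clinical diagnosis), using a provisional cut-off on the new instrument.
reference_positive = np.array([0, 1, 0, 1, 0, 0, 1, 1, 1, 0], dtype=bool)
test_positive = new_scores >= 15          # provisional cut-off, purely illustrative

tp = np.sum(test_positive & reference_positive)    # true positives
fn = np.sum(~test_positive & reference_positive)   # false negatives
tn = np.sum(~test_positive & ~reference_positive)  # true negatives
fp = np.sum(test_positive & ~reference_positive)   # false positives

print(f"Sensitivity = {tp / (tp + fn):.2f}, Specificity = {tn / (tn + fp):.2f}")
```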
At the end of the day, does the test offer valid, reliable discrimination in an efficient manner? You might also crowdsource and collaborate to acquire an adequate sample in the future.
If you can run correlations with measures from other established tools, that is essentially what you need for convergent and discriminant validity, as well as other aspects of construct validity: high correlations with measures of the same construct support convergence, and low correlations with measures of theoretically distinct constructs support discrimination.
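A small simulated sketch of that check in Python (the data and the "same construct"/"different construct" measures are purely illustrative; the Fisher-z confidence intervals help keep the small-sample uncertainty visible):

```python
import numpy as np
from scipy.stats import pearsonr, norm

def corr_ci(x, y, level=0.95):
    """Pearson correlation with a Fisher-z confidence interval."""
    r, p = pearsonr(x, y)
    z = np.arctanh(r)                       # Fisher z transform of r
    se = 1.0 / np.sqrt(len(x) - 3)          # standard error of Fisher z
    half = norm.ppf(0.5 + level / 2) * se
    lo, hi = np.tanh(z - half), np.tanh(z + half)
    return r, p, (lo, hi)

rng = np.random.default_rng(1)
n = 15                                      # illustrative small sample
new_tool = rng.normal(size=n)
same_construct = new_tool + rng.normal(scale=0.5, size=n)   # should correlate highly
different_construct = rng.normal(size=n)                    # should correlate weakly

for label, other in [("convergent (same construct)", same_construct),
                     ("discriminant (different construct)", different_construct)]:
    r, p, (lo, hi) = corr_ci(new_tool, other)
    print(f"{label}: r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], p = {p:.3f}")
```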
Given your small sample size, you might consider using a p-value cut-off of .10 instead of .05.
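One way to see why the threshold matters at small n: under the standard Fisher-z approximation, the power to detect even a moderate correlation is low, and relaxing alpha trades some of that loss against a higher false-positive risk. A rough sketch (the assumed true correlation of .5 and n of 20 are placeholders, not recommendations):

```python
import numpy as np
from scipy.stats import norm

def corr_power(r, n, alpha, two_sided=True):
    """Approximate power to detect a population correlation r with n pairs,
    using the Fisher-z large-sample approximation."""
    z_r = np.arctanh(r)                     # Fisher z of the assumed true correlation
    se = 1.0 / np.sqrt(n - 3)               # standard error of Fisher z
    z_crit = norm.ppf(1 - alpha / 2) if two_sided else norm.ppf(1 - alpha)
    # Probability that the observed Fisher z falls in the rejection region
    power = 1 - norm.cdf(z_crit - z_r / se)
    if two_sided:
        power += norm.cdf(-z_crit - z_r / se)
    return power

for alpha in (0.05, 0.10):
    print(f"alpha = {alpha}: power ~ {corr_power(r=0.5, n=20, alpha=alpha):.2f}")
```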