The answer is, "it depends." If you have two dichotomous variables and compute the phi coefficient between them, you'll get exactly the same value as the Pearson correlation applied to the same data. A good many statistical procedures were developed before low-cost computers and statistical software were widely available, as computationally simpler versions of traditional parametric statistics. In linear models, categorical variables can be used if they are recast into sets of dummy variates (or, in the case of dichotomous variables, used as is).
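A quick numerical check of the phi/Pearson equivalence, using a small made-up 0/1 data set (the values are hypothetical, chosen only for illustration):

```python
import numpy as np

# Two dichotomous variables coded 0/1 (hypothetical data)
x = np.array([0, 0, 1, 1, 1, 0, 1, 0, 1, 1])
y = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 0])

# Pearson correlation applied directly to the 0/1 data
pearson = np.corrcoef(x, y)[0, 1]

# Phi coefficient computed from the 2x2 contingency table
n11 = np.sum((x == 1) & (y == 1))
n10 = np.sum((x == 1) & (y == 0))
n01 = np.sum((x == 0) & (y == 1))
n00 = np.sum((x == 0) & (y == 0))
phi = (n11 * n00 - n10 * n01) / np.sqrt(
    (n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00)
)

print(pearson, phi)  # the two values are identical
```

Both routes give the same number, which is the point: phi is just Pearson's r restricted to dichotomous data.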
I agree with David Eugene Booth's advice that folks could give a more definitive reply if you'd describe the variables involved and the intended type of analysis in more detail.
For each right-hand-side categorical variable, I would create a dummy for each category except one, which serves as the reference category.
Then you can regress the left-hand-side variable on those dummies using OLS. You will get a t-test for each category (and you can compute an F-test for the joint significance of all the categories together) and an R2.
Example:
You have a variable X with three categories X1, X2 and X3. Create two dummies, D2 (1 if X = X2, 0 otherwise) and D3 (1 if X = X3, 0 otherwise), and regress y on D2 and D3. You will get a t-test for each of D2 and D3. It is also easy to test the null hypothesis that the coefficients of D2 and D3 are both equal to 0 at the same time. Any standard package can do that, and it will give you the R2, too.
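A minimal sketch of the whole procedure in numpy/scipy, on simulated data (the sample size, category effects, and seed are all hypothetical choices for illustration); any standard regression package would report the same quantities directly:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: X has three categories X1/X2/X3; y shifts by category
n = 300
X = rng.choice(["X1", "X2", "X3"], size=n)
y = 1.0 + 0.5 * (X == "X2") + 1.5 * (X == "X3") + rng.normal(0.0, 1.0, n)

# Dummies: D2 = 1 if X = X2, D3 = 1 if X = X3; X1 is the reference
D2 = (X == "X2").astype(float)
D3 = (X == "X3").astype(float)
Z = np.column_stack([np.ones(n), D2, D3])  # design matrix with intercept

# OLS: beta = (Z'Z)^{-1} Z'y
beta = np.linalg.solve(Z.T @ Z, Z.T @ y)
resid = y - Z @ beta
df = n - Z.shape[1]
s2 = resid @ resid / df  # residual variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(Z.T @ Z)))

# t-tests for the intercept and each dummy coefficient
t = beta / se
p_t = 2 * stats.t.sf(np.abs(t), df)

# R2, and the joint F-test that both dummy coefficients are zero
tss = np.sum((y - y.mean()) ** 2)
r2 = 1 - (resid @ resid) / tss
F = (r2 / 2) / ((1 - r2) / df)
p_F = stats.f.sf(F, 2, df)

print(beta)  # intercept ~ mean of X1; slopes ~ mean differences from X1
print(p_t[1:], p_F, r2)
```

The intercept estimates the mean of y in the reference category X1, and each dummy coefficient estimates the difference between that category's mean and X1's mean, which is why the t-tests and the joint F-test answer the "is y related to X?" question.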