That's an interesting question. Sometimes we face a situation in which a one-factor solution doesn't fit the data well, and yet the best-fitting solution suggests the presence of a strong general factor. In practical terms, factor correlations higher than .70 could be considered a sign of unidimensionality. I believe it could be very useful to examine the direct effect of both specific factors and a general factor on your indicators (i.e., bifactor modeling) and to analyze how important such a general factor is (omegaH and ECV are useful here).
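To make omegaH and ECV concrete, here is a minimal sketch in Python (numpy). The six-indicator, two-specific-factor layout and all loading values are made-up illustrative assumptions, not results from any real scale; the formulas themselves are standard: omegaH is the squared sum of general-factor loadings over total score variance, and ECV is the general factor's share of all common variance.

```python
import numpy as np

# Hypothetical standardized bifactor loadings for 6 indicators:
# a general factor plus two specific factors (3 indicators each).
general = np.array([0.70, 0.65, 0.60, 0.55, 0.50, 0.45])
spec1   = np.array([0.40, 0.35, 0.30, 0.00, 0.00, 0.00])
spec2   = np.array([0.00, 0.00, 0.00, 0.45, 0.40, 0.35])

# Uniqueness = indicator variance not explained by any factor.
uniqueness = 1 - (general**2 + spec1**2 + spec2**2)
total_var = (general.sum()**2 + spec1.sum()**2 + spec2.sum()**2
             + uniqueness.sum())

# omegaH: share of total-score variance due to the general factor alone.
omega_h = general.sum()**2 / total_var

# ECV: general-factor variance as a share of all common variance.
common_var = (general**2).sum() + (spec1**2).sum() + (spec2**2).sum()
ecv = (general**2).sum() / common_var

print(round(omega_h, 3), round(ecv, 3))  # → 0.678 0.703
```

Rules of thumb vary, but high values of both (e.g., omegaH above roughly .80 together with a high ECV) are usually read as support for treating the scale as essentially unidimensional despite multidimensional structure.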
I would recommend running a Principal Component Analysis on your "factors", looking at the loadings on the first component, removing any "factors" with a loading of less than 0.4 and re-running if necessary, then saving the regression-method scores, as these will be more accurate than simply adding your "factors" together. In this way you do not need to worry about the question you have asked.
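As a rough sketch of this procedure, here is some Python (numpy only) on simulated data. The data-generating step is entirely illustrative: four columns share a common source and a fifth is mostly noise, so it should fail the 0.40 cut-off. The regression-method scoring formula (Z R⁻¹ λ) is the standard one; in practice you would use your statistics package's "save factor scores" option instead of coding this by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "factors": five columns, four driven by one common source,
# plus one very noisy column that should fail the .40 loading cut-off.
n = 500
common = rng.normal(size=n)
X = np.column_stack([common + rng.normal(scale=s, size=n)
                     for s in (0.5, 0.6, 0.7, 0.8, 5.0)])

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise columns
R = np.corrcoef(Z, rowvar=False)           # correlation matrix
eigval, eigvec = np.linalg.eigh(R)
i = np.argmax(eigval)                      # first (largest) component
loadings = eigvec[:, i] * np.sqrt(eigval[i])
loadings *= np.sign(loadings.sum())        # fix the arbitrary sign

keep = np.abs(loadings) >= 0.40            # drop weakly loading columns

# Regression-method component scores on the retained columns: Z R^-1 lambda.
Rk = R[np.ix_(keep, keep)]
scores = Z[:, keep] @ np.linalg.solve(Rk, loadings[keep])
```

The saved `scores` weight each retained column by how strongly it marks the component, which is why they tend to track the underlying dimension better than an unweighted sum.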
I have put the word "factor" in quotes because we normally start with variables and then summarise them into factors, but if you have already done this then your question is well posed.
Please see my study guide on scale reliability for more information.
I might have some slightly different perspectives, or additional suggestions. First, my hunch is that factor analysis rather than principal components analysis might be a better choice (but Peter, please feel free to respond to that suggestion; my sense is that PCA is a bit "old fashioned" and was used prior to computers being able to perform the more satisfactory process of factor analysis quickly).
Second, Abdullah, I suggest that you try both orthogonal (e.g., varimax) and oblique (e.g., direct oblimin) rotations of your data, just to see whether you get consistent results, or whether the results from one type of rotation seem to make more sense. I would also, of course, conduct a scree test at the start to see how many factors are likely to be in your data.
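A quick way to obtain the eigenvalues behind a scree plot, sketched in Python on simulated two-factor data (the item layout and noise levels are illustrative assumptions; for the actual varimax/oblimin rotations you would normally rely on a dedicated package such as R's psych or Python's factor_analyzer rather than code them yourself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated two-factor data: items 0-2 load on one factor, items 3-5 on another.
n = 400
f1, f2 = rng.normal(size=(2, n))
X = np.column_stack([f1 + rng.normal(scale=0.7, size=n) for _ in range(3)] +
                    [f2 + rng.normal(scale=0.7, size=n) for _ in range(3)])

R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
print(np.round(eigvals, 2))

# A scree plot of these eigenvalues shows a clear elbow after the second
# value; the Kaiser rule (eigenvalues > 1) points the same way here.
n_factors = int((eigvals > 1).sum())
```

Plotting `eigvals` against their rank gives the scree; you retain the factors before the elbow, then pass that number to whichever rotation you are comparing.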
Third, I would agree with Pedro that factors that correlate above .70 are likely tapping a single underlying dimension, particularly if they result from an oblique rotation. I would also agree with Peter that variables / items that load less than .40 should probably be removed.
Finally, if the correlations between your two factors are really as low as .02 or zero, it seems you really do have independent factors and should not add up all of the items to obtain a single total score.