I've only come across models that were trained and evaluated on one of these data types. I don't see why it would be a problem to do both, as there are many similar features.
I assume that when you say models, you mean AI models, not the modality (scanner) models.
While the findings (the clinical interpretation) might be similar, there are significant differences between the modalities and their output.
X-ray images of the chest are A/P and lateral views (if laterals are even being obtained), while for this diagnosis the CT images are predominantly axial.
There is also the issue of where in the chain the AI is going to be run. If you are running it at the modality, then you could have differences in vendor (between modalities) and software levels that prevent a single solution (either in the same institution/practice or across a network of institutions). If you plan to use the AI at the PACS/RIS level, there is another set of inherent issues. If you are going to run the AI off-line, you face a different set of potential issues again.
The simplest solution (KISS) is to have packages designed for each specific modality and then adjust the programs to handle the data from the modality/vendor in use.
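A rough sketch of that idea in Python, just to show the routing; the Study fields and handler names here are made up for illustration, not part of any actual product:

```python
# Route each study to a modality/vendor-specific preprocessing "package"
# before handing it to whichever model serves that modality.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Study:
    modality: str    # e.g. "CR", "DX", "CT"
    vendor: str      # e.g. scanner vendor, in case handling differs per vendor
    pixel_data: object


def preprocess_xray(study: Study) -> object:
    # X-ray-specific handling (window/level, A/P vs lateral views, ...)
    return study.pixel_data


def preprocess_ct(study: Study) -> object:
    # CT-specific handling (HU rescaling, axial slice selection, ...)
    return study.pixel_data


HANDLERS: Dict[str, Callable[[Study], object]] = {
    "CR": preprocess_xray,
    "DX": preprocess_xray,
    "CT": preprocess_ct,
}


def prepare_for_inference(study: Study) -> object:
    handler = HANDLERS.get(study.modality)
    if handler is None:
        raise ValueError(f"No package for modality {study.modality}")
    return handler(study)
```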
I mean AI models. What you wrote sounds like deployment, whereas I'm still at the evaluation stage. I managed to train a couple of models on a combination of datasets, but I can't say they generalize that well yet.
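One way to put a number on "doesn't generalize that well" is to score the held-out data separately per originating dataset. A minimal sketch, assuming a scikit-learn-style binary classifier; `model`, `X`, `y`, and `source` are placeholders:

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def per_source_auc(model, X: np.ndarray, y: np.ndarray, source: np.ndarray) -> dict:
    """AUC on the held-out set, broken down by originating dataset.

    Assumes each source contains both classes, otherwise AUC is undefined.
    """
    scores = model.predict_proba(X)[:, 1]   # positive-class probabilities
    return {
        s: roc_auc_score(y[source == s], scores[source == s])
        for s in np.unique(source)
    }
```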
Simplistically, the two modalities rely on different image generation approaches, and the resulting images are very different: resolution, dimensions, etc. all differ, despite both being derived from X-rays. The underlying network could be similar, I guess, if that is what you are asking.
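For what it's worth, here is a minimal PyTorch sketch of that "similar underlying network" idea: each modality gets its own adapter that maps its native resolution/dimensionality onto one common tensor shape, and the backbone is shared. The input shapes and the torchvision resnet18 are only illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

TARGET = (224, 224)  # common input size for the shared backbone


def adapt_xray(img: torch.Tensor) -> torch.Tensor:
    # img: (H, W) single projection image, e.g. a few thousand pixels per side
    x = img[None, None].float()                                        # (1, 1, H, W)
    x = F.interpolate(x, size=TARGET, mode="bilinear", align_corners=False)
    return x.repeat(1, 3, 1, 1)                                        # 3 channels for the backbone


def adapt_ct(volume: torch.Tensor) -> torch.Tensor:
    # volume: (D, H, W) stack of axial slices
    mip = volume.float().max(dim=0).values                             # crude projection along the slice axis
    x = mip[None, None]
    x = F.interpolate(x, size=TARGET, mode="bilinear", align_corners=False)
    return x.repeat(1, 3, 1, 1)


backbone = resnet18(num_classes=2)   # one shared network for both modalities
backbone.eval()

xray_logits = backbone(adapt_xray(torch.rand(2000, 2500)))
ct_logits = backbone(adapt_ct(torch.rand(300, 512, 512)))
```

Whether a crude 2D reduction of the CT volume is acceptable is exactly the kind of modality difference being discussed; a 3D backbone or per-slice processing may be more appropriate, this only shows that the network itself can be shared once the inputs are adapted.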