The answer will depend on information you do not provide, such as your sample size and the number of variables or degrees of freedom. Assuming your sample size is at least 300: CFI = .95 means your variables are highly correlated, so your model does better than one that assumes they are all uncorrelated (this is what the CFI tells you); RMSEA = .06 means your model fits well relative to its df (that's what the RMSEA tells you); SRMR = .14 means your model does not capture the data well, that it fits poorly (that's what the SRMR tells you). In other words, you can find a much better model. As Dr. Shevlin says, the SRMR is crudely the average of your residual correlations. If the average is .14, it means that some of them are VERY large (.3, .4?).
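To make the "average of the residual correlations" idea concrete, here is a small Python sketch. The residual matrix is invented purely for illustration; with real output you would plug in the residuals your SEM program reports, and the exact SRMR formula also counts the diagonal and differs slightly across programs:

import numpy as np

# Invented residual correlations (observed minus model-implied) for 4 variables.
resid = np.array([
    [0.00, 0.02, 0.05, 0.25],
    [0.02, 0.00, 0.04, 0.22],
    [0.05, 0.04, 0.00, 0.03],
    [0.25, 0.22, 0.03, 0.00],
])

# Unique off-diagonal residuals (lower triangle).
lower = resid[np.tril_indices_from(resid, k=-1)]

# SRMR is, roughly, the root mean square of these standardized residuals.
print(round(float(np.sqrt(np.mean(lower ** 2))), 2))   # ~0.14
print(round(float(np.abs(lower).max()), 2))            # largest single residual: 0.25

Even with most residuals near zero, a couple of residuals in the .2-.4 range are enough to push the value to .14.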
With an RMSEA = .06 and SRMR = .14, my guess is that you are modeling a large number of variables and the df are high. RMSEA will be low in that case (e.g., Savalei, 2012), indicating that your model is parsimonious, but a high SRMR in this case means that your model fails to capture VERY important associations among some of the variables. In summary, your model is too parsimonious. Given the size of the SRMR relative to the RMSEA, it seems to me that there is an easy-to-locate improvement you have not considered.
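The df effect is easy to see from the formula. A minimal sketch in Python, with made-up chi-square values chosen only to illustrate the point (some programs use N rather than N - 1 in the denominator):

from math import sqrt

def rmsea(chi2, df, n):
    # One common formula: sqrt( max(chi2 - df, 0) / (df * (n - 1)) )
    return sqrt(max(chi2 - df, 0) / (df * (n - 1)))

# The same "excess misfit" (chi2 - df = 120) with N = 300:
print(round(rmsea(chi2=150, df=30, n=300), 3))   # spread over 30 df  -> ~0.116, looks poor
print(round(rmsea(chi2=620, df=500, n=300), 3))  # spread over 500 df -> ~0.028, looks excellent

So the same amount of misfit that would sink a small model can leave the RMSEA looking fine when it is spread over many degrees of freedom.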
What you observe cannot happen when df is small (which generally means a small number of variables as well): in that case the RMSEA must be large, and the SRMR tends to be small (unless your model is really bad, in which case both will be large).
"The SRMR is an absolute measure of fit and is defined as the standardized difference between the observed correlation and the predicted correlation. It is a positively biased measure and that bias is greater for small N and for low df studies." (Baron & Kenny)
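For reference, the formula behind that definition is usually written as follows (details such as whether the diagonal terms are counted differ slightly across programs):

\[
\mathrm{SRMR} = \sqrt{\frac{\sum_{i \le j}\left(\dfrac{s_{ij} - \hat{\sigma}_{ij}}{\sqrt{s_{ii}\,s_{jj}}}\right)^{2}}{p(p+1)/2}}
\]

where \(s_{ij}\) are the observed covariances, \(\hat{\sigma}_{ij}\) the model-implied ones, and \(p\) is the number of observed variables.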
Also, the TLI and CFI (and other such indices) incorporate the degrees of freedom, which means simpler, more parsimonious models tend to show higher TLI and CFI; the SRMR does not, so there is no parsimony benefit with the SRMR.
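To see the contrast, the standard formulas are (M = your model, B = the baseline/independence model):

\[
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\,0)}{\max(\chi^2_B - df_B,\;\chi^2_M - df_M,\;0)},
\qquad
\mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1}
\]

Both adjust the chi-squares by their df, so a model that achieves its fit with fewer free parameters (larger \(df_M\)) is rewarded; the SRMR formula above contains no df adjustment at all.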
In your case, you do not have a measurement issue (that is, you do not need to examine whether there should be alternative factors or different groupings of the items per se). However, the results suggest that the observed correlations among the factors are not high, even though the items group onto the proposed factors. For the measurement model itself, this may not be a serious problem per se. For the structural models, you may need to specify the models appropriately (e.g., correlating some measurement errors, depending on the characteristics of your measures; see the sketch below).
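Here is a minimal sketch of what freeing one error covariance looks like in Python with the semopy package, which uses lavaan-style model syntax. The factor structure, variable names, simulated data, and the particular x1 ~~ x2 term are all invented placeholders, not a recommendation for your data:

import numpy as np
import pandas as pd
import semopy

# Simulated data with a simple two-factor structure, only so the example runs.
rng = np.random.default_rng(1)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)
data = pd.DataFrame({
    "x1": 0.8 * f1 + rng.normal(scale=0.6, size=n),
    "x2": 0.7 * f1 + rng.normal(scale=0.7, size=n),
    "x3": 0.7 * f1 + rng.normal(scale=0.7, size=n),
    "x4": 0.8 * f2 + rng.normal(scale=0.6, size=n),
    "x5": 0.7 * f2 + rng.normal(scale=0.7, size=n),
    "x6": 0.7 * f2 + rng.normal(scale=0.7, size=n),
})

# Two-factor measurement model, plus one freed error covariance (x1 ~~ x2).
desc = """
F1 =~ x1 + x2 + x3
F2 =~ x4 + x5 + x6
x1 ~~ x2
"""
model = semopy.Model(desc)
model.fit(data)
print(semopy.calc_stats(model).T)   # chi-square, CFI, TLI, RMSEA, etc.

Only free such covariances when there is a substantive reason (shared wording, same method, same measurement occasion), not just to chase fit.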
The SRMR is the average of the residuals between the observed and model-implied covariance (or correlation) matrices. So if the SRMR is large and the other indices are OK, then maybe there are one or two very large residuals that indicate misspecification in a particular part of the model. I suggest that you look for the largest residuals and then look at the part of the model where those variables are involved; maybe there will be some very low factor loadings, or error covariances that need to be added.
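If your software gives you the residual matrix but no sorted listing, a short Python sketch like this can rank the residuals (the variable names and the residual matrix below are placeholders; substitute the standardized residuals your program reports):

import numpy as np

var_names = ["x1", "x2", "x3", "x4"]   # placeholder variable names
resid = np.array([                     # placeholder residual matrix
    [0.00, 0.02, 0.05, 0.25],
    [0.02, 0.00, 0.04, 0.22],
    [0.05, 0.04, 0.00, 0.03],
    [0.25, 0.22, 0.03, 0.00],
])

# Rank the unique off-diagonal residuals by absolute size and print the top few.
i, j = np.tril_indices_from(resid, k=-1)
order = np.argsort(-np.abs(resid[i, j]))
for k in order[:3]:
    print(f"{var_names[i[k]]} - {var_names[j[k]]}: {resid[i[k], j[k]]:+.2f}")

The pairs that come out on top are where to look for missing loadings, cross-loadings, or error covariances.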