Thank you for your insightful question. Indeed, literal and cultural norms governing the text play a pivotal role in translation; in fact, they represent the micro- and macro-levels of translation respectively. However, the models introduced for handling these often different but complementary poles of the translation process differ considerably owing to the nature of the equivalence involved. For instance, while Catford's formal/textual model lends itself well to the literal features of a text, which are universalist in nature, Koller's relational model is more appropriate for the socio-cultural, macro-level aspects of the text under scrutiny. In a nutshell, denotative equivalence fits the requirements defined by literal norms, whereas connotative and pragmatic equivalence are the proper tools for dealing with cultural norms. On this basis, you need different models for your purpose, unless the model is a hybrid that covers both the micro-level (literal) and macro-level (cultural) aspects.
When you say 'literary and cultural norms', I would suggest that the boundary between these is quite fuzzy. What you categorise as literary and cultural norms will vary depending on the literary/cultural context (literature being, after all, a form of culture).
I can suggest two ways of approaching this. The first is to look at translation shifts (see Catford's work on this, for instance, but also Mona Baker's work on equivalence in In Other Words) to see how the text has been adjusted in translation. This could be evidence of literary and/or cultural norms at work, adjusting the text further than it might otherwise have been (following Toury's tertium comparationis, though this is a problematic concept). Another method, more challenging but perhaps more rewarding in the longer term, would be to start by examining the literary and cultural norms in the target culture and then seek to map them back onto whatever translated text you are looking at: are they represented in the text? (identification) And how/why? (analysis)