Nick Shea has a lovely paper arguing that 'Reward Prediction Errors are Metarepresentational' (that's the title). I would be interested to find other papers relating philosophical analyses of representation, especially teleosemantics, to temporal difference learning, or to reinforcement learning more generally. There seems to be surprisingly little work on this.
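For readers who know the philosophy but not the machine learning: the reward prediction error at issue is the TD error of temporal difference learning, the quantity that drives updates to a value estimate. A minimal sketch (the states, rewards, and parameters below are illustrative, not drawn from Shea's paper):

```python
# TD(0) value prediction: the "reward prediction error" is the TD error delta.
# All values and parameters here are illustrative.

def td_error(reward, value_next, value_current, gamma=0.9):
    """delta = r + gamma * V(s') - V(s)."""
    return reward + gamma * value_next - value_current

V = {"s0": 0.0, "s1": 0.5}   # current value estimates for two states
alpha = 0.1                   # learning rate

# One transition s0 -> s1 with reward 1.0:
delta = td_error(reward=1.0, value_next=V["s1"], value_current=V["s0"])
V["s0"] += alpha * delta      # the estimate moves in the direction of the error
```

The error is "about" the mismatch between predicted and received value, which is what invites the metarepresentational reading.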