28 June 2016

Dennett sees intentional ascription as a convenient pragmatic tool for making sense of our environment, but takes such interpretation to be merely a less precise and detailed version of any other "stance" (as I understand him). The physical, design, and intentional stances are all on a par, differing only in accuracy and computational cost. A system such as a rock behaves simply enough that the loss of information under an intentional interpretation is not worth it, and a physical interpretation is warranted instead; the reverse holds for computers, dogs, and humans.

Is there really nothing else to intentional interpretation (putting other issues aside) than prediction? It seems to me that there are some crucial differences:

(1) Often more than one interpretation will have the same predictive consequences.

(2) Often we don't interpret to predict.

(3) Often intentional interpretation opens up more content than other interpretations.

For instance, two predictively equivalent interpretations may result in cognitive access to vastly different scenarios. I may rationalize a system's behavior under ascriptions that make it either rational or irrational: under one interpretation we would say that the system is competent, worth relying on, and so forth; under the other, it would be incompetent and unreliable. The difference need not be a matter of predictive success, yet it makes a huge difference. How can we account for that without going further than Dennett's deflationary account?

We can, for instance, interpret a sports player as faking poor performance - perhaps (s)he is trying to honour an agreement to let in a lot of goals in exchange for a payment from a mafioso. We could say that the player is incompetent, or that (s)he is competent but trying to lose. Here the predictions would be (roughly) the same, but the scenarios are different and incompatible: one is true at the expense of the other.

When it comes to industrial robots, if we only want to understand the system's intrinsic states, then it seems we lose nothing under a mechanical interpretation. But we do lose something under such an interpretation of social or psychological behavior. The robot does X under circumstances Y, and we need nothing more to handle the robot. But that the player does X under Y may mean that (s)he is incompetent, or disloyal - and this has very different consequences for appropriate interaction with the system.

What are the functions of intentional interpretation of systems? Can Dennett's account really explain it all, or is there more to intentionality?
