Hi, I'm fairly new to this subject, but I'd like to share what I know.
I can see that you've rolled several questions into one; I'll try to help as much as I can.
As you probably know, an iris recognition system is usually divided into several stages (there may not be general agreement on exactly what those stages are):
i) image acquisition
ii) image preprocessing (adjustment, quality control methods, etc.) and iris segmentation (actually locating the iris region)
iii) iris normalization
iv) iris encoding
v) template matching
Regarding step i), you might initially go with a simple face detection method (maybe the Haar cascades in OpenCV) or something like that.
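To give you an idea, here is a minimal sketch of step i) using OpenCV's stock Haar cascades (first a face, then the eyes inside it). The cascade file names are the ones shipped with the opencv-python package; the rest is just one way of wiring it together, not a definitive implementation:

```python
import cv2

def detect_eyes(image_path):
    """Find a face first, then look for eyes inside the face region."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)

    eyes = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face_roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi):
            # coordinates relative to the full image
            eyes.append((x + ex, y + ey, ew, eh))
    return eyes
```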
In step ii) there are a lot of iris segmentation methods: Daugman's integro-differential operator, Hough transform based methods, active contour based methods, Tieniu Tan et al.'s method, etc.
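Just to give a feel for the Hough-based route (this is not Daugman's operator), a rough sketch could look like the one below; the radius ranges and the Canny/accumulator thresholds are guesses you would have to tune on your own images:

```python
import cv2
import numpy as np

def segment_iris(gray_eye):
    """Return (pupil, iris) circles as (x, y, r), or None if not found."""
    blurred = cv2.medianBlur(gray_eye, 5)

    # pupil: small, dark circle
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                             param1=100, param2=20, minRadius=20, maxRadius=60)
    # limbic (iris/sclera) boundary: a larger circle
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                            param1=100, param2=30, minRadius=60, maxRadius=140)

    if pupil is None or iris is None:
        return None
    # take the strongest candidate of each
    return np.round(pupil[0][0]).astype(int), np.round(iris[0][0]).astype(int)
```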
In step iii) the most famous approach is the rubber sheet model proposed by Daugman, but there are other normalization methods as well (I think Wildes proposed a different approach a while ago).
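A bare-bones version of the rubber sheet idea is sketched below: sample the annulus between the pupil circle and the iris circle on a polar grid so the iris becomes a fixed-size rectangular strip. It assumes roughly concentric circles and ignores eyelids, eyelashes and reflections, which a real system has to mask out:

```python
import numpy as np

def rubber_sheet(gray_eye, pupil, iris, radial_res=64, angular_res=256):
    """Unwrap the iris annulus into a radial_res x angular_res strip."""
    px, py, pr = pupil   # pupil centre and radius
    ix, iy, ir = iris    # iris centre and radius

    thetas = np.linspace(0, 2 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0, 1, radial_res)

    strip = np.zeros((radial_res, angular_res), dtype=gray_eye.dtype)
    for j, theta in enumerate(thetas):
        for i, r in enumerate(radii):
            # interpolate between the pupil boundary and the iris boundary
            x = (1 - r) * (px + pr * np.cos(theta)) + r * (ix + ir * np.cos(theta))
            y = (1 - r) * (py + pr * np.sin(theta)) + r * (iy + ir * np.sin(theta))
            x = int(round(np.clip(x, 0, gray_eye.shape[1] - 1)))
            y = int(round(np.clip(y, 0, gray_eye.shape[0] - 1)))
            strip[i, j] = gray_eye[y, x]
    return strip
```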
In step iv), the most relevant iris encoding methods (as far as I know) are the one proposed by Daugman (but it is patented), the one proposed by Libor Masek in 2003 (this one is free), the one proposed by Tieniu Tan's group (which has the best results and won the last NICE iris competition, if I remember correctly), and others.
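The common idea behind these encoders is to filter the normalized strip with a complex band-pass filter (2D Gabor in Daugman's case, 1D log-Gabor in Masek's) and keep only the quantized phase, 2 bits per sample. Here is a very simplified sketch using a single Gabor kernel; the sigma/wavelength values are arbitrary placeholders, not anyone's published parameters:

```python
import cv2
import numpy as np

def encode(strip):
    """Quantize Gabor phase into a boolean template (2 bits per pixel)."""
    strip = strip.astype(np.float32)

    # real and imaginary parts of a complex Gabor filter (psi shifted by pi/2)
    k_real = cv2.getGaborKernel((9, 9), sigma=2.0, theta=0, lambd=8.0,
                                gamma=1.0, psi=0)
    k_imag = cv2.getGaborKernel((9, 9), sigma=2.0, theta=0, lambd=8.0,
                                gamma=1.0, psi=np.pi / 2)

    real = cv2.filter2D(strip, cv2.CV_32F, k_real)
    imag = cv2.filter2D(strip, cv2.CV_32F, k_imag)

    # keep only the sign (phase quadrant) of each response
    return np.stack([real > 0, imag > 0], axis=-1)
```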
Step v) mostly depends on the encoding you chose in the previous step.
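If your template from step iv) is binary, matching usually boils down to a Hamming distance (in practice computed only over bits that are not masked out by eyelids/eyelashes, and over several rotation shifts to compensate for head tilt). A toy version without masks or shifts:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two boolean templates."""
    return np.count_nonzero(code_a != code_b) / code_a.size

# decide "same iris" if the distance falls below a threshold you pick
# from your own genuine/impostor score distributions
```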
Finally, you mention the subjects being on the move. That is another topic within the iris recognition field, like iris recognition under non-ideal conditions, etc. I think it would be appropriate to tackle that problem later, once you already have some sort of prototype of your system.