If you don't mind, I would like to discuss the Freeman chain code some more. It is used for shape description (not extraction, as you mistakenly wrote), but I never understood why it was, and apparently still is, so popular.
OK, one can describe a shape (a thin contour of a shape) with it, but the encoding mechanism has a pronounced discontinuity (between directions 1 and 8).
In my opinion, a much better way to describe the same thin contour would be the following (imagine yourself driving along the contour):
Straight ahead is 0
45 degrees to the left is -1
90 degrees to the left is -2
(135 degrees to the left is -3)
45 degrees to the right is +1
90 degrees to the right is +2
(135 degrees to the right is +3)
This way of encoding has two major advantages over Freeman's. First, fewer bits are needed for the encoding (which was important at the time Freeman proposed his scheme). More importantly, this way of encoding is suitable for frequency analysis: it can easily be seen whether a shape is smooth or jagged, as the sketch below illustrates.
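To make this concrete, here is a minimal sketch of the conversion from an absolute Freeman code to the relative "turn" code described above. It assumes the common 0-7 direction numbering (counterclockwise, 0 = east) rather than the 1-8 variant, and follows the sign convention above (left negative, right positive); the function name is mine, purely for illustration.

```python
def freeman_to_turn_code(freeman):
    """Convert an absolute Freeman chain code (0..7, CCW, 0 = east) to the
    relative turn code: 0 straight, -1 = 45 deg left, +2 = 90 deg right, etc."""
    turns = []
    for prev, curr in zip(freeman, freeman[1:]):
        diff = (curr - prev) % 8      # counterclockwise change, 0..7
        if diff > 4:
            diff -= 8                 # wrap into the range -3..+4
        # a full 180-degree reversal (diff == 4) comes out as -4 here;
        # the scheme above stops at +/-3 and never needs reversals
        turns.append(-diff)           # left (CCW) becomes negative, right positive
    return turns

# Example: the boundary of a small square traced clockwise.
square = [0, 0, 6, 6, 4, 4, 2, 2]
print(freeman_to_turn_code(square))   # [0, 2, 0, 2, 0, 2, 0]
```

On such a turn sequence the smoothness claim is easy to check: long runs of zeros mean straight segments, many consecutive non-zero values mean a jagged contour, and the sequence can be fed directly into a Fourier analysis.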
@Lambert Zijp: "OK, one can describe a shape (a thin contour of a shape) with it, but the encoding mechanism has a pronounced discontinuity (between directions 1 and 8)."
Good point! Also, it's nice to be exchanging thoughts with you again after quite a long time.
Incidentally, I found the following paper that may interest you:
"Shape Recognition Based on Freeman Chain Code"
Just in case this article is not accessible to you, see the attached copy. Also, I will look into trying out your proposed approach, which is promising.
Sorry I did not reply earlier; I've been too busy on my farm...
Both papers use thinning, aka skeletonisation, aka the medial axis.
While well defined in the continuous domain, thinning is pretty arbitrary in the discrete domain. The problem is deciding which details are features that need to be preserved and which are discretisation artifacts that need to be discarded; feature and artifact cannot be discerned automatically. But that would be another (interesting) discussion.
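Just to make that arbitrariness tangible, a tiny sketch using scikit-image's skeletonize, which is only one of several thinning algorithms. Whether the one-pixel bump below survives as a skeleton branch (feature) or disappears (artifact) depends on the algorithm chosen, which is exactly the point.

```python
import numpy as np
from skimage.morphology import skeletonize

# A filled rectangle with a single-pixel bump on its top edge.
shape = np.zeros((12, 22), dtype=bool)
shape[1:11, 1:21] = True
shape[0, 10] = True   # feature or discretisation artifact?

skeleton = skeletonize(shape)
print(skeleton.astype(int))   # inspect whether the bump spawned a branch
```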
I think I need to withdraw my claim that my proposed encoding mechanism would require fewer bits than Freeman's; now that I have given it some thought, both methods seem equally efficient.
Still, my proposed encoding mechanism is more directly suited to frequency analysis and curvature calculations. Of course, it is trivial to convert one into the other (see the sketch below), so in that sense they are equally good!
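To back up the "trivial to convert" remark, here is the inverse of the earlier sketch (same assumed conventions and illustrative names): the turn code plus one absolute start direction rebuilds the Freeman code exactly.

```python
def turn_to_freeman(start_direction, turns):
    """Rebuild an absolute Freeman chain code (0..7, CCW, 0 = east)
    from a relative turn code and the initial direction."""
    freeman = [start_direction]
    for t in turns:
        freeman.append((freeman[-1] - t) % 8)   # undo the sign flip
    return freeman

print(turn_to_freeman(0, [0, 2, 0, 2, 0, 2, 0]))   # [0, 0, 6, 6, 4, 4, 2, 2]
```

Storage-wise the two are indeed on a par: eight absolute directions versus seven turn values (eight if full reversals are allowed), so both fit in 3 bits per symbol.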