Dear @Vyacheslav, locating image edges is a challenging issue. The following paper, "Edge-Enhancement – An Algorithm for Real-Time Non-Photorealistic Rendering" by Marc Nienhaus and Juergen Doellner, discusses an algorithm for enhancing edges of real-time non-photorealistic renderings. Abstract: In this paper, we propose an algorithm for enhancing edges of real-time non-photorealistic renderings. It is based on the edge map, a 2D texture that encodes visually important edges of 3D scene objects and includes silhouette edges, border edges, and crease edges. The edge map allows us to derive and augment a wide range of non-photorealistic rendering styles. Furthermore, the algorithm is designed to be orthogonal to complementary real-time rendering techniques. The implementation is based on multipass rendering: first, we extract geometrical properties of 3D scene objects, generating image-space data similar to G-buffers. Next, we extract discontinuities in the image-space data using common graphics hardware to emulate image-processing operations. In subsequent rendering passes, the algorithm applies texture mapping to combine the edge map with 3D scene objects. The paper is available on the following page:
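To give a feel for the kind of image-space discontinuity extraction the abstract mentions, here is a minimal sketch that applies a Sobel-style operator to a synthetic depth buffer. This is only an illustration of the idea, not the authors' GPU implementation; the threshold and the synthetic depth data are my own assumptions.

```python
# Illustrative sketch: detect discontinuities in image-space data
# (here a depth buffer) with a Sobel-style operator.
import numpy as np
from scipy import ndimage

def discontinuity_map(depth, threshold=0.1):
    """Return a binary map of strong depth discontinuities."""
    gx = ndimage.sobel(depth, axis=1)   # horizontal gradient
    gy = ndimage.sobel(depth, axis=0)   # vertical gradient
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()

# Example: a synthetic depth buffer with a nearer object in front of a wall.
depth = np.ones((64, 64))
depth[16:48, 16:48] = 0.5               # depth discontinuity along the object border
edges = discontinuity_map(depth)
print(edges.sum(), "discontinuity pixels")
```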
We recently did a review on the topic, which appeared in Pattern Recognition (PatRec) in 2013. Still, a good set of readings on the topic, if you really want to get a full panorama, would include:
- Early works by Abdou & Pratt and Haralick.
- The review by Peli and Malah.
- The works by Baddeley in 1992.
- The papers by Heath and co-workers at USF.
- The PhD theses by Martin and Arbelaez, both tackling the quality of segmentation expressed as boundary images.
Feel free to contact me for any guidance you may need.
Sure, you can find it in my profile. It's called "Quantitative error measures for edge detection", published in 2013 in Pattern Recognition (PR). In case you have trouble downloading it, just drop me an email.
The raw statistical approach (point coincidence) is simply not good enough for the task. You need one of these two alternatives:
a) Perform an explicit matching between the points of the two images with a certain tolerance. This means tackling the problem as bipartite graph matching, which was well analyzed by Gang and Haralick (2002). Since optimal algorithms (namely Munkres/Hungarian) are too heavy, people usually employ the CSA implementation by the group at Berkeley (check out the papers by Malik and co-workers). An alternative is the one by Estrada and Jepson (2009), but I would probably recommend the CSA algorithm. A rough sketch of this matching idea is given after alternative b) below.
b) Dilate the edge images so that the edges overlap even if they are not in exactly coincident positions. Check chapter 7 of the PhD dissertation of Arbelaez, in which he uses this approach. It is much simpler, yet less elegant; a second sketch below illustrates it.
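For alternative a), here is a minimal sketch of matching detected edge pixels to ground-truth pixels within a distance tolerance. For simplicity it uses the optimal Hungarian solver from SciPy, which, as said above, is exactly the "too heavy" option for full-size images (that is why people fall back on CSA); the 2-pixel tolerance is an arbitrary assumption.

```python
# Minimal sketch of alternative a): bipartite matching of edge pixels
# with a distance tolerance, solved with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_edges(detected, ground_truth, tol=2.0):
    """Return (true positives, false positives, false negatives)."""
    det = np.argwhere(detected)        # (row, col) of detected edge pixels
    gt = np.argwhere(ground_truth)     # (row, col) of ground-truth edge pixels
    if len(det) == 0 or len(gt) == 0:
        return 0, len(det), len(gt)
    # Pairwise distances; pairs farther apart than tol get a huge cost so the
    # solver avoids them, and any such forced pairs are discarded afterwards.
    dist = np.linalg.norm(det[:, None, :] - gt[None, :, :], axis=2)
    cost = np.where(dist <= tol, dist, 1e6)
    rows, cols = linear_sum_assignment(cost)
    tp = int((dist[rows, cols] <= tol).sum())
    return tp, len(det) - tp, len(gt) - tp
```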
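And for alternative b), a rough approximation of the dilation idea. This only illustrates the principle, not Arbelaez's exact protocol; the 2-pixel tolerance is again an assumption.

```python
# Rough sketch of alternative b): dilate each edge map so that a pixel counts
# as matched if it lies within a small tolerance of an edge in the other map.
import numpy as np
from scipy.ndimage import binary_dilation

def tolerant_precision_recall(detected, ground_truth, tol=2):
    struct = np.ones((2 * tol + 1, 2 * tol + 1), dtype=bool)
    gt_near = binary_dilation(ground_truth, structure=struct)
    det_near = binary_dilation(detected, structure=struct)
    tp_det = np.logical_and(detected, gt_near).sum()       # detections close to GT
    tp_gt = np.logical_and(ground_truth, det_near).sum()   # GT pixels close to a detection
    precision = tp_det / max(int(detected.sum()), 1)
    recall = tp_gt / max(int(ground_truth.sum()), 1)
    return precision, recall
```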
Normally, we judge an edge detection algorithm subjectively. The easiest way is to subtract your algorithm's output edge image from the output edge images of other well-known algorithms such as Sobel, Prewitt, or Canny, to see which edges those algorithms miss but your algorithm catches.
If the result of the subtraction is significant and can be seen subjectively, then your edge detection algorithm is good, or perhaps better than the ones it is compared against.
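A quick sketch of that comparison, assuming scikit-image is available; the stand-in detector and the thresholds are placeholders for your own algorithm and settings.

```python
# Sketch: compare your edge map against Canny/Sobel by looking at the
# pixels each catches that the other misses.
import numpy as np
from skimage import data, filters, feature

def my_edge_detector(image):
    # Stand-in for your own algorithm; replace with your detector's output.
    return filters.scharr(image) > 0.1

image = data.camera()
canny_edges = feature.canny(image, sigma=2.0)
sobel_edges = filters.sobel(image) > 0.1      # crude threshold on Sobel magnitude
my_edges = my_edge_detector(image)

only_mine = np.logical_and(my_edges, ~canny_edges)    # edges only your detector catches
only_canny = np.logical_and(canny_edges, ~my_edges)   # edges only Canny catches
print("caught only by my detector:", int(only_mine.sum()))
print("caught only by Canny:      ", int(only_canny.sum()))
```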
Dear Sundaram, edge detection algorithms sometimes produce noise, scattered edges, or dotted lines. Such a noisy output gives a higher entropy, and because of that I do not think that entropy is a good measure of the quality of an edge detector.
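A small numerical illustration of that point, reading "entropy" as the global binary entropy of the edge-pixel fraction (just one simple interpretation of the term); the image size and noise level are arbitrary.

```python
# A clean edge map and the same map with scattered noise pixels added:
# the noisy one has higher binary entropy even though it is a worse result.
import numpy as np

def binary_entropy(edge_map):
    p = edge_map.mean()                  # fraction of edge pixels
    if p == 0.0 or p == 1.0:
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

clean = np.zeros((100, 100), dtype=bool)
clean[50, :] = True                      # a single clean horizontal edge

rng = np.random.default_rng(0)
noisy = clean | (rng.random((100, 100)) < 0.05)   # add scattered false edges

print("entropy of clean map:", binary_entropy(clean))   # lower
print("entropy of noisy map:", binary_entropy(noisy))   # higher
```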
Perhaps working on the connectivity of the output edges would give an objective measure of the goodness of an edge detector; this needs to be investigated.
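One possible (untested) way to quantify that connectivity idea would be to count connected components in the edge map and look at their average size: fragmented, dotted output yields many tiny components, while clean contours yield a few long ones. This is only a hypothetical sketch, not an established measure.

```python
# Hypothetical connectivity statistics: number of 8-connected components
# and their mean size in an edge map.
import numpy as np
from scipy import ndimage

def connectivity_stats(edge_map):
    labels, n_components = ndimage.label(edge_map, structure=np.ones((3, 3), dtype=int))
    if n_components == 0:
        return 0, 0.0
    sizes = ndimage.sum(edge_map, labels, index=range(1, n_components + 1))
    return n_components, float(np.mean(sizes))

# Example: a continuous contour vs. the same contour with every other pixel dropped.
contour = np.zeros((50, 50), dtype=bool)
contour[25, 5:45] = True
dotted = contour.copy()
dotted[25, 5:45:2] = False

print(connectivity_stats(contour))   # one component, large mean size
print(connectivity_stats(dotted))    # many components of size 1
```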