I am working on a new edge detection technique. Visually, it gives good results. Now, how can I evaluate it and compare it with other techniques objectively?
First, you should create a gold standard, that is, an edge whose pixels have been confirmed one by one by an expert or a team of experts. For clear-cut edges this is simple. When the edge is not so clear, however, the gold standard itself reduces to subjective guessing, and objectivity fades away no matter which method you use.
Suppose you have an edge that is not very clear. You give it to 10 experts, each of whom traces it for you. Now you have 10 overlapping edges. Of course, these traced edges will not overlap completely, so you need a centrality measure: for each point along the edge, average the experts' coordinates to obtain (meanX, meanY). The resulting mean curve is your gold standard.
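A minimal sketch of the averaging step, assuming each expert traced the edge as an ordered list of (x, y) points of the same length, so that points with the same index correspond (the function name is hypothetical):

```python
def average_expert_edges(expert_edges):
    """Combine several expert-traced edges into one gold-standard edge.

    expert_edges: one edge per expert; each edge is a list of (x, y)
    points, all of the same length and in corresponding order.
    Returns a list of (meanX, meanY) points.
    """
    n_experts = len(expert_edges)
    n_points = len(expert_edges[0])
    gold = []
    for i in range(n_points):
        # Average the i-th point across all experts.
        mean_x = sum(edge[i][0] for edge in expert_edges) / n_experts
        mean_y = sum(edge[i][1] for edge in expert_edges) / n_experts
        gold.append((mean_x, mean_y))
    return gold
```

If the experts' traces have different lengths, you would first need to resample each trace to a common set of points along the curve before averaging.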
The next step is to calculate your program's error: for each pixel of the edge detected by your algorithm, the Euclidean distance to the closest pixel on the gold-standard edge.
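This error measure can be sketched as follows, assuming both edges are given as lists of (x, y) points and taking the mean nearest-neighbor distance as the summary error (the function name is hypothetical; for large images, a spatial index such as a k-d tree would replace the brute-force inner minimum):

```python
import math

def edge_error(detected, gold):
    """Mean Euclidean distance from each detected edge pixel to the
    nearest pixel on the gold-standard edge."""
    total = 0.0
    for (x, y) in detected:
        # Distance to the closest gold-standard pixel.
        total += min(math.hypot(x - gx, y - gy) for (gx, gy) in gold)
    return total / len(detected)
```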
Now, to objectively compare your software's results with those of other edge detection algorithms, run the other methods on the same images, calculate their distances from the gold standard (i.e., their errors), and compare the errors of your method with each of them. Note: you will need a dataset of edges (for example, 60–70 images full of different edges) in order to compare the methods statistically.
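Since the same images are scored by every method, the comparison is paired. A minimal sketch of the pairing step, assuming per-image error lists in the same image order (the function name is hypothetical; in practice you would feed the paired differences to a paired t-test or a Wilcoxon signed-rank test, e.g. via `scipy.stats`):

```python
def paired_comparison(errors_a, errors_b):
    """Summarize a paired comparison of per-image errors for two methods.

    Returns (mean difference a - b, number of images where method A
    has the lower error).  Negative mean difference favors method A.
    """
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    mean_diff = sum(diffs) / len(diffs)
    wins_a = sum(1 for d in diffs if d < 0)
    return mean_diff, wins_a
```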
A critical hint from my own experience: collect a large enough sample to make sure the statistical test does not fall short of power. Do a pilot study and determine the necessary sample size from power calculations. This is very important in your case.
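As a rough sketch of such a power calculation, the normal approximation for a paired test gives n = ((z_alpha + z_beta) / d)^2, where d is the standardized effect size (expected mean error difference divided by its standard deviation, estimated from the pilot study). The default z values below correspond approximately to a two-sided alpha of 0.05 and 80% power; a real study would use a dedicated power-analysis tool:

```python
import math

def paired_sample_size(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate number of images needed for a paired comparison,
    via the normal approximation n = ((z_alpha + z_beta) / d)^2.

    effect_size: standardized effect size d from the pilot study.
    Defaults: two-sided alpha = 0.05, power = 0.80 (approximately).
    """
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)
```

For example, detecting a medium effect (d = 0.5) needs roughly 32 images, while a small effect (d = 0.2) needs nearly 200.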
Another hint: besides the overall error described above, calculate the horizontal and vertical errors separately. Some algorithms work well along the X axis but produce considerable error along the Y axis; others behave the other way around. Checking that the algorithm performs properly on both vertically and horizontally inclined edges tells you whether, and where, it needs improvement.
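The axis-wise decomposition can be sketched like this, assuming the same point-list representation as before: for each detected pixel, find the nearest gold-standard pixel and accumulate the |dx| and |dy| components separately (the function name is hypothetical):

```python
def axis_errors(detected, gold):
    """Mean horizontal (|dx|) and vertical (|dy|) error components,
    measured against the nearest gold-standard pixel."""
    sum_dx, sum_dy = 0.0, 0.0
    for (x, y) in detected:
        # Nearest gold-standard pixel by squared Euclidean distance.
        gx, gy = min(gold, key=lambda g: (x - g[0]) ** 2 + (y - g[1]) ** 2)
        sum_dx += abs(x - gx)
        sum_dy += abs(y - gy)
    n = len(detected)
    return sum_dx / n, sum_dy / n
```

A large gap between the two returned values flags exactly the kind of axis-dependent weakness described above.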
Human experts, not an "expert system" as you might have thought. Machine vision is not that advanced (yet), so the best gold standard we can provide is the human eye and brain.
The point is that when the edge is blurred beyond a certain threshold, the human eye and brain themselves can fail considerably. This is when a panel of human experts becomes necessary. Also make sure the participants all have healthy vision and are screened for color blindness.
Do your edge detection on some images, run other well-known edge detection algorithms on the same images, then subtract your algorithm's output from the others' and vice versa. You will then see which edges were missed by the other algorithms but hit by yours, and at the same time which edges were missed by your algorithm but hit by the others.
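If each binary edge map is represented as a set of (x, y) edge pixels, this mutual subtraction is just two set differences (the function name is hypothetical):

```python
def edge_map_differences(mine, other):
    """Compare two binary edge maps given as sets of (x, y) edge pixels.

    Returns (pixels hit by my algorithm but missed by the other,
             pixels missed by my algorithm but hit by the other).
    """
    return mine - other, other - mine
```

In practice, you may want to dilate each map by a pixel or two before subtracting, so that edges offset by a tiny localization error are not counted as misses.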