To detect corners but not any other features, you must somehow supply information about what you define a corner to be. Thus the algorithm must be 'supervised' in some way, even if that supervision is only implicit in the training procedure.
There are networks that can identify and separate line and corner features in images in an unsupervised way. They infer a small number of features, which you can then sort into 'corner' and 'not corner' yourself. Perhaps this is what you are looking for. Google for 'convolutional neural nets'.
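If you want to play with that idea, here is a minimal sketch (my own illustration, not a specific published network) of unsupervised feature learning with a tiny convolutional autoencoder; the use of PyTorch, the patch size, and the filter count are all assumptions on my part:

```python
# A tiny convolutional autoencoder trained on unlabeled image patches.
# Its learned filters tend to resemble edge/corner detectors, which you can
# then inspect and sort into 'corner' and 'not corner' by hand.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, n_filters=16):
        super().__init__()
        # Encoder: one conv layer whose kernels become the learned features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Decoder: reconstruct the patch from the feature maps.
        self.decoder = nn.ConvTranspose2d(n_filters, 1, kernel_size=5, padding=2)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PatchAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: replace with 16x16 grayscale patches cropped from your images.
patches = torch.rand(256, 1, 16, 16)
for epoch in range(10):
    recon = model(patches)
    loss = loss_fn(recon, patches)   # reconstruction error only, no labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# model.encoder[0].weight now holds the learned 5x5 filters; visualize them
# to decide which ones respond to corners.
```

After training, plotting the encoder's kernels usually shows edge- and corner-like filters, and labelling those is where your definition of a 'corner' comes in.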
In either 2D or 3D, those features can be captured using a Self Organizing Map (SOM) network. By definition, SOM networks are unsupervised, and they handle the concept of a "neighborhood", which, I think, could be very useful for defining a "corner".
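To make the neighborhood idea concrete, here is a rough NumPy sketch of a SOM (my own toy code, not a tuned implementation); the grid size, patch size, and decay schedule are arbitrary choices:

```python
# A Self Organizing Map in plain NumPy. The winning node AND its neighbors
# on the 2D grid are pulled towards each input vector, so similar patches
# (e.g. corner-like ones) end up clustered in the same region of the map.
import numpy as np

def train_som(data, grid_w=8, grid_h=8, epochs=50, lr=0.5, sigma=2.0):
    n_features = data.shape[1]
    rng = np.random.default_rng(0)
    weights = rng.random((grid_h, grid_w, n_features))
    # Grid coordinates of every node, used for neighborhood distances.
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    for epoch in range(epochs):
        # Decay the learning rate and neighborhood radius over time.
        frac = 1.0 - epoch / epochs
        cur_lr, cur_sigma = lr * frac, max(sigma * frac, 0.5)
        for x in data[rng.permutation(len(data))]:
            # Best matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighborhood around the winner on the grid.
            grid_dist2 = (ys - by) ** 2 + (xs - bx) ** 2
            influence = np.exp(-grid_dist2 / (2 * cur_sigma ** 2))
            # Move each node towards x, weighted by its neighborhood influence.
            weights += cur_lr * influence[..., None] * (x - weights)
    return weights

# Example: cluster flattened 5x5 image patches (values in [0, 1]).
patches = np.random.rand(200, 25)   # stand-in for real corner/edge/flat patches
som_weights = train_som(patches)
```

Because the Gaussian neighborhood pulls nearby grid nodes along with the winner, patches that look alike end up mapped to neighboring regions of the grid, and you can label those regions as 'corner' or 'not corner' afterwards.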
As Alireza mentioned, the NN must somehow be guided to search for corners. This knowledge can be given to the NN in one of two ways:
1) as pre-processing, using techniques from image processing, or
2) encoded into the feature vector. One encoding could be the differences between neighboring pixel values, which can then be fed into a SOM, ART, or other NN (see the sketch after this list). Raw pixel values are better suited for image segmentation, but not necessarily for edge detection.
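To illustrate option 2), here is a hedged sketch of one possible difference-based encoding in NumPy; the patch size and the exact layout of the feature vector are my own choices, not a prescription:

```python
# Encode each patch as pixel differences (a crude gradient) rather than raw
# intensities before feeding it to a SOM/ART/other NN.
import numpy as np

def difference_features(patch):
    """Turn a 2D grayscale patch into a feature vector of pixel differences."""
    dx = np.diff(patch, axis=1)   # horizontal neighbor differences
    dy = np.diff(patch, axis=0)   # vertical neighbor differences
    # Corner-like patches show strong differences in BOTH directions;
    # flat regions show almost none.
    return np.concatenate([dx.ravel(), dy.ravel()])

# Example: build the feature matrix for a stack of 7x7 patches.
patches = np.random.rand(100, 7, 7)   # stand-in for patches cut from an image
features = np.stack([difference_features(p) for p in patches])
# 'features' (not the raw pixels) would then be the input to the SOM/ART network.
```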
Both strategies have their merits and shortcomings; you should explore each separately, and even combine them, to get the best result.