In pattern-recognition applications it is common to use log-sigmoid neurons at the output of the NN so that each output indicates a sort of "probability" of occurrence for an event. However, in a scenario where the possible events only need to be ordered so that the most probable ones can be selected, linear activation functions would reproduce the same order, since the log-sigmoid is monotonic (in fact, any monotonic function preserves the order). The practical difference is then simplicity: a linear neuron is easier to implement, for example in a VLSI circuit, than a log-sigmoid one. Does this mean that the log-sigmoid can be replaced in every case? I think probably not, since back-propagation training is performed more easily when a log-sigmoid function is used; so training would be done with log-sigmoid outputs, and only for the final application would a linear function replace the log-sigmoid.
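The order-preservation argument can be checked with a small sketch (the activation values below are hypothetical, chosen only for illustration): ranking events by the raw linear outputs gives exactly the same ordering as ranking by their log-sigmoid transforms, because the sigmoid is strictly increasing.

```python
import numpy as np

def log_sigmoid(z):
    """Log-sigmoid (logistic) activation."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical raw (linear) output activations for five events.
z = np.array([0.3, -1.2, 2.5, 0.0, 1.1])

# Rank events from most to least probable using the linear outputs...
order_linear = np.argsort(-z)

# ...and using the log-sigmoid outputs; the orderings coincide
# because the sigmoid is strictly monotonic increasing.
order_sigmoid = np.argsort(-log_sigmoid(z))

assert np.array_equal(order_linear, order_sigmoid)
print(order_linear)  # → [2 4 0 3 1]
```

So for selection purposes the two activations are interchangeable, even though the numerical output values differ.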