I would like to raise the topic of computational efficiency, particularly with regard to shustring (shortest unique substring) search.
A review of the relevant literature turns up several concepts, each contributing a portion of the published search algorithms. These concepts are:
LCP (longest common prefix) array
Suffix array
Suffix tree
and variations on these structures.
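For readers who want the first two structures pinned down, here is a naive sketch (my own illustration, not any published implementation; production tools build both in O(n log n) or linear time):

```python
def suffix_array(s: str) -> list[int]:
    """Start positions of the suffixes of s, in lexicographic order."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp_array(s: str, sa: list[int]) -> list[int]:
    """lcp[r] = length of the longest common prefix of the suffixes
    ranked r-1 and r in the suffix array sa."""
    n = len(s)
    lcp = [0] * n
    for r in range(1, n):
        i, j, k = sa[r - 1], sa[r], 0
        while i + k < n and j + k < n and s[i + k] == s[j + k]:
            k += 1
        lcp[r] = k
    return lcp

sa = suffix_array("banana")
print(sa)                      # [5, 3, 1, 0, 4, 2]
print(lcp_array("banana", sa)) # [0, 1, 3, 0, 0, 2]
```

A suffix tree carries essentially the same information as these two arrays in a single linked structure.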
It is quite possible to compute shustrings efficiently (say, with under 300 MB of memory and under 150 seconds of time for a sequence of 31 Mbp) without, I stress, without the use of any of those crutches; the computation is instead direct, and it is a simple matter of sorting. The gross character of the machine also matters, such as processor speed and operating-environment overhead: a dedicated processor solves one problem more quickly than a multitasking processor does.
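To illustrate what a direct, sorting-based computation can look like (a minimal sketch under my own assumptions, not a claim about any particular published method): sort the suffix start positions with an ordinary sort, and then the shortest unique substring beginning at position i is one character longer than the longest prefix its suffix shares with either sorted neighbor.

```python
def shustrings(s: str) -> list[tuple[int, str]]:
    """(position, shustring) for every position that has one."""
    n = len(s)
    order = sorted(range(n), key=lambda i: s[i:])  # plain sort of suffixes

    def shared(a: int, b: int) -> int:
        """Length of the common prefix of the suffixes starting at a and b."""
        k = 0
        while a + k < n and b + k < n and s[a + k] == s[b + k]:
            k += 1
        return k

    result = []
    for r, i in enumerate(order):
        left = shared(order[r - 1], i) if r > 0 else 0
        right = shared(i, order[r + 1]) if r + 1 < n else 0
        need = max(left, right) + 1        # one char past the longer shared prefix
        if i + need <= n:                  # the unique substring must fit in s
            result.append((i, s[i : i + need]))
    return result

print(shustrings("banana"))  # [(1, 'anan'), (0, 'b'), (2, 'nan')]
```

The sorted order here is, of course, exactly what a suffix array records; the point of the sketch is only that nothing beyond an off-the-shelf sort is required.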
My questions concern the run-time performance of algorithms that implement the concepts listed above. Gross measures are sufficient: amounts of time and memory versus volume of input. Asymptotic (big-O) order measures are useless to my particular need.
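For concreteness, the kind of gross measurement I have in mind could be collected like this (a sketch reusing the shustrings function above; the input sizes and alphabet are arbitrary choices):

```python
import random
import time
import tracemalloc

for n in (2_000, 4_000, 8_000):
    seq = "".join(random.choice("ACGT") for _ in range(n))
    tracemalloc.start()
    t0 = time.perf_counter()
    shustrings(seq)                       # implementation under test
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{n:>6} bp  {elapsed:7.2f} s  {peak / 1e6:7.1f} MB peak")
```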
Does any reader have a sense of such measures for algorithms implementing the concepts listed above?