I have not come across "string" as a processing unit in NLP. You may call it anything written: characters, or a sequence of them. A token, by contrast, is an abstraction defined by external rules in order to identify the unit of processing in an NLP task.
String refers to any sequence of characters (as in many programming languages), while token refers to some unit that must be defined and delimited in some way (i.e., by tokenization). Look up the type-token dichotomy (or type-token distinction) to understand why the term "token" is used and what it means even outside NLP.
A simplified definition of a token in NLP is as follows: a token is a string of contiguous characters between two spaces, or between a space and punctuation marks. A token can also be an integer, a real number, or a number with a colon (a time, for example: 2:00). All other symbols are tokens in themselves, except apostrophes and quotation marks inside a word (with no space), which in many cases mark contractions, acronyms, or citations. A token can represent a single word or a group of words (in morphologically rich languages such as Hebrew), such as the token "ולאחי" (VeLeAhi), which comprises four words: "and to my brother".
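That simplified definition can be sketched as a small regular-expression tokenizer. This is only an illustration of the rules above, not a production tokenizer; the pattern and its ordering are my own choices:

```python
import re

# A toy tokenizer following the simplified definition above:
# - times such as 2:00 stay as one token,
# - integers and reals stay as one token,
# - words keep internal apostrophes (e.g. don't),
# - every other non-space symbol is a token by itself.
TOKEN_RE = re.compile(
    r"\d+:\d+"          # times such as 2:00
    r"|\d+(?:\.\d+)?"   # integers and reals
    r"|\w+(?:'\w+)*"    # words, keeping internal apostrophes
    r"|\S"              # any other single non-space symbol
)

def tokenize(text: str) -> list[str]:
    return TOKEN_RE.findall(text)

print(tokenize("Meet me at 2:00, don't be late!"))
# ['Meet', 'me', 'at', '2:00', ',', "don't", 'be', 'late', '!']
```

Note that the order of the alternatives matters: the time pattern must be tried before the plain-number pattern, or "2:00" would be split into "2", ":", "00".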
A string, as one of the previous respondents wrote, is a sequence of characters. A token is a category that a string (a sequence of characters) may be assigned to, depending on the sort of tokenization you wish to perform.
For instance, when pre-processing a textual sentence, you tokenize the strings and lemmatize them according to whether they are nouns, verbs, adjectives, or adverbs.
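As a rough sketch of that idea (with a tiny hand-made lemma table standing in for a real morphological analyzer such as the ones in NLTK or spaCy):

```python
import re

# Toy illustration, not a real NLP pipeline: tokenize a sentence,
# then lemmatize each token with a small hand-made lookup table.
# The table entries below are illustrative assumptions.
LEMMAS = {
    "dogs": "dog",      # noun: plural -> singular
    "barked": "bark",   # verb: past tense -> base form
    "loudly": "loud",   # adverb -> related adjective
}

def tokenize(sentence: str) -> list[str]:
    # Lowercase and split into word tokens.
    return re.findall(r"\w+", sentence.lower())

def lemmatize(tokens: list[str]) -> list[str]:
    # Fall back to the token itself when no lemma is known.
    return [LEMMAS.get(t, t) for t in tokens]

print(lemmatize(tokenize("The dogs barked loudly")))
# ['the', 'dog', 'bark', 'loud']
```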
On the other hand, in the first phase of the compilation process, the source code (which is a sequence of strings) is tokenized on the basis of the category each word belongs to (namely keyword, literal, variable name, etc.).
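A minimal sketch of that lexing phase: classify each token of a toy source line into a category. The keyword set and category names here are illustrative, not taken from any real compiler:

```python
import re

# Split source text into numbers, words, and single symbols.
TOKEN_RE = re.compile(r"\d+|\w+|[^\s\w]")

# Illustrative keyword set for a toy language.
KEYWORDS = {"if", "else", "while", "return"}

def lex(source: str) -> list[tuple[str, str]]:
    tokens = []
    for tok in TOKEN_RE.findall(source):
        if tok in KEYWORDS:
            kind = "keyword"
        elif tok.isdigit():
            kind = "literal"
        elif tok[0].isalpha() or tok[0] == "_":
            kind = "identifier"
        else:
            kind = "symbol"
        tokens.append((kind, tok))
    return tokens

print(lex("if x > 10 return x"))
# [('keyword', 'if'), ('identifier', 'x'), ('symbol', '>'),
#  ('literal', '10'), ('keyword', 'return'), ('identifier', 'x')]
```

The point of the comparison: the *strings* are the same kind of object in both settings; only the categories (the token types) differ between an NLP pipeline and a compiler front end.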
In short, tokenization can be applied to strings as a pre-processing step in many applications including, but not restricted to, NLP.