According to one formulation (Karl Popper's), a "problem" arises when an "expectation" is unfulfilled; that is, "problems" are generated when the subjective, projected desires of an organism clash with the objective constraints of its environment.
If so, there would be only two possible ways for a "problem" to be "solved": the organism would have to either 1) bring about a change in the objective state of the environment (remove the constraints, for instance) or 2) rearrange its subjective conceptual scheme (change its attitude toward the problem, thereby "dissolving" it).
"Expecting" my favorite healthy snack, dried figs, I open the kitchen cupboard but find some peanuts instead; here I have a "problem". Now, I could either hit the supermarket and buy some dried figs or rather console myself thinking "umm...peanuts are not that bad after all!" problem solved.
Depending on the nature of the "problem", humans opt for one solution or the other; in cases where an objective change in the environment is practically impossible (death, for instance), one has no choice but to change one's attitude.
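To make the dichotomy concrete, here is a minimal toy sketch of the cupboard scenario (in Python; the `Agent` class, the `cupboard` key, and everything else in it are my own illustrative inventions, not any standard formalism). Note that the "expectation" has to be handed to the agent from the outside, which is precisely what the questions below turn on.

```python
# Toy illustration only: every name here (Agent, cupboard, expectation, ...)
# is invented for this sketch, not taken from any real system.

class Agent:
    def __init__(self, expectation):
        # The "expectation" is supplied from the outside; the agent does not
        # generate it itself, which is the crux of the questions below.
        self.expectation = expectation

    def has_problem(self, world):
        # A "problem" in Popper's sense: the expectation clashes with the environment.
        return world["cupboard"] != self.expectation

    def solve_objectively(self, world):
        # Strategy 1: change the objective state of the environment.
        world["cupboard"] = self.expectation  # e.g. go out and buy dried figs
        return world

    def solve_subjectively(self, world):
        # Strategy 2: rearrange the subjective scheme, "dissolving" the problem.
        self.expectation = world["cupboard"]  # "peanuts are not that bad after all!"
        return world


world = {"cupboard": "peanuts"}
agent = Agent(expectation="dried figs")
print(agent.has_problem(world))   # True: a "problem" in the above sense
agent.solve_subjectively(world)
print(agent.has_problem(world))   # False: same cupboard, new attitude
```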
Given the definition of a "problem" above, how can an HLMI itself possibly have a problem at all, since to have a problem is, by definition, to already have an expectation or projected desire?
Moreover, how would a human-level artificial intelligence go about "problem-solving" most of the time?
Most importantly, given that both "solutions" (objective vs. subjective readjustment) are occasionally equally viable, which one will the machine proceed with? Based on what criterion, if any, will the machine eventually prefer one "solution" over the other?
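For what it is worth, the most obvious move is for a designer to hard-code a cost comparison between the two readjustments. The sketch below is only a hypothetical illustration of that move (the function name and all the costs are my own assumptions); it does not answer the question so much as relocate it, since someone external to the machine still has to assign the costs, just as the expectation in the earlier sketch had to come from outside.

```python
# Hypothetical decision rule, purely for illustration: compare an externally
# supplied cost of changing the world with a cost of changing the attitude.

def choose_solution(objective_cost, subjective_cost):
    """Pick a readjustment under a naive, designer-imposed cost rule."""
    if objective_cost == float("inf"):   # e.g. death: the environment cannot be changed
        return "subjective"              # forced to change the attitude instead
    if objective_cost <= subjective_cost:
        return "objective"               # cheaper to change the environment
    return "subjective"

# A quick trip to the supermarket vs. grudgingly settling for peanuts:
print(choose_solution(objective_cost=2, subjective_cost=5))              # "objective"
# An impossible objective change forces the subjective route:
print(choose_solution(objective_cost=float("inf"), subjective_cost=1))   # "subjective"
```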