The analogy seems a bit far-fetched for several reasons. For example, the digestive tract, which transforms raw material into "fuel" suitable for animals, is outsourced in the case of computers to remote power plants, and computers don't have to make any effort to stay "alive" and reproduce. So the specific exponent of 0.75 in Kleiber's law certainly cannot be expected to hold for computers.
However, the general idea (higher complexity, i.e. larger "mass", demands slower clocking, mainly because of heat dissipation) can well be seen at the chip level: in the 1980s, ECL technology exceeded clock frequencies of 1 GHz in rather simple logic devices such as gates, flip-flops, and counters, while the clock rates of microprocessors of the same era lay between roughly 5 and 80 MHz. Today, the transition frequency of some transistors is well beyond 200 GHz, yet the clock frequency of the average microprocessor has not surpassed about 5 GHz for nearly two decades.
The dependency of power consumption on the current computing load mentioned in your question can easily be observed by connecting a PC to the wall outlet through a power meter: while the operating system is idle, the input power might be around 30 W; it can rise to 130 W during a math task. Similarly, simulation programs that distribute work over several graphics cards show differences of about 100 W per graphics card between the idle state and a running simulation.
However, since common computers are designed for continuous operation at maximum clock rate, it would normally not be useful to decrease the clock frequency in order to reduce the input power. Assume the computer is switched off when not in use, that the 30 W mentioned above are independent of the clock rate (hard drives, memory, etc.), and that the 100 W difference is proportional to the clock rate. Then a 1-hour task consumes 1 h * (30 W + 100 W) = 130 Wh at the maximum clock rate, but 2 h * (30 W + 50 W) = 160 Wh at half the clock frequency.
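To make that comparison explicit, here is a minimal Python sketch of the same calculation. The numbers (30 W clock-independent base power, 100 W clock-proportional load, 1 hour of work at full speed) are just the illustrative assumptions from above, not measurements:

```python
# Illustrative sketch: energy for a fixed task when the clock is scaled down.
BASE_POWER_W = 30.0    # assumed clock-independent: drives, memory, etc.
LOAD_POWER_W = 100.0   # assumed proportional to the clock frequency
TASK_HOURS_FULL = 1.0  # task duration at maximum clock rate

def task_energy_wh(clock_fraction: float) -> float:
    """Energy in Wh for the task when the clock is scaled by clock_fraction (0..1]."""
    duration_h = TASK_HOURS_FULL / clock_fraction          # task takes longer
    power_w = BASE_POWER_W + LOAD_POWER_W * clock_fraction  # but draws less power
    return duration_h * power_w

print(task_energy_wh(1.0))  # 130.0 Wh at full clock rate
print(task_energy_wh(0.5))  # 160.0 Wh at half the clock rate
```

The constant base power dominates once the task is stretched out, which is why halving the clock ends up costing more energy in this example.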