The best way to understand in-memory computing is via a very simple example.
Say that you have an eCommerce application. Like most such applications, it uses a database. This database normally stores its tables on a hard drive, and it reads from and writes to that drive whenever data in those tables is accessed or modified. Hard drives are slow relative to RAM.
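To make that gap concrete, here is a crude Java sketch (my own illustration, assuming Java 11+; not a rigorous benchmark, and the file name and values are arbitrary) that times a read of the same value from a file on disk and from a HashMap in RAM:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class DiskVsRam {
    public static void main(String[] args) throws IOException {
        // Store one value on disk, and also keep it in a RAM-based map.
        Path file = Files.createTempFile("product-", ".txt");
        Files.writeString(file, "wireless mouse");
        Map<String, String> ram = new HashMap<>();
        ram.put("sku-42", "wireless mouse");

        // Time a read from the hard drive (or SSD).
        long t0 = System.nanoTime();
        String fromDisk = Files.readString(file);
        long diskNanos = System.nanoTime() - t0;

        // Time a read from RAM.
        t0 = System.nanoTime();
        String fromRam = ram.get("sku-42");
        long ramNanos = System.nanoTime() - t0;

        System.out.printf("disk read: %,d ns (%s)%n", diskNanos, fromDisk);
        System.out.printf("RAM read:  %,d ns (%s)%n", ramNanos, fromRam);
    }
}
```

The exact numbers depend entirely on your hardware and on operating system file caching, but the RAM lookup will typically be faster by orders of magnitude.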
Nowadays, RAM isn't as expensive as it used to be.
Therefore, say that we migrate the database into RAM so that, instead of reading and writing data on a hard drive, it reads and writes table data held in RAM.
This database would periodically flush modified portions of its table data from RAM to the hard drive for permanent storage (so that nothing is lost on power failure), but all general-purpose querying would execute against the data in RAM. This way, the database would work much faster.
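To sketch that idea in code, here is a toy Java key-value "table" (my own simplification, nothing like a real database engine) that serves every read and write from a map in RAM and periodically snapshots its contents to a file on disk:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

public class InMemoryTable {
    // All live data is held in RAM; every query runs against this map.
    private final Map<String, String> rows = new ConcurrentHashMap<>();
    private final Path snapshotFile;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public InMemoryTable(Path snapshotFile) {
        this.snapshotFile = snapshotFile;
        // Periodically persist the RAM contents to the hard drive,
        // so the data survives a power loss.
        scheduler.scheduleAtFixedRate(this::snapshot, 5, 5, TimeUnit.SECONDS);
    }

    public void put(String key, String value) { rows.put(key, value); } // RAM write
    public String get(String key)             { return rows.get(key); } // RAM read

    private void snapshot() {
        String dump = rows.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(System.lineSeparator()));
        try {
            Files.writeString(snapshotFile, dump); // the only disk I/O
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        InMemoryTable table = new InMemoryTable(Path.of("table.snapshot"));
        table.put("sku-42", "wireless mouse");   // served from RAM
        System.out.println(table.get("sku-42")); // served from RAM
        Thread.sleep(6_000);                     // let one snapshot run
        table.scheduler.shutdown();
    }
}
```

A real in-memory database would be smarter about persistence, for example writing only the modified portions of the data or appending changes to a write-ahead log instead of dumping the whole table, but the division of labor is the same: RAM serves the queries, and the disk only guards against power loss.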
Since database querying is typically the most time-consuming part of how an eCommerce application handles a request, reducing query time by keeping the database in-memory (in RAM) increases the performance of the whole application.
This is the main point of "in-memory" (or "in RAM") techniques. It is nothing new or revolutionary; it is a well-known technique. But until recently, it just wasn't practical or economically justified, because of the price of RAM. Today, that is no longer a big problem.
Today's in-memory techniques also take advantage of parallel computing on multi-core architectures with shared memory, to squeeze further performance out of data that is already in RAM.
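For instance, once a table lives in RAM as an ordinary in-memory collection, a single query can be split across all CPU cores. The sketch below (again my own illustration, assuming Java 16+) uses Java's parallel streams, which partition the shared in-memory data among a pool of worker threads and combine the partial results:

```java
import java.util.List;
import java.util.stream.IntStream;

public class ParallelQuery {
    record Order(int id, double total) {}

    public static void main(String[] args) {
        // An "orders table" held entirely in RAM.
        List<Order> orders = IntStream.range(0, 10_000_000)
                .mapToObj(i -> new Order(i, i % 500))
                .toList();

        // A query over the shared in-memory data, run on all cores:
        // total revenue from orders over 400. Each core scans its
        // partition of the list, and the partial sums are combined.
        double revenue = orders.parallelStream()
                .filter(o -> o.total() > 400)
                .mapToDouble(Order::total)
                .sum();

        System.out.printf("revenue: %.2f%n", revenue);
    }
}
```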
But, as I have explained, the main point is moving frequently accessed data from slower to faster storage, i.e., from hard drives to RAM.
How this benefits a business is obvious enough that I don't think it needs further illustration.