I have just started working in this area using BI tools such as Microsoft Access, Microsoft Power Query, Power Pivot and Power BI. As far as I can see, having consulted IT colleagues to fill gaps in my own knowledge, the underlying technology is based on relational databases and Structured Query Language (SQL).
It is interesting that Microsoft has incorporated Power Query and Power Pivot into the latest version of its ubiquitous Excel spreadsheet, and made them available as free add-ins for earlier versions.
I think the reason Microsoft has done this is that a traditional Excel worksheet cannot hold more than 1,048,576 rows (roughly a million records). This is insufficient capacity for the data volumes and analytical challenges we now have to deal with.
Traditional BI is based on relational databases queried with SQL; to go further, data scientists use OLAP cubes (see Wikipedia).
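To make the distinction concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table name and sales figures are invented for illustration; the point is that an OLAP cube pre-computes aggregates over combinations of dimensions, while in plain SQL the same roll-up is expressed as a GROUP BY per dimension level.

```python
import sqlite3

# In-memory relational database: the foundation of traditional BI.
# Table name and sample figures are made up for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("North", "Widget", 100.0), ("North", "Gadget", 150.0),
     ("South", "Widget", 200.0), ("South", "Gadget", 50.0)],
)

# One "slice" of what a cube would pre-compute: totals per region.
by_region = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(by_region)  # [('North', 250.0), ('South', 250.0)]
```

A cube simply materialises many such GROUP BY results in advance (per region, per product, per region-and-product, and so on) so analysts can drill down without re-scanning the raw table.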
I think the game is changing with big data. The volume of data is now huge, and much of it is semi-structured, so we store the data even before knowing how it relates to other data: this is called a data lake (see Wikipedia).
On those data lakes, we build learning algorithms that detect correlations within the data.
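As a toy illustration of what "detecting correlation" means at the smallest scale, here is a pure-Python Pearson correlation between two hypothetical columns pulled from a lake (the temperature and sales numbers are invented; real pipelines would do this with Spark or a statistics library over millions of rows).

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical columns: daily temperature and ice-cream sales.
# They are perfectly linearly related here, so r is (about) 1.
temps = [20, 22, 25, 28, 30]
sales = [200, 220, 250, 280, 300]
print(round(pearson(temps, sales), 6))  # 1.0
```

An algorithm scanning a lake would compute this kind of statistic across many pairs of fields and flag the strongly correlated ones for further modelling.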
The main technologies for this are Hadoop and Spark; you can also find U-SQL (Microsoft) for querying inside the data lake, as well as NoSQL databases such as MongoDB (see Wikipedia).
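To show why NoSQL document stores suit a data lake, here is a pure-Python sketch of schema-less documents being filtered. The field names are invented, and this only mimics the behaviour of a document store's find query; a real system such as MongoDB would do this server-side over indexed collections.

```python
# Schema-less documents: each record can carry different fields,
# so tweets and sensor readings can sit in the same collection.
documents = [
    {"_id": 1, "type": "tweet", "text": "big data!", "likes": 12},
    {"_id": 2, "type": "sensor", "temperature": 21.5},
    {"_id": 3, "type": "tweet", "text": "hello", "likes": 3},
]

# Roughly what a document store's find({"type": "tweet"}) returns:
tweets = [d for d in documents if d.get("type") == "tweet"]
print([d["_id"] for d in tweets])  # [1, 3]
```

The key contrast with the relational model is that no table schema had to be declared up front, which is exactly the "store first, understand later" property of a data lake.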