If the size of the data means that any effects you are interested in are so obvious that no one would dispute them, you may not need to. For example, if the evidence I want is to show that men are, on average, taller than women, and I have a simple random sample (SRS) from each group in the population of interest, say a million in each, then calculating a t-test would be a waste of ink (albeit a small cost compared to doing an SRS of two million people). Of course, even if you aren't DOING the inferential statistics, you are only skipping them because in your head you already know roughly how small the standard errors will be (so in a way you have done them).
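To make that concrete, here is a minimal sketch (simulated heights, with assumed means and standard deviations chosen purely for illustration) showing just how small the standard error of the difference gets with a million per group:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed population values in cm, purely for illustration.
men = rng.normal(loc=175.0, scale=7.0, size=n)
women = rng.normal(loc=162.0, scale=6.5, size=n)

# Standard error of the difference in means: about sqrt(7^2/n + 6.5^2/n) ~ 0.01 cm.
se_diff = np.sqrt(men.var(ddof=1) / n + women.var(ddof=1) / n)
t, p = stats.ttest_ind(men, women, equal_var=False)

print(f"difference in means: {men.mean() - women.mean():.2f} cm")
print(f"standard error of the difference: {se_diff:.4f} cm")
print(f"t = {t:.1f}, p = {p:.3g}")
```

With a standard error around a hundredth of a centimetre against a difference of many centimetres, the formal test tells you nothing you didn't already know.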
But in the era of big data we also often have complex patterns that we are looking for, and many ways to look for them (and usually these patterns are not well defined prior to the analysis). This goes by different names in different fields (multiple comparisons, the look-elsewhere effect in physics, etc.). Consider hurricane forecasts: there are tons of data going into those, but the cone of uncertainty is hugely important (and difficult to communicate to people).
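A quick simulated illustration of the multiple-comparisons point: run a large batch of tests on pure noise and count how many look "significant" at alpha = 0.05, with and without a Bonferroni correction (the number of tests and the sample size below are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_obs, alpha = 1000, 200, 0.05

# Every test compares two samples drawn from the SAME distribution,
# so any "discovery" is a false positive.
pvals = np.array([
    stats.ttest_ind(rng.normal(size=n_obs), rng.normal(size=n_obs)).pvalue
    for _ in range(n_tests)
])

print("false positives, uncorrected:", int((pvals < alpha).sum()))           # around 50 expected
print("false positives, Bonferroni: ", int((pvals < alpha / n_tests).sum())) # around 0 expected
```

The point is not that Bonferroni is the right correction for your problem, only that when you look in many places, the inferential machinery is what keeps you from "finding" patterns in noise.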
A great book on the topic is Efron and Hastie's Computer Age Statistical Inference: Algorithms, Evidence, and Data Science.
As an aside, big data from a bad design not only makes the inference more difficult, it makes drawing any conclusions at all more difficult. A hard disk full of data does not solve this problem.
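A simulated sketch of that point (the selection mechanism here is invented purely for illustration): if taller people are more likely to end up in the data, the estimate stays biased no matter how many rows the hard disk holds:

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean = 170.0
population = rng.normal(loc=true_mean, scale=8.0, size=10_000_000)

# A "bad design" baked into the data: the chance of being recorded
# rises with height, so the sample over-represents tall people.
p_select = 1 / (1 + np.exp(-(population - true_mean) / 8.0))
biased_sample = population[rng.random(population.size) < p_select]

print(f"biased sample size: {biased_sample.size:,}")
print(f"biased estimate:    {biased_sample.mean():.2f}  (true mean {true_mean})")
```

The biased estimate sits a few centimetres above the truth, and collecting ten or a hundred times more rows from the same mechanism will not move it any closer.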
The use of inferential statistics is still relevant whether you have BIG data or not. With BIG data it also becomes possible to include many more variables and to study complex interactions, again using inferential statistics.
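For instance, here is a minimal sketch (simulated data, hypothetical variable names) of fitting an interaction term on a large sample, where the inferential output is what tells you how precisely that small interaction is estimated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1_000_000
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})

# A small interaction effect (0.05) buried in a lot of noise.
df["y"] = (1.0 + 0.5 * df.x1 + 0.3 * df.x2
           + 0.05 * df.x1 * df.x2
           + rng.normal(scale=2.0, size=n))

fit = smf.ols("y ~ x1 * x2", data=df).fit()
print(fit.params)                   # coefficients, including the x1:x2 interaction
print(fit.conf_int().loc["x1:x2"])  # its confidence interval: inference still doing the work
```

At this sample size the interaction is estimated with a standard error of roughly 0.002, so an effect of 0.05 is clearly distinguishable from zero; at a few hundred observations it would be hopeless. That judgment is itself an inferential one.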