Nowadays we hear the trendy terms Business Intelligence and Big Data. But how can we differentiate between them? What are their characteristics, and how can we tell one from the other?
I don't know that there really is a difference. Analytics was a "hot" term for a while, and then Data Science came around. A lot of "new sciences" seem to have appeared in the last 30 years or so, like "network science" and so on. To me, network science is basically applied graph theory, and graph theory has been "applied" at least since the 50s or 60s; see, for example, Harary's work. (And that's setting aside the applied graph-theoretic work sociologists had been doing even before that, although it seems they did not have a solid grasp of the breadth of graph theory at the time, i.e., before the 50s.)
Anyway, I have worked in what we now call "business intelligence" for many years. As I see things, it doesn't matter whether you have "big data" or "small data": the same basic principles apply. Understand the nature of your data, and be careful how you mine and interpret it, i.e., have a good understanding of the mathematics/statistics/computer science you're applying to it.
Perhaps interestingly, as a self-taught mathematician, I have found in the business world that people with computer science or actuarial degrees forget a fair amount of what they learned once they enter it. That may be a separate question, but it is worth keeping in mind as you work on business intelligence projects in whatever capacity.

One example: on a project I worked on, an FSA actuary asked me why I didn't use standard deviation to find outliers in a large dataset. I explained that the data I was working with was, in theory and in practice, heavily right-skewed with a lower limit of 0, as much financial data is, so standard deviation would not be a robust way to flag outliers in that dataset. Moreover, I was working with empirical data, so fitting a parametric model would present its own challenges, and the determination had to be automated in SQL; I therefore used a heuristic based on ordinal statistics, such as percentiles, to flag outliers robustly. The point is that I found it concerning that an FSA would uncritically apply standard deviation to such a dataset. So a big consideration, whether your data is big or small, is being knowledgeable and careful about how you interpret it.
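To make this concrete, here is a minimal sketch of the percentile-based approach (PostgreSQL syntax assumed; the `claims` table and `claim_amount` column are hypothetical stand-ins, not the actual data from that project):

```sql
-- Flag outliers with percentile-based (ordinal) fences rather than
-- mean +/- k standard deviations, which misleads on heavily
-- right-skewed data bounded below at 0.
WITH fences AS (
  SELECT
    percentile_cont(0.25) WITHIN GROUP (ORDER BY claim_amount) AS q1,
    percentile_cont(0.75) WITHIN GROUP (ORDER BY claim_amount) AS q3
  FROM claims
)
SELECT c.*
FROM claims AS c
CROSS JOIN fences AS f
-- Tukey's upper fence, q3 + 1.5 * IQR; only the upper fence is
-- meaningful here since the data cannot fall below 0.
WHERE c.claim_amount > f.q3 + 1.5 * (f.q3 - f.q1);
```

The 1.5 multiplier is Tukey's conventional choice, not something the data dictates; the virtue of the ordinal approach is that the fences don't get dragged around by the long right tail the way the mean and standard deviation do.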
With that being said, of course, sample size matters: it is hard to draw robust conclusions from small samples in general, so business intelligence is more speculative there. However, as responsible big data books warn, mining and interpreting big data has its own perils, i.e., patterns can arise purely by chance (the sketch below illustrates this). Bigger is not necessarily better. Weapons of Math Destruction is one book that concentrates on this idea.
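As a small illustration, here is a sketch (again PostgreSQL syntax assumed) that generates nothing but noise and still turns up "strong" correlations in some slices of it:

```sql
-- 1,000 "segments" of 20 (x, y) pairs each, all independent uniform noise.
WITH trials AS (
  SELECT (g - 1) / 20 AS segment,  -- integer division: 20 rows per segment
         random() AS x,
         random() AS y
  FROM generate_series(1, 20 * 1000) AS g
)
SELECT segment, corr(x, y) AS r
FROM trials
GROUP BY segment
HAVING abs(corr(x, y)) > 0.5      -- a "strong" correlation... from pure noise
ORDER BY abs(corr(x, y)) DESC;
```

With samples of 20, typically a few dozen of the 1,000 segments will clear that bar by luck alone. Slice a big dataset enough ways and some slice will always look interesting; that is the multiple-comparisons peril in miniature.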