By definition, data veracity is the degree to which data is accurate, precise, and trustworthy. In reality, data is often uncertain, imprecise, and difficult to trust.
Biases:
A decision is made using values that suffer from statistical bias, i.e. they are unrepresentative of the population under consideration.
Data Lineage:
Data arrives from hundreds of sources. One or more of the sources are extremely inaccurate, but there is no lineage information to identify where the data has been stored or where it came from.
Bugs:
A software bug causes data to be calculated or transformed incorrectly.
Abnormalities:
Two weather sensors (or aircraft sensors) in close proximity report dramatically different conditions (a simple consistency check for this case is sketched below).
Sources:
A large number of negative comments about a brand show up on social media. It is unclear whether they come from bots or from genuinely unhappy customers.
And so on and on...
The bottom line: garbage in, garbage out, no matter whether the data is big or small.
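To make the abnormality case above concrete, here is a minimal, hypothetical Python sketch of the kind of consistency check one might run: it flags pairs of nearby sensors whose readings disagree by more than a plausible amount. The sensor IDs, coordinates, and thresholds are invented for illustration.

# Illustrative sketch only: flag pairs of nearby sensors whose readings
# disagree by more than a plausible threshold. Sensor IDs, coordinates,
# and the thresholds are made-up assumptions.
from math import dist

readings = {
    # sensor_id: ((x_km, y_km), temperature_C)
    "A1": ((0.0, 0.0), 21.4),
    "A2": ((0.3, 0.1), 21.9),
    "B7": ((0.2, 0.2), 35.0),  # suspiciously different from its neighbours
}

MAX_DISTANCE_KM = 1.0   # what counts as "close proximity"
MAX_DELTA_C = 5.0       # readings differing by more than this are suspect

sensors = list(readings.items())
for i, (id_a, (pos_a, temp_a)) in enumerate(sensors):
    for id_b, (pos_b, temp_b) in sensors[i + 1:]:
        if dist(pos_a, pos_b) <= MAX_DISTANCE_KM and abs(temp_a - temp_b) > MAX_DELTA_C:
            print(f"Veracity warning: {id_a} ({temp_a} C) vs {id_b} ({temp_b} C)")

A check like this does not tell you which sensor is wrong, only that the two readings cannot both be trusted, which is exactly the veracity problem.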
I'm not sure what you mean, but I think the major problem with 'big data' is that it may cover parts of a population very well but cover other parts poorly, or even neglect or be unaware of parts of the population entirely. So the main problem is bias.
Small samples may be inadequate to cover any part of the population well, so variance is the problem. Typically, in small-sample applications one has greater knowledge of the population, so proper stratification can take place, and a randomized design, a model-based design, or a combination of the two can be used.
So I think that although a small sample may give us high standard errors, at least one usually has a good plan. For 'big data,' I think that the problem is the lack of a good plan.
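To illustrate that contrast, here is a small, hypothetical Python simulation (all numbers invented): a very large sample drawn disproportionately from one stratum gives a precise but biased estimate of the population mean, while a small, proportionally stratified random sample is noisier but centred on the true value.

# Illustrative sketch only: a toy simulation of the bias-vs-variance point.
# A huge "big data" sample drawn mostly from one stratum gives a precise but
# biased estimate; a small stratified random sample gives a noisier but
# roughly unbiased one. All numbers are made up for illustration.
import random

random.seed(1)

# Population: two strata with different means (e.g. urban vs rural customers).
urban = [random.gauss(70, 10) for _ in range(80_000)]
rural = [random.gauss(40, 10) for _ in range(20_000)]
population = urban + rural
true_mean = sum(population) / len(population)

# "Big data": 50,000 records, but 98% of them come from the urban stratum.
big_data = random.sample(urban, 49_000) + random.sample(rural, 1_000)
big_mean = sum(big_data) / len(big_data)

# Small sample: 200 records, stratified in proportion to the population (80/20).
small_sample = random.sample(urban, 160) + random.sample(rural, 40)
small_mean = sum(small_sample) / len(small_sample)

print(f"true mean:               {true_mean:.1f}")
print(f"big but biased sample:   {big_mean:.1f}")    # precise, yet off-target
print(f"small stratified sample: {small_mean:.1f}")  # noisy, yet centred on truth

No amount of extra data from the over-represented stratum fixes the biased estimate; only a better sampling plan does.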
Is this what you mean by your comparison of a small sample with 'big data'?