In terms of technology, big data differs from 'ordinary' data in that a large data set is spread across many computers and hard disks and is processed in parallel.
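The split-and-process-in-parallel idea can be sketched with Python's standard library; the four "shards" here are a stand-in for data spread across four machines, and the shard count and function names are illustrative assumptions, not any particular framework's API:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker processes its own shard independently (the "map" step).
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the data set into four shards, as if stored on four machines.
    shards = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        partials = pool.map(partial_sum, shards)
    # Combine the per-shard results (the "reduce" step).
    total = sum(partials)
    print(total == sum(data))
```

The same map/reduce pattern is what distributed systems apply at scale: each node sees only its shard, and only the small partial results travel over the network.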
Big data is a collection of unstructured, heterogeneous sets of entities that have no relationship with each other. It normally cannot be used without pre-processing.
A large data set, by contrast, is a collection of entities of the same kind that have some structure and relations between them, and we can use a structured query language to retrieve the data.
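To illustrate the "same entities, shared structure, queryable" point, here is a minimal sketch using Python's built-in sqlite3; the table name, columns, and sample rows are invented for the example:

```python
import sqlite3

# An in-memory database standing in for a structured "large data set".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO users (name, age) VALUES (?, ?)",
    [("Alice", 30), ("Bob", 25), ("Carol", 35)],
)

# Because every entity shares one schema, a declarative SQL query suffices.
rows = conn.execute(
    "SELECT name FROM users WHERE age > 28 ORDER BY name"
).fetchall()
print(rows)  # [('Alice',), ('Carol',)]
```

With unstructured big data there is no shared schema to declare, which is exactly why such a query cannot be written until pre-processing imposes one.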
They're almost synonymous. I think of data sets as having some ordering, for example separate or related tables, whereas plain data can come in a variety of presentation formats, i.e. it is raw data "as is". For computer processing, it is necessary first to convert the data into data sets, then to develop dictionaries and use them to encode the data, producing a training sample, and then to analyze the training sample by identifying cause-and-effect patterns in it (the Schank-Abelson concept of meaning). In this way, big data is transformed into big data sets by normalization, and then into "big information" by identifying cause-and-effect relationships, i.e. by making sense of the data sets. Information is converted into knowledge when it is useful for achieving a goal. Activity aimed at achieving a goal is management; therefore, if information is used for management, it is already knowledge, i.e. technology.
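The "develop dictionaries, use them to encode the data into a training sample" step can be sketched in a few lines; the variable names and toy values here are hypothetical, chosen only to show the raw-data-to-encoded-sample transformation:

```python
# Raw data "as is": repeated categorical values with no numeric form.
raw = ["red", "green", "red", "blue", "green", "red"]

# Develop a dictionary: map each distinct value to an integer code.
vocab = {value: code for code, value in enumerate(sorted(set(raw)))}

# Encode the raw data with the dictionary, yielding a training sample.
encoded = [vocab[value] for value in raw]
print(vocab)    # {'blue': 0, 'green': 1, 'red': 2}
print(encoded)  # [2, 1, 2, 0, 1, 2]
```

Only after this kind of encoding can statistical or machine-learning tools look for the cause-and-effect patterns the answer describes.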