Smart grids are all about being "smart", and to make them so we can rely on analysis of the Big Data that arrives from smart grid sensors such as smart meters. To give you an example, consider the case of Demand Response. Here utilities try to curtail their peak demand to avoid having to buy extra energy on the open market. In doing so they rely on their customers, using techniques based on incentives, voluntary control or direct control. The question that arises is how to pick the right set of customers to ensure the objective is achieved: you need customers that are responsive and that can curtail the needed amount of energy. To determine all this you rely on Big Data intelligence, namely prediction methods for consumption, curtailment, and compliance. All of these predictions are based on data from the smart sensors, and the resulting decisions usually need to be made fast, although some day-ahead planning can be used as well.
To summarize, Big Data intelligence can be used to predict the energy consumption and curtailment of different customers for the benefit of utilities (and for the customers' own benefit when it is part of a home automation system).
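To make this concrete, here is a rough Python sketch of how a utility might rank and pick customers for a demand-response event from such predictions. The Customer fields, the greedy scoring rule (predicted curtailment times compliance probability), and all the numbers are my own illustrative assumptions, not a standard utility method.

# Illustrative sketch: ranking customers for a demand-response event by
# predicted curtailment and past compliance (all values are made up).
from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    predicted_curtailment_kw: float   # from a consumption/curtailment model
    compliance_probability: float     # from historical event participation

def select_customers(customers, target_kw):
    """Greedily pick customers until expected curtailment covers the target."""
    ranked = sorted(customers,
                    key=lambda c: c.predicted_curtailment_kw * c.compliance_probability,
                    reverse=True)
    selected, expected_kw = [], 0.0
    for c in ranked:
        if expected_kw >= target_kw:
            break
        selected.append(c)
        expected_kw += c.predicted_curtailment_kw * c.compliance_probability
    return selected, expected_kw

# Example: pick enough customers to cover a 50 kW peak reduction.
pool = [Customer("A", 12.0, 0.90), Customer("B", 30.0, 0.50),
        Customer("C", 8.0, 0.95), Customer("D", 25.0, 0.70)]
chosen, kw = select_customers(pool, target_kw=50.0)
print([c.customer_id for c in chosen], round(kw, 1))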
In my opinion, if you want to use BD (which is a technology) for predicting and making decisions, the best option is to combine it with a DWH (which is an architecture).
Please see the discussion "Big Data Implementation vs. Data Warehousing" at http://www.b-eye-network.com/view/17017, and also the slightly older "Big Data Versus Data Warehouse: Only One Will Survive" at http://it-tna.com/2013/02/15/big-data-versus-data-warehouse-only-one-will-survive/.
Anyway, with both the technology and the architecture in place, you have a solid platform for many kinds of analysis, SG included.
You can also use Big Data to analyse how the weather influences energy production. With this information and a weather forecast, it is possible to predict the future behaviour of renewable energy production.
Imagine, for example, the weather information at a particular location and the associated wind energy production at that same location.
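As a toy illustration of that idea, here is a minimal Python sketch that fits a curve to historical wind-speed and turbine-output pairs and then predicts production for a forecast wind speed. The data values and the choice of a cubic fit are assumptions made up for the example.

# Minimal sketch: predict wind power from a weather forecast using a curve
# fitted to co-located historical measurements (values below are invented).
import numpy as np

wind_speed_ms = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0])    # m/s
power_output_kw = np.array([15, 120, 340, 720, 1280, 1900])   # kW

# A cubic fit, since turbine output grows roughly with the cube of wind
# speed below rated power.
model = np.poly1d(np.polyfit(wind_speed_ms, power_output_kw, deg=3))

# Predict production for a forecast wind speed of 10 m/s.
print(round(float(model(10.0)), 1), "kW expected at 10 m/s")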
Analyzing huge volumes of time series data may be more efficient if some of the algorithms run on the incoming data streams. Querying large databases can introduce delays when real-time operational decisions have to be made from the analytics.
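As a rough sketch of what running analytics on the stream itself could look like, here is a tiny online rolling-mean spike detector in Python; the window size, threshold factor, and readings are arbitrary choices for illustration.

# Rough sketch: process each meter reading as it arrives instead of querying
# a database later; flag a spike against a rolling mean (thresholds arbitrary).
from collections import deque

class StreamAnalyzer:
    def __init__(self, window=60, spike_factor=1.5):
        self.window = deque(maxlen=window)
        self.spike_factor = spike_factor

    def push(self, reading_kw):
        """Handle one incoming reading; return an alert string if it spikes."""
        alert = None
        if self.window:
            mean = sum(self.window) / len(self.window)
            if reading_kw > self.spike_factor * mean:
                alert = f"spike: {reading_kw:.1f} kW vs rolling mean {mean:.1f} kW"
        self.window.append(reading_kw)
        return alert

analyzer = StreamAnalyzer(window=5)
for value in [10.0, 10.5, 9.8, 10.2, 17.5]:
    result = analyzer.push(value)
    if result:
        print(result)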
Smart Grid data is structured: typically time-stamped measurement data, more closely related to traditional SCADA time series than to unstructured internet data. The analytics are also tied to the tree structure of the Grid topology. Smart Grid analytics are therefore quite different from generic internet "big data" solutions.
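To illustrate that point, here is a small Python sketch of time-stamped measurements attached to a tree-shaped grid topology, with loads aggregated from meters up to the substation; the node names and readings are invented for the example.

# Small sketch: structured, time-stamped readings hung on a tree topology,
# aggregated from smart meters up through a feeder to a substation.
from datetime import datetime

class GridNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.readings = []   # list of (timestamp, kW) tuples

    def total_load_kw(self):
        """Latest local reading plus the aggregated load of all children."""
        local = self.readings[-1][1] if self.readings else 0.0
        return local + sum(child.total_load_kw() for child in self.children)

meter_a = GridNode("meter_A")
meter_b = GridNode("meter_B")
feeder = GridNode("feeder_1", [meter_a, meter_b])
substation = GridNode("substation", [feeder])

now = datetime.now()
meter_a.readings.append((now, 4.2))
meter_b.readings.append((now, 3.1))

print(round(substation.total_load_kw(), 1))   # 7.3 kW aggregated up the tree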
In my opinion, the efficiency and fluidity of data intelligence in smart grid technology and Big Data depend entirely on the scalability of the architecture within the smart grid itself. I believe hardware is, in the end, the limiting factor in such systems, but software shouldn't be neglected: you can only do so much with software if you're approaching the absolute limits of the hardware itself.
The most efficient and scalable "smart grid" technology I can think of right now is the network of neurons in your brain. From what I understand, your brain can contain up to 100 billion neuron nodes (give or take a few hundred million, depending upon epigenetic, evolutionary, and environmental factors), and each node can create up to 10,000 different connections, which totals a staggering 1 quadrillion connections (10k x 100 billion = 1 x 10^15) if all neural connections were saturated at the same time; all in 3 pounds of weight, and it's the most energy-efficient supercomputer in the world, running on only roughly 12 watts. There is no artificially intelligent computer or grid in existence that can do that, at least for now.
Keep in mind that when I'm talking about neural connections, I mean logical connections, not physical connections. The logical connection algorithms are unique to the neuron itself, which is why you can have two neurons with one physical connection but the capacity for 10,000 x 10,000 logical connections. You can form 100 million algorithmic logical connections with just two neurons if each neuron has a capacity of 10,000 logical connections and each logical connection performs a process and stores a unique memory pattern at the same time. Now THAT is a smart grid, isn't it great?
We must analyze and implement our smart grid technology the same way. In other words, let's try to mimic human neural evolution and apply those concepts to our own artificially fabricated neural networks. We must examine how we can recall data as fast as we store it (parallel data manipulation). Instead of promoting asynchronous throughput, why not support near-flawless, completely synchronous, full-duplex throughput, at least at the local grid level?
Don't get me wrong, I do understand that the laws of thermodynamics limit data speed in semiconductors and in the copper media through which electrical signals travel (in the brain, unlike in a computer, both the processing center and the medium an electrical signal travels through are made of the same material: neurons). But why not have each node do almost the same thing as a neuron, basically a microscopic router that can form hundreds of multipoint data connections with other mini-routers, where each logical connection forms a unique memory path and the forming of that memory path is simultaneously a process, as if memory and processing activity were synonymous rather than separate, only on a smaller and more energy-efficient scale? These mini-routers should function as smoothly and as quickly as a high-end layer-3 switch.
The only difference between the brain and an artificial machine is that the brain stores memory in the form of neural connections or pathways, and this memory can be recalled by re-forming the same neural algorithm. So in reality, the question is: how can we build a smart grid that can "store" and "process" data in the form of artificial neural connections at the same time (parallel function)? This would help increase not only data storage capacity but also processing capacity at the same time. Of course, everything I've stated is theoretical, but that doesn't mean it isn't plausible, or even probable.
Allowing this concept to be implemented in current or future technology could pave a pathway for the simultaneous growth of both the grid and its data, letting the grid evolve in proportion to the amount of data being stored. In essence, two neurons can form 10,000^2 logical connections (100 million unique connections in total), while two transistors can only form 2^2 = 4 combinations, and that doesn't even account for the separate memory storage they need (cache, RAM, and SSD/HDD), because the process itself doesn't hold a memory pattern at the same time; it produces an output, yes, but it doesn't save that output in its own relative logical network path. Modern machines are still in their infancy in terms of development, but we're slowly trucking along and correcting fallacies on the way; no rush. :-)
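Just to spell out the arithmetic behind that comparison in a few lines of Python (the figures are the rough ones quoted above, not precise neuroscience):

# Back-of-the-envelope arithmetic only; the neuron counts are the rough
# figures quoted above, not measured values.
neurons = 100_000_000_000          # ~100 billion neurons
connections_per_neuron = 10_000    # ~10,000 connections each

print(f"{neurons * connections_per_neuron:.0e}")   # 1e+15 total connections
print(10_000 * 10_000)                             # 100,000,000 pathways for two such neurons
print(2 ** 2)                                      # 4 combinations for two transistors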
This is the power of human imagination. Let's dream big, shall we?