Given the dynamic increase in the volume of data transmitted over high-speed Internet connections and processed and stored in the Cloud, Fog Computing has become an integral part of Big Data systems.
I am also hoping that someone can answer this question. There should be distinct metrics that researchers can use when implementing fog computing to improve system performance.
Fog Computing and Cloud Computing are closely interconnected: indeed, Fog Computing is an extension of Cloud Computing toward the network edge. Computing, storage, and networking resources and services are distributed in closer proximity to end users and devices, enabling benefits such as low response times and efficient bandwidth usage.
Fog and Cloud are complementary: a complex service can often be subdivided into subservices that can be deployed anywhere along the continuum from Cloud to Things. Some subservices are best placed closer to the end devices (e.g., at the network edge); others are meant to run in the Cloud because, for example, they perform long-term analysis and storage.
A first system performance metric for Fog Computing is Quality of Service, in particular the Round Trip Time (RTT) between end devices and a Fog service.
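As a rough illustration, RTT can be measured with simple application-level probes from an end device toward the Fog service. Below is a minimal Python sketch; the endpoint URL `FOG_URL` is a hypothetical placeholder for your own deployment, and taking the median over repeated samples is just one reasonable choice to dampen outliers.

```python
import time
import statistics
import urllib.request

# Hypothetical fog service endpoint; replace with your own deployment.
FOG_URL = "http://fog-node.local:8080/health"

def measure_rtt(url: str, samples: int = 20) -> float:
    """Return the median round-trip time (seconds) of simple HTTP probes."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=5).read()  # one request-response cycle
        rtts.append(time.perf_counter() - start)
    return statistics.median(rtts)

if __name__ == "__main__":
    print(f"Median RTT to fog service: {measure_rtt(FOG_URL) * 1000:.1f} ms")
```

The same probe pointed at the Cloud endpoint gives a baseline, so the Fog's latency benefit can be reported as the difference between the two medians.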
Another metric can be the reduction in bandwidth consumption enabled by the Fog, since data can be filtered and aggregated at the edge before being forwarded to the Cloud.
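A simple way to quantify this, assuming you can count bytes on both sides of the fog tier, is the fraction of raw device traffic kept off the WAN link. A minimal sketch (the example volumes are illustrative only):

```python
def bandwidth_reduction(bytes_raw: int, bytes_to_cloud: int) -> float:
    """Fraction of raw device traffic that the fog tier kept off the WAN link."""
    return 1.0 - bytes_to_cloud / bytes_raw

# Example: devices produce 500 MB/h; fog aggregation forwards only 40 MB/h.
print(f"Bandwidth reduction: {bandwidth_reduction(500_000_000, 40_000_000):.0%}")  # 92%
```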
Placing a critical subservice in the Fog rather than in the Cloud can improve service availability in hostile environments (e.g., those with intermittent network connectivity). Therefore, service uptime may be another performance metric.
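Uptime is usually reported as the classic availability ratio (time reachable over total observation time). A minimal sketch, assuming you log outage durations during the observation window:

```python
def availability(uptime_s: float, downtime_s: float) -> float:
    """Classic availability ratio: time the service was reachable / total time."""
    return uptime_s / (uptime_s + downtime_s)

# Example: 30 days of observation with 90 minutes of connectivity outages.
total_s = 30 * 24 * 3600
down_s = 90 * 60
print(f"Availability: {availability(total_s - down_s, down_s):.4%}")
```

Comparing this ratio for a Fog placement against a Cloud placement of the same subservice makes the availability gain explicit.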
Fog nodes are in general less powerful than Cloud servers. Thus, all metrics that quantify the resource efficiency of your solution (e.g., the number of distinct services that you are able to deploy on a single Fog node) may be of interest to you.
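For instance, one back-of-the-envelope version of this metric is deployment density: how many service instances fit on a node before its first resource is exhausted. A sketch under the assumption that CPU and memory footprints per service are known and fixed (the example figures are hypothetical):

```python
def deployable_services(node_cpu: float, node_mem_mb: int,
                        svc_cpu: float, svc_mem_mb: int) -> int:
    """Copies of a service that fit on a fog node, limited by the scarcest resource."""
    return min(int(node_cpu // svc_cpu), int(node_mem_mb // svc_mem_mb))

# Example: a 4-core / 4 GB fog node, each service needing 0.5 core and 300 MB.
print(deployable_services(4.0, 4096, 0.5, 300))  # -> 8 (CPU binds first; memory would allow 13)
```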
Please check our recent work "Performance Evaluation Metrics for Cloud, Fog and Edge Computing: A Review, Taxonomy, Benchmarks and Standards for Future Research".