Today, differential privacy is widely used to prevent data reconstruction attacks. Several algorithms in the literature can reconstruct the original training data from the gradients exchanged during distributed machine learning. However, when we attempt reconstruction from differentially private gradients, the accuracy of the recovered data is very poor. Has anyone tried to develop an algorithm that can extract enough information from differentially private gradients to achieve high accuracy (>90%) in data reconstruction?
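For context, a minimal sketch of why reconstruction from private gradients is hard: in DP-SGD-style training, each gradient is norm-clipped and perturbed with Gaussian noise before being shared, so any gradient-inversion attack must work with the noised version. The function and parameter names below (`privatize_gradient`, `clip_norm`, `noise_multiplier`) are illustrative, not from any specific library.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Illustrative DP-SGD-style gradient sanitization:
    # 1) clip the gradient to a fixed L2 norm bound,
    # 2) add Gaussian noise scaled to that bound.
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# The noised gradient is all an attacker observes; with a nontrivial
# noise_multiplier, gradient-matching reconstruction degrades sharply.
g = np.array([3.0, 4.0])  # L2 norm 5.0, clipped down to 1.0
private_g = privatize_gradient(g, clip_norm=1.0, noise_multiplier=0.5)
```

The larger the `noise_multiplier` (i.e., the stronger the privacy guarantee), the less signal remains in the shared gradient, which is why >90% reconstruction accuracy under meaningful privacy budgets would be surprising.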
