Keywords: data privacy; image de-identification; privacy-preserving deep learning

Source: DOI:10.3390/s24103166 | PDF (PubMed)

Abstract:
Differential privacy has emerged as a practical technique for privacy-preserving deep learning. However, recent studies on privacy attacks have demonstrated vulnerabilities in existing differential privacy implementations for deep models. While encryption-based methods offer robust security, their computational overhead is often prohibitive. To address these challenges, we propose a novel differential privacy-based image generation method. Our approach employs two distinct noise types: one renders the image unrecognizable to humans, preserving privacy during transmission, while the other retains the features essential for machine-learning analysis. This allows a deep learning service to deliver accurate results without compromising data privacy. We demonstrate the feasibility of our method on the CIFAR-100 dataset, which offers realistic complexity for evaluation.
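The abstract's core mechanism, adding calibrated noise to an image so that it becomes unrecognizable while retaining statistical structure, can be illustrated with the classic Gaussian mechanism from differential privacy. The sketch below is not the paper's two-noise method; it is a minimal, generic example assuming images scaled to [0, 1], a per-pixel sensitivity of 1, and the standard analytic bound sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.

```python
import numpy as np

def gaussian_mechanism(image, epsilon, delta, sensitivity=1.0, rng=None):
    """Perturb a [0, 1]-scaled image with Gaussian noise calibrated to
    (epsilon, delta)-differential privacy via the classic analytic bound.

    Note: a generic illustration, not the two-noise scheme proposed
    in the paper; `sensitivity=1.0` is an assumption for demonstration.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Classic Gaussian-mechanism calibration:
    # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Clip back to the valid pixel range.
    return np.clip(noisy, 0.0, 1.0)

# Example on a random 32x32 RGB image (CIFAR-100 resolution).
img = np.random.default_rng(0).random((32, 32, 3))
private_img = gaussian_mechanism(img, epsilon=1.0, delta=1e-5)
```

With epsilon = 1.0 and delta = 1e-5 the resulting sigma is roughly 4.8, so the output is visually pure noise after clipping, which matches the abstract's goal of making the transmitted image unrecognizable to humans; recovering machine-usable features would require the paper's second, structured noise component.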