Volume 47 Issue 8
Aug. 2025
Citation: LIU Zhenhua, WANG Wenxin, DONG Xinfeng, WANG Baocang. One-sided Personalized Differential Privacy Random Response Algorithm Driven by User Sensitive Weights[J]. Journal of Electronics & Information Technology, 2025, 47(8): 2768-2779. doi: 10.11999/JEIT250099

One-sided Personalized Differential Privacy Random Response Algorithm Driven by User Sensitive Weights

doi: 10.11999/JEIT250099 cstr: 32379.14.JEIT250099
Funds:  The National Cryptologic Science Fund of China (2025NCSF02032), The Natural Science Foundation of Shaanxi Province (2022JZ-38), The National Natural Science Foundation of China (61807026), The Secure Communication Key Laboratory Stabilization Program Support Project (2023)
  • Received Date: 2025-02-20
  • Rev Recd Date: 2025-04-14
  • Available Online: 2025-04-15
  • Publish Date: 2025-08-27
Objective  One-sided differential privacy has received increasing attention in privacy protection due to its ability to shield sensitive information. This mechanism ensures that adversaries cannot substantially reduce uncertainty regarding record sensitivity, thereby enhancing privacy. However, its use in practical datasets remains constrained. Specifically, the random response algorithm under one-sided differential privacy performs effectively only when the proportion of sensitive records is low, but yields limited results in datasets with high sensitivity ratios. Examples include medical records, financial transactions, and personal data in social networks, where sensitivity levels are inherently high. Existing algorithms often fail to meet privacy protection requirements in such contexts. This study proposes an extension of the one-sided differential privacy random response algorithm by introducing user-sensitive weights. The method enables efficient processing of highly sensitive datasets while substantially improving data utility and maintaining privacy guarantees, supporting the secure analysis and application of high-sensitivity data.

Methods  This study proposes a one-sided personalized differential privacy random response algorithm comprising three key stages: sensitivity specification, personalized sampling, and fixed-value noise addition. In the sensitivity specification stage, user data are mapped to sensitivity weight values using a predefined sensitivity function. This function reflects both the relative importance of each record to the user and its quantified sensitivity level. The resulting sensitivity weights are then normalized to compute a comprehensive sensitivity weight for each user. In the personalized sampling stage, the data sampling probability is adjusted dynamically according to the user’s comprehensive sensitivity weight. Unlike the uniform-probability sampling employed in conventional methods, this personalized approach reduces sampling bias and improves data representativeness, thereby enhancing utility. In the fixed-value noise addition stage, the noise amount is determined in proportion to the comprehensive sensitivity weight. In high-sensitivity scenarios, a larger noise value is added to reinforce privacy protection; in low-sensitivity scenarios, the noise is reduced to preserve data availability. This adaptive mechanism allows the algorithm to balance privacy protection with utility across different application contexts.
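The three stages can be made concrete with a minimal sketch. The Python below is illustrative only: the attribute-level function attr_sensitivity, the mean-based normalization, the release rule base_p * (1 - w), and the noise rule noise_scale * w are assumed functional forms for exposition, not the calibrated formulas of the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def comprehensive_weight(record, attr_sensitivity):
    # Stage 1 (sensitivity specification): map each attribute to a
    # sensitivity weight with a user-chosen function, then normalize
    # into a single comprehensive weight in [0, 1]. The mean is one
    # plausible normalization, assumed here for simplicity.
    weights = np.array([attr_sensitivity(a) for a in record], dtype=float)
    return float(weights.mean())

def one_sided_pdp_response(records, attr_sensitivity, base_p=0.8, noise_scale=5.0):
    # Stages 2 and 3 (personalized sampling, fixed-value noise):
    # a higher comprehensive weight lowers the release probability and
    # enlarges the deterministic noise offset. Both rules are hypothetical.
    released = []
    for rec in records:
        w = comprehensive_weight(rec, attr_sensitivity)
        if rng.random() < base_p * (1.0 - w):                     # Stage 2
            released.append([a + noise_scale * w for a in rec])  # Stage 3
    return released

# Example: attributes above 100 are deemed fully sensitive.
data = [[120.0, 80.0], [30.0, 40.0], [150.0, 160.0]]
print(one_sided_pdp_response(data, lambda a: 1.0 if a > 100.0 else 0.0))
```

In this sketch the one-sided character shows up as an asymmetry: a fully sensitive record (w = 1) is never released, while a non-sensitive one (w = 0) passes through with probability base_p and no distortion.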
Results and Discussions  The primary innovations of this study are reflected in three areas. First, a one-sided personalized differential privacy random response algorithm is proposed, incorporating a sensitivity specification function to allocate personalized sensitivity weights to user data. This design captures user-specific sensitivity requirements across data attributes and improves system efficiency by minimizing user interaction. Second, a personalized sampling method based on comprehensive sensitivity weights is developed to support fine-grained privacy protection. Compared with conventional approaches, this method dynamically adjusts sampling strategies in response to user-specific privacy preferences, thereby increasing data representativeness while maintaining privacy. Third, the algorithm’s sensitivity shielding property is established through theoretical analysis, and its effectiveness is validated via simulation experiments.

The results show that the proposed algorithm outperforms the traditional one-sided differential privacy random response algorithm in both data utility and robustness. In high-sensitivity scenarios, the improvements in query accuracy and robustness are particularly evident. When the data follow a Laplace distribution, for the sum function, the Root Mean Square Error (RMSE) produced by the proposed algorithm is approximately 76.67% of that generated by the traditional algorithm, with the threshold upper bound set to 0.6 (Fig. 4(c)). When the data follow a normal distribution, for the coefficient of variation function, the RMSE produced by the proposed algorithm remains below 200 regardless of whether the upper bound of the threshold t is 0.7, 0.8, or 0.9, whereas the RMSE of the traditional algorithm consistently exceeds 200 (Fig. 5(g, h, i)). On real-world datasets, the proposed algorithm achieves higher data utility across all three evaluated functions compared with the traditional approach (Fig. 6); an illustrative evaluation harness in this spirit is sketched after the Conclusions.

Conclusions  The proposed one-sided personalized differential privacy random response algorithm achieves effective performance under an equivalent level of privacy protection. It is applicable not only to datasets with a low proportion of sensitive records but also to those with high sensitivity, such as healthcare and financial transaction data. By integrating sensitivity specification, personalized sampling, and fixed-value noise addition, the algorithm balances privacy protection with data utility in complex scenarios. This approach offers reliable technical support for the secure analysis and application of highly sensitive data. Future work may investigate the extension of this algorithm to scenarios involving correlated data in relational databases.
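The RMSE figures quoted in the Results compare repeated private answers to a query against its exact answer. The harness below is a generic sketch in that spirit: the Laplace-distributed synthetic data mirrors the simulations, but the placeholder mechanism (Gaussian perturbation) is a stand-in for whatever private release is under test, not the paper's randomized response.

```python
import numpy as np

def rmse(estimates, truth):
    # Root Mean Square Error between repeated private estimates
    # and the exact query answer.
    e = np.asarray(estimates, dtype=float)
    return float(np.sqrt(np.mean((e - truth) ** 2)))

def coefficient_of_variation(x):
    # One of the evaluated query functions: std / mean.
    x = np.asarray(x, dtype=float)
    return float(np.std(x) / np.mean(x))

rng = np.random.default_rng(7)
data = rng.laplace(loc=10.0, scale=2.0, size=1_000)  # synthetic Laplace data

def mechanism(x):
    # Placeholder private release; substitute the randomized-response
    # mechanism under evaluation here.
    return x + rng.normal(0.0, 1.0, size=x.size)

true_sum = float(data.sum())
private_sums = [float(mechanism(data).sum()) for _ in range(200)]
print("sum RMSE:", rmse(private_sums, true_sum))

true_cv = coefficient_of_variation(data)
private_cvs = [coefficient_of_variation(mechanism(data)) for _ in range(200)]
print("CV RMSE:", rmse(private_cvs, true_cv))
```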

