LI Chaohao, WANG Haoran, ZHOU Shaopeng, YAN Haonan, ZHANG Feng, LU Tianyang, XI Ning, WANG Bin. LLM-based Data Compliance Checking for IoT Scenarios[J]. Journal of Electronics & Information Technology. doi: 10.11999/JEIT250704

LLM-based Data Compliance Checking for IoT Scenarios

doi: 10.11999/JEIT250704 cstr: 32379.14.JEIT250704
  • Received Date: 2025-07-28
  • Accepted Date: 2025-11-03
  • Rev Recd Date: 2025-11-03
  • Available Online: 2025-11-13
  •   Objective  The enforcement of regulations such as the Data Security Law of the People's Republic of China, the Personal Information Protection Law of the People's Republic of China, and the European Union's General Data Protection Regulation (GDPR) has made data compliance checking a crucial mechanism for regulating data processing activities, ensuring data security, and safeguarding the legitimate rights and interests of individuals and organizations. However, characteristics of the Internet of Things (IoT), namely its abundance of diverse, heterogeneous devices and the dynamic, large-scale, and variable nature of its data, make compliance checking difficult. Specifically, the logs and traffic generated by IoT devices are long, unstructured, and ambiguous, so traditional rule-matching methods yield a high false positive rate. Meanwhile, dynamic business scenarios and user-defined compliance requirements further complicate rule design, maintenance, and decision-making.  Methods  To address these challenges, this paper proposes a novel large language model (LLM)-driven data compliance checking method for IoT scenarios. In the first stage, a fast regular expression matching algorithm efficiently screens all potentially non-compliant data against a comprehensive rule database, producing structured preliminary results that record the original non-compliant content, the type of non-compliance, and related information. The rule database covers current laws and regulations, standard requirements, enterprise norms, and customized business requirements, and is both flexible and extensible. 
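As a rough illustration of the first-stage screening (a minimal sketch, not the authors' implementation; the rule patterns, violation type names, and result fields below are hypothetical), the rule database can be modeled as a set of compiled regular expressions, each producing a structured preliminary result per match:

```python
import re

# Hypothetical rule database: each entry maps a violation type to a compiled
# regular expression. Real rules would be derived from laws, standards, and
# enterprise norms, and maintained in an extensible store.
RULES = {
    "plaintext_phone": re.compile(r"\b1[3-9]\d{9}\b"),       # CN mobile number
    "plaintext_email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "id_number":       re.compile(r"\b\d{17}[\dXx]\b"),       # CN ID card number
}

def screen(records):
    """Stage 1: fast regex screening of raw IoT log/traffic text.

    Returns structured preliminary results: the matched content, the
    violation type, and the surrounding record for later LLM review.
    """
    hits = []
    for i, text in enumerate(records):
        for vtype, pattern in RULES.items():
            for m in pattern.finditer(text):
                hits.append({
                    "record_id": i,
                    "violation_type": vtype,
                    "matched_content": m.group(0),
                    "context": text,
                })
    return hits

hits = screen(["2024-01-01 dev42 upload user phone=13812345678 ok"])
```

Keeping the full record as `context` matters: the second-stage LLM needs the surrounding text to decide whether a match is a genuine violation or a false positive.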
By exploiting the efficiency of regular expression matching and extracting structured preliminary results, this stage copes with the massive volume of long IoT text data and improves the accuracy of the subsequent LLM review. In the second stage, an LLM verifies the preliminary detection results of the first stage. For each violation type, the LLM adaptively selects different prompts to perform differentiated classification and detection.  Results and Discussions  Log and traffic data were collected from 52 IoT devices in a running environment (Table 2), and a compliance checking rule library for IoT devices was established in accordance with the Cybersecurity Law, the Data Security Law, other relevant laws and regulations, and enterprises' internal information security rules. First-stage rule matching on the collected data identified 55,080 potentially non-compliant records, with a false positive rate as high as 64.3%. The study then compares three aspects: benchmark models, prompt schemes, and role prompts. (1) In the benchmark comparison, eight mainstream LLMs with different parameter scales, including Qwen2.5-32B-Instruct, DeepSeek-R1-70B, and DeepSeek-R1-0528, were used to review the detection results (Table 5). After LLM review, the false positive rate dropped from 64.3% to 6.9%, effectively improving the quality of compliance checking, while the error rate introduced by the LLM itself was below 0.01%. 
(2) The prompt engineering method substantially affects the LLM review results (Table 6). With general prompts, the final false positive rate reached 59%. Using chain-of-thought prompts alone or few-shot prompts alone reduced the false positive rate to roughly 12% and 6%, respectively, and lowered the LLM's own error rate to about 30% and 13%. Combining the two methods further reduced the error rate of the few-shot prompting method to 0.01%. (3) The impact of system role prompts on review accuracy is shown in Table 7. Simple role prompts outperform the absence of role prompts in accuracy and F1, and detailed role prompts show a more pronounced overall advantage over simple role prompts. Ablation experiments (Table 8) further examine the contributions of rule classification and prompt engineering to compliance checking. The method supplements each violation type with its own specific knowledge, which reduces prompt redundancy, mitigates mutual interference and misjudgment, and thereby lowers the false alarm rate of LLM review.  Conclusions  This paper proposes a novel LLM-driven data compliance checking method for IoT scenarios that addresses compliance checking of large-scale unstructured device data. Rationality analysis experiments substantiate the feasibility of the solution, and the experimental results demonstrate its effectiveness in reducing the false positive rate of device data compliance checking. 
The original rule-based checking method exhibited an overall false positive rate of 64.3%, which LLM review reduced to 6.9%, while the error rate introduced by the model itself was kept below 0.01%.
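The second-stage adaptive prompt selection described in the Methods can be sketched roughly as follows (a hedged illustration only: the role text, prompt templates, and type names are hypothetical, not the paper's actual prompts). Each violation type gets its own template combining a role prompt, chain-of-thought instructions, and a worked few-shot example, mirroring the prompt schemes compared in the experiments:

```python
# Hypothetical per-violation-type prompt templates. The paper reports that
# combining chain-of-thought and few-shot prompting with a detailed role
# prompt works best, so each template carries all three elements.
ROLE = "You are a data compliance auditor for IoT device logs and traffic."

PROMPTS = {
    "plaintext_phone": (
        "Decide step by step whether the matched string is a real mobile "
        "number transmitted in plaintext, or a false positive such as a "
        "device serial number. Example: 'sn=13800000000' -> FALSE_POSITIVE.\n"
        "Answer VIOLATION or FALSE_POSITIVE."
    ),
    "plaintext_email": (
        "Decide step by step whether the matched string is a personal email "
        "address, or a false positive such as a vendor support contact baked "
        "into firmware. Answer VIOLATION or FALSE_POSITIVE."
    ),
}

def build_review_prompt(hit):
    """Stage 2: assemble a type-specific review prompt for one preliminary hit."""
    task = PROMPTS[hit["violation_type"]]
    return (f"{ROLE}\n\n{task}\n\n"
            f"Matched content: {hit['matched_content']}\n"
            f"Context: {hit['context']}")

prompt = build_review_prompt({
    "violation_type": "plaintext_phone",
    "matched_content": "13812345678",
    "context": "dev42 upload phone=13812345678",
})
```

Keeping one focused template per violation type, rather than one prompt listing every rule, is what limits prompt redundancy and the mutual interference between rules discussed in the ablation study.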