Secure Hardware Accelerators for Machine Learning: Design, Evaluation, and Mitigation of Vulnerabilities

Title Secure Hardware Accelerators for Machine Learning: Design, Evaluation, and Mitigation of Vulnerabilities
Summary This master's project investigates the security of hardware accelerators designed for machine learning and proposes techniques to evaluate and mitigate their vulnerabilities.
Keywords Hardware Security, Machine Learning, Privacy Analysis
TimeFrame
References
Prerequisites
Author Mahdi Fazeli and Ahmad Patooghy
Supervisor Mahdi Fazeli and Ahmad Patooghy (North Carolina A&T State University, US)
Level Master
Status Open


Abstract: As hardware accelerators play a pivotal role in modern machine learning systems, ensuring their security is paramount. This master's project investigates the security of hardware accelerators designed for machine learning and proposes techniques to evaluate and mitigate vulnerabilities, ensuring the integrity and confidentiality of machine learning operations.

Objectives:
1. Hardware Accelerator Vulnerability Analysis: Investigate security vulnerabilities specific to hardware accelerators used in machine learning applications. Identify attack vectors such as side-channel attacks, hardware Trojans, and supply-chain vulnerabilities (a timing-probe sketch follows the Deliverables list).
2. Security Evaluation Framework: Develop a comprehensive security evaluation framework tailored to hardware accelerators. Define security metrics, threat models, and evaluation criteria to assess the resilience of accelerators against various attacks (a leakage-assessment sketch follows below).
3. Vulnerability Mitigation Techniques: Research and implement mitigation techniques that enhance the security of hardware accelerators. Explore countermeasures against side-channel attacks, hardware Trojans, and other potential threats (a secure-boot sketch follows below).
4. Performance Impact Assessment: Evaluate the performance overhead introduced by security enhancements and assess its impact on machine learning model execution time, energy consumption, and accuracy (a benchmarking sketch follows below).
5. Real-world Applications: Apply the developed security evaluation framework and mitigation techniques to practical machine learning accelerator designs or FPGA-based accelerators, considering use cases such as edge AI, autonomous vehicles, and IoT devices.

Methodology:
• Literature Review: Conduct an extensive review of security challenges in hardware accelerators for machine learning and of existing mitigation techniques.
• Vulnerability Analysis: Identify potential vulnerabilities through theoretical analysis and practical experiments; this may involve reverse engineering, code analysis, or hardware testing.
• Security Evaluation Framework Development: Design and implement a comprehensive security evaluation framework tailored to hardware accelerators, defining metrics, threat models, and evaluation procedures.
• Mitigation Technique Implementation: Research, develop, and implement mitigation techniques that address the identified vulnerabilities. This may include hardware design modifications, secure boot procedures, or cryptographic enhancements.
• Performance Evaluation: Evaluate the performance impact of security enhancements using benchmark machine learning models, measuring execution time, energy consumption, and model accuracy under different scenarios.
• Real-world Application: Apply the developed framework and mitigation techniques to real or simulated machine learning accelerator scenarios, and analyze how effectively the security measures protect against attacks.

Deliverables:
• A research paper documenting the investigation, implementation, and evaluation of hardware accelerator security measures.
• Open-source code and tools for security evaluation and mitigation of hardware accelerators.
• Case studies demonstrating the practical application of security enhancements in machine learning accelerators.
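To make the vulnerability analysis (Objective 1) concrete, the following minimal sketch probes for a timing side channel by checking whether inference latency depends on the input. Everything here is a hypothetical placeholder: run_inference stands in for the real accelerator driver call and is simulated with an artificial input-dependent delay so the probe has something to detect.

```python
# Minimal timing side-channel probe: does inference latency depend on the input?
import time
import statistics

def run_inference(x):
    # Hypothetical stand-in for the accelerator call under test. The
    # artificial sleep models input-dependent timing (the leakage we probe for).
    time.sleep(0.0001 * sum(x))

def median_latency(x, repetitions=50):
    """Median wall-clock latency of inference on input x, in seconds."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        run_inference(x)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Two inputs with different Hamming weights: a large latency gap hints
# at an exploitable timing channel.
low, high = [0] * 16, [1] * 16
print(f"low-weight input:  {median_latency(low):.6f} s")
print(f"high-weight input: {median_latency(high):.6f} s")
```

A real measurement campaign would replace the stub with the actual driver call, use many more repetitions, and apply a statistical test rather than eyeballing medians.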
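For the security metrics in the evaluation framework (Objective 2), one widely used first-order leakage metric is Welch's t-test (TVLA) over power traces: one trace set captured with a fixed input, one with random inputs. This is a minimal sketch assuming traces are already captured as NumPy arrays; the synthetic data and the injected leaky sample point exist only for illustration, and the |t| > 4.5 threshold follows common TVLA practice.

```python
# Welch's t-test (TVLA-style) leakage assessment over power traces.
import numpy as np
from scipy import stats

def tvla_t_test(fixed_traces, random_traces):
    """Per-sample Welch t-statistic between fixed-input and random-input
    trace sets, each shaped (num_traces, num_samples). |t| > 4.5 at any
    sample point is the conventional threshold for flagging leakage."""
    t_stat, _ = stats.ttest_ind(fixed_traces, random_traces,
                                axis=0, equal_var=False)
    return t_stat

# Synthetic demo data standing in for oscilloscope captures:
rng = np.random.default_rng(1)
fixed = rng.normal(0.0, 1.0, size=(500, 2000))
randm = rng.normal(0.0, 1.0, size=(500, 2000))
randm[:, 1000] += 0.5  # inject one leaky sample point for illustration

t = tvla_t_test(fixed, randm)
print(f"{np.count_nonzero(np.abs(t) > 4.5)} sample points exceed |t| > 4.5")
```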
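Among the mitigation techniques (Objective 3), the proposal mentions secure boot procedures. The sketch below shows the idea for an FPGA accelerator: the loader accepts a bitstream only if a detached Ed25519 signature verifies against a trusted public key. It uses the 'cryptography' package; the in-memory keypair and placeholder bitstream bytes are assumptions for the demo, and in practice the public key would be provisioned on the device and signing done offline at build time.

```python
# Secure-boot-style integrity check for an FPGA bitstream.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def load_if_verified(bitstream: bytes, signature: bytes, public_key) -> bytes:
    """Return the bitstream only if its signature verifies; raise otherwise."""
    public_key.verify(signature, bitstream)  # raises InvalidSignature on tamper
    return bitstream

# Demo with an in-memory keypair (placeholder for fused keys + offline signing):
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
bitstream = b"\x00\x01placeholder-bitstream\x02\x03"
signature = private_key.sign(bitstream)

load_if_verified(bitstream, signature, public_key)             # accepted
try:
    load_if_verified(bitstream + b"!", signature, public_key)  # tampered copy
except InvalidSignature:
    print("tampered bitstream rejected")
```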
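Finally, for the performance impact assessment (Objective 4), a sketch of the overhead measurement loop: run the same benchmark with a countermeasure disabled and enabled, then report the latency ratio and accuracy delta. The predict functions and data here are toy stand-ins for the accelerator wrapper and a held-out test set; energy measurement would need platform-specific instrumentation and is omitted.

```python
# Overhead assessment: latency ratio and accuracy delta of a countermeasure.
import time
import numpy as np

def benchmark(predict_fn, x_test, y_test):
    start = time.perf_counter()
    predictions = predict_fn(x_test)
    latency = time.perf_counter() - start
    accuracy = float(np.mean(predictions == y_test))
    return latency, accuracy

def report_overhead(baseline_fn, protected_fn, x_test, y_test):
    base_lat, base_acc = benchmark(baseline_fn, x_test, y_test)
    prot_lat, prot_acc = benchmark(protected_fn, x_test, y_test)
    print(f"latency overhead: {prot_lat / base_lat:.2f}x")
    print(f"accuracy delta:   {prot_acc - base_acc:+.4f}")

# Toy usage with dummy classifiers standing in for the accelerator:
def baseline(data):
    return (data[:, 0] > 0).astype(int)

def protected(data):
    time.sleep(0.01)  # models the cost of an enabled countermeasure
    return baseline(data)

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 8))
y = (x[:, 0] > 0).astype(int)
report_overhead(baseline, protected, x, y)
```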