
NYCU × HHRI: Fortifying Cybersecurity in the Age of AI

To address AI-related security challenges in electric autonomous vehicles, Associate Professor Chia-Mu Yu of NYCU's Department of Electronics and Electrical Engineering partnered with HHRI to examine how adversarial stickers and surface dirt can cause vehicle cameras to misjudge road conditions. As the collaboration deepened, and given Hon Hai Technology Group's leadership in manufacturing, the research focus gradually shifted toward AI security in smart manufacturing. The goal is to ensure that highly automated production lines and lights-out factories have the resilience to withstand cybersecurity attacks.
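The adversarial-sticker idea can be illustrated with a deliberately simplified sketch. The random linear classifier, image size, and exaggerated perturbation magnitude below are all hypothetical stand-ins, not part of the actual NYCU/HHRI research; the point is only that changing a handful of influential pixels can flip a model's decision while the rest of the image stays untouched.

```python
import numpy as np

# Toy sketch only: a random linear classifier stands in for a real
# perception model; all names and magnitudes are illustrative.
rng = np.random.default_rng(42)
w = rng.normal(size=64)            # weights of a linear "sign classifier"
image = rng.uniform(0.0, 1.0, 64)  # a benign 8x8 image, flattened

def predict(x):
    return "stop" if w @ x > 0 else "speed-limit"

# "Sticker" attack: perturb only the 4 pixels the classifier weighs
# most heavily, pushing the score toward the opposite class.
# The magnitude (10.0) is exaggerated for clarity.
patch = np.argsort(np.abs(w))[-4:]
score = w @ image
sticker = image.copy()
sticker[patch] -= np.sign(score) * np.sign(w[patch]) * 10.0

print(predict(image), "->", predict(sticker))  # the tiny patch flips the label
```

Real attacks against camera-based perception work under far harder constraints (physical printability, viewing angles, lighting), but the underlying principle is the same: small, targeted input changes exploit the model's decision boundary.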

Focusing on Three Major Issues

Current joint research between Professor Yu's team and HHRI focuses on three key issues:

1. Data Poisoning: Because the training data used in large language models comes from diverse sources, it may contain malicious code from the Internet or code with security vulnerabilities. This can cause the models to generate unsafe programs.

2. Code Quality and Security: AI-generated code varies in quality and may contain vulnerabilities or errors. While human programmers also make mistakes, because AI generates code extremely quickly, more efficient automated detection tools are needed to ensure code security.

3. AI Hallucinations and Program Errors: AI language models occasionally hallucinate, producing fictitious content. While this may occur less frequently in code, a more serious problem is that AI may generate code that appears to work but contains errors or security vulnerabilities that malicious attackers could exploit. Therefore, research goals include reducing the proportion of harmful code generated by AI models and developing efficient automated tools to detect errors.
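To make the "automated detection tools" idea concrete, here is a minimal, hypothetical sketch of a pattern-based scanner for AI-generated Python snippets. The pattern list and the sample snippet are invented for illustration; production tooling (linters, static application security testing, fuzzing) is far more thorough, but the sketch shows why cheap automated checks matter when code is generated faster than humans can review it.

```python
import re

# Hypothetical scanner: flag a few well-known risky constructs in
# AI-generated Python code. The pattern set is illustrative, not complete.
RISKY_PATTERNS = {
    r"\beval\s*\(": "use of eval() (arbitrary code execution)",
    r"\bexec\s*\(": "use of exec() (arbitrary code execution)",
    r"subprocess\.\w+\([^)]*shell\s*=\s*True": "shell=True (command injection risk)",
    r"pickle\.loads?\s*\(": "unpickling data (unsafe deserialization)",
}

def scan(code: str):
    """Return (line number, message) pairs for each risky pattern found."""
    findings = []
    for lineno, line in enumerate(code.splitlines(), 1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings

# A made-up "AI-generated" snippet containing three risky constructs.
generated = '''
import pickle, subprocess
data = pickle.loads(blob)          # deserializing untrusted bytes
subprocess.run(cmd, shell=True)    # shell injection risk
result = eval(user_input)          # arbitrary code execution
'''

for lineno, msg in scan(generated):
    print(f"line {lineno}: {msg}")
```

Pattern matching alone cannot prove code safe, which is why the research direction described above pairs it with deeper analysis; but as a first-pass filter it scales with the speed of code generation.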

Clear Roles, Precise Solutions

Professor Yu observes that the division of labor in the collaboration with HHRI is very clear. HHRI acts as the “problem setter,” identifying real-world industry needs and pain points to ensure that research directions address real issues. NYCU's team serves as the “problem solver,” extracting specific, feasible research topics from broad issues and developing solutions.

Through regular meetings, NYCU's team also receives direct, timely feedback from Hon Hai business units. Both sides quickly reach consensus on approaches and communication styles, greatly reducing the communication gap often seen in typical industry-academia collaborations.

Professor Yu believes this collaborative model ensures that academic research moves beyond theory. It allows algorithms and systems developed by his team to be tested on Hon Hai's experimental production lines to verify their feasibility and effectiveness, paving the way for real-world applications.

More importantly, collaboration with HHRI provides NYCU's team with industry perspectives unavailable in academia alone.

Professor Yu explains that academic research is often theory-based, while industry collaboration provides direct engagement with the most pressing real-world problems. Moreover, HHRI not only identifies immediate short-term challenges but also alerts academic teams to issues they may face three to five years ahead, ensuring that research truly helps enterprises reduce costs and improve operational efficiency.

Defending Against AI Attacks, Elevating Taiwan's Cybersecurity Standards

“Collaboration with HHRI holds significant demonstration value and has a profound impact on Taiwan's cybersecurity industry,” says Professor Yu. Currently, most people focus on using AI to solve traditional cybersecurity problems, but NYCU and the Information Security Research Center are already working to defend against attacks targeting AI itself.

He further explains that as AI is widely deployed in electric autonomous vehicles, smart factories, and other fields, hackers no longer target only traditional information systems but are shifting their attention to AI systems. Traditional systems follow program logic, while AI systems involve more complex mathematical algorithms. As a result, attack methods and defensive strategies differ completely between the two.

Defensive AI systems strengthen resilience against evolving cybersecurity threats.

As tech giants like Google and OpenAI have established AI security teams, Professor Yu believes the NYCU-HHRI collaboration demonstrates to Taiwan's cybersecurity industry that defending AI systems themselves is a worthwhile and promising research direction. He hopes this collaboration will encourage more Taiwanese enterprises to invest in cybersecurity, elevating the overall technical standards of the cybersecurity industry.