LLMs Can Generate Obfuscated Assembly Code, Study Finds
🦠 A recent study shows that Large Language Models (LLMs) can generate obfuscated assembly code, raising security concerns: machine-obfuscated malware could become harder for anti-virus engines to detect. The researchers introduce the MetamorphASM benchmark, built around the MetamorphASM Dataset (MAD) of 328,200 obfuscated samples, and use it to evaluate LLMs such as GPT-3.5/4 and other models on obfuscation techniques including dead code insertion and control flow changes. The findings indicate that LLMs can effectively produce obfuscated code, highlighting a new risk for cybersecurity and the need for further research into countermeasures against this evolving threat.
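To make the technique concrete, here is a minimal sketch of dead code insertion, one of the obfuscation methods the study evaluates: semantically inert instructions are interleaved with the real ones, so the code's byte pattern changes while its behavior does not. The helper `insert_dead_code` and the specific filler instructions below are illustrative assumptions, not the paper's implementation or the LLM-based generation pipeline it studies.

```python
import random

# Illustrative x86 filler sequences that leave registers, memory, and
# flags unchanged (assumed examples, not taken from the paper).
DEAD_CODE = [
    ["nop"],                   # explicit no-op
    ["mov eax, eax"],          # move a register onto itself: no effect
    ["xchg ebx, ebx"],         # exchange a register with itself: no effect
    ["push ecx", "pop ecx"],   # save and immediately restore: no net effect
    ["lea edx, [edx]"],        # load a register's own value: no effect
]

def insert_dead_code(asm_lines, prob=0.5, seed=None):
    """Return a new listing with inert filler instructions sprinkled
    between the original ones; program behavior is unchanged, but the
    instruction sequence (and any signature over it) is different."""
    rng = random.Random(seed)
    out = []
    for line in asm_lines:
        out.append(line)
        if rng.random() < prob:
            out.extend(rng.choice(DEAD_CODE))
    return out

if __name__ == "__main__":
    original = ["mov eax, 5", "add eax, 3", "ret"]
    for instruction in insert_dead_code(original, seed=1):
        print(instruction)
```

A rule-based transformer like this is trivial to fingerprint; the study's concern is that an LLM can produce more varied rewrites of the same kind, weakening signature-based detection.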