Abstract
Malicious programs, commonly known as malware, have been pervasive for nearly forty years and continue to evolve and multiply at an exponential rate. Meanwhile, numerous research efforts have targeted malware detection with diverse strategies, yet these seem to work only temporarily, as new attack tactics and techniques emerge quickly. A growing number of proposals instead analyze the problem from the attacker's perspective, as suggested by MITRE ATT&CK. This article presents an approach that uses Large Language Models (LLMs) to generate malware and to understand its generation from a red team's perspective. It demonstrates how malware can be created with current models that incorporate censorship, and a specialized code-generation model is fine-tuned so that it learns to create malware. Both scenarios are evaluated using the pass@k metric and a controlled execution environment (malware lab) to prevent its spread.