This project is not just a standard malware classifier; it's an end-to-end study of adversarial robustness in AI security.
While many models can achieve high accuracy (>99%) on clean data, this project demonstrates that such models are often dangerously vulnerable to even simple evasion attacks. It then implements a well-established defense, adversarial training, to build a classifier that remains robust under attack.
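Adversarial training works by attacking the model during training and mixing the attacked samples back into each batch, so the classifier learns to resist the perturbations it will face at test time. The sketch below illustrates the idea only; the toy architecture, feature dimensions, and the choice of FGSM as the attack are placeholder assumptions, not this project's actual pipeline.

```python
# Minimal adversarial-training sketch (PyTorch). Everything here is a
# stand-in: a toy MLP, random features, and an FGSM attack chosen for
# brevity; the project's real model and attack may differ.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def fgsm(x, y, epsilon=0.1):
    """One signed-gradient step on the loss: the classic FGSM perturbation."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Synthetic stand-in data: 256-dim feature vectors, binary labels.
x = torch.randn(128, 256)
y = torch.randint(0, 2, (128,))

for epoch in range(5):
    x_adv = fgsm(x, y)                # attack the current model
    inputs = torch.cat([x, x_adv])    # train on clean + adversarial samples
    labels = torch.cat([y, y])
    optimizer.zero_grad()
    loss_fn(model(inputs), labels).backward()
    optimizer.step()
```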
Final Result: the model's robustness against a black-box adversarial attack was systematically improved from 0% to 100%.
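Here, "robustness" is best read as robust accuracy: the fraction of attacked samples the model still classifies correctly, so 0% means every adversarial example evades the classifier and 100% means none do. A minimal sketch of that metric (the function name and inputs are hypothetical, not from this project's code):

```python
import torch

@torch.no_grad()
def robust_accuracy(model, x_adv, y):
    """Share of adversarial examples still classified correctly."""
    preds = model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()
```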