The rapid integration of Machine Learning (ML) into critical sectors has exposed the vulnerability of ML models to adversarial attacks, deliberately crafted inputs designed to induce erroneous model predictions. This study presents a systematic evaluation of ML model robustness, analyzing the effectiveness of several attack techniques and countermeasures through detailed performance analysis. Central to our investigation are four novel robustness metrics devised to offer a multidimensional assessment of model resilience: Accuracy under Attack, Attack Success Rate, Robustness Margin, and Confidence Score Stability. Using the MNIST dataset, we subjected a baseline deep neural network (DNN) to a range of adversarial attacks, including FGSM, HopSkipJump, and Carlini & Wagner, and evaluated the impact of defense mechanisms such as Adversarial Training, Feature Squeezing, and Defensive Distillation. Our results quantify model vulnerabilities and the protective efficacy of the deployed defenses, underscoring the need for continued advancement in defensive strategies. This study highlights the critical need for robust ML models in the face of sophisticated adversarial threats and lays the groundwork for future research toward a more secure, transparent, and resilient AI landscape.
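To make the evaluation concrete, the sketch below illustrates how two of the reported metrics, Accuracy under Attack and Attack Success Rate, could be computed for an FGSM adversary against a small MNIST classifier. The architecture, perturbation budget `eps`, and function names (`SmallMNISTNet`, `fgsm_attack`, `evaluate_robustness`) are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: Accuracy under Attack and Attack Success Rate under FGSM.
# Assumes PyTorch; model and eps are placeholders, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallMNISTNet(nn.Module):
    """Minimal DNN baseline standing in for the paper's unspecified architecture."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 256)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        x = x.view(x.size(0), -1)          # flatten 28x28 images
        return self.fc2(F.relu(self.fc1(x)))


def fgsm_attack(model, x, y, eps=0.1):
    """FGSM: x_adv = clip(x + eps * sign(grad_x CE(model(x), y)))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range


@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()


def evaluate_robustness(model, x, y, eps=0.1):
    """Report clean accuracy, Accuracy under Attack, and Attack Success Rate."""
    clean_acc = accuracy(model, x, y)
    x_adv = fgsm_attack(model, x, y, eps=eps)
    adv_acc = accuracy(model, x_adv, y)
    with torch.no_grad():
        clean_correct = model(x).argmax(dim=1) == y
        adv_wrong = model(x_adv).argmax(dim=1) != y
        # Attack Success Rate: fraction of originally correct inputs the attack flips.
        asr = (clean_correct & adv_wrong).float().sum() / clean_correct.float().sum()
    return {
        "clean_accuracy": clean_acc,
        "accuracy_under_attack": adv_acc,
        "attack_success_rate": asr.item(),
    }
```

In this formulation, Accuracy under Attack is the model's accuracy on perturbed inputs, while Attack Success Rate is conditioned on inputs the model originally classified correctly; the other attacks (HopSkipJump, Carlini & Wagner) and the Robustness Margin and Confidence Score Stability metrics would plug into the same evaluation loop.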