The Opportunity

We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products. We are looking for an adversarial machine learning specialist who thinks like an attacker. This role focuses on identifying vulnerabilities in LLM-driven systems and breaking model guardrails.