At Alignerr, we partner with the world's leading AI research teams and labs to build and train cutting-edge AI models.
Offensive Security Analyst (Structured / Non-Exploit)
Contract / Task-based
Pay: $40–$60/hour
Location: Remote
Commitment: 10–40 hours/week

This role focuses on structured adversarial reasoning rather than exploit development. You will work with realistic attack scenarios to model how threats move through systems, where defenses fail, and how risk propagates across modern environments.
Responsibilities:
Analyze attack paths, kill chains, and adversary strategies across real-world systems
Classify weaknesses, misconfigurations, and defensive gaps
Review red-team style scenarios and intrusion narratives
Help generate, label, and validate adversarial reasoning data used to train and evaluate AI systems
Qualifications:
2+ years in pentesting, red teaming, or a strong blue-team role with hands-on attack knowledge
Understanding of how real attacks unfold in production environments
Ability to clearly explain attack chains, impact, and tradeoffs
What we offer:
Competitive pay and flexible remote work.
Work directly on frontier AI systems.
Autonomy, flexibility, and global collaboration.
Potential for contract extension.
Application Process (Takes 10–15 min)
Our team reviews applications daily. Please complete your AI interview and application steps to be considered for this opportunity.