About The Job

At Alignerr, we partner with the world’s leading AI research teams and labs to build and train cutting-edge AI models. This role focuses on structured adversarial reasoning rather than exploit development. You will work with realistic attack scenarios to model how threats move through