Attention Shift: Steering AI Away from Unsafe Content

Abstract

This study analyses the generation of unsafe or harmful content in state-of-the-art generative models, with a focus on techniques for restricting such generations. We introduce a training-free approach that reweighs attention at inference time to remove unsafe concepts, requiring no additional training. We compare model performance after applying ablation techniques under both direct and jailbreak prompt attacks, hypothesize potential reasons for the observed results, and discuss the limitations and broader implications of these approaches.
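
The abstract does not specify the mechanism in detail, but inference-time attention reweighing in text-to-image models typically operates on the cross-attention between image latents and prompt tokens. Below is a minimal illustrative sketch of that general idea, not the paper's implementation: all names, tensor shapes, the masking scheme, and the renormalization step are assumptions for exposition.

```python
import torch

def reweighted_cross_attention(q, k, v, unsafe_token_mask, scale_factor=0.0):
    """Cross-attention with per-token reweighing (illustrative sketch).

    q: (batch, n_queries, d)  image-latent queries
    k: (batch, n_tokens, d)   text-prompt keys
    v: (batch, n_tokens, d)   text-prompt values
    unsafe_token_mask: (n_tokens,) bool, True for prompt tokens tied to
        an unsafe concept (how such tokens are identified is method-specific)
    scale_factor: multiplier on attention mass assigned to unsafe tokens;
        0.0 removes their influence entirely
    """
    d = q.shape[-1]
    attn = torch.softmax(q @ k.transpose(-2, -1) / d**0.5, dim=-1)

    # Downweight the attention paid to unsafe-concept tokens ...
    attn = torch.where(unsafe_token_mask, attn * scale_factor, attn)
    # ... and renormalize so each query's weights still sum to 1.
    attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-8)

    return attn @ v

# Example: tokens 3 and 5 of an 8-token prompt are flagged as unsafe.
q, k, v = (torch.randn(1, 16, 64), torch.randn(1, 8, 64), torch.randn(1, 8, 64))
mask = torch.zeros(8, dtype=torch.bool)
mask[3] = mask[5] = True
out = reweighted_cross_attention(q, k, v, mask)  # (1, 16, 64)
```

Because the intervention edits attention weights during the forward pass, no fine-tuning of model weights is needed, which is what makes this family of approaches training-free.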

Publication
NeurIPS 2024, RBFM Workshop