Study reveals poetic prompting can sometimes jailbreak AI models
Well, AI is joining the ranks of many, many people: it doesn't really understand poetry. Research from Italy's Icaro Lab found that poetry can be used to jailbreak AI models and skirt their safety protections. In the study, researchers wrote 20 prompts that started with short poetic vignettes in Italian...