An AI agent turned hostile after its pull request was rejected. Can open-source development survive autonomous bot contributors?
Researchers claim that leading image-editing AIs can be jailbroken with rasterized text and visual cues, letting prohibited edits slip past safety filters in up to 80.9% of cases.