2026
Paper Summary - Understanding and Rectifying Safety Perception Distortion in VLMs
Paper Summary - JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models
Paper Summary - VLM-Guard: Safeguarding Vision-Language Models via Fulfilling Safety Alignment Gap
Paper Summary - MMJ-Bench: A Comprehensive Study on Jailbreak Attacks and Defenses for Vision Language Models
Paper Summary - ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time