AI Safety Diary: September 13, 2025

This entry investigates how LLMs can be fine-tuned to become more susceptible to jailbreaking, and highlights the implications for AI safety and the need for robust defenses.