New SonarSource research shows that LLMs such as GPT-4o, Claude Sonnet 4, and Llama 3.2 produce highly functional yet risky code, with frequent high-severity vulnerabilities, hard-coded credentials, and code smells that add to long-term technical debt.
Source: DevOps.com