Winter is coming, and so are the details for our December Monthly webinar!
On December 11th, Anirban Chatterjee from the Product Marketing Management team will explore with you how businesses can integrate AI safely into their Software Development Life Cycle (SDLC).
He will discuss strategies for LLM approval, tracking, and performance monitoring, among other insightful aspects.
Title: How to ensure code accountability and trust in the age of generative AI
Date and Time: December 11, 2024, 16:00 UTC
Speaker: Anirban Chatterjee, Senior Director of Product Marketing
Thank you to all who participated in our webinar yesterday. Please find below the questions and resources that were mentioned during the session.
Questions
Q: Do you see differences in those statistics depending on developer experience? We’ve found that when used by less experienced developers, it increases productivity but decreases stability, and when used by more experienced developers, it has less of an impact on both.
A: Yes. The democratization of coding means newer developers often accept AI-generated code more blindly, and that does show up as differences in the statistics. There is a report about an increase in code churn when using AI coding tools, and I suspect some of that is due to developer experience.
Q: Where can I see a demonstration of your product?
A: You can watch interactive demos or request a live demo here: AI Code Assurance & CodeFix
Q: Is our code sent to your servers when using your AI tooling? ChatGPT, for example, might retain our code, so we don't use that, but Copilot does not, so we are allowed to use it.
A: We do not store your code on our servers, and OpenAI does not store it either; we have a specific contract with them to prevent that.
Q: If the AI-generated code gets blended into your hand-crafted code, how can SonarQube distinguish between the human code and the AI code?
A: We do see this need and are considering detecting the presence of AI code.
Q: If you go from AI-assisted to AI-generated, do you see a shift in popular programming languages?
A: Great question! Yes, I think so. I suspect there will be some consolidation in the languages used.
Q: Doesn't using AI for coding require stricter checks that coding standards are followed?
A: Yes. When AI is used to generate code, a strict quality standard is necessary. We recommend using the Sonar way for AI Code quality gate: Quality Gates | SonarQube Server Documentation
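If you want to apply that recommendation across many projects rather than clicking through the UI, a quality gate can also be assigned via the SonarQube Web API. Below is a minimal sketch, assuming a recent SonarQube Server whose api/qualitygates/select endpoint accepts a gateName parameter; the instance URL, token, and project key are placeholders.

```python
# Minimal sketch: assign the "Sonar way for AI Code" quality gate to a
# project via the SonarQube Web API. SONAR_URL, TOKEN, and the project
# key are placeholders; the gateName parameter assumes a recent
# SonarQube Server version.
import requests

SONAR_URL = "https://sonarqube.example.com"  # assumed instance URL
TOKEN = "squ_..."                            # assumed token with admin rights

resp = requests.post(
    f"{SONAR_URL}/api/qualitygates/select",
    auth=(TOKEN, ""),  # token as username, empty password
    data={"projectKey": "my-project", "gateName": "Sonar way for AI Code"},
)
resp.raise_for_status()
print("Quality gate assigned")
```

Run once per project key, or wrap the call in a loop over your project list.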
Q: I’ve seen both bad and invalid code generated by AI. And this doesn’t even address the potential issue of someone poisoning the AI. Any suggestions on how to address this?
A: We will cover this. AI Code Assurance is a capability we introduced in SonarQube to review AI-generated code. There is also this OWASP document on LLM security practices that may help you prepare for such issues: OWASP LLM Top 10 for Code Generation
Q: I think all the statistics will change depending on the software type such as systems, mobile, enterprise, web, etc. What do you think?
A: I think you are right. AI performance is closely tied to your use case. Moreover, given the possibility of fine-tuning different models and customizing them for your specific use cases, the results may vary even more.
Q: How can you automate the tagging of projects that have contributors who used a coding assistant? In organizations with more than 100 repositories, this is complex.
A: Good question! We are looking into this. Currently, projects need to be tagged manually, but this is something we want to solve.
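Until that is automated in the product, the manual tagging can at least be scripted against the Web API. Here is a minimal sketch, assuming a SonarQube Server instance and a user token with administer permissions; the tag name "ai-assisted" and the decision to tag every project returned by the search are illustrative, so substitute your own criterion for identifying repositories with coding-assistant contributors.

```python
# Minimal sketch: bulk-tag projects via the SonarQube Web API.
# SONAR_URL and TOKEN are placeholders, and the tag name "ai-assisted"
# is hypothetical. Note that api/project_tags/set replaces a project's
# full tag list, so merge existing tags first if you rely on others.
import requests

SONAR_URL = "https://sonarqube.example.com"  # assumed instance URL
TOKEN = "squ_..."                            # assumed admin user token
TAG = "ai-assisted"                          # hypothetical tag name

session = requests.Session()
session.auth = (TOKEN, "")  # token as username, empty password

page = 1
while True:
    # Page through all projects visible to this token.
    resp = session.get(f"{SONAR_URL}/api/projects/search",
                       params={"p": page, "ps": 100})
    resp.raise_for_status()
    data = resp.json()

    for component in data["components"]:
        key = component["key"]
        # Replace this blanket tagging with your own test for
        # assistant-authored code before applying the tag.
        session.post(f"{SONAR_URL}/api/project_tags/set",
                     data={"project": key, "tags": TAG}).raise_for_status()

    paging = data["paging"]
    if page * paging["pageSize"] >= paging["total"]:
        break
    page += 1
```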
Q: Can Sonar distinguish AI-generated code vs. human-written code? If so, how?
A: Not at this time, but our research team is looking into this requirement.
Resources
2024 Accelerate State of DevOps Report, DORA
How to trust AI contributions to your codebase, Sonar, 2024