Request for Clarification on AI Features and Data Security in SonarQube 2025.1.4

Hi Team,
After upgrading our Test environment to SonarQube 2025.1.4, we observed that several newly introduced AI‑related capabilities are enabled by default. We have noticed features such as AI CodeFix, AI‑generated code detection, AI Code Assurance, and AI‑qualified quality gates, all introduced in the 2025.1 LTA release.
Since we do not intend to use these AI capabilities in our Production environment at this time, particularly due to internal security and compliance considerations, we would like to understand the following:

Is there a way to keep all AI‑related features disabled during installation or the upgrade process itself, rather than disabling them individually after the upgrade?

From a security perspective, could you clarify whether any of these AI features, including:

AI CodeFix (AI‑generated fix suggestions),
AI‑generated code detection,
AI Code Assurance workflows,
AI‑qualified quality gates (e.g., Sonar way for AI Code),
transmit any source code, scan data, metadata, contributor signals, or project information outside our SonarQube instance to SonarSource‑hosted services or any third‑party LLM/AI infrastructure?

If any outbound communication is involved, could you provide details on:

what data is transmitted,
the destination (e.g., SonarSource servers, OpenAI endpoints for AI CodeFix),
how this data is secured and stored,
whether such communication can be restricted or fully disabled?

For the features that rely on external services, such as AI CodeFix using an external AI model, we need confirmation on whether our organization’s code or scan data is ever sent to these LLMs for processing, even in anonymized or partial form.

Given that our organization handles sensitive code and operates under strict security controls, it is essential for us to verify that no project code, scan artifacts, or metadata leave our environment without explicit approval.

Your guidance on how these AI functions operate, how data is managed, and whether we can fully disable all AI‑related features during the upgrade will help us complete our internal security assessment.

Hello @rahul.phatkare,

I had exactly the same questions as you did, and we have reviewed them thoroughly. Here are some important clarifications regarding the AI features in SonarQube 2025.1.4:

  1. Activation of AI Features
    AI features such as AI CodeFix are opt-in. This means your SonarQube Server/Cloud administrators can choose not to enable these features if they are not desired.
  2. Complete Disablement Options
    To ensure that the features cannot be accidentally enabled, you have several options:
  • Network blocking: you can block access to the API endpoints and IP ranges used by SonarQube AI (addresses listed in the official documentation).
  • Blacklist: you may request that your instance be added to a blacklist, which prevents the feature from being enabled. In this case, the feature will still be visible in the administration interface but cannot be enabled.
  • Hide the feature entirely: by adding the property sonar.ai.codefix.hidden=true to the instance configuration file (<SQ_HOME>/conf/sonar.properties), the feature will be hidden and unavailable to all administrators (a minimal sketch follows after this list).
  3. Data Transmission to External Services
  • When AI CodeFix is enabled and used with SonarSource-hosted LLM models (such as GPT-5.1 or GPT-4o via OpenAI), affected code snippets and problem descriptions are sent securely via HTTPS to their servers. This data is not used to train the models according to service agreements.
  • When using a self-hosted model (e.g., your own Azure OpenAI instance), your code remains within your network, but SonarQube still sends prompts and rule descriptions needed by the external model, which requires some internet connectivity.
  • Therefore, the service requires an internet connection to the designated endpoints and IP ranges, which are public and documented to allow strict network configuration if needed.
  4. Security and Privacy Management
  • If your internal policies prohibit transmitting code or metadata outside your environment, it is recommended to fully disable the AI features using the methods described above.
  • These features are designed to be transparent: only a limited, strictly necessary portion of code is sent, and only when generating AI suggestions.
  • Enforcing network restrictions on outbound connections ensures no data leaks to external services.
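
For reference, here is a minimal sketch of the "hide the feature" option, assuming a standard installation where you can edit the configuration file; only the property name comes from the documentation, the comments are illustrative:

    # <SQ_HOME>/conf/sonar.properties
    # Hide AI CodeFix so it cannot be enabled from the administration UI.
    sonar.ai.codefix.hidden=true

Since sonar.properties is only read at startup, adding this line before starting the upgraded instance means the feature is hidden from the first start of 2025.1.4 onwards.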

If you have further security-related questions, you can also directly contact SonarSource at: security@sonarsource.com.


For more technical details and precise instructions, please refer to the official documentation.

In summary, you have full control to disable or hide AI features and configure your environment to meet your security and compliance requirements.

Best regards,


Thanks @Bachri_Abdel for the detailed answer!
However, your reply focuses mostly on the AI CodeFix feature. What about AI‑generated code detection and AI Code Assurance (which is part of the Sonar way for AI Code quality gate)?

As per the documentation:

Autodetect AI‑generated code: external communication with GitHub is required, granting read‑only access to Copilot Business org‑level usage metadata so SonarQube can determine whether contributors used Copilot. I assume no source code is transmitted to the AI providers by this feature; it only consumes usage metadata from GitHub’s APIs. If the feature is disabled or the permissions are not granted, autodetection does not run.
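
For illustration, the kind of read‑only call involved would look something like the snippet below against GitHub’s Copilot metrics API; the exact endpoint SonarQube queries isn’t stated in the documentation I read, so treat the URL and token scope as assumptions:

    # Hypothetical illustration: read-only Copilot usage metadata for an organization.
    # A token with the relevant org-level read permission is assumed; no repository code is accessed.
    curl -H "Authorization: Bearer $GITHUB_TOKEN" \
         -H "Accept: application/vnd.github+json" \
         https://api.github.com/orgs/YOUR_ORG/copilot/metrics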

AI Code Assurance: AI Code Assurance in SonarQube 2025.1.4 is a governance feature that stays inside our SonarQube Server. It labels AI‑influenced projects, applies AI‑qualified quality gates, and can optionally publish badges. It does not send source code to external AI/LLM services.

Could you please confirm this?

Hi,

I’d like to augment @Bachri_Abdel’s excellent answer by letting you know that we’re deprecating the Autodetect feature. In today’s world, it’s not the project with AI-generated code that’s exceptional, but the one without it. It’s a similar story for AI Code Assurance.

All of this is correct except the labeling part. You enable AI Code Assurance for a project, which labels it and turns on the assurance features.

 
HTH,
Ann