Code coverage on databricks notebook

I am trying to run SonarQube analysis on my Databricks Scala notebook from Azure DevOps. I am using Maven as the build tool.

I am able to analyse a sample Scala project that I uploaded to my Azure repo, and I am getting code coverage for that code as well.
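(For context, coverage for a plain Scala project usually comes from the scoverage Maven plugin, whose XML report SonarQube imports via the `sonar.scala.coverage.reportPaths` property. A minimal sketch of such a setup; the plugin version and report path are assumptions and should be checked against your own build:

```xml
<!-- pom.xml fragment: generate a scoverage report that SonarQube can import.
     Version number and output path are assumptions for illustration. -->
<plugin>
  <groupId>org.scoverage</groupId>
  <artifactId>scoverage-maven-plugin</artifactId>
  <version>2.0.0</version>
  <executions>
    <execution>
      <goals>
        <!-- writes the coverage XML under target/ during the build -->
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

with the analysis then pointed at the generated report, e.g. `-Dsonar.scala.coverage.reportPaths=target/scoverage.xml`.)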

But I am not sure how to extend this to my Databricks notebook. If I run the notebook, it executes in my Databricks workspace, so the results are stored in the workspace only. How would my Azure pipeline then calculate the code coverage?
I am using the latest version of SonarQube.

FYI, I have tried the standalone SonarScanner as well, but that doesn’t produce any code coverage.

I am looking for some guidance on how to analyse my notebook using SonarQube and an Azure pipeline. Any guidance would mean a lot.

Thanks :slight_smile:

Hi,

I only know about Databricks notebooks what the Google summary just showed me, so take this for what it’s worth.

You’ll need to be able to check your code out in your Azure pipeline so you can run the build and analysis. That’s the first step. Once you get that far, the docs should help.
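(As a rough illustration of that first step, a pipeline that checks the sources out and lets Maven drive the analysis might look like the sketch below. This assumes the SonarQube extension for Azure DevOps is installed; the service connection name and task versions are assumptions, not something from this thread:

```yaml
# azure-pipelines.yml sketch: check out the repo, then build and analyze
# with Maven. Names marked "assumed" must be adapted to your setup.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - checkout: self                      # check the code out in the pipeline

  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'my-sonarqube'         # assumed service connection name
      scannerMode: Other                # let the Maven Sonar goal drive analysis

  - script: mvn -B verify sonar:sonar   # build, run tests, then analyze
    displayName: Build and analyze

  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'
```

The notebook code would need to be exported into the repo as ordinary `.scala` sources for this checkout/build step to see it.)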

 
Ann

A post was split to a new topic: Error integrating with azure data lake