I’m pretty new to SonarQube and was wondering how SQ uses the tsconfigPaths property. From the DEBUG logs, I see that a TypeScript program is created for each tsconfig.json specified. Furthermore, it also looks at each config’s references to find other related tsconfig.json files. Does the tsconfig.json have any purpose other than defining the directory where the TypeScript program will be created for analysis?
The reason why I’m asking is in my project, I have many, many subdirectories with tsconfigs that reference many other tsconfigs that are outside the scope of my directory.
// example folder structure
/
  folder1
  folder2
    subdir1
    subdir2
I want to run analysis within folder2, and tsconfigs within subdir1 reference other tsconfigs in folder1.
Because of this out-of-scope reference, the Sonar scan takes a long time. When I removed the references option in subdir1’s tsconfig, the analysis within that directory was extremely fast. If tsconfigs have no purpose other than specifying the directory in which to create a TypeScript program, I’m thinking of creating a bare-bones tsconfig.sonar.json in each subdir and listing those in tsconfigPaths instead.
You are right, and it is usually a good idea to create a tsconfig.sonar.json if your production tsconfig.json has some negative impact on the analysis (if you search the community with that precise filename, you will see that we’ve recommended this in the past).
The tsconfigs specified in the tsconfigPaths property will be used for program creation, as you mentioned. Of course, any file that TS detects as belonging to that program will be read by TS and used for type-checking, but it will not be analyzed if it’s not within the sources property of the scan.
Keep in mind, though, that if you use another tsconfig.json without the references, the analysis will be faster but could also produce different results: some types will be missing, and thus some issues could be silenced.
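For reference, a bare-bones tsconfig.sonar.json along these lines might look like the sketch below. The compiler options and include globs are purely illustrative, not a recommendation for any specific project; the point is simply that it omits references:

```json
// tsconfig.sonar.json — minimal config used only for analysis (illustrative)
{
  "compilerOptions": {
    "target": "es2019",
    "module": "commonjs",
    "strict": true
  },
  "include": ["src/**/*.ts"]
  // deliberately no "references": out-of-scope projects are not pulled in,
  // at the cost of some missing types (and possibly some silenced issues)
}
```

Note that tsconfig files are JSONC, so comments like the ones above are allowed.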
Hi @victor.diez, my project is a monorepo, with many subdirs. I made a root tsconfig.sonar.json, and in each subdir, I also created an empty tsconfig.sonar.json.
However, during the analysis, the eslint-bridge process becomes unresponsive.
ERROR: eslint-bridge Node.js process is unresponsive. This is most likely caused by process running out of memory. Consider setting sonar.javascript.node.maxspace to higher value (e.g. 4096).
INFO: Hit the cache for 0 out of 432
ERROR: Failure during analysis, Node.js command to start eslint-bridge was: node --max-old-space-size=10240 .scannerwork/.sonartmp/eslint-bridge-bundle/package/bin/server 33177 127.0.0.1 .scannerwork ******** false .scannerwork/.sonartmp/eslint-bridge-bundle/package/custom-rules17487561232331657853/package
The maxspace is set to 10240, but does it need to be larger? I’ve set it to 20000 in the past and encountered the same issue. Or could this memory issue stem from something else?
You should not need 10 GB of memory; I would rather split the analysis between subfolders. What is your tsconfigPaths?
Using this, you should see better memory performance, as it will sequentially create two smaller TS programs instead of a single one of twice the size.
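As a sketch of what splitting could look like, assuming the standard sonar.typescript.tsconfigPaths analysis property and the folder names from earlier in the thread (the paths are illustrative):

```
# sonar-project.properties — one entry per subfolder program (paths are illustrative)
sonar.typescript.tsconfigPaths=folder2/subdir1/tsconfig.sonar.json,folder2/subdir2/tsconfig.sonar.json
```

Each listed tsconfig then yields its own, smaller TypeScript program, created one after another.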
Also, what CI provider are you using? If you are on Bitbucket Pipelines, we know that its containers run by default with 1 GB of memory, so that needs to be increased along with the Node.js maxspace.
@victor.diez my tsconfigPaths currently only has the root tsconfig.sonar.json. I listed all the subdir configs before, but the analysis took around 40 minutes. I have around 450 subdirs, which is probably why it took so long. I was wondering if there are any other ways to decrease the analysis time?
My CI provider is Bamboo, and it currently has 10 GB of memory. Looking at the performance, the analysis used up all the memory when I only specified the root tsconfig.
Update:
I bumped the memory for the container up to 20 GB, and now the eslint-bridge process doesn’t fail. However, it takes a while (35 minutes) to perform a full TS analysis with tsconfigPaths: tsconfig.sonar.json. Is there any other way to optimize the analysis?
How many LOCs does your whole monorepo have? If even splitting the analysis into small TS programs does not have a big effect on performance, you may already be in the optimal scenario. We test some big open-source projects internally (300K-400K LOCs), and their analysis takes close to one hour.
The monorepo has around 600K… If analysis takes around 1hr for 400K, then I guess it makes sense for ours to take this long.
Just another question. We also run SQ for branches, and in this case, do I need to specify the full list of sonar.sources? If only a subset of files were modified in the branch, can I just specify these directories?
You should not need to change sonar.sources. Instead, use the parameters for branch and PR analysis, and the analysis will focus on the changes rather than the full codebase. Have a look at the docs here:
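As a sketch, the standard branch and pull request analysis parameters are typically passed like this (the branch names and PR key are illustrative):

```
# Branch analysis
sonar.branch.name=feature/my-branch

# Pull request analysis
sonar.pullrequest.key=123
sonar.pullrequest.branch=feature/my-branch
sonar.pullrequest.base=main
```

These can go in sonar-project.properties or be passed as -D arguments on the scanner command line.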
Thanks for the resources! I’m using SonarQube, but I was able to find the corresponding branch analysis docs and now have a better understanding. A quick question about versioning (sonar.projectVersion): I see that it’s used for Maven projects? So since my repo is JS/TS, would “new code” be defined automatically, or would setting projectVersion be more beneficial?
sonar.projectVersion is used for all projects; it’s picked up automatically from Maven projects.
If you’re using the ‘previous version’ option for your New Code definition, then you’ll want to make sure to pass this parameter in (otherwise, it defaults to ‘Not Provided’ IIRC). Then when you change your sonar.projectVersion analysis value, you reset the New Code period.
It’s never automatically defined. You need to set it somehow.
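In practice that usually means passing the version explicitly at scan time, for example (the version value is illustrative):

```
# sonar-project.properties (or -Dsonar.projectVersion=... on the scanner command line)
sonar.projectVersion=1.4.0
```

With the ‘previous version’ New Code definition, changing this value at a later analysis is what resets the New Code period.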