More PyTorch Rules for High-Quality ML Code

Hello Pythonistas,

For those working with PyTorch, we have further extended our ruleset. We already supported 9 PyTorch-specific rules; today we are adding 7 more and extending two existing rules to cover PyTorch cases, giving you the broadest best-practice PyTorch ruleset available. These are the new rules (a short before/after sketch for two of them follows the list):

  • S7702: Specify “start_dim” when using “torch.flatten” to preserve batch dimension
  • S7703: Method calls should use parentheses when saving PyTorch model state
  • S7704: PyTorch module classes should not be instantiated inline in forward methods
  • S7706: Use PyTorch Lightning’s built-in checkpointing instead of manual checkpoint saving
  • S7708: Tensors should not be concatenated incrementally in loops
  • S7710: Use “torch.empty()” instead of list comprehensions for empty tensor initialization
  • S7713: Tensor operations should rely on automatic broadcasting instead of manual expansion
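To give a flavor of what these rules catch, here is a minimal before/after sketch for S7708 and S7713. The function names and shapes are illustrative only; the exact patterns and messages the analyzer reports may differ.

```python
import torch

# S7708: growing a tensor with torch.cat inside a loop reallocates and
# copies the accumulated data on every iteration.
def gather_noncompliant(chunks):          # chunks: iterable of 1-D float tensors
    out = torch.empty(0)
    for c in chunks:
        out = torch.cat((out, c))         # quadratic copying as `out` grows
    return out

def gather_compliant(chunks):
    # Collect the pieces and concatenate once at the end.
    return torch.cat(list(chunks))

# S7713: manually expanding a tensor to match shapes when broadcasting
# already handles the alignment.
def add_bias_noncompliant(x, bias):       # x: (N, C), bias: (C,)
    return x + bias.unsqueeze(0).expand(x.shape[0], -1)

def add_bias_compliant(x, bias):
    return x + bias                       # broadcasting expands bias implicitly
```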

Rules extended to support PyTorch (a sketch of the S2201 case follows this list):

  • S935: Functions and methods should only return expected values
  • S2201: Return values from functions without side effects should not be ignored
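As an example of the PyTorch extension to S2201, most tensor methods return a new tensor rather than modifying their input, so discarding the result silently does nothing. A minimal sketch (which specific calls the rule recognizes is not spelled out here, so treat the examples as illustrative):

```python
import torch

x = torch.randn(4, 3)

# Noncompliant: these calls have no side effects, so ignoring the
# returned tensor makes them no-ops.
x.to(torch.float64)        # x is still float32
x.reshape(3, 4)            # x still has shape (4, 3)

# Compliant: keep the returned tensors.
x64 = x.to(torch.float64)
y = x.reshape(3, 4)
```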

Availability:

  • SonarQube Cloud: now
  • SonarQube Server: 2025.6 Developer Edition and above
  • SonarQube IDE: your next update

We have more rules on the way; take a look at what we have in our roadmap and let us know your thoughts!

Jean