Sonar-way default profile not detecting Python vulnerabilities

Overview

We are new to SonarCloud and are currently setting up a canary project to test our configuration and integration steps. We are using an internal Python training project and have provided a deliberately vulnerable Python file with the following code:

""" This file has deliberate vulnerabilities within it to test
    the functionality of SonarCloud"""
import os
import requests


def user_input(prompt: str) -> str:
    """ A worker function to test user input """
    os_input_cmd = input(prompt)
    return os_input_cmd


def bad_os_cmd_inj() -> None:
    """ A vulnerable test for command injection """
    os_input_cmd = user_input('Provide an OS cmd: ')
    # No input sanitisation and this could allow for command injection and/or LFI
    if os_input_cmd is not None:
        os.system(os_input_cmd)
        print(f"OS command injection: {os_input_cmd}")
    else:
        print('Request error')


# Unsafe use of eval()
def bad_eval_func(a: str, b: str):
    """ Test for dangerous func eval """
    return eval('%s + %s' % (a, b))


def test_bad_eval_func() -> None:
    """ Test for dangerous func exec """
    eval_user_input = user_input(
        "Welcome to addy the eval calc, please enter the first number you wish to add: ")
    result = bad_eval_func(eval_user_input, '3')
    print(f"The result is {result}")


# Unsafe use of exec() function with user input.
def bad_exec_func(a: str, b: str) -> None:
    """ Dangerous func exec """
    return exec(f"{a} + {b}")


def test_bad_exec_func() -> None:
    """ Test for dangerous func exec """
    exec_user_input = user_input(
        "Welcome to Exec adder, please enter the first value you'd like to add: ")
    exec_result = bad_exec_func(exec_user_input, '4')
    print(f"The exec test result is {exec_result}")


# This is an example of how untrusted user input in str.format() can lead to data leakage
# example being print_nametag("{person.__init__.__globals__[CONFIG][API_SECRET_KEY]}", new_person)
# output: 771df488714111d39138eb60df756e6b
CONFIG = {
    "API_SECRET_KEY": "771df488714111d39138eb60df756e6b"
    # some program secrets that users should not be able to read
    # this is also an example of bad secret storage practices.
}


class Person(object):
    """ Creates a Person object """

    def __init__(self, name: str) -> None:
        self.name = name


def print_nametag(format_string: str, person: Person) -> None:
    """ Prints out a persons nametag """
    print(format_string.format(person=person))


def test_str_format() -> None:
    """ Test for str format data leakage """
    new_person = Person("Vickie")
    print_nametag(user_input("Please format your nametag!: "), new_person)

Using the default Sonar way quality profile for the project does not trigger any of the rules relating to unsafe/dangerous functions such as eval, exec and os.system with user-provided input. However, if we create a new custom profile and apply it to the project, these rules are triggered. Why is that?

Any help would be greatly appreciated.

Question

Why would the default profile not pick these up and what else is the default profile missing?

Current setup

ALM GitLab
CI/CD GitLab
Quality Profile Default: Sonar way
Language Python v3.11

Error observed

Default Sonar way quality profile does not pick up dangerous function use in Python 3+

Testing

  • Tested against Semgrep: Semgrep does identify the use of dangerous functions; SonarCloud and SonarLint do not.
  • Creating a new custom profile and applying to the project does identify these dangerous functions.

Potential workaround

Creating a new custom profile, enabling the relevant rules, and assigning that profile to the project works.
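
For reference, the profile assignment can also be scripted. Below is a minimal sketch using the Web API's api/qualityprofiles/add_project service as documented for SonarQube/SonarCloud; the token, organization, profile name and project key are placeholders, so please verify the parameters against your own instance before relying on it.

# Minimal sketch (not an official snippet): assign a custom quality profile
# to a project via the Web API. Token, organization, profile name and
# project key below are placeholders.
import requests

SONAR_TOKEN = "my-token"  # placeholder user token

response = requests.post(
    "https://sonarcloud.io/api/qualityprofiles/add_project",
    auth=(SONAR_TOKEN, ""),  # token as username, empty password
    data={
        "organization": "my-org",                       # placeholder
        "language": "py",
        "qualityProfile": "My custom Python profile",   # placeholder
        "project": "my-project-key",                    # placeholder
    },
)
response.raise_for_status()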

Hi,

Welcome to the community!

To be clear, you’re saying pythonsecurity:S5334 (which should pick up eval) is not on in the ‘Sonar way’ profile?

 
Thx,
Ann

Hi Ann,

Thank you so much for replying.

To answer your question and further clarify, I’m saying that following the recommended setup guide and using the Sonar way profile, which does include rule pythonsecurity:S5334, does not flag the eval or exec functions within our sample/test code.

I deliberately chose these functions as I thought they would easily trigger rule pythonsecurity:S5334 during our testing phase.

Happy to answer any other questions and thanks again for your help.

Mark


Hi Mark,

Thanks for sharing the code examples. Rule pythonsecurity:S5334 is not designed to raise every time eval is used (apparently that is the approach Semgrep chose). It only raises an issue when untrusted values are evaluated.

As of today, we don’t consider the return values of input as untrusted, and this is why pythonsecurity:S5334 is not raising an issue in your case.

By default, we only consider values from incoming HTTP requests as untrusted. Here is a code example of a Flask application where an issue is raised:

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def example():
    operation = request.args.get("operation")
    eval(f"product_{operation}()") # Noncompliant (S5334)
    return "OK"

Hey Pierre,

Thanks for letting me know and I do understand the approach Sonar have taken regarding this.

Thanks again

Mark


Hi Mark,

As of today, we don’t consider the return values of input as untrusted and this is why pythonsecurity:S5334 is not raising an issue in your case.

I am also curious to know the kind of applications you want to analyze. Would it make sense for you to add input as a source of untrusted values?

Hi Pierre,

We mainly create web apps; however, we also create a lot of internal command-line tooling. If the local account on a machine were compromised, one of these tools could then be used for privilege escalation and further lateral movement. That said, I feel we could just create our own rule set to run for these types of projects, one which includes rules to detect the use of eval, exec, etc.
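
To illustrate the concern, here is a minimal sketch (hypothetical tool; all names are illustrative) of the kind of internal CLI pattern in question, where a locally supplied argument flows into eval:

# Hypothetical internal CLI tool: a locally supplied argument reaches eval().
# If the local account is compromised, argv becomes attacker-controlled and
# this turns into an escalation primitive.
import sys


def run_admin_task(expression: str) -> None:
    # eval() on argv: convenient for trusted operators, dangerous once
    # the account supplying the argument is compromised.
    print(eval(expression))


if __name__ == "__main__":
    run_admin_task(sys.argv[1])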

Thanks again for your help.

Mark

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.