Overview
We are new to SonarCloud and are currently setting up a canary project to test our configuration and integration steps. We are using an internal Python training project and have provided a deliberately vulnerable Python file with the following code:
""" This file has deliberate vulnerabilities within it to test
the functionality of SonarCloud"""
import os
import requests
def user_input(prompt: str) -> str:
""" A worker function to test user input """
os_input_cmd = input(prompt)
return os_input_cmd
def bad_os_cmd_inj() -> None:
""" A vulnerable test for command injection """
os_input_cmd = user_input('Provide and OS cmd: ')
# No input sanitisation and this could allow for command injection and/or LFI
if os_input_cmd is not None:
os.system(os_input_cmd)
print(f"OS command injection: {os_input_cmd}")
else:
print('Request error')
# Unsafe use of eval()
def bad_eval_func(a: str, b: str) -> None:
""" Test for dangerous func eval """
return eval('%s + %s' % (a, b))
def test_bad_eval_func() -> None:
""" Test for dangerous func exec """
eval_user_input = user_input(
"Welcome to addy the eval calc, please enter the first number you wish to add: ")
result = bad_eval_func(eval_user_input, '3')
print(f"The result is {result}")
# unsafe use of exec() function with user input.
def bad_exec_func(a: str, b: str) -> None:
""" Dangerous func exec """
return exec(f"{a} + {b}")
def test_bad_exec_func() -> None:
""" Test for dangerous func exec """
exec_user_input = user_input(
"Welcome to Exec adder, please enter the first value you'd like to add: ")
exec_result = bad_exec_func(exec_user_input, '4')
print(f"The exec test result is {exec_result}")
# This is an example of how we untrusted user input in str.format() can lead to data leakage
# example being print_nametag("{person.__init__.__globals__[CONFIG][API_KEY]}", new_person)
# output: 771df488714111d39138eb60df756e6b
CONFIG = {
"API_SECRET_KEY": "771df488714111d39138eb60df756e6b"
# some program secrets that users should not be able to read
# this is also an example of bad secret storage pratices.
}
class Person(object):
""" Creates a Person object """
def __init__(self, name: str) -> None:
self.name = name
def print_nametag(format_string: str, person: Person) -> None:
""" Prints out a persons nametag """
print(format_string.format(person=person))
def test_str_format() -> None:
""" Test for str format data leakage """
new_person = Person("Vickie")
print_nametag(user_input("Please format your nametag!: "), new_person)
Using the default Sonar way quality profile for the project does not trigger any of the rules relating to unsafe/dangerous functions such as eval, exec and os.system with user-provided input. However, if we create a new custom profile and apply it to the project, these rules are triggered.
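If it helps with diagnosis, we can compare which Python rules are active in the two profiles via the Web API. This is a rough sketch only: the token and profile keys are placeholders, and it assumes the api/rules/search endpoint with the qprofile and activation parameters (the profile keys themselves can be looked up with api/qualityprofiles/search).

# Rough sketch: list rules activated in a quality profile via the SonarCloud Web API.
# The token and profile keys below are placeholders, not real values.
import requests

SONAR_URL = "https://sonarcloud.io"
TOKEN = "YOUR_SONARCLOUD_TOKEN"  # placeholder token


def active_rule_keys(profile_key: str) -> set[str]:
    """ Return the keys of all rules activated in the given quality profile. """
    keys, page = set(), 1
    while True:
        resp = requests.get(
            f"{SONAR_URL}/api/rules/search",
            params={"qprofile": profile_key, "activation": "true",
                    "languages": "py", "ps": 500, "p": page},
            auth=(TOKEN, ""),  # token sent as username with an empty password
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        keys.update(rule["key"] for rule in data["rules"])
        if page * 500 >= data["total"]:
            return keys
        page += 1


sonar_way = active_rule_keys("SONAR_WAY_PROFILE_KEY")  # placeholder profile key
custom = active_rule_keys("CUSTOM_PROFILE_KEY")        # placeholder profile key
print("Active in the custom profile but not in Sonar way:")
print(sorted(custom - sonar_way))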
Any help would be greatly appreciated.
Question
Why would the default profile not pick these up, and what else is the default profile missing?
Current setup
ALM: GitLab
CI/CD: GitLab
Quality Profile: Default (Sonar way)
Language: Python 3.11
Error observed
The default Sonar way quality profile does not pick up dangerous function use in Python 3+.
Testing
- Tested against Semgrep: Semgrep does identify the use of these dangerous functions, while SonarCloud and SonarLint do not.
- Creating a new custom profile and applying it to the project does identify these dangerous functions.
Potential workaround
Creating a new custom profile, enabling the relevant rules, and assigning this profile to the project works.