Hi, same question here, Colin. The documentation is insufficient, because what default/industry criteria could be used? Criteria can be seen as subjective, opinion-based, or situational. Then again, 100% line coverage (LC), if it is not part of the contract, is impossible (or at least triggers high blood pressure across teams), so the main question is what 'good practice' values could be used for, say, Code Smells, Bugs, etc. Also, across the span of a project those values might need to adapt: loose thresholds let initial demonstrations ship quickly, but from a quality perspective they invite misunderstanding, demotivation, instability, etc.
There is a set of criteria to start with (fresh installation), agreed, but how do you come to a conclusion on whether to accept 1 or 5 blocker issues without ripping holes into the development/product schedule, and with minimal multi-stakeholder communication/clarification? I am asking this in the context of a large-scale, two-year legacy-code project, mainly Java. SQ, especially its documentation, could provide deeper insight here. Maybe there is information available, e.g. anonymized customer studies? Help appreciated.
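To make the trade-off concrete, here is a minimal Python sketch of phase-dependent quality-gate conditions. The metric keys echo SonarQube's metric names (`blocker_violations`, `coverage`), but the phases and every threshold value below are invented assumptions for illustration, not recommendations:

```python
# Hypothetical sketch: ratcheting quality-gate thresholds over a legacy project.
# Metric keys mirror SonarQube's metric names, but all phases and numbers
# here are assumptions chosen to illustrate the tightening idea.

# Conditions per project phase: (metric, comparator, error threshold)
PHASES = {
    "demo":      [("blocker_violations", "<=", 5), ("coverage", ">=", 0.0)],
    "stabilize": [("blocker_violations", "<=", 1), ("coverage", ">=", 40.0)],
    "release":   [("blocker_violations", "<=", 0), ("coverage", ">=", 80.0)],
}

OPS = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}

def gate_passes(metrics: dict, phase: str) -> bool:
    """Return True if every condition of the given phase holds."""
    return all(OPS[op](metrics[metric], threshold)
               for metric, op, threshold in PHASES[phase])

metrics = {"blocker_violations": 3, "coverage": 55.0}
print(gate_passes(metrics, "demo"))     # True: up to 5 blockers tolerated early
print(gate_passes(metrics, "release"))  # False: release phase allows 0 blockers
```

The point is that "accept 1 or 5 blockers" need not be one permanent decision; agreeing once on a ratchet schedule per phase can replace repeated per-issue stakeholder rounds.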