
How am I rated as a tester?

2 minutes read

All testers are rated against other active members of the community based on their quality and activity level; tester rating calculations are performed daily. There are five tiers: Gold, Silver, Bronze, Proven, and Rated (if you do not have a badge, it simply means you have not accumulated enough activity points). A tester can be Gold in Functional testing and be lower, or even unrated, in other testing types.

Detailed Ratings per Testing Type

Instead of earning one overall rating, testers earn one rating per testing type, based on their activity level and quality of work for test cycles of that testing type. The thresholds for tester ratings differ across testing types.

Additionally, Gold/Silver/Bronze testers receive higher payout rates than other members of the testing community: Gold testers receive a 10% premium on all approved reports, while Silver and Bronze testers receive a 5% and 2.5% premium, respectively. Payout increases are determined by the rating that applies to the test cycle’s testing type. For example, a tester who is a Silver Security tester but a Rated Usability tester receives the 5% premium on Security bugs but not on Usability reports. Here is more information about the parameters that drive your rating (essentially two sub-ratings: Activity Level and Quality of Participation):
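The per-testing-type premium logic can be sketched as follows. The tier names and premium rates come from this article; the function and variable names are illustrative assumptions, not the platform's actual API.

```python
# Premium rates per tier, as described in the article.
# Proven and Rated tiers receive no premium.
PREMIUMS = {"Gold": 0.10, "Silver": 0.05, "Bronze": 0.025}

def payout(base_amount: float, ratings: dict, testing_type: str) -> float:
    """Apply the premium for the rating that matches the cycle's testing type."""
    tier = ratings.get(testing_type, "Rated")
    return base_amount * (1 + PREMIUMS.get(tier, 0.0))

# A Silver Security / Rated Usability tester, as in the example above:
ratings = {"Security": "Silver", "Usability": "Rated"}
payout(10.0, ratings, "Security")   # 5% premium applies
payout(10.0, ratings, "Usability")  # no premium
```

Note that the rating lookup is keyed by the test cycle's testing type, so the same tester can earn different premiums on different cycles.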

Activity Level

  • Lifetime participation level: # reported bugs, # approved bugs, etc. (see “Quality of Participation” below)
  • Recent participation level (prior 3, 6, 12 months)
  • Reliability: reporting bugs and test cases for test cycles in which you checked the Test Cycle Agreement
  • Declining a test cycle carries more positive weight than accepting a test cycle without submitting at least one report

Quality of Participation

  • Approval percentage for functional issues takes into account approved and rejected reports, but goes deeper than that due to varying tiers (e.g., Exceptionally, Very, and Somewhat Valuable for approved reports, and Out of Scope and Did Not Follow Instructions for rejected reports). For example, a tester with an average (75%) bug approval percentage who finds mostly Exceptionally Valuable bugs will have a higher rating than a tester with a very high (95%) bug approval percentage who finds mostly Somewhat Valuable bugs. Issues are marked ‘Not Specified’ when a TTL or PM does not wish to make a value assessment on behalf of the customer when the cycle closes; issues in this category have a ratings effect between ‘Somewhat’ and ‘Very’ Valuable and pay out at the Somewhat Valuable level.
  • Approval percentage for all types of reports (surveys, test cases, usability reports, etc.)
  • Rejected disputes carry more negative weight than an initially rejected report.
  • Accuracy of your initial bug report and severity classification. If your bug is reclassified, it carries more negative weight.
  • Info Requested: While the status change does not affect tester rating, it may lead to a rejected bug if you do not supply the requested information before the test cycle closes. Additionally, keep in mind that your bug may be rejected outright if it doesn’t contain information requested in the scope and instructions, which results in a negative ratings impact.
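The value-weighted quality idea above (a lower approval percentage with mostly Exceptionally Valuable bugs outranking a higher approval percentage with mostly Somewhat Valuable bugs) can be sketched as a weighted average. The value tiers and rejection categories come from this article; the numeric weights are invented for illustration only, as the real rating formula is not published.

```python
# Illustrative per-report weights; the actual values used by the platform
# are not public. 'Not Specified' sits between 'Somewhat' and 'Very',
# per the article. Rejections carry negative weight, rejected disputes more so.
WEIGHTS = {
    "Exceptionally Valuable": 3.0,
    "Very Valuable": 2.0,
    "Not Specified": 1.5,
    "Somewhat Valuable": 1.0,
    "Out of Scope": -1.0,
    "Did Not Follow Instructions": -1.0,
    "Rejected Dispute": -2.0,
}

def quality_score(reports: list) -> float:
    """Average the per-report weights; higher is better."""
    return sum(WEIGHTS[r] for r in reports) / len(reports)

# 75% approval, mostly Exceptionally Valuable bugs...
tester_a = ["Exceptionally Valuable"] * 15 + ["Out of Scope"] * 5
# ...outranks 95% approval with mostly Somewhat Valuable bugs:
tester_b = ["Somewhat Valuable"] * 19 + ["Out of Scope"] * 1
quality_score(tester_a) > quality_score(tester_b)  # True
```

The point of the sketch is that the report value tiers, not raw approval percentage alone, drive the quality sub-rating.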

