Organizing Open Online Computational Problem Solving Competitions By: Ahmed Abdelmeged.


In 2011, researchers from the Harvard Catalyst Project were investigating the potential of crowdsourcing genome- sequencing algorithms.

So they collected a few million sequencing problems and developed an electronic judge that evaluates sequencing algorithms by how well they solve these problems.

And they set up a two-week open online competition on TopCoder with a total prize pool of $6,000.

The results were astounding!

“... A two-week online contest... produced over 600 submissions.... Thirty submissions exceeded the benchmark performance of the US National Institutes of Health’s MegaBLAST. The best achieved both greater accuracy and speed (1,000 times greater).” -- Nature Biotechnology, 31(2):pp. 108–111, 2013.

We want to lower the barrier to entry for establishing such competitions by having “meaningful” competitions where participants assist the admin in evaluating their peers.

Thesis Statement “Semantic games of interpreted logic sentences provide a useful foundation to organize computational problem solving communities.”

Open online competitions have been quite successful in organizing computational problem solving communities.


Let’s take a closer look at state-of-the-art approaches to organizing an open online competition for solving computational problems, using MAX-SAT as a sample problem.

MAXimum SATisfiability (MAX-SAT) problem Input: a Boolean formula in Conjunctive Normal Form (CNF). Output: an assignment satisfying the maximum number of clauses.
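The talk's later claims are stated in terms of fsat(v, φ), the number of clauses of φ that assignment v satisfies. A minimal sketch, assuming a DIMACS-style representation (positive integer i for variable x_i, negative i for its negation); the representation is our choice, not the talk's:

```java
// Sketch of fsat: count the clauses of a CNF formula satisfied by an assignment.
// Clauses are arrays of DIMACS-style literals (our representation assumption).
public class MaxSat {
    static int fsat(boolean[] assignment, int[][] phi) {
        int satisfied = 0;
        for (int[] clause : phi)
            for (int lit : clause)
                // Literal lit is true iff its sign matches the variable's value.
                if ((lit > 0) == assignment[Math.abs(lit) - 1]) { satisfied++; break; }
        return satisfied;
    }

    public static void main(String[] args) {
        int[][] phi = {{1, -2}, {2}, {-1}}; // (x1 or not x2) and (x2) and (not x1)
        boolean[] v = {false, true};        // x1 = false, x2 = true
        System.out.println(fsat(v, phi));   // 2 of the 3 clauses are satisfied
    }
}
```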

The Omniscient Admin Approach A “trusted” admin prepares a thorough benchmark of MAX-SAT problem instances together with their correct solutions. This benchmark is used to evaluate individual MAX-SAT algorithms submitted by participants.

The Teaching Admin Approach Admin prepares a thorough benchmark of MAX-SAT problems and their “model” solutions. Benchmark used to evaluate individual MAX-SAT algorithms submitted by participants.

Cons Overhead to collect and solve problems. What if the admin incorrectly solves some problems?

The Open Benchmark Approach Admin maintains an open benchmark of problems and their solutions. Participants may object to any of the solutions before the competition starts.

Cons Over-fitting: participants may tailor their algorithms to the benchmark.

The Learning Admin Approach An admin prepares a set of MAX-SAT problems and keeps track of the best solution produced by one of the algorithms submitted by participants. Pioneered by the FoldIt team.

Cons Works only for optimization problems. It is not clear how to apply it to other computational problems, TQBF (True Quantified Boolean Formulas) for example.

Wouldn’t it be great if we had sports-like OOCs (open online competitions) where the admin referees the competition with minimal overhead?

However,

Research Question How to organize a “meaningful” open online computational problem solving competition where participants assist in the evaluation of their opponents?

Research Question How to organize a “meaningful”, sports-like, open online computational problem solving competition where the admin only referees the competition with minimal overhead?

Simpler Version “Meaningful”, two-party competitions where the admin provides neither benchmark problems nor their solutions.

Attempt I Each participant prepares a benchmark of problems and solves their opponent’s benchmark problems. Admin checks solutions. Checking the correctness of a MAX-SAT solution can be a significant overhead for the admin.

Attempt II Each participant prepares a benchmark of problems and their solutions. Each participant solves their opponent’s problems. Admin “compares” both solutions for each problem to determine the winner. Admin has to correctly compare solutions. Admin cannot assume any of the solutions to be correct.

Attempt III Each participant prepares a benchmark of problems and their “model” solutions. Each participant solves their opponent’s problems. Admin only compares solutions to “model” solutions.

But, participants are incentivized to provide wrong “model” solutions. The admin should compare solutions without trusting either of them.

Thesis “Semantic games of interpreted logic sentences provide a useful foundation to organize computational problem solving communities.”

Semantic Games A Semantic Game (SG) is a constructive debate about the correctness of an interpreted logic sentence (a.k.a. claim) between two distinguished parties: the verifier, which asserts that the claim holds, and the falsifier, which asserts that the claim does not hold.

A Two-Party, SG-Based MAX-SAT Competition (I) Participants develop functions to: Provide side preference. Provide values for quantified variables based on values of variables in scope. ∀ φ ∈ CNFs ∃ v ∈ assignments(φ) ∀ f ∈ assignments(φ). fsat(f,φ)≤fsat(v,φ)

A Two-Party, SG-Based MAX-SAT Competition (II) Admin chooses sides for players based on their side preference. Let Pv be the verifier and Pf be the falsifier. ∀ φ ∈ CNFs ∃ v ∈ assignments(φ) ∀ f ∈ assignments(φ). fsat(f,φ)≤fsat(v,φ)

A Two-Party, SG-Based MAX-SAT Competition (III) Admin gets value provided by Pf for φ. Admin checks φ ∈ CNFs. If false, Pf loses. Admin gets value provided by Pv for v. Admin checks v ∈ assignments(φ). If false, Pv loses. ∀ φ ∈ CNFs ∃ v ∈ assignments(φ) ∀ f ∈ assignments(φ). fsat(f,φ)≤fsat(v,φ)

A Two-Party, SG-Based MAX-SAT Competition (IV) Admin gets value provided by Pf for f. Admin checks f ∈ assignments(φ). If false, Pf loses. Admin evaluates fsat(f,φ)≤fsat(v,φ). If true Pv wins, otherwise Pf wins. ∀ φ ∈ CNFs ∃ v ∈ assignments(φ) ∀ f ∈ assignments(φ). fsat(f,φ)≤fsat(v,φ)

Rationale (I) Controllable admin overhead: instead of the hard-to-check predicate satisfies-max(v,φ), the claim uses a comparison the admin can evaluate directly. ∀ φ ∈ CNFs ∃ v ∈ assignments(φ). satisfies-max(v,φ) becomes ∀ φ ∈ CNFs ∃ v ∈ assignments(φ) ∀ f ∈ assignments(φ). fsat(f,φ)≤fsat(v,φ)

Rationale (II) Correct: there is a winning strategy for verifiers of true claims and falsifiers of false claims, regardless of the opponent’s actions.

Rationale (III) Objective. Systematic. Learning chances.

SG-Based Two-Party Competitions We let participants debate the correctness of an interpreted predicate logic sentence specifying the computational problem of interest, assuming that participants choose to take opposite sides.

Out-of-The-Box, SG-Based, Two-Party MAX-SAT Competition ∀ φ ∈ CNFs ∃ v ∈ assignments(φ) ∀ f ∈ assignments(φ). fsat(f,φ) ≤ fsat(v,φ) 1. Falsifier provides a CNF formula φ. 2. Verifier provides an assignment v. 3. Falsifier provides an assignment f. 4. Admin evaluates fsat(f,φ) ≤ fsat(v,φ). If true, verifier wins. Otherwise, falsifier wins.
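A sketch of how an admin might referee this four-step game. The Verifier/Falsifier interfaces and the validity checks are illustrative assumptions, not the dissertation's API; each invalid move loses immediately, matching the checks on the earlier slides.

```java
// Referee sketch for the claim forall phi exists v forall f. fsat(f,phi) <= fsat(v,phi).
public class MaxSatGame {
    interface Verifier { boolean[] provideV(int[][] phi); }
    interface Falsifier { int[][] providePhi(); boolean[] provideF(int[][] phi); }

    // Number of clauses of phi satisfied by assignment a (DIMACS-style literals).
    static int fsat(boolean[] a, int[][] phi) {
        int sat = 0;
        for (int[] clause : phi)
            for (int lit : clause)
                if ((lit > 0) == a[Math.abs(lit) - 1]) { sat++; break; }
        return sat;
    }

    // Checks a in assignments(phi): one boolean per variable mentioned in phi.
    static boolean validAssignment(boolean[] a, int[][] phi) {
        int vars = 0;
        for (int[] clause : phi)
            for (int lit : clause) vars = Math.max(vars, Math.abs(lit));
        return a != null && a.length == vars;
    }

    static String play(Verifier pv, Falsifier pf) {
        int[][] phi = pf.providePhi();
        if (phi == null || phi.length == 0) return "verifier"; // phi invalid: Pf loses
        boolean[] v = pv.provideV(phi);
        if (!validAssignment(v, phi)) return "falsifier";      // v invalid: Pv loses
        boolean[] f = pf.provideF(phi);
        if (!validAssignment(f, phi)) return "verifier";       // f invalid: Pf loses
        return fsat(f, phi) <= fsat(v, phi) ? "verifier" : "falsifier";
    }

    public static void main(String[] args) {
        // Falsifier poses (x1) and (not x1); at most one clause is satisfiable,
        // and the verifier's assignment satisfies one, so the verifier wins.
        Falsifier pf = new Falsifier() {
            public int[][] providePhi() { return new int[][]{{1}, {-1}}; }
            public boolean[] provideF(int[][] phi) { return new boolean[]{false}; }
        };
        Verifier pv = phi -> new boolean[]{true};
        System.out.println(play(pv, pf)); // verifier
    }
}
```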

Out-of-the-box, SG-based, two-party competitions address our simpler version: “meaningful”, two-party competitions where the admin provides neither benchmark problems nor their solutions. Let’s examine their pros and cons.

Pro (I): Systematic The rules of an SG are systematically derived from the syntax of its underlying claim. SGs are also defined for other logics.

Rules of SG(φ, A, v, f): φ = ∀x : ψ, move: f provides x0, next game: SG(ψ[x0/x], A, v, f). φ = ψ ∧ θ, move: f chooses θ' ∈ {ψ, θ}, next game: SG(θ', A, v, f). φ = ∃x : ψ, move: v provides x0, next game: SG(ψ[x0/x], A, v, f). φ = ψ ∨ θ, move: v chooses θ' ∈ {ψ, θ}, next game: SG(θ', A, v, f). φ = ¬ψ, no move, next game: SG(ψ, A, f, v) (sides swap). φ = P(t0): v wins if P(t0) holds in A, otherwise f wins. “The Game of Language: Studies in Game-Theoretical Semantics and Its Applications” -- Kulas and Hintikka, 1983
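The quantifier, negation, and atom rows can be sketched as a recursive referee (Java 16+). The AST and Strategy interface are our own simplification; conjunction and disjunction are omitted since their rows just let the moving side pick a subformula.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Minimal sketch of the SG rules: formulas over integer variables, with each
// side's moves supplied as functions of the values already in scope.
public class SemanticGame {
    interface Formula {}
    record Forall(String var, Formula body) implements Formula {}
    record Exists(String var, Formula body) implements Formula {}
    record Not(Formula body) implements Formula {}
    record Atom(Predicate<Map<String, Integer>> pred) implements Formula {}

    // A strategy maps (quantified variable, values in scope) to a chosen value.
    interface Strategy { int choose(String var, Map<String, Integer> env); }

    // Plays SG(phi, env, v, f); returns true iff the current verifier wins.
    static boolean play(Formula phi, Map<String, Integer> env, Strategy v, Strategy f) {
        if (phi instanceof Forall fa) {            // forall: falsifier provides x0
            Map<String, Integer> e = new HashMap<>(env);
            e.put(fa.var(), f.choose(fa.var(), env));
            return play(fa.body(), e, v, f);
        } else if (phi instanceof Exists ex) {     // exists: verifier provides x0
            Map<String, Integer> e = new HashMap<>(env);
            e.put(ex.var(), v.choose(ex.var(), env));
            return play(ex.body(), e, v, f);
        } else if (phi instanceof Not n) {         // negation swaps the sides
            return !play(n.body(), env, f, v);
        } else {                                   // atom: v wins iff it holds
            return ((Atom) phi).pred().test(env);
        }
    }

    public static void main(String[] args) {
        // True claim over the integers: forall x exists y. y == x + 1.
        Formula claim = new Forall("x", new Exists("y",
                new Atom(env -> env.get("y") == env.get("x") + 1)));
        Strategy verifier = (var, env) -> env.get("x") + 1; // winning strategy
        Strategy falsifier = (var, env) -> 7;               // any move loses
        System.out.println(play(claim, Map.of(), verifier, falsifier)); // true
    }
}
```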

Pro (II): Objective Competition result is based on skills that are precisely defined in the competition definition.

Pro (III): Correct Competition result is based on demonstrated possession (or lack) of skill. Problems incorrectly solved by the admin or an opponent cannot worsen a participant’s rank.

Pro (III): Correct There is a winning strategy for verifiers of true claims and falsifiers of false claims, regardless of the opponent’s actions.

Pro (IV): Controllable Admin Overhead The admin’s overhead is implementing the structure interpreting the logic statement that specifies a computational problem. Functionality can always be stripped out of the interpreting structure at the cost of added complexity in the logic statement.

Pro (V): Learning Losers can learn from SG traces.

Pro (VI): Automatable Participants can codify their strategies for playing SGs. Efficient and thorough evaluation. Codified strategies are useful by-products. Controlled information flow.

Challenges (I) Participants must take opposing sides! Neutrality is lost when sides are forced.

Con (II): Not Thorough Unlike sports games, a single game is not thorough enough.

Con (III): Issues Scaling to N-Party Competitions In sports, tournaments are used to scale two-party games to n-party competitions.

Challenges (II) Scaling to N-Party Competition using a tournament, yet: Avoid Collusion Potential especially in the context of open online competitions where Sybil identities are common and games are too fast to spectate! Ensure that participants get the same chance.


Issue (II): Neutrality Do participants get the same chance? We have to force sides on participants. We may have vastly different number of verifiers and falsifiers.

Issue (II): Correctness and Neutrality We have to force sides on participants, yet we cannot penalize forced losers, for competition correctness. We have to ensure that all participants get the same chance even though we may have vastly different numbers of verifiers and falsifiers.

Contributions 1. Computational Problem Solving Labs (CPSLs). 2. Simplified Semantic Games (SSGs). 3. Provably Collusion-Resistant SSG-Tournament Design.

Computational Problem Solving Labs (CPSLs)

CPSLs A structured interaction space centered around a claim. Community members contribute by submitting their strategies for playing an SSG of the lab’s claim. Submitted strategies compete in a provably collusion-resistant tournament of simplified semantic games.

Control, thoroughness, and efficient evaluation.

Codified Strategies Efficient and thorough evaluation. Useful by-products. Controlled information flow.

CPSLs (II) A structured interaction space centered around a claim. Community members contribute by submitting their strategies for playing an SSG of the lab’s claim. Once a new strategy is submitted in a CPSL, it competes against the strategies submitted by other members in a provably collusion-resistant tournament of simplified semantic games.

Highest Safe Rung Problem The Highest Safe Rung (HSR) problem is to find the largest number (n) of stress levels that a stress-testing plan can examine using (q) tests and (k) copies of the product under test. k = 1: n = q (linear search). k >= q: n = 2^q (binary search). 1 < k < q: n = ?
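One standard way to compute the middle case (our own gloss, not from the slides) counts the outcomes a plan can distinguish: each test either breaks a copy, leaving q-1 tests and k-1 copies, or does not, leaving q-1 tests and k copies. Note this count includes the "nothing breaks" outcome, so for k = 1 it comes out one higher than the slide's n = q convention.

```java
public class Hsr {
    // outcomes(q, k): how many distinct outcomes a stress-testing plan with
    // q tests and k copies can distinguish. Each test either breaks a copy
    // (q-1 tests, k-1 copies remain) or does not (q-1 tests, k copies remain).
    static long outcomes(int q, int k) {
        if (q == 0 || k == 0) return 1; // no further test is possible
        return outcomes(q - 1, k) + outcomes(q - 1, k - 1);
    }

    public static void main(String[] args) {
        System.out.println(outcomes(5, 5)); // 32 = 2^5: binary-search regime
        System.out.println(outcomes(4, 2)); // 11: strictly between linear and binary
        System.out.println(outcomes(3, 1)); // 4 = q + 1: linear-search regime
    }
}
```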

Highest Safe Rung Admin Page Computational Problem Solving Lab - Highest Safe Rung. Description: The Highest Safe Rung (HSR) problem is to find the largest number of stress levels that a stress-testing plan can examine using (q) tests and (k) copies of the product under test. Claim: "HSR() := forall Integer q : forall Integer k :...". Admin controls: Save, Standings, publish/hide Game Traces, Log out.

Highest Safe Rung Lab Page Computational Problem Solving Lab - Highest Safe Rung, with the HSR problem description. Page elements: Upload new Strategy; a standings table with columns Rank, Member, Latest contribution, # of faults, and Chosen side (verifier/falsifier); links to download traces of past games, the strategy skeleton, and the claim specification; Log out.

Claim Specification

Simplified Semantic Games

SG Rules

SSGs Simpler: auxiliary games replace moves for conjunctions and disjunctions. Thoroughness potential: participants can provide several values for quantified variables.

SSG Rules

class HSRClaim {
    public static final String[] FORMULAS = new String[] {
        "HSR() := forall Integer q : forall Integer k : exists Integer n : " +
            "HSRnqk(n, q, k) and ! exists Integer m : greater(m, n) and HSRnqk(m, q, k)",
        "HSRnqk(Integer n, Integer q, Integer k) := exists SearchPlan sp : correct(sp, n, q, k)" };

    public static boolean greater(Integer n, Integer m) { return n > m; }

    public static interface SearchPlan {}

    public static class ConclusionNode implements SearchPlan {
        Integer hsr;
    }

    public static class TestNode implements SearchPlan {
        Integer testRung;
        SearchPlan yes; // What to do when the jar breaks.
        SearchPlan no;  // What to do when the jar does not break.
    }

    public static boolean correct(SearchPlan sp, Integer n, Integer q, Integer k) {
        // sp satisfies the binary-search-tree property, has n leaves of depth
        // at most q, and all root-to-leaf paths have at most k "yes" branches.
        // ...
    }
}
HSR Claim Specification

Strategy Specification One function per quantified variable.

class HSRStrategy {
    public static Iterable<Integer> HSR_q() { ... }
    public static Iterable<Integer> HSR_k(Integer q) { ... }
    public static Iterable<Integer> HSR_n(Integer q, Integer k) { ... }
    public static Iterable<Integer> HSR_m(Integer q, Integer k, Integer n) { ... }
    public static Iterable<HSRClaim.SearchPlan> HSRnqk_sp(Integer n, Integer q, Integer k) { ... }
}
HSR Strategy Skeleton

Semantic Game Tournaments

Tournament Design Scheduler: neutral. Ranking function: correct and anonymous; a good ranking function can mask scheduler deficiencies.

Ranking Functions Input: a beating function representing the outcome of several games. Output: a total preorder of the participants.

Beating Functions (of SG Tournaments) bP(pw, pl, swc, slc, sw): the sum of all gains of pw against pl, with pw choosing side swc, pl choosing side slc, and pw taking side sw. More complex than a simple win/loss relation.

Ranking Functions (Correctness) Non-Negative Regard for Wins. Non-Positive Regard for Losses.

Non-Negative Regard For Wins (NNRW) Additional wins cannot worsen Px’s rank w.r.t. other participants.

Non-Positive Regard For Losses (NPRL) Additional faults cannot improve Px’s rank w.r.t. other participants.

Ranking Functions (Anonymity) Output ranking is independent of participant identities. Ranking function ignores participants’ identities. Participants also ignore their opponents’ identities.

Limited Collusion Effect Slightly weaker notion than anonymity. What you want in practice. A participant Py can choose to lose on purpose against another participant Px, but that won’t make Px get ahead of any other participant Pz.

Limited Collusion Effect (LCE) Games outside Px’s control cannot worsen Px’s rank w.r.t. other participants.

Discovery A useful design principle for ranking functions. Under NNRW and NPRL: LCE = LFB. LFB is quite an unusual property, but it lends itself to implementation.

Locally Fault Based (LFB) The relative rank of Px and Py depends only on faults made by either Px or Py.


Collusion Resistant Ranking Functions

Beating Functions Represent the outcome of a set of SSGs. bP(pw, pl, swc, slc, sw): the sum of all gains of pw against pl, with pw choosing side swc, pl choosing side slc, and pw taking side sw.

Beating Functions (Operations) bP|w px: games px wins. bP|l px: games px loses. bP|fl px: games px loses while not forced. bP|c px = bP|w px + bP|fl px: games px controls. Beating functions can be added; bP0 (the empty beating function) is the identity element.
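These operations can be sketched over a concrete representation of a beating function as a multiset of game records; the record fields and method names are our own assumption, not the dissertation's.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Sketch: a beating function as a list of game records, with the restriction
// operators from the slide. loserForced marks losses on a non-chosen side.
public class Beating {
    record GameRecord(String winner, String loser, boolean loserForced) {}
    final List<GameRecord> games = new ArrayList<>();

    // bP|w px : games px wins.
    List<GameRecord> wins(String p) {
        return games.stream().filter(g -> g.winner().equals(p)).collect(Collectors.toList());
    }
    // bP|l px : games px loses.
    List<GameRecord> losses(String p) {
        return games.stream().filter(g -> g.loser().equals(p)).collect(Collectors.toList());
    }
    // bP|fl px : games px loses while not forced (px's faults).
    List<GameRecord> faultyLosses(String p) {
        return games.stream()
                    .filter(g -> g.loser().equals(p) && !g.loserForced())
                    .collect(Collectors.toList());
    }
    // bP|c px = bP|w px + bP|fl px : games px controls.
    List<GameRecord> controlled(String p) {
        List<GameRecord> c = new ArrayList<>(wins(p));
        c.addAll(faultyLosses(p));
        return c;
    }

    public static void main(String[] args) {
        Beating b = new Beating();
        b.games.add(new GameRecord("alice", "bob", false));  // bob makes a fault
        b.games.add(new GameRecord("carol", "alice", true)); // alice was forced
        System.out.println(b.controlled("alice").size());    // 1: only her win
        System.out.println(b.faultyLosses("bob").size());    // 1
    }
}
```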

Ranking Functions Take a beating function to a ranking. A ranking is a total preorder.

Limited Collusion Effect There is no way py’s rank can be improved w.r.t. px’s rank behind px’s back.

Non-Negative Regard for Wins An extra win cannot worsen px’s rank.

Non-Positive Regard for Losses An extra loss cannot improve px’s rank.

Local Fault Based The relative rank of px w.r.t. py depends only on faults made by either px or py.

Main Result

Visual Proof

Fault Counting Ranking Function Players are ranked according to the number of faults they make: the fewer the faults, the higher the rank. Satisfies the NNRW, NPRL, LFB and LCE properties.
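A minimal sketch of this ranking function, assuming each player's fault count has already been extracted from the beating function (the map-based representation is our own). Since the output is a total preorder, players with equal fault counts share a rank; the flat list below orders tied players arbitrarily.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the fault-counting ranking function: rank is determined solely
// by the number of faults (unforced losses) each player made.
public class FaultRanking {
    static List<String> rank(Map<String, Integer> faults) {
        List<String> players = new ArrayList<>(faults.keySet());
        players.sort(Comparator.comparingInt(faults::get)); // fewer faults first
        return players;
    }

    public static void main(String[] args) {
        Map<String, Integer> faults = new HashMap<>();
        faults.put("alice", 0);
        faults.put("bob", 2);
        faults.put("carol", 1);
        System.out.println(rank(faults)); // [alice, carol, bob]
    }
}
```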

Semantic Game Tournament Design For every pair of players: If choosing different sides, play a single SG. If choosing same sides, play two SGs where they switch sides.
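The pairing rule can be sketched as a scheduler; the Game record and names are our own, and the real system would also play each scheduled game and feed the results into the beating function.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the SG tournament schedule: for each pair with different chosen
// sides, one game; for each pair with the same chosen side, two games with
// sides swapped, so one player is forced onto their non-chosen side per game.
public class Scheduler {
    record Game(String verifier, String falsifier) {}

    static List<Game> schedule(Map<String, String> chosenSide) {
        List<String> players = new ArrayList<>(chosenSide.keySet());
        List<Game> games = new ArrayList<>();
        for (int i = 0; i < players.size(); i++)
            for (int j = i + 1; j < players.size(); j++) {
                String p = players.get(i), q = players.get(j);
                if (!chosenSide.get(p).equals(chosenSide.get(q))) {
                    // Different sides: a single game, each on their chosen side.
                    games.add(chosenSide.get(p).equals("verifier")
                              ? new Game(p, q) : new Game(q, p));
                } else { // Same side: two games with roles swapped.
                    games.add(new Game(p, q));
                    games.add(new Game(q, p));
                }
            }
        return games;
    }

    public static void main(String[] args) {
        Map<String, String> sides = new LinkedHashMap<>();
        sides.put("alice", "verifier");
        sides.put("bob", "verifier");
        sides.put("carol", "falsifier");
        System.out.println(schedule(sides).size()); // 4: 2 (alice/bob) + 1 + 1
    }
}
```

With nv = 2 verifiers and nf = 1 falsifier, each player ends up with nv + nf - 1 = 2 games on their chosen side, matching the neutrality slide.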

Tournament Properties Our tournament is neutral.

Neutrality Each player plays nv + nf - 1 SGs on their chosen side (nv verifiers, nf falsifiers); those are the only games in which they may make faults.

Related Work Rating and Ranking Functions Tournament Scheduling Match-Level Neutrality

Rating and Ranking Functions (I) Dominated by heuristic approaches: Elo ratings; “Who’s #1?”. There are axiomatizations of rating functions in the field of Paired Comparison Analysis, but LCE is not on the radar, and Independence of Irrelevant Matches (IIM) is frowned upon.

Rating and Ranking Functions (II) Rubinstein [1980] characterized the points system (winner gets a point) by: Anonymity: ranks are independent of the names of participants. Positive responsiveness to the winning relation: changing a participant p’s result from a loss to a win guarantees that p’s rank improves. IIM: the relative ranking of two participants is independent of matches in which neither is involved. Here, “beating functions” are restricted to complete, asymmetric relations.

Tournament Scheduling Neutrality is off the radar; the focus is on maximizing winning chances for certain players and on delayed confrontation.

Match-Level Neutrality Dominated by heuristic approaches: compensation points, the pie rule.

Conclusion “Semantic games of interpreted logic sentences provide a useful foundation to organize computational problem solving communities.”

Future Work Problem decomposition labs. Social Computing. Evaluating Thoroughness.

Questions?

Thank You!

N-Party SG-Based Competitions A tournament of two-party SG-based competitions

N-Party SG-Based Competitions: Challenges (I) Collusion potential especially in the context of open online competitions.

N-Party SG-Based Competitions: Challenges (II) Neutrality. Two-party SG-Based competitions are not neutral when one party is forced.

Rationale (IV): Anonymous

Rationale (Objective) While constructively debating the correctness of an interpreted predicate logic sentence specifying a computational problem, participants provide and solve instances of that computational problem.

∀ φ ∈ CNFs ∃ v ∈ assignments(φ) ∀ f ∈ assignments(φ). fsat(f,φ) ≤ fsat(v,φ)

Semantic Games

A “meaningful” competition is: Correct, Anonymous, Neutral, Objective, Thorough.

Correctness Rank is based on demonstrated possession (or lack) of skill. Suppose that we let participants create benchmarks of MAX-SAT problems and their solutions to evaluate their opponents. Participants would be incentivized to provide wrong solutions.

Anonymous Rank is independent of identities. There is a potential for collusion among participants. This potential arises from the direct communication between participants and is aggravated by the open online nature of the competitions.

Neutral The competition does not give an advantage to any of the participants. For example, a seeded tournament where the seed (or the initial ranking) can affect the final ranking is not considered neutral.

Objective Ranks are exclusively based on skills that are precisely defined in the competition definition, such as solving MAX-SAT problems.

Thorough Ranks are based on solving several MAX-SAT problems.

Thesis “Semantic games of interpreted logic sentences provide a useful foundation to organize computational problem solving communities.”

Semantic Games Thoroughness means that the competition result is based on a wide enough range of skills that participants demonstrate during the competition.