In this phase, you play in the interactive TAGGRS Games platform, where you race through 3 levels of increasing difficulty: The Warm-Up, The Sprint, and The Final Lap.
Each level is a collection of timed multiple‑choice questions focused on tracking, dataLayer, and Server-side Tracking. Each level has a fixed number of questions, a minimum score to pass, a time limit, and a maximum number of attempts.
Level 1 (The Warm-Up): 7 questions, 80% to pass (max 1 wrong), 20 seconds each, 3 total attempts. Enter your email to unlock Level 2 and earn 1 daily prize pool ticket.
Level 2 (The Sprint): 7 questions, 80% to pass (max 1 wrong), 30 seconds each, 3 total attempts. Unlocks Level 3 and earns you 2 daily prize pool tickets.
Level 3 (The Final Lap): 10 questions, 70% to pass (max 2 wrong), 30 seconds each, 2 total attempts. Sets your leaderboard position and grants 6 months of complimentary TAGGRS Basic access.
Each question has its own timer: if it expires, that answer counts as incorrect. For each question you see only whether your answer was correct, incorrect, or out of time; the correct answer itself is never shown.
To register your scores, enter the prize pools, and progress through the competition, you must log in with (or create) a free-forever TAGGRS account.
Your final score per level combines speed and accuracy: the faster you answer correctly, the higher you score. The top 20 players on the leaderboard when qualification closes advance to the Grand Final.
The Grand Final is a practical tracking challenge for the 20 best players from the Qualification phase. It takes place on February 24, 2026, and must be completed and submitted within 3 hours.
Each finalist receives:
• A TAGGRS demo account
• A TAGGRS demo webshop
• A Google Tag Manager client-side container
• An email with instructions and relevant information
• An Axeptio cookie banner for Consent Mode
To simplify the setup, finalists may use a TAGGRS subdomain, so no DNS validation is needed during the challenge.
Finalists work on a demo website where the client‑side tracking setup is intentionally broken. The assignment consists of three main steps:
• Identify and fix problems in the client‑side tracking, ensuring correct event tracking.
• Set up a complete Server-side Tracking configuration in TAGGRS that matches the expected events and parameters.
• Write a short explanation describing what was broken and how you fixed it, the key decisions you made, and why your setup works.
You must work individually. You may discuss general approaches, but sharing answers, environments, or accounts is not allowed and may lead to disqualification.
Make sure to submit your setup and documentation before the deadline: late submissions won’t be evaluated.
All final setups are reviewed manually by an expert jury. Each finalist is scored using a weighted model across five categories:
3.1 Data maximization (25%)
This category looks at how well the setup captures the available data from the demo website. The jury checks, for example, whether all expected events are implemented (page_view, add_to_cart, purchase, and other defined interactions), whether the expected parameters are present on these events (such as IDs, values, or other required fields), and whether there are any duplicate or unnecessary events.
The score is based on event coverage, parameter completeness, and the absence of duplicated events.
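To make event coverage concrete, a purchase on the demo shop might be pushed to the dataLayer roughly like this. This is a sketch using the public GA4 ecommerce naming (event, transaction_id, items); the exact schema for the challenge is defined in the finalist instructions and may differ.

```typescript
// Sketch of a GA4-style purchase push to the dataLayer (hypothetical
// item and transaction values).
type Item = { item_id: string; item_name: string; price: number; quantity: number };

type DataLayerEvent = {
  event?: string;
  ecommerce?: { transaction_id: string; value: number; currency: string; items: Item[] } | null;
};

const dataLayer: DataLayerEvent[] = [];

function trackPurchase(transactionId: string, currency: string, items: Item[]): void {
  // Reset ecommerce first so GTM does not merge stale item data from an
  // earlier event into this one.
  dataLayer.push({ ecommerce: null });
  const value = items.reduce((sum, i) => sum + i.price * i.quantity, 0);
  dataLayer.push({
    event: "purchase",
    ecommerce: { transaction_id: transactionId, value, currency, items },
  });
}

trackPurchase("T-1001", "EUR", [
  { item_id: "SKU-1", item_name: "Demo product", price: 20, quantity: 2 },
]);
```

Checking that every defined interaction produces exactly one such push, with all required parameters, is what the coverage score measures.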
3.2 Privacy and consent (25%)
This category checks whether consent is respected and applied correctly. The jury tests different consent scenarios and checks, for example, whether events are limited or adjusted correctly when consent is denied and whether Consent Mode v2 signals are implemented and passed on correctly.
Full compliance leads to the maximum score; each privacy or consent violation reduces the score.
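The Consent Mode v2 flow the jury tests can be sketched as follows: set denied defaults before any tag fires, then send an update once the Axeptio banner reports the visitor's choice. The `gtag` function normally comes from the gtag.js/GTM snippet; here it is stubbed so the call sequence is visible.

```typescript
// Sketch of Consent Mode v2 defaults and a post-banner update.
// ad_user_data and ad_personalization are the two signals added in v2.
type ConsentState = "granted" | "denied";
type ConsentSignals = {
  ad_storage: ConsentState;
  ad_user_data: ConsentState;
  ad_personalization: ConsentState;
  analytics_storage: ConsentState;
};

// Stub: record consent commands instead of forwarding to gtag.js.
const consentLog: Array<["default" | "update", ConsentSignals]> = [];
function gtag(_cmd: "consent", mode: "default" | "update", signals: ConsentSignals): void {
  consentLog.push([mode, signals]);
}

// Before any tags fire: deny everything by default.
gtag("consent", "default", {
  ad_storage: "denied",
  ad_user_data: "denied",
  ad_personalization: "denied",
  analytics_storage: "denied",
});

// After the visitor accepts analytics only in the cookie banner:
gtag("consent", "update", {
  ad_storage: "denied",
  ad_user_data: "denied",
  ad_personalization: "denied",
  analytics_storage: "granted",
});
```

In the denied scenarios the jury then verifies that tags behave accordingly, e.g. that no ad identifiers are sent while ad_storage and ad_user_data are denied.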
3.3 Data quality (25%)
This category looks at whether the data that is sent is accurate and reliable. The jury checks, for example: Do the values that are sent match the defined event schema (correct format, correct fields)? Are there errors, missing values, or inconsistencies in the data? Does the setup behave consistently across the test scenarios?
A higher score is given to setups that are accurate, consistent, and free from obvious data issues.
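A simple pre-submission sanity check for this category is to validate outgoing events against the defined schema. The schema below is hypothetical (the real one comes from the challenge instructions); the point is checking field presence and type before data is sent.

```typescript
// Sketch: validate an event object against a minimal field/type schema.
type FieldType = "string" | "number";
type Schema = Record<string, FieldType>;

// Hypothetical purchase schema for illustration.
const purchaseSchema: Schema = {
  transaction_id: "string",
  value: "number",
  currency: "string",
};

// Returns a list of problems; an empty list means the event conforms.
function validate(event: Record<string, unknown>, schema: Schema): string[] {
  const errors: string[] = [];
  for (const [field, type] of Object.entries(schema)) {
    if (!(field in event)) errors.push(`missing field: ${field}`);
    else if (typeof event[field] !== type) errors.push(`wrong type for ${field}`);
  }
  return errors;
}

const ok = validate({ transaction_id: "T-1", value: 9.5, currency: "EUR" }, purchaseSchema);
const bad = validate({ transaction_id: "T-2", value: "9.5" }, purchaseSchema);
```

Here `ok` is empty, while `bad` flags a string where a number is expected and a missing currency, exactly the kind of inconsistency this category penalizes.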
3.4 Structure and transparency (15%)
This category evaluates how clearly the setup is organized. The jury looks at naming conventions for tags, triggers, variables, and events, how the setup is grouped or structured (folders, logical grouping in TAGGRS and GTM), and whether the configuration is easy to read and audit for another implementer.
3.5 Creativity (10%)
This category rewards useful improvements within the rules of the competition. Creativity earns points when it improves reliability, performance, or maintainability; does not harm privacy, data quality, or clarity; and could realistically be used in a real client implementation.
Creativity is bonus‑based: it can help you stand out, but it is not required to qualify or win.
All decisions of the jury are final and binding.
Winners of the TAGGRS Games will be announced on February 26, 2026, on TAGGRS and Niels Olivier’s LinkedIn.