27th November, 2024, Toronto, Canada

Entry Guide

The following comments were gathered from previous Judging Panels after they had reviewed all entries. They may help you decide what information to include in your entries.

As a whole, the entries did not address the judging criteria as fully as the Judging Panel would have liked. Entrants should also remember that they are being judged by a panel of industry peers, and would do well to pitch their entry to that audience. A few entries were considered "overly simplistic."

Strong Entries:


  • Emphasised the business importance and criticality of the project, and identified what was innovative about the approach being adopted.
  • Showed the greatest number of traits one would expect from someone with good people management and communication skills: mentorship (not only for people within their own company), coaching, being a role model, being approachable and supportive, being valued, respected, and listened to by their people, dedication to team well-being, and a focus on upskilling people.
  • Presented clear evidence of overcoming challenges, described their commitment to best practices and their choice of methodology to support their automation objectives, and provided evidence that made their application stand out from the rest.
  • Clearly researched and implemented the best technology to test the whole system, not just the software-based backend systems.
  • Provided very clear evidence of the project methodology and justification of technology choices, along with a detailed description of the stakeholders' needs, the importance of the project, and the overall goals.
  • Gave context to metrics to fully justify their inclusion.

Weak Entries:


  • Did not describe challenges well, discussing business, project, or architectural challenges rather than challenges in the automation journey.
  • Did not cover all the defined criteria, which weakened the application.
  • Lacked detail around the approach to testing. The diagram summarised the tools used but gave limited detail on what challenges were encountered and how they were overcome through automation.
  • Did not demonstrate evidence of delivering on time or within budget, or of engagement with stakeholders, and did not reflect on goals to establish ultimate project success.
  • Concentrated on the merits of the tool or method rather than the actual project deliverable, which is less likely to score highly.
  • Did not justify the reasoning behind the metrics included.