A truly independent awards programme

The North American Software Testing Awards judges are appointed based on their extensive experience in the software testing field. These seasoned professionals, all of whom currently hold senior management roles, guarantee that each entry is judged fairly and accurately.

To ensure complete impartiality, all entries are judged anonymously: company and individual names, product names, and any references to identifiable solutions or services are removed before entries are distributed to the judges.

This stringent process means that every award is won purely on merit. Regardless of company size, budget, customer base, market share, or influence, and whether the entrant is a vendor, academic, end user, consultant, or otherwise, The North American Software Testing Awards truly is an independent awards programme that recognises and rewards outstanding achievement.

Ambassador

Joe Larizza
Director of QA
Royal Bank of Canada

Judging Panel 2021

Aviad Afik
QA Manager
Accommodations Plus International

Jeremy Berriault
President of Berriault
Associates Consulting Group

Bernd Bornhausen
Director of Quality Assurance
Logibec

Dinesh Thandapany
Head of Quality Engineering and Test
Tangerine Bank

Bakhtiar Ghazanfar
Senior Director, QA
Travelers Canada

Aayush Kathuria
Associate Vice President, Quality Engineering
TD Bank

Jesse Penning
Director of Software Delivery
Avanti Computer Systems

Muthu Tamizhmani
Director of Quality Engineering
Symcor

Lina Vaisman
QA Manager
Air Canada

Derwin de Vera
Director of Test Engineering
Flipp

Judges’ feedback

The following comments were gathered from the 2020 Judging Panels after they had reviewed all entries. They may help you decide what information to include in your own entries.

On the whole, entries did not address the criteria as closely as the Judging Panel would have liked. Entrants should also remember that they are being judged by a panel of industry peers, and would do well to pitch their entry to that audience. A few entries were considered “overly simplistic.”

Strong Entries:

  • Emphasised the business importance and criticality of the project, and clearly identified what was innovative about the approach being adopted.
  • Focused on security testing and gave detail on tool evaluation, plus their approach to overcoming challenges. Good integration into delivery and security testing principles.
  • Clear project goals coupled with quantified outcomes/successes; adaptability to mitigate unexpected challenges/risks and the utilisation of a wide variety of testing approaches and techniques that aligned with the complexity of their environments.
  • Very explicit in drawing out the use of Agile principles and how they were deployed and adapted throughout the project.  It is also valuable to have concrete examples of processes that worked well, and of lessons learned from mistakes.
  • Showed the greatest number of traits one would expect from someone with good people management and communication skills: mentorship (not only for people within their own company), coaching, being a role model, being approachable and supportive, being valued, respected, and listened to by people, dedication to team well-being, and a focus on skilling people up.
  • Evidence of overcoming challenges was clear; they also described their commitment to best practices and their selection of a methodology to support their automation objectives, along with evidence that made their application stand out from the rest.
  • Clearly researched and implemented the best technology to test ALL of the system, NOT just the software-based backend systems.
  • Very clear evidence of the project methodology and justification of technology choices. Very clearly detailed description of understanding the stakeholders’ needs, the importance of the project, and the overall goals. In addition, employed a combination of technologies to solve some very difficult problems. Demonstrated some very innovative use of technology to deliver a complete testing solution.
  • Explained well the approach to selecting tools, and the reasons for selecting them. The selection of the tools themselves represent best practice test approaches, and the results speak for themselves.  In addition, the approach to synthetic data creation was good. The testing has clearly been challenging, and some of the unique solutions implemented, along with the clear narrative on the reason for the choice, are very exciting.
  • Gave context to metrics to fully justify their inclusion.

Weak Entries:

  • Did not describe challenges very well, talking about business, project, or architectural challenges rather than challenges in their automation journey. Did not provide sufficient detail on how they were testing to allow a view on the quality of the test scripts. Unable to justify their choices in selecting an implementation approach.
  • Did not cover all the defined criteria, which made for slightly weaker applications.
  • Lacked detail on the approach to testing. A diagram summarised the tools used, but there was limited detail on what challenges were encountered and how those were overcome through the use of automation.
  • Did not demonstrate evidence of delivering on time, within budget, or of engagement with stakeholders, nor reflect back on the goals to establish ultimate project success.
  • Concentrated on the merits of the tool or method rather than the actual project deliverable, which is less likely to be scored highly.
  • Did not justify the reasoning behind including metrics.