The growing need for AI talents

Artificial intelligence (AI) is steadily becoming more prevalent, not only in business but in our everyday lives as well. Yet, to tackle AI problems safely and efficiently, we need experts with a strong, focused set of skills.

There is, then, a growing need for AI talents who possess the capabilities to build and deploy AI systems. As AI-driven automation starts to take over some industries, it is also vital to think about how humans can work with AI and change the nature of work for the better.

Hence, we have talked to experts in the field to shed light on this topic.


The skills to work in artificial intelligence

First of all, many organizations now need to hire workers with the right skills to use AI correctly. This raises the question: what skills are needed to work in artificial intelligence?

To answer this, Cibe Sridharan, AI Engineer at Fractal Analytics, starts by pointing out that an AI talent needs to develop multiple skills, such as data engineering, domain and use-case understanding, exploratory data analysis (EDA), modelling, and the real-time model deployment process.

Adam Leon Smith, CTO at Dragonfly, adds that mathematics is very important to machine learning, as well as to other algorithms. Indeed, he highlights, whilst coding is less relevant, plenty of engineering skill is needed to develop data pipelines, especially real-time ones!
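To give a flavour of the engineering skill Adam mentions, here is a minimal, purely illustrative sketch of a streaming data pipeline built from chained Python generators. All record fields and stage names are made up for the example; a production real-time pipeline would run at far greater scale with logging, monitoring, and backpressure handling.

```python
import json
from statistics import mean

def ingest(raw_lines):
    """Parse raw JSON records, skipping malformed ones."""
    for line in raw_lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # a production pipeline would log or route these

def clean(records):
    """Drop records missing the fields downstream stages rely on."""
    for rec in records:
        if "user" in rec and isinstance(rec.get("latency_ms"), (int, float)):
            yield rec

def features(records, window=3):
    """Emit a rolling-average latency per record (a toy feature)."""
    recent = []
    for rec in records:
        recent.append(rec["latency_ms"])
        recent = recent[-window:]
        yield {"user": rec["user"], "avg_latency": mean(recent)}

# Hypothetical input stream, including one malformed line.
stream = [
    '{"user": "a", "latency_ms": 100}',
    'not json',
    '{"user": "b", "latency_ms": 300}',
]
results = list(features(clean(ingest(stream))))
```

Because each stage is a generator, records flow through one at a time rather than being loaded in bulk, which is the basic shape real-time pipelines take.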

‘Many AI implementations come with interesting ethical and legal questions that need multi-disciplinary skills.’

Moreover, he continues, more skills are needed to test AI systems. Part of this is about understanding the technology, and particularly the limitations of the technology. Another important part is understanding testing techniques and processes and how they apply differently to AI systems.

Kevin Surace, CEO & CTO at Appvance, also notes that AI talents fall into two groups: developers of AI algorithms and systems, and users of AI-enabled technologies.

Indeed, for developers of AI algorithms, he says that you need either a degree and substantial background in AI algorithms (traditional ML as well as modern neural nets, GANs, etc.) and/or deep knowledge of a field, so that you can apply those algorithms to solve specific problems in that field. For many technology areas, you don’t need to invent a new algorithm but can apply ones already available to you in unique ways. The application of AI is the heaviest lift today.
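Kevin's point about applying existing algorithms rather than inventing new ones can be sketched with a textbook k-nearest-neighbours classifier applied to a domain problem. The data, labels, and thresholds below are entirely hypothetical; the point is only that a well-known, off-the-shelf algorithm is doing the work.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using plain Euclidean distance."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical domain data: (transaction_amount, hour_of_day) -> label
train = [
    ((10.0, 9), "legit"),
    ((12.0, 14), "legit"),
    ((15.0, 11), "legit"),
    ((900.0, 3), "fraud"),
    ((850.0, 2), "fraud"),
    ((990.0, 4), "fraud"),
]

small_daytime = knn_predict(train, (11.0, 10))
large_3am = knn_predict(train, (870.0, 3))
```

The domain knowledge here is in choosing the features and labels, not in the algorithm itself, which matches the "application is the heaviest lift" observation.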

For users of AI-enabled technologies, he continues, you need to be able to learn how to train the AI, as well as to trust it and explain it to others. For instance, users of Appvance’s AI autonomous testing need to know their application and how they could expand application coverage. They don’t need to understand the 19 algorithms working behind the scenes for them. But the thinking is different: you go from “what scripts must I write?” to “how can I use AI to find bugs for me?”.

Finally, according to Bogdan Grigorescu, AI Platform Manager at Combined Intelligence, AI talents need both technical and non-technical skills. It depends on the field, and on whether you are working with algorithms; if you are, Python is a must-have.

If you’re working with data, then data skills and statistics are essential. You don’t necessarily need to be a programmer.

Moreover, he adds, it is vital to be very collaborative and to communicate well with your team. You also need to be good at managing change because, he says, soon everything is going to be automated.


Why are AI skills so needed?

Everything is indeed changing, and it is vital that we are prepared for the next steps in technology, where AI will play a key role.

Adam then points out that AI skills are more important now because so many businesses are adopting AI. Indeed, many companies are starting to use AI-as-a-service, where they don’t necessarily have their own AI experts. Hence, it is even more important that there is solid testing and critical thinking surrounding any implementation.

Kevin emphasizes, however, that the development of platforms using AI is limited to a handful of people. So that’s not the big need. The larger need is taking QA testers and QA engineers out of the scripting business and into the robot-overlord business. They will be overseeing a machine that finds bugs. They then have to think “how can I best train it?”, “what do I really care about?”, and so on, he highlights. It’s not knowing how to create a neural net; it’s raising the QA person’s thinking from daily scripting tasks to the higher level of “I need to use this to release the best-quality product”.

For Cibe, AI can be used to solve problems across the board: it can help businesses increase sales, detect fraud, improve customer experience, automate work processes, and provide predictive analysis.

Bogdan, on the other hand, states that organizations first need to know whether they can cope with AI and more automation, and whether they actually have a problem suited to this type of automation. According to him, in more than 50% of cases, organizations aren’t suited for AI – their problems can be solved without it. Otherwise, there is a risk that the initiative will become obsolete and fail.

Yet, he points out, in certain cases there is a genuine need for AI – but first, you need to identify who needs it and why.


Most required AI skills in business

Adam points out that the AI skills required depend on the business.

Indeed, the first thing a business needs is a strategy for adopting AI. Depending on whether they build or buy, they may need to hire data scientists, data engineers, specialist managers, and architects, and their testers need to be AI-aware.

For Cibe, though, the primary focus should be on problem-solving and on use-case and domain understanding.


A talent shortage in AI?

Every software company is utilizing some level of machine learning now, Kevin tells me.

Yet the world didn’t train enough AI Ph.D.s, and there aren’t enough businesses offering AI autonomous testing. We need more AI expertise and a deep ability to apply AI to solve client problems faster and more efficiently.

Adam adds that, whilst the current generation of AI technology is becoming mature, and technical skills are developing well, the skills needed to plan, manage and test AI implementations are much less prevalent. That is, according to him, in part because the best practices are not mature yet. That said, there are now several training courses specifically aimed at testers working with AI systems.

Cibe emphasizes that domain and business understanding are the main skills lacking in some data scientists; the idea is to analyze the problem first and then apply algorithms on top of it.


The challenges…

Yet every advance brings many tests and trials, and artificial intelligence still has a lot left to explore…

One of the biggest challenges with AI, according to Adam, is unrealistic expectations from stakeholders. ‘Most people don’t understand that machine learning is not going to be right all the time, and using it is a trade-off between accuracy and automation.’

Hence, businesses need testers to think critically about this and identify things that can go wrong.
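The accuracy-versus-automation trade-off Adam describes can be made concrete with a confidence threshold: the higher the threshold, the fewer decisions get automated, but the more accurate the automated ones tend to be. The scores below are invented purely for illustration.

```python
# Hypothetical (score, correct) pairs from some classifier: `score` is
# the model's confidence, `correct` whether its prediction was right.
predictions = [
    (0.95, True), (0.91, True), (0.88, True), (0.85, False),
    (0.70, True), (0.65, False), (0.60, False), (0.55, False),
]

def automation_vs_accuracy(preds, threshold):
    """Automate only predictions scoring at or above `threshold`;
    everything below is deferred to a human.
    Returns (automation_rate, accuracy_of_automated_decisions)."""
    automated = [ok for score, ok in preds if score >= threshold]
    if not automated:
        return 0.0, None
    return len(automated) / len(preds), sum(automated) / len(automated)

low = automation_vs_accuracy(predictions, 0.5)   # automate everything
high = automation_vs_accuracy(predictions, 0.9)  # only high confidence
```

With the low threshold everything is automated but half the decisions are wrong; with the high threshold only a quarter of decisions are automated, and all of them are right. Picking the threshold is exactly the kind of trade-off testers need to think critically about.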

The main challenges, Cibe adds, could be more around technical expertise, as well as the cost factor, infrastructure challenges, and the business integration of results.

What we have seen is a lack of trust in the system and a need to understand each algorithm and decision an AI system makes, Kevin highlights. AI developers know it’s impossible to describe how every decision is made, but QA engineers must learn to trust the AI as an augmentation of their work. He emphasizes that developers and engineers must understand that AI is not a replacement but a technology that will find perhaps 10x more bugs than they could, in a tenth of the time. To get there, they need to understand how that’s possible, yet they don’t have years of AI background to draw on. It’s a bit of a quandary.

We see many people accept systems they cannot fully understand simply because those systems help them – they instantly love it. It helps them do their job far better than before and deliver a better-quality release. There is, then, a need to educate and train engineers and developers in AI-enabled technology so that they can better understand the future.

Bogdan lists a few challenges that come with AI, such as:

  • Scaling
  • Data and who gets access to it
  • Bias – how do we understand bias, how to detect it, and how to change it?
  • Training and coaching experts so they can develop and deploy AI in the best way

Another challenge that comes with AI is how it can alter the nature of work and, especially, possibly replace human workers in the future.


Altering the nature of work

On the other hand, AI can also positively affect the nature of work.

Indeed, for Cibe, AI in action and products like DataRobot will help drastically reduce the time data scientists take to solve a machine learning problem. Hence, the focus will shift to the nuts and bolts of deploying a scalable solution.

Adam also notes that ‘the more AI skills we have in business, the easier it will be for businesses to identify opportunities for AI solutions.’


An AI-centered future?

Adam believes that AI technologies are continuing to be deployed at pace. Yet many expect a slow-down in progress in the coming years, due to much-needed regulation, a growing realization of societal concerns, and the need for new areas of expertise (not just data science), such as testing AI.

Cibe also thinks that artificial intelligence can dramatically improve the efficiencies of our workplaces and augment the work humans can do. When AI takes over repetitive or dangerous tasks, it frees up the human workforce to do the work they are better equipped for—tasks that involve creativity and empathy among others.

Overall, he continues, the main technical challenge is how the model outputs from an AI solution are interpreted for the business, and whether the business outcome reconciles with the model output. This is still a million-dollar question that needs to be answered in any modelling application.

For Kevin, every software product will use some AI components, because there is always more to learn from your users. So AI will be 100% part of our lives. We won’t need to understand how it works, yet we will rely on it to do its job.

Finally, according to Bogdan, there will be more AI and automation in the future. But there will also be standards and regulations that will help develop it more safely, forcing companies to be more thoughtful about AI’s impact on society and to do it better.

However, he points out that there will also be more ethical questions around it, especially as more AI-enabled technologies such as deepfakes are created.


Special thanks to Cibe Sridharan, Adam Leon Smith, Kevin Surace, and Bogdan Grigorescu for their insights on the topic!