Could AI become dangerous?

The past few years have witnessed the dramatic rise of artificial intelligence (AI) and machine learning (ML) within the IT industry. It is likely that further improvements will come in the decades ahead and that, in time, our future will be increasingly shaped by AI. It is therefore crucial that AI is approached in the most secure way possible to ensure the best outcomes.

Today, AI refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term is also applied to any machine that exhibits traits associated with the human mind, such as learning and problem-solving.

In order to shed light on this ever-present topic, I have talked to various engineers and leaders in the field. They gave me their input on AI and its dangers and benefits as well as its role in the future.


Is AI the future?

These past few years have seen the rapid evolution of AI, and it is safe to assume that, sooner rather than later, our future will be dictated by it even more.

Arman Kamran, CTO of Prima Recon and enterprise Scaled Agile transition coach, told me that ‘AI has been part of our technology for a few decades now, but up until a decade ago it was busy growing slowly out of technical labs into multiple industries from healthcare to space.’ Indeed, he emphasized that the many improvements in processing power, as well as the drop in the cost of computer memory and data storage, gave AI the ‘boost it needed to rapidly grow into a key factor in today’s calculations for survival and growth of a business or industry.’

Moreover, Arman pointed out that ‘we are now even using AI to develop AI and expand it into new fields with unprecedented strength. At this point, there is no visible limit to this expansion in the horizon.’

Reinforcing that idea, Jitander Kapil, Head of DevOps at Larsen & Toubro, also underlined that AI is the future: it will not be confined to the technology sector, as industries and businesses of all kinds are going to be using it. AI is growing, and in time it will be involved in every domain.

We already use AI in our everyday life, even without noticing, emphasizes Cibe Sridharan, AI Engineer at Fractal. ‘For instance,’ he told me, ‘if you go on Google and search a particular keyword to buy a jacket, you will have many recommended options. Then, you will see all of these recommendations come into your feed. You will get the same ads on Twitter, Instagram, all of your social media platforms… which will only reinforce your decision to buy that particular product. And so, the future technology will be exclusively based on your personal needs.’ According to him, technology, and AI in particular, will use this data to influence your thinking. Thus, AI is driving the future of tech.
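The retargeting loop Cibe describes can be sketched as a toy content-based recommender. This is purely illustrative: the catalogue, tags, and scoring function below are invented, and real ad platforms use far more sophisticated models than keyword overlap.

```python
# Toy sketch of keyword-based product recommendation (hypothetical data).
def recommend(search_terms, catalogue, top_n=3):
    """Rank catalogue items by how many search terms appear in their tags."""
    def score(item):
        return len(set(search_terms) & set(item["tags"]))
    ranked = sorted(catalogue, key=score, reverse=True)
    return [item["name"] for item in ranked[:top_n] if score(item) > 0]

catalogue = [
    {"name": "leather jacket", "tags": ["jacket", "leather", "outerwear"]},
    {"name": "denim jacket",   "tags": ["jacket", "denim", "outerwear"]},
    {"name": "running shoes",  "tags": ["shoes", "sport"]},
]

# A search for a jacket surfaces only jacket-tagged products, which then
# follow the user around as ads on other platforms.
ads = recommend(["buy", "jacket"], catalogue)
```

The same matched items reappearing across platforms is what produces the reinforcing feedback loop Cibe points to.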

AI is only going to expand from now on, as it has proven itself ‘not only resourceful’ but also ‘an important factor in assisting governments and businesses’, Arman highlights. Indeed, AI has helped organizations in many ways, enabling them to:

  1. better understand their working environments,
  2. accommodate their clients and service recipients,
  3. provide a higher quality of services,
  4. optimize their service provisioning and manufacturing pipelines.

And all of this, as he says, can be achieved ‘with lower cost, less error, and higher predictability’.


AI brings innovation and progress

So, AI is driving the future of technology and provides industries with many resources and advantages.

Technology, as Arman points out, has always been considered a way to enhance human abilities in order to ‘achieve more and go beyond previous limits.’ With AI, it could be possible to envision ‘the best course of action and to act on it.’ Therefore, as he says, ‘this gives AI a unique perspective never enjoyed by technologies before, as it is taking away the need for human decision-making power, reducing it to a supervisory level of engagement which is already fading away as we cannot catch up fast enough with the complexity of their decision-making process.’

Moreover, he underlines that AI is a great supporting and enabling tool in a variety of sectors, one with ‘expanding utilization in some hot areas such as pandemic response (from predicting outbreak patterns and rise and fall of infection rates to the prescriptive distribution of resources to responders and balancing the predicted needs and shortages with the most optimized supply approach). AI is now even used in creating a predictive outcome of vaccine formulations before they are put to test.’

Jitander also adds that AI will bring major benefits to sectors such as agriculture and farming, as it will help human workers identify the best ways to grow and develop more than ever before. The same goes for the construction industry: AI will generate thousands of design patterns from which people can choose, something that is impossible to do manually and at this kind of pace. Cibe underlines that point as well, telling me that AI will cut down on unnecessary labor.

Furthermore, AI is becoming ever more present in our everyday life, as Cibe pointed out earlier, and it has made its way ‘into every corner of our supply chain of goods and service’, Arman tells me. In fact, AI reduces our dependency on humans for control and decision-making. This means that humans ‘can be kept away from dangerous working areas and hazardous environments.’ AI will take over operations that can be harmful to humans, giving them more secure working conditions. However, Arman also tells me that this is a double-edged sword: manufacturers and service providers will require fewer human workers, and thus they will ‘cut the cost of hiring and keeping their human workforce’.

As we are going to see, innovation doesn’t come without drawbacks.


Could AI take over?

The rapid evolution of AI doesn’t come without risks, and the faster we go, the more dangerous it can become.

When asked about the dangers of AI, Arman asserted that ‘danger has always existed in every technological innovation in history, from the ever-increasing trail of pollution caused by the first Industrial Revolution to the idea of Nuclear power generation to free use of pesticides everywhere into genetic modification of food and so on.’ AI is only a part of that as ‘it is on its path to outgrow human’s capacity to fully understand how it makes decisions and what is the base of its outcomes.’

Indeed, this would be the first time that our intellectual superiority would be taken away.

To shed some light on this, Arman recounts a conversation he had with AI leads from key players in Silicon Valley during a meeting in 2017: ‘After 2 hours of discussing, brainstorming and trying to picture a path, we ended up having no firm idea of where AI was leading us. The final outcome was that each individually announced that they believed it was too early to predict anything and that we couldn’t even say with certainty where we would be in 18 months. They also refused to acknowledge the risk that was brought up through research from my team projecting that – back in 2017, even with AI still in its infancy – it had the ability to take away over 1 billion jobs across the globe.

‘Three years later, we now have already arrived at a point where AI has the potential to take away or seriously threaten every single job from Pizza Delivery and Hairstylist Reception to far more complicated engineering or healthcare services. Add to that the great advances in robotics and you will see that AI can also have arms and legs much stronger than our physical capability.’ He also continues by pointing out that ‘corporations are not essentially accountable to their employees to keep them employed. Their accountability stands with their shareholders which means if they can run and stay profitable with only 10% of their former workforce, they will do that. After all, AI does not need paid leave, sick days, vacations, bonuses, or more importantly, any salaries. It does not need to rest, never forgets anything, and gets better at its work as time passes by. Corporations now need a much smaller space to run their business and don’t have to worry about providing a dignified, clean, healthy, and morale-boosting environment.’

Thus, as Arman stresses, AI is now capable of replicating human interactions and can combine the talents and brainpower of an entire group of humans together, only much quicker and with better quality. He gives the example of the new GPT-3, which can ‘write a resume, poetry, stories, and even create the software from scratch based on your verbal instructions’. ‘The algorithm cocktail that GPT-3 is using is created by multiple other in-house developed AIs because its complexity goes way above the human brain’s comprehension.’

Yet, Cibe emphasized the fact that AI is only dangerous if everything in life becomes automated. However, as he says, it is impossible to automate everything. According to him, the human component is irreplaceable and so we would always need human interactions and actions in the process. Cibe believes that as long as AI is made for good, it cannot be dangerous.


AI and security

Unfortunately, not everyone will use AI for good. In the wrong hands, AI can become a powerful and dangerous threat, something Jitander touches upon.

Indeed, Jitander highlights the fact that AI can be used for bad purposes, especially in the cyber world we live in. 2020 saw a rise in cyberattacks, and now, more than ever before, hackers are using new technologies to collect data and information and use them against people and businesses. According to him, it is up to every industry to understand the risks when developing a new AI-related project and to invest in security so it can be done safely. Without risks, there is no progress.

As we’ve seen, AI is evolving extremely rapidly and is becoming a tool for control and security. However, we must be careful to not take it too far…

Indeed, Arman brings to light the fact that AI is dangerously expanding into military and police systems, ‘from identifying threats on streets to differentiating friends from foes in a battleground.’ AI has already led ‘to the creation of functioning prototypes of autonomous weapons and regardless of how many prohibitions or sanctions are placed by United Nations or European Union or other global and regional bodies, they will continue to be developed in competition with the other sides in case they are needed in the future.’

This raises the question: how long will it be before AI is used as a military weapon? And how can we be sure it won’t turn into a public control device in the hands of future dictatorships?

For instance, Boston Dynamics’ robot dogs are already walking in public places in Singapore, watching people and warning them to keep social distancing. There are also “deepfakes”, which use recordings of a real person’s image or voice that ‘can be used to create videos of them saying or doing things they never did’. Jitander also emphasizes that point: AI could be used to incriminate people who did nothing wrong. There is nothing good about that. Likewise, he gives the example of ‘an app that was recently shut down by mobile app marketplaces after being reported to have been used to create over 100,000 fake naked pictures of women, without their consent, by just using their profile picture on social media.’

As innovation keeps on increasing, ‘we will continue to see AI-equipped malware that will try to better trick people into revealing their passwords and access credentials to their bank accounts and credit cards.’ In time, ‘more sophisticated ones will be expected to allow perpetrators to monitor someone’s daily routines and habits and establish traps for them to obtain compromising materials and blackmail them on that.’

Unfortunately, as Arman points out, if the ‘sky is the limit’ on how far AI can develop and grow, then so is the threat of its malicious use against the public or businesses.


Then, how can we tackle AI safely?

AI was first developed to help and benefit humans. To prevent it from going off the rails, we must find ways to keep us all safe.

Arman points out that there is no sure path that can guarantee the future safety of AI, or the future safety of humans with regard to AI. What we can do is mitigate the risks as much as possible. Hence, there is a need for ‘a special division or council in the United Nations to set up an ever-evolving and adapting set of regulations and sanctions to not only encourage, but also mobilize governments to establish their own internal checks and balances and enforce the safety measures that they are asked for.’

These regulations and standards could then serve as ‘guardrails’ for future development, allowing more control and slowing AI’s development into weapons and crowd-control devices.

Moreover, he warns against an intrinsic bias that has developed in AI’s output over the past few years. Indeed, ‘this bias can be against a certain demographic group of the society or a certain race or ethnicity which has its root in the hidden bias that may exist in the data that is used to train AI.’ For instance, he says, ‘if we are going to use data coming from hospitals in an area where there is a higher population of a certain ethnicity, then there is a chance that AI training would lead to expect people from this ethnicity to get sick more often than people from other ethnicities. This bias can cause a range of reactions from annoying or stressing the targeted ethnicity to causing serious hazards when it comes to prescribing medicine or medical procedures or enforcing health plans.’
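Arman’s hospital example can be made concrete with a minimal sketch. The record counts below are entirely invented; the point is only to show how a skewed sample produces a skewed inference without any real difference between the groups.

```python
# Minimal sketch of sampling bias (hypothetical record counts): the data
# over-samples sick patients from group A, so a naive frequency estimate
# "learns" that group A gets sick more often.
from collections import Counter

records = [("A", "sick")] * 80 + [("A", "healthy")] * 20 \
        + [("B", "sick")] * 5  + [("B", "healthy")] * 5

def sick_rate(group):
    """Fraction of this group's records labelled 'sick'."""
    counts = Counter(label for g, label in records if g == group)
    return counts["sick"] / sum(counts.values())

# The estimated rates differ sharply, an artefact of who ended up in the
# dataset, not of any underlying difference in health between the groups.
rate_a, rate_b = sick_rate("A"), sick_rate("B")
```

Any model trained on such records inherits the skew, which is why Arman stresses auditing training data, not just the model itself.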

Incorporating global standards of ethics, removing bias, and keeping a high-level view of where we are going will help maintain a safe environment for AI to grow without getting out of control. Arman points out that ‘we are already using AI to develop sensory and preventive measures against malicious use of AI by fraudsters and also hostile foreign governments trying to negatively influence and impact the lives of people.’ Therefore, we absolutely need ‘to find and then maintain the balancing point between AI-based defense and Human-based future.’

Yet, as he highlights, ‘we should not forget that we may soon be facing a new paradigm called “self-aware AI” which would already be able to develop other AIs and enhance and enrich its own thinking and decision-making abilities. We will then soon be faced with multiple “self-aware AIs” across the globe getting introduced to the world by their trainers with a variety of agendas and biases. Our young “self-aware AIs” will soon come in contact with each other and soon enough their communication and interaction will become more complex than we can safely understand or interpret.’

Overall, Arman concludes that ‘we should accept that this will be a live, ongoing and growing concern that needs AI itself to assist us in controlling and circumventing its dangers.’

In order to tackle AI safely, Jitander also emphasizes the need for more innovation in education and other sectors, so that people become more aware of the risks and dangers. According to him, educating people is necessary to build a better understanding of the technology.

To develop safe AIs, Jitander tells me, two things are important: first, implement security at the very beginning of the building process; and second, keep investing in innovation so that it stays safe and secure. For this, industries and businesses need better innovation budgets and widespread knowledge about AI and everything that revolves around it.

Only with education and knowledge can we reach a better and safer future.



Jitander concludes by telling me that AI is part of our everyday life; today, we can hardly do anything without it. So, there needs to be an innovation hub for this kind of technology to become widely available, so people can use it for the global good. This is the aim. People still see it as a technology reserved for specialists, but this is not true. We need to learn, so we can handle it better. Education and innovation are key!

For Cibe, AI can give recommendations, but it doesn’t replace humans. Even a robot built to mimic humans would still not be human. AI is here to help, and to help for good. In the near future, there is no reason to fear AI.

Arman, on the other hand, tells me that we have no way of knowing where this will go so we should just try to live with this concern while always striving to make it safer every day. He ends with ‘the rest looks like a great plot for a Sci-Fi horror movie, with the added excitement that we cannot guess in which direction that story would go. Let’s hope they don’t like watching movies and would especially hate the “Terminator” saga!’


Thanks to Arman Kamran, Jitander Kapil, and Cibe Sridharan: your input has been invaluable in shedding light on this topic!