
Musk’s warning on AI genie
Samira Vishwas | May 13, 2026 2:24 PM CST

The ongoing court dispute between Elon Musk and OpenAI brings a larger conflict to the fore: can such a powerful technology be left entirely to market forces?

Dr. Shivam Bhardwaj, Associate Professor

In the modern world, cybersecurity is no longer merely a technical subject. It has become a question of national security, economic stability and the democratic order. As a large part of the world grows dependent on digital infrastructure, the scope of cyber threats is increasing in proportion. At such a time, Artificial Intelligence is being seen as a tool that can multiply our capacity to deal with these threats, but this is also the point where hope and apprehension stand side by side.

The ongoing court dispute between the well-known tech entrepreneur Elon Musk and OpenAI brings this larger conflict to the fore. Musk alleges that OpenAI, which was founded to promote safe and human-friendly AI development, is now moving in a different direction under the pressure of huge investment and commercial competition. It would be easy to dismiss this dispute as a mere personal or business quarrel, but the question it raises is far more serious: can such a powerful technology be left entirely to market forces?

Musk has also warned that in the coming years AI could surpass human decision-making ability and begin to act arbitrarily. One may disagree with his apprehensions, but it is true that AI is no longer just a system that follows instructions. It recognizes patterns, sets priorities and sometimes takes decisions faster than humans. This is why the debate about AI is now not about technical convenience but about control and accountability.

Companies like Anthropic are developing advanced AI models that aim to proactively identify vulnerabilities in the world’s digital infrastructure—networks, software, and cyber systems. The claim is that these models can analyze millions of lines of code and network activity and find flaws that human experts may not catch for years. In this context, initiatives like ‘Project Glasswing’ become important.

At first glance this technology looks like a major achievement in cybersecurity. Governments, banking systems, power plants, hospitals and defence institutions around the world face the constant threat of cyber attacks. If AI can detect possible attacks in advance and strengthen security systems, it seems naturally useful.

The problem starts here. If an advanced AI model can discover network vulnerabilities at extraordinary speed, that same capability could become a weapon in the hands of a cybercriminal or a hostile organization. This is why some companies are reluctant to release their most powerful models publicly and have shared them only in a limited way with select tech companies and security institutes.

There have already been cases where AI-based systems made serious mistakes. Some algorithms used in health services gave lower priority to patients in genuine need because they were trained on biased data. In the real-estate sector, algorithmic models failed to grasp the complexity of the market and caused huge economic losses. Some chatbots reinforced the delusions of people in fragile mental states instead of correcting them. These incidents made it clear that machines may appear logical, but their logic does not always make sense in a human context.

Musk's concerns point broadly in the same direction. He says that if AI development proceeds only with a mindset of profit and competition, safety and ethics will be left behind. This fear does not seem entirely unfounded: the current AI race involves billions of dollars of investment and a rush to be first to market. In such an environment, voluntary restraint is unlikely to last long.

The impact will not be limited to business or cybersecurity alone. Democracy, too, is not untouched. Deepfakes, disinformation and targeted digital propaganda are already influencing public discourse, and with the help of AI the line between truth and lie is becoming more blurred. This is why many experts now regard AI not only as a technical challenge but also as a political and social one.

This is where the question of policy and governance becomes central. Steps have been taken towards AI regulation in some parts of the world, but AI is global in nature, so its solution will only be possible at the global level. Technological progress alone will not be enough; accountability, transparency and clear rules will also be necessary.

Ultimately, AI is not a question of machines but of human discretion. We are developing a power that can become our greatest help and also our greatest crisis. Today AI is still within human control, but this control will not maintain itself; it will have to be maintained, because in the future the biggest question will not be what machines can do, but what humans allow them to do.
(These are the personal views of the author)

