SEBI AI Risk Advisory: Why Artificial Intelligence in Markets Is Becoming Dangerous

India’s market regulator SEBI is preparing an advisory for market intermediaries on emerging risks from artificial intelligence tools. Reuters reported on May 4, 2026, that SEBI Chairman Tuhin Kanta Pandey said the regulator will soon issue an advisory on risks from Anthropic’s Mythos and other AI tools. He also said SEBI is in touch with stakeholders on AI-related threats.

This matters because AI is no longer limited to chatbots or simple automation. In financial markets, AI can be used for trading support, risk checks, fraud detection, client monitoring, cyber defence and investment-related workflows. The problem is that the same technology can also create new risks if models are opaque, poorly tested, biased, overused or connected to sensitive market systems without strong controls.


What Exactly Has SEBI Said About These AI Risks?

Business Today reported that SEBI is concerned about newer AI models and AI-led vulnerability detection tools. Tuhin Kanta Pandey said SEBI is in constant touch with market participants and relevant stakeholders because newer models test market resilience. He also said SEBI will soon issue an initial advisory on risks coming from such models and AI-led vulnerability detection tools.

The key point is not that SEBI wants to stop AI use completely. The regulator’s concern is preparedness. Pandey said market players should use available tools proactively to find vulnerabilities and patch them. That means SEBI’s focus is likely to be on control, monitoring, cyber safety and accountability rather than banning AI from financial systems.

What Are The Main AI Risks In Financial Markets?

SEBI’s concern fits into a wider financial-regulation problem: AI can improve speed and efficiency, but it can also create hidden risks. RBI’s former Governor Shaktikanta Das warned in 2024 that growing use of AI and machine learning in financial services can create financial stability risks. He specifically pointed to concentration risk, cyber attacks, data breaches, opacity and difficulty in auditing AI systems.

| AI Risk Area | What It Means In Simple Terms | Why It Matters For Investors |
| --- | --- | --- |
| Cyber vulnerability | AI can find weak points in systems faster | Exchanges, brokers and platforms need stronger defence |
| Model opacity | Users may not know how AI reached a decision | Bad recommendations can become hard to trace |
| Data privacy risk | AI systems may process sensitive customer data | Investor information can be exposed or misused |
| Bias and fairness | AI may produce unequal or distorted outcomes | Small investors may get poor or unfair treatment |
| Concentration risk | Many firms may depend on few tech providers | One failure can affect many financial institutions |
| Over-reliance | Humans may blindly trust AI outputs | Wrong signals can spread faster in markets |

This table shows why SEBI’s advisory is not just a technology update. It is an investor-protection issue. If an AI model used by a broker, exchange, mutual fund or advisory platform fails, the damage may not remain limited to one user. In fast-moving markets, a weak model can affect orders, advice, compliance alerts and risk systems together.

What Has SEBI Already Done On AI And ML?

SEBI had already started examining AI and machine learning use in Indian securities markets before this advisory news. On June 20, 2025, SEBI released a consultation paper on guidelines for responsible usage of AI and ML in Indian securities markets. The official SEBI page shows that the consultation paper was opened for public comments.

Economic Times reported that SEBI’s 2025 consultation paper proposed five broad principles for responsible AI and ML use: model governance, mandatory disclosure, robust testing and monitoring, fairness and bias controls, and data security. These points show that SEBI’s 2026 advisory is not coming out of nowhere. It follows an existing regulatory direction around safer AI use in securities markets.

Why Is AI In Stock Markets Riskier Than Normal AI Use?

AI in stock markets is riskier because market decisions involve real money, real-time prices and millions of investors. A wrong AI-generated answer in a normal search tool may mislead a user. A wrong AI-driven signal in financial markets can trigger bad trades, poor advice, compliance failures or cyber exposure. That difference makes regulation more urgent.

RBI’s warning also fits this concern. Das said AI’s opacity makes it difficult to audit and interpret algorithms that drive financial decisions, which can lead to unpredictable consequences in the market. This is exactly the kind of risk that becomes serious when AI is connected to investment advice, trading systems, surveillance engines or customer onboarding.

How Could This Affect Brokers, Exchanges And Investors?

For brokers and market intermediaries, the likely impact is stronger internal review of AI systems. They may need better documentation, testing, human oversight, cyber checks and vendor-risk controls. If a broker uses AI for client support, trading tools, advisory workflows or compliance monitoring, it may have to prove that the system is safe, explainable and properly supervised.

For investors, the advisory could improve protection against AI misuse, but it will not remove all risk. Investors should still be careful with AI-generated stock tips, automated trading claims and social media advice. SEBI has already been using AI for surveillance as well; Economic Times reported in February 2026 that SEBI was deploying AI to track misconduct such as insider trading, unregistered investment advice and misleading financial promotions.

Why Is Cyber Risk A Big Part Of This Story?

Cyber risk is central because advanced AI tools can be used to detect software vulnerabilities faster. Business Today reported that concerns around models such as Mythos centre on their advanced capabilities for analysing systems and identifying zero-day vulnerabilities. SEBI’s chairman also said stakeholders need to be alert and proactive in finding and patching vulnerabilities.

Finance Minister Nirmala Sitharaman had also urged SEBI to increase global regulatory consultations and use AI to manage cyber risks. Reuters reported on April 25, 2026, that she said stronger global understanding of SEBI’s frameworks would increase confidence of global capital in Indian markets. This shows that cyber-risk management is now part of India’s broader market-trust agenda.

What Is The Conclusion?

SEBI’s upcoming AI risk advisory is important because artificial intelligence is now touching sensitive parts of the financial system. Reuters reported that the advisory will focus on emerging risks from Anthropic’s Mythos and other AI tools, while SEBI is already speaking with stakeholders on AI-related threats. This is a clear signal that AI risk has moved from a future concern to a present regulatory issue.

The data-based takeaway is simple: SEBI is not treating AI as only a productivity tool. It is looking at AI as a market-risk, cyber-risk and investor-protection issue. For brokers, exchanges and intermediaries, this means stronger controls are coming. For investors, the lesson is even clearer: never blindly trust AI-generated market advice without checking whether it comes from a regulated, accountable source.

FAQs

What Is SEBI’s AI Risk Advisory About?

SEBI is preparing an advisory for market intermediaries on emerging risks from AI tools, including Anthropic’s Mythos and similar technologies. Reuters reported that SEBI Chairman Tuhin Kanta Pandey said the regulator is in touch with stakeholders on AI-related threats.

Has SEBI Already Worked On AI Rules Before?

Yes, SEBI released a consultation paper on June 20, 2025, on guidelines for responsible usage of AI and ML in Indian securities markets. Economic Times reported that the proposed framework included model governance, disclosure, testing, fairness and data security.

Why Is AI Risky For Stock Market Investors?

AI is risky for investors when it is opaque, poorly tested, biased, vulnerable to cyber misuse or used without human oversight. RBI has warned that AI can create concentration risk, cyber attack exposure, data breach risk and auditability problems in financial services.

