EU AI Act Full Enforcement Begins: A Plain-English Guide for Indian Tech Exporters
Amit Yadav
The EU AI Act is now fully in force. Indian tech exporters selling AI products to European clients face new obligations around transparency, documentation, and human oversight — with fines up to €35 million.
The European Union's AI Act — the world's first comprehensive AI regulation — has entered full enforcement, applying to any AI system deployed within EU borders, regardless of where the developer is based. For Indian SaaS companies, AI service providers, and IT exporters that sell to European customers, the implications are significant and immediate.
Who Does It Affect?
Any Indian company whose AI product is used by individuals or organisations in the EU must comply. This includes HR software that uses AI for resume screening, lending platforms with AI credit models, medical AI sold to EU hospitals, and recommendation engines used by EU consumers. The Act applies on the basis of where the AI is used, not where it is built.
The Risk Classification System
The Act classifies AI systems into four risk tiers. Most software-as-a-service products fall into the "limited risk" or "high risk" categories:
- Unacceptable risk: Banned outright. Includes real-time biometric surveillance in public spaces and social scoring systems.
- High risk: Subject to strict requirements. Includes AI used in hiring, credit, healthcare, critical infrastructure, and law enforcement. Requires conformity assessments, technical documentation, and human oversight mechanisms.
- Limited risk: Transparency obligations only. Chatbots must disclose they are AI. Deepfakes must be labelled.
- Minimal risk: No obligations. Spam filters, AI in games, etc.
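For teams taking a first inventory of their product portfolio, the four tiers above can be sketched as a simple lookup. This is an illustrative triage aid built only from the examples named in this article — the `triage` function and the use-case labels are hypothetical, and the Act's actual classification depends on annexed criteria and legal review, not string matching:

```python
# Hypothetical first-pass triage of AI use cases against the Act's
# four risk tiers, using only the examples cited in this article.
# Labels are illustrative, not a legal taxonomy.

RISK_TIERS = {
    # Unacceptable: banned outright
    "social scoring": "unacceptable",
    "real-time public biometric surveillance": "unacceptable",
    # High risk: conformity assessment, documentation, human oversight
    "resume screening": "high",
    "credit scoring": "high",
    "medical diagnosis support": "high",
    # Limited risk: transparency obligations only
    "customer-service chatbot": "limited",
    "deepfake generation": "limited",
    # Minimal risk: no obligations
    "spam filtering": "minimal",
    "in-game AI": "minimal",
}

def triage(use_case: str) -> str:
    """Return the tier for a use case listed in this article,
    or flag it for proper legal review if unknown."""
    return RISK_TIERS.get(use_case, "unclassified - seek legal review")
```

A lookup like this is only useful for flagging which products need a full conformity assessment first; anything it cannot classify should default to legal review, never to "minimal".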
Penalties
Non-compliance fines reach up to €35 million or 7% of global annual turnover, whichever is higher — among the steepest penalties in any technology regulation to date, exceeding even the GDPR's 4% ceiling. India's IT ministry is reportedly working with NASSCOM to produce compliance guidance for member companies.