Within 72 hours of the recently concluded West Bengal Assembly elections, the Election Commission of India (ECI) released detailed statistical reports and index cards showcasing the capabilities of ECINet, its digital electoral platform formally launched in its full professional version in January 2026. During the special intensive revision (SIR) for the same elections, however, the ECI neither disclosed the status of nearly 34 lakh appeals, including seven lakh deletion appeals, pending before the 19 Supreme Court-appointed tribunals, nor released comparable transparency reports, despite all the relevant information being readily available within ECINet.
Reportedly, one tribunal headed by the former Chief Justice of the Calcutta High Court disposed of 1,777 appeals, allowing all 1,717 citizen appeals for inclusion and rejecting all 60 ECI appeals for deletion. Similar large-scale corrections may have occurred before the other 18 tribunals. In contrast, the ECI reportedly included only about 1,607 voters before polling.
Such selective disclosure and inconsistent transparency raise serious questions about the ECI's functioning and neutrality, and point to the suppression of equally significant information that could influence electoral outcomes.
The absence of comparable status disclosures for SIR, despite reports of large-scale discrepancies and disenfranchisement, points to institutional double standards within the ECI. As a constitutional authority, the ECI commands the highest institutional dignity and trust. Equally, it must remain accountable, transparent, and open to objective scrutiny, especially when allegations of arbitrariness, bias, and large-scale exclusion of genuine voters emerge.
Against this backdrop, an independent AI-enabled oversight layer integrated with ECINet could continuously assess electoral roll revision processes for neutrality, consistency, and procedural arbitrariness. The proposed AI watchdog framework is straightforward to implement, with a foundational operational model achievable within a few months and capable of continuous enhancement thereafter.
Failures in SIR 2.0
SIR 2.0 exposed unprecedented chaos driven by ad hoc, ever-changing, and subjective SOPs (Standard Operating Procedures) that reportedly excluded millions of genuine voters from electoral rolls and, in several cases, denied candidature rights. What began as an exercise to improve electoral accuracy by removing ASDD (absentee, shifted, duplicate, dead) entries and adding new voters instead resulted in widespread uncertainty, repeated verifications, prolonged appeals, and allegations of arbitrariness, discrimination, and bias.
The exercise relied heavily on inaccurate, incomplete, and non-searchable legacy SIR 2002-04 databases. Instead of correcting defects at the source, the burden of proof was shifted onto voters, forcing genuine citizens to repeatedly establish their eligibility despite long voting histories and valid documents. The process was further marked by uneven application of the logical discrepancy criteria across regions and voter groups, resulting in non-uniform outcomes for similarly situated voters. Minor mismatches in names, ages, or family details often led to exclusions, while opaque decision-making and the absence of reasoned orders fuelled allegations of arbitrariness and algorithmic bias.
The consequences were most alarming in West Bengal, where only about 1,600 inclusion appeals and merely six deletion cases were reportedly disposed of before polling, out of nearly 3.4 million pending appeals, even though inclusion appeals reportedly had a success rate exceeding 99%. Those excluded reportedly included electoral officials and prospective candidates. Notably, one such excluded individual, later cleared for inclusion, went on to be elected as an MLA.
In an unprecedented situation, 49 Assembly constituencies reportedly recorded victory margins lower than the number of voters awaiting disposal of inclusion appeals. The apex court observed that relief for many may come only in future elections and that post-election scrutiny may be necessary in constituencies where victory margins fall below the scale of discrepancies and pending appeals, raising serious concerns over electoral integrity and the possibility of post-election chaos. The situation reflects not merely administrative failure, but a deeper crisis of credibility in the electoral roll revision process itself.
These developments exposed deeper structural weaknesses in electoral roll management. They stood in sharp contrast to the ECI’s repeated commitment to “ensuring free, fair, transparent, accessible and peaceful elections” and its assurance that “no genuine voter is disenfranchised.”
More significantly, this occurred despite ECINet reportedly being capable of handling three crore hits per minute and maintaining detailed operational data for every voter and transaction. Yet neutrality, consistency, and accountability continued to depend largely on opaque manual processes, administrative discretion, and post-facto correction. The SIR 2.0 experience, therefore, underscored the urgent need for a continuous, technology-driven oversight mechanism capable of monitoring processes, detecting anomalies, assessing institutional neutrality, and identifying discriminatory patterns in real time.
AI oversight for ECINet
As AI increasingly powers governance and large-scale public systems, electoral management too requires intelligent, continuously auditable oversight. Embedding an AI-enabled watchdog within ECINet offers a practical pathway to build a neutrality-aware electoral roll management system capable of safeguarding democratic participation and public trust.
Integrated directly with ECINet, the proposed AI layer would function as a continuous oversight and analytics engine. It would monitor system usage, track decision patterns, analyse voter-official interactions, and generate real-time indicators of neutrality, consistency, efficiency, and citizen satisfaction at booth, constituency, district, and State levels. Unlike post-facto reviews, it could continuously audit electoral roll revision processes using transactional and procedural data already available within ECINet, enabling early detection of irregularities before they escalate into large-scale disenfranchisement or administrative crises.
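To make this concrete, here is a minimal sketch of how such continuous audit indicators could be computed from transaction-style logs. The record fields, constituency labels, and function names below are illustrative assumptions for exposition; they are not the actual ECINet schema or API.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical, simplified transaction record; field names are
# illustrative assumptions, not the real ECINet data model.
@dataclass
class Transaction:
    constituency: str
    action: str        # e.g. "appeal_filed", "appeal_disposed"
    days_pending: int  # age of the case at the time of this event

def audit_indicators(records):
    """Aggregate simple oversight indicators per constituency:
    the appeal disposal rate (disposed / filed) and the mean
    pendency, in days, of the appeals that were disposed of."""
    filed = defaultdict(int)
    disposed = defaultdict(int)
    pendency = defaultdict(list)
    for r in records:
        if r.action == "appeal_filed":
            filed[r.constituency] += 1
        elif r.action == "appeal_disposed":
            disposed[r.constituency] += 1
            pendency[r.constituency].append(r.days_pending)
    report = {}
    for c in filed:
        d = disposed.get(c, 0)
        report[c] = {
            "disposal_rate": d / filed[c],
            "mean_pendency_days": (sum(pendency[c]) / d) if d else None,
        }
    return report
```

A real deployment would compute such indicators continuously at booth, constituency, district, and State levels, but even this toy aggregation shows how a disposal rate of a fraction of a per cent, of the kind reported during SIR 2.0, would surface immediately rather than after polling.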
The system could automatically flag anomalies and discriminatory patterns, including unusual spikes in deletions, inconsistent application of SOPs, repeated rejection trends linked to specific officials, excessive grievance delays, abrupt policy shifts, bias arising from logical discrepancy filters, disproportionate exclusions due to minor spelling or family-data mismatches, and concentrated deletions in specific regions, castes, or communities. It could also compare outcomes across regions to identify differential treatment of similarly placed voters and enforce consistency in institutional communication by tracking announcements, circulars, deadlines, SOP revisions, and field instructions.
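As an illustration of the kind of anomaly flagging described above, a crude statistical sketch could use z-scores to surface sudden spikes in deletions and officials whose rejection rates deviate sharply from their peers. The thresholds, data shapes, and function names here are assumptions for exposition, not the proposed system's actual design:

```python
import statistics

def flag_deletion_spikes(daily_deletions, threshold=3.0):
    """Flag days whose deletion count lies more than `threshold`
    standard deviations above the mean of the series."""
    mean = statistics.mean(daily_deletions)
    stdev = statistics.pstdev(daily_deletions)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(daily_deletions)
            if (x - mean) / stdev > threshold]

def flag_outlier_officials(rejections, decisions, threshold=3.0):
    """Flag officials whose rejection rate is an outlier relative to
    peers. `rejections` and `decisions` map official id -> counts."""
    rates = {o: rejections[o] / decisions[o]
             for o in decisions if decisions[o]}
    mean = statistics.mean(rates.values())
    stdev = statistics.pstdev(rates.values())
    if stdev == 0:
        return []
    return [o for o, r in rates.items() if (r - mean) / stdev > threshold]
```

A production system would, of course, use more robust methods (seasonal baselines, demographic stratification, multiple-testing corrections), but the design point stands: each flag is a reproducible computation over existing transactional data, not a discretionary judgment.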
Further, continuous analysis of bottlenecks, software glitches, verification failures, grievance trends, and operational inefficiencies could support evidence-based refinement of SOPs, replacing ad hoc administrative responses with measurable corrective action.
Way forward
An AI-enabled watchdog integrated with ECINet could continuously monitor electoral operations, assess institutional neutrality, detect anomalies and discriminatory patterns, and flag inconsistencies or shifting eligibility criteria.
It could also standardise announcements, deadlines, and procedural updates, reducing confusion and non-uniform implementation across regions.
With ECINet already fully operational, AI-driven oversight could make SIR processes more transparent, neutral, accountable, and citizen-centric. Importantly, such a system would strengthen — not replace — constitutional authority through transparent audit trails, fairness metrics, evidence-based oversight, and measurable accountability, while reducing arbitrariness, opacity, and public distrust.
(Rajeev Kumar is a former Professor of Computer Science at IIT Kharagpur, IIT Kanpur, BITS Pilani, and JNU, and a former scientist at DRDO and DST)


