Special | As AI Took Over Policing in Delhi, Who Bore the Brunt?
Astha Savyasachi
This story was produced in partnership with the Pulitzer Center.
New Delhi: In the early hours of a March morning in 2020, Ali’s* life changed forever – and AI-powered facial recognition technology was at the centre of it.
Ali was arrested from the narrow alleys of Chand Bagh – a poor locality in Northeast Delhi. What followed was more than four and a half years of pre-trial incarceration. Trapped in a muddle of legal delays and procedural limbo, he waited as his case crept through the Indian judicial system until he was finally granted bail.
"I was beaten mercilessly; on some occasions, the assault was so severe that there was profuse bleeding and my flesh was torn. They used batons against me, and there were times I was kicked so brutally that I struggled to breathe," he said about those days. The Wire has asked the commissioner of police, Delhi and the secretary, Union Ministry of Home Affairs, to respond to these allegations of custodial torture, but no response had been received till the time of publication.
Ali is one of the 29 accused in the Ratan Lal Murder Case (FIR 60/2020). The case pertains to the communal violence that erupted in Delhi on February 24, 2020, when, according to the police, protestors against the controversial Citizenship (Amendment) Act used sticks, baseball bats, iron rods and stones to attack policemen on Wazirabad Road, Chand Bagh. According to the chargesheet, Head Constable Ratan Lal was struck by a bullet and later succumbed to this injury.
Despite chargesheets asserting the presence of “sufficient material” against several of them, 27 of the 29 accused have been granted bail, and one has been discharged from the case. In the years spent behind bars, with no trial in sight, many, like Ali, have lost loved ones and seen their livelihoods vanish, and now face crushing debt.
How were they ‘identified’?
The investigating officer of the Ratan Lal Murder Case, Inspector Gurmeet Singh of the Crime Branch, told The Wire that the case was ‘solved’ using advanced technologies, including video and image enhancement tools (Amped FIVE by Amped Software) and facial recognition software (AI Vision by Innefu Labs). He confirmed that all the accused named in FIR 60/2020 were identified using these tools.
Special Public Prosecutor Amit Prasad, who has been representing the Delhi Police, told the Delhi high court during a hearing, “The usage of AMPED software based on digital recognition is sufficient to establish [the identity of] the correct individual.” Though Prasad refused to respond to The Wire’s queries, he confirmed that Amped tools were used by the police.
The police obtained CCTV footage from multiple alleys near the crime scene in Chand Bagh. Additionally, three private videos were submitted by onlookers: Harsh’s* video (1.48 minutes) recorded from Gym Body Fit Garage, Skyride Video (1.37 minutes), and Yamuna Vihar Video (40 seconds).
Defence counsel Raman*, who represented Ali in the Delhi high court, told The Wire that the police obtained CCTV footage from the surrounding alleys and selected frames capturing a full-frontal or even a side-profile view of each accused. He added, “These images were then fed into the facial recognition software and matched against the individuals appearing in the three private videos acquired by the police.”
Raman stated that Ali was ‘identified’ through facial recognition software: “Even the methodology of facial recognition was not disclosed, which is quite surprising. Based on what the prosecution said in court, I gather that the police had CCTV footage of the alleys/gullies near the scene of the crime. They seemed to have extracted images from that footage and matched them against the private videos (like Harsh’s*) using the FRS. Even if I assume that the CCTV footage was correct, I can surely say that the persons in the Harsh* video and the CCTV footage do not match.”
Advocate Raman explains that the person in Harsh’s video, who is seen pelting stones at the police, is wearing a black shirt with a white jacket, while Ali, in the CCTV footage of the main road, is wearing a different coloured shirt and no jacket. Describing the video, he says, “Furthermore, the Harsh video was a side profile of a person standing far back, rather against the wall. He could not be part of the melee of people conducting a physical attack. But the police allege that Ali threw a stone from roughly 50 to 100 meters away onto the police crowd. Whether the stone landed or caused any injury is neither mentioned nor established.”

Illustration: Pariplab Chakraborty
Since his arrest, Ali has been battling with severe depression. The toll on his family has been equally devastating. His mother, who weighed around 68 kg in 2020, has withered to just 30–35 kg. Her blood sugar often spikes to critical levels, sometimes reaching as high as 500. Years of poverty and malnutrition in Ali’s absence have caused her to lose all her teeth, and she has now begun experiencing bleeding during urination.
Ali said, “Our financial situation has deteriorated to a point where we cannot even afford basic medical treatment for our mother. We are trapped in a severe financial crisis, burdened by a debt of nearly Rs 20–25 lakh, and it feels as though our lives have been pushed back by at least two decades.”
Speaking to The Wire, he also described systemic discrimination against Muslim prisoners in jail. He alleged that they were routinely humiliated, asked their names, and if identified as Muslim, were assigned degrading tasks. "We were forced to scrub toilets and mop floors with our bare hands, denied even basic cleaning tools like wipers," he recalled. The abuse extended beyond physical violence. "I was constantly humiliated, called a terrorist, and subjected to unbearable psychological torment. I spent countless days crying and praying – as did my mother," he alleged.
Speaking about the desperation during the COVID-19 lockdown, Ali explained, "Prisoners would fight for the slightest chance to help unload heavy supply trucks, just to earn 5–7 extra biscuits. The food rations were grossly insufficient – often just 50–100 ml of watery khichdi and two teaspoons of vegetables, barely enough to survive."
Mohammed* is another accused in the Ratan Lal Murder Case, who, according to his lawyer Advocate Uday*, was also ‘identified’ using facial recognition software. He had to spend around two years behind bars without any trial before he finally walked out on bail. Despite being an undertrial and not yet convicted of any crime, he, like Ali, was also compelled to clean toilets and perform menial labour, even though prison labour for undertrials is supposed to be only voluntary. His bail pleas were rejected not once but four times, even as his wife struggled through a difficult pregnancy and his parents' health rapidly declined in his absence.
He told The Wire that in one of the videos, the police had ‘identified’ a man as him, even though the individual was at least five inches taller than him, with noticeably different hair length, footwear and even a different number of shirt pockets. The ‘identification,’ he pointed out, hinged primarily on clothing – specifically a white shirt and black pants, an outfit worn by several individuals in the footage. This allegation raises serious concerns not just about the accuracy of the identification but also about the reliability of the tools used.
Advocate Uday took the high court through one of the private videos collected by the police. He argued that his client could not be clearly identified, as he was not distinctly visible in the footage. Moreover, the clothes worn by him (white shirt and black pants) were similar to those of many others present in the video, thereby failing to establish his identity at the scene of the crime. Uday also pointed out that no available camera footage captures Mohammed damaging CCTV cameras, contrary to the prosecution’s allegations. He added, “The logo present on [his] white shirt appears in one video and is absent in another. Despite this, the police claimed that the two persons in the videos are the same.”
Both the defence counsels told The Wire that facial recognition was used to ‘identify’ Mohammed and Ali: “No Test Identification Parade (TIP) was conducted for either of them, which means whoever identified them already knew them and their physical description.”
In several cases, while granting bail to the accused, the court observed that both the authenticity of the video footage and the validity of its analysis are issues to be examined during the trial.
Uday contends that since all the accused are residents of the area where the incident occurred, their presence in the vicinity is not surprising. He further asserts about Mohammed, “His mere presence in the alley near his house does not conclusively establish that he was involved in rioting which took place near the main road. There are numerous cases of gang fights, free fights and communal rioting, where many people are just curious bystanders. We call it the ‘curious bystander exception’ to the principles of unlawful assembly.”
Mohammed, who once ran a modest store selling second-hand bags to support his family of five, has been left penniless after his incarceration. His shop is gone, and his livelihood is shattered. Today, he is forced to sell those bags on the pavement outside Jama Masjid – an existence that is not only precarious but also irregular, as he is frequently summoned for court appearances that interrupt any chance of stability.
Inside the prison, he was allegedly also subjected to discrimination and degrading treatment by police authorities solely because he was Muslim. He keeps repeating in a low, broken voice: “Jail bahut buri jagah hai (Jail is a very bad place).”
To survive and continue fighting his legal battle, Mohammed has been forced to take on loans amounting to several lakhs of rupees – a crushing burden that only grows heavier by the day. With no steady income, no end to his legal ordeal in sight, and a family still depending on him, he sees no hope.
After reviewing police reports, filing multiple RTI applications, speaking with investigating officers, defence lawyers and AI experts, and analysing court documents related to cases involving more than 50 accused individuals, The Wire found at least one case in which several accused were “identified” using facial recognition technology – sometimes based solely on side or even rear profiles captured in video footage. Despite the absence of any public witnesses confirming the presence of these accused at the crime scene, the police allegedly proceeded with the arrests.
Also, in response to an RTI filed in 2022, the Delhi Police acknowledged that facial recognition technology was used to investigate “over 750 cases related to the North East Delhi riots” and that the results were presented as evidence against those arrested. That accounts for at least 98.9% of all riot-related cases (758 in total) being “solved” with the help of facial recognition technology.
Similarly, in March 2020, Union home minister Amit Shah told the Rajya Sabha that 1,922 perpetrators had been identified through facial recognition software, comprising at least 73.3% of the 2,619 people arrested in connection with the riots till last year.
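The shares quoted above follow directly from the official figures; a quick check of the arithmetic (using only the numbers cited in the RTI replies and in the Rajya Sabha):

```python
# Figures cited by the Delhi Police (RTI, 2022) and the home minister (March 2020).
frs_cases = 750      # riot cases investigated using facial recognition
total_cases = 758    # total riot-related cases
identified = 1922    # perpetrators identified via facial recognition
arrested = 2619      # people arrested in connection with the riots

case_share = frs_cases / total_cases * 100
arrest_share = identified / arrested * 100

print(round(case_share, 1))    # 98.9
print(round(arrest_share, 1))  # 73.4 – consistent with "at least 73.3%"
```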
However, media reports indicate that more than 80% of the cases heard so far have resulted in acquittals or discharges, raising serious questions about the reliability of a technology the police appear to have relied on so heavily.
The cases of Ali and Mohammed underscore this troubling trend: both were arrested solely on the basis of facial recognition matches, without any corroborating evidence or credible public witness accounts. What drove the police to place such unwavering faith in a technology that is criticised globally for its inaccuracies, inherent biases and potential to cause wrongful incarceration and infringe upon human rights?
Why are the police rushing to use unaccountable AI technologies?
Months after Prime Minister Narendra Modi came to power, he rolled out the five-point concept of SMART policing (Strict and Sensitive, Modern and Mobile, Alert and Accountable, Reliable and Responsive, as well as Tech-savvy and Trained) at the 49th All India Conference of Director Generals/Inspector Generals of Police and heads of all central police organisations on November 30, 2014.
Since then, police forces across Indian states have hastily plunged into the race to adopt Artificial Intelligence (AI)-based tools for law enforcement, with numerous awards and government initiatives introduced to encourage the trend. Many state police departments have begun relying on AI-based Automated Decision-Making Systems (ADMS). These include the Facial Recognition System (FRS) and Crime Mapping, Analytics & Predictive System (CMAPS) in Delhi, Trinetra and CrimeGPT in Uttar Pradesh, Punjab AI System (PAIS) in Punjab, Automatic Number Plate Recognition System (ANPR) in Madhya Pradesh, Artificial-intelligence based Human Efface Detection (ABHED) system in Rajasthan, and Telangana State Police - COP (TSCOP) in Telangana.

Illustration: Pariplab Chakraborty
The widespread deployment of these AI-driven tools in law enforcement is unfolding in the absence of any regulatory framework or accountability mechanism to oversee their usage. Despite the critical implications of AI in policing, there is no comprehensive legislation and there are no statutory guidelines to ensure ethical implementation, data security and safeguards against potential misuse.
In this spectacle of ‘revolutionising’ law enforcement through AI, several private companies – such as Innefu Labs, AMPED Software, Pelorus Technologies and Staqu Technologies – have emerged as prominent players. These firms have secured contracts with various state police forces, the Indian Army, intelligence agencies, the Election Commission of India, public sector banks, and other key government departments, positioning them as significant stakeholders in national security and governance. By supplying AI-based technologies to government agencies, these companies not only make a fortune but also allegedly gain access to vast amounts of data with significant commercial value.
The unchecked reliance on opaque, ‘black-box’ algorithms supplied by these private entities has fueled concerns over the rise of a surveillance state. Experts say that these AI systems, lacking transparency, perpetuate pre-existing biases against marginalised communities – including Muslims, Dalits and Adivasis – leading to discriminatory policing practices. Despite these problems, the police are allegedly acting solely based on facial recognition technologies in several instances and even making arrests in the absence of public witnesses or other corroborating evidence.
Facial recognition software used by the Delhi Police
An official document titled ‘Best Practices In Delhi Police’ announces digital initiatives taken by the police in alignment with the PM’s directives on SMART policing. It states, “Delhi Police has acquired the Facial Recognition System and integrated it with the Missing Children/Persons and Found Children/Persons module of the Zonal Integrated Network system (ZIPNET) to track the missing children reported missing from Delhi.”
The police confirmed this in a Right to Information (RTI) Act response to the Internet Freedom Foundation (IFF) dated February 20, 2020, and added that facial recognition software was procured under the direction of the Delhi high court in the Sadhan Haldar v NCT of Delhi case in March 2018.
However, it is pertinent to note that while the court order specifically directed the Delhi Police to obtain facial recognition technology for tracking missing children, the police admitted to using it for investigations as well.
The ‘Best Practices’ document also confirms that facial recognition is used for “surveillance and detection of suspects at crowded places like Railway stations, Bus terminals, and large gatherings like sports events, Public Rallies, etc.” Although it does not explicitly clarify whether 'public rallies' include protests, the police themselves admitted in RTI replies to using facial recognition software in investigations related to the farmers’ protest-Red Fort violence (2021) and the Jahangirpuri riots (2022) cases.
Srinivas Kodali, a hacktivist and tech transparency expert, told The Wire, “Facial recognition technology is being used for far more than just locating missing children. It is primarily used to track a broad list of people under the radar of law enforcement agencies. This includes criminals, former offenders, and persons of interest, including those suspected of having ties to terrorism, Naxalite movements, etc. Initially, various state police forces implemented their own facial recognition systems independently. But the State wanted to integrate all of them, which led to the proposal of the National Automated Facial Recognition System to interlink all of these cameras, allowing the Ministry of Home Affairs (MHA) to monitor every activity nationwide.”
How do the police use facial recognition?
Law enforcement agencies use facial recognition for ‘identification’ purposes – what is called a 1:many technique. It involves feeding the image of a person’s face, extracted from a photograph/video, into the software. The software then analyses the input image and attempts to find a match within the entire database maintained by the authority to confirm the identity of the individual.
The output is a list of potential matches, each accompanied by a ‘confidence score’ or ‘probability match’ which represents the likelihood that the suspect matches an individual in the database. A higher confidence score indicates a stronger likelihood that the system has correctly identified the suspect.
The police personnel operating the system then select a match from the list generated by the software. They also determine the minimum ‘confidence score’ required for a match to be considered a suspect. This decision, being subjective, has the chance of being significantly influenced by personal biases or prejudices against certain religions, races, classes or communities, potentially affecting the accuracy and fairness of the identification process.
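In outline, the 1:many search described above reduces to scoring every enrolled face against the probe image and keeping candidates above an operator-chosen threshold. The sketch below is purely illustrative – the embeddings are random vectors rather than output from a real face model, and the 0.8 threshold is an assumption, not any police system’s setting:

```python
import math
import random

def cosine_similarity(a, b):
    """Similarity between two face embeddings, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(probe, database, threshold=0.8):
    """1:many search: score every enrolled embedding against the probe
    and return candidates above the threshold, best match first."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in database.items()]
    scores.sort(key=lambda item: item[1], reverse=True)
    return [(name, round(score, 3)) for name, score in scores if score >= threshold]

rng = random.Random(0)
# Toy gallery: 200 enrolled faces, each a 128-dimensional embedding.
database = {f"person_{i}": [rng.gauss(0, 1) for _ in range(128)] for i in range(200)}

# A probe that is a noisy view of person_42 (say, a grainy CCTV frame).
probe = [x + rng.gauss(0, 0.3) for x in database["person_42"]]

candidates = identify(probe, database)
print(candidates)  # person_42 should top the list
```

In a real deployment the embeddings come from a deep face model and the gallery is the agency’s database; the point of the sketch is that both the ranked candidate list and the cut-off leave room for operator judgment.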
The absence of legal provisions
In 2017, the Supreme Court of India recognised privacy as a fundamental right guaranteed by the Constitution, requiring any State intrusion into it to meet four essential tests – legality, necessity, proportionality and procedural safeguards.
However, the Delhi Police have admitted in an RTI response that no legal opinion was sought prior to procuring facial recognition technology, and no specific rule governs its use. Privacy advocates argue that this could be viewed as a violation of the Supreme Court’s judgment, potentially making the Delhi Police’s use of facial recognition technology unlawful.
Nonetheless, the Delhi Police continue the rampant and unregulated use of facial recognition technology in its investigations.
- In March 2020, Union home minister Amit Shah told the Rajya Sabha that more than 25 computers were analysing CCTV footage to identify perpetrators of the Northeast Delhi violence, and, till then, over 1,900 perpetrators had been identified through facial recognition software. In 2022, the Delhi Police admitted to using facial recognition technology to investigate “over 750 cases related to the North East Delhi riots” and presenting the results as evidence against the arrested individuals. However, they failed to specify the relevant sections of the Indian Penal Code and Code of Criminal Procedure under which such evidence would be made admissible in court.
- Media reports revealed that at least since Prime Minister Modi’s Ramlila Maidan rally in December 2019, the Delhi Police have been using facial recognition software to screen crowds. A report in the Indian Express mentioned that it was the first time the police used a set of facial images collected from footage filmed at various protests in Delhi to identify ‘law and order suspects,’ ‘habitual protesters’ and ‘rowdy elements’ from the crowd at the rally. Since the protests against the Citizenship (Amendment) Act in Delhi, the police have routinely videotaped almost every major protest in the city, sometimes through drones. This footage helped build a dataset of ‘select protesters’, reportedly used to keep ‘miscreants who could raise slogans or banners’ out of the rally. The report stated, “Each attendee at the rally was caught on camera at the metal detector gate and live feed from there was matched with the facial dataset within five seconds at the control room set up at the venue.”
The report further stated that the Delhi Police has so far created a photo dataset of 1,50,000 ‘history sheeters’ for routine crime investigations, 2,000 images of terror suspects and a third category of ‘rabble-rousers and miscreants’ (no formal definition has been provided for this category).
After the issue came to light, Project Panoptic, run by the IFF, sent a legal notice to the home secretary, Ministry of Home Affairs and the Commissioner of Police, Delhi on December 28, 2019, asking them to halt the use of facial recognition in Delhi. IFF called this “an illegal act of mass surveillance”. The notice argued that facial recognition technology is also in “breach of the principle of proportionality”, emphasising that data collection must be necessary and evidence-based, and not blanket or indiscriminate, especially when lacking probable cause of suspicion.
Despite this notice and repeated concerns raised by digital rights organisations, the use of facial recognition by the Delhi Police has continued.

Illustration: Pariplab Chakraborty
Responding to an RTI query filed by The Wire, the Provisioning and Logistics Department of the Delhi Police acknowledged on February 24, 2025, “A total of 6630 CCTV Cameras were installed in the area of 50 Police Stations of 12 Districts through M/s Bharat Electronics Limited. As per the provision of the contract, the Facial Recognition Technology feature is available in 10% of the CCTV Cameras as per user district requirements. The utilisation of the FRT feature in the CCTV Cameras is being done by the user districts as per their requirement.”
What is the accuracy of the Delhi Police’s facial recognition technology?
The ‘accuracy’ of facial recognition software refers to how well it can correctly identify a person from a database of known individuals while minimising false positives (incorrectly matching someone to another person) and false negatives (failing to identify a known person). Accuracy can vary significantly with conditions such as lighting, camera angle, image quality and facial expression.
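These two error types can be made concrete with a toy count: given a set of comparison scores labelled with ground truth and a decision threshold, the false positive and false negative rates fall out directly. The scores below are invented for illustration, not measured from any real system:

```python
def error_rates(attempts, threshold):
    """attempts: list of (score, same_person) pairs.
    False positive: different people score at/above the threshold.
    False negative: the same person scores below it."""
    fp = sum(1 for score, same in attempts if score >= threshold and not same)
    fn = sum(1 for score, same in attempts if score < threshold and same)
    impostors = sum(1 for _, same in attempts if not same)
    genuines = sum(1 for _, same in attempts if same)
    return fp / impostors, fn / genuines

# Hypothetical similarity scores with ground truth.
attempts = [
    (0.95, True), (0.88, True), (0.72, True),     # genuine pairs
    (0.85, False), (0.78, False), (0.40, False),  # impostor pairs
]

fpr, fnr = error_rates(attempts, threshold=0.80)
print(fpr, fnr)  # one impostor cleared 0.80; one genuine pair missed it
```

Note that moving the threshold trades one error for the other: raising it suppresses false positives but misses more genuine matches, and vice versa.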
Technology expert Jake Laperruque wrote for Project On Government Oversight in 2018, “Facial recognition accuracy can vary significantly based on a wide range of factors, such as camera quality, light, distance, database size, algorithm, and the subject’s race and gender.” He warned that even a 90% accuracy rate, promised by some of the most advanced systems, is “an unacceptable risk when the end result is the possible arrest of or even the use of force (including deadly force) against an innocent person”.
For reference, before deploying its facial recognition system in early 2011, the US Federal Bureau of Investigation (FBI) conducted a test that found “roughly one in seven searches of the FBI system returned a list of entirely innocent candidates, even though the actual target was in the database”. This occurred despite the software achieving an 86% accuracy rate in those tests.
In contrast, nearly eight years later, in 2018, the Delhi Police informed the high court that their facial recognition software had an accuracy rate of just 2%. By 2019, this figure had dropped below 1%, prompting the Ministry of Women and Child Development to acknowledge that the system could not reliably distinguish between boys and girls.
In the case of Sadhan Haldar v. NCT of Delhi, heard on January 22, 2019, the Delhi high court expressed concern over the technology’s ineffectiveness. Referring to over 5,000 missing children cases in Delhi over the previous three years, the bench observed: “We are told that the use of 'Facial Recognition Software' has not helped in cracking any case of missing children so far, which comes as a surprise. It is most unacceptable that the software adopted by the Delhi Police after due diligence has not borne any results.”
While Innefu Labs, a private vendor supplying facial recognition technology to the Delhi Police, claims on its website that its facial recognition system achieved an accuracy of 98.3% (as of June 11, 2025), the Delhi Police have not yet disclosed the accuracy rate of their facial recognition technology in response to multiple RTIs filed by The Wire. When contacted, Innefu Labs’ co-founder and CEO, Tarun Wig, refused to comment.
Adding to the lack of transparency around its accuracy rate, the Delhi Police have also set a notably low threshold of ‘confidence score’ for classifying a match as ‘positive’. In response to an RTI in 2022, they revealed that “All matches above 80% similarity are treated as positive results while matches below 80% similarity are treated as false positive results which require additional ‘corroborative evidence.’”
In 2018, the American Civil Liberties Union (ACLU) conducted a test on Amazon’s facial recognition tool, ‘Rekognition’. The test found that the software incorrectly matched 28 members of the US Congress with individuals in a criminal database, falsely identifying them as people who had been arrested for a crime. The Congress members of colour were incorrectly matched at disproportionately higher rates. The ACLU conducted the test using the software’s default ‘confidence threshold’ of 80% – the same threshold used by the Delhi Police.
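How much the threshold matters can be sketched with simulated impostor comparisons: the same set of scores yields far more false matches at an 80% cut-off than at a stricter one. The score distribution below is invented purely for illustration:

```python
import random

rng = random.Random(1)
# 10,000 simulated impostor comparisons (different people), with
# similarity scores clustered around 0.5 and a thin upper tail.
impostor_scores = [min(0.999, rng.gauss(0.5, 0.15)) for _ in range(10_000)]

def false_matches(scores, threshold):
    """Count impostor scores that would be treated as positive matches."""
    return sum(1 for s in scores if s >= threshold)

at_80 = false_matches(impostor_scores, 0.80)  # the 80% default
at_99 = false_matches(impostor_scores, 0.99)  # a far stricter cut-off
print(at_80, at_99)  # the loose threshold admits many more false matches
```

The loose cut-off sweeps in an order of magnitude more innocent candidates, which is the core of the ACLU’s objection to running a default 80% threshold in a law enforcement setting.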
In such a scenario, “The Police could continue to investigate anyone who may have gotten a very low score. Thus, any person who looks even slightly similar could end up being targeted, which could result in targeting of communities who have been historically targeted,” argues researcher Anushka Jain, former policy counsel at IFF.
Several lawyers told The Wire about another legal blind spot: the police are not legally bound to disclose to the accused that they were identified using facial recognition. This denies the accused an opportunity to challenge the identification process or the reliability of the match. A Project Panoptic article argues, “The defendant should also be provided with access to the software’s source code to meaningfully challenge the evidence presented against them.”
Even the supplier of facial recognition technology to the Delhi Police – Pelorus Technologies – acknowledged the limitations of the tool. When asked by The Wire whether the tool’s accuracy is affected in poorly lit environments such as alleys or gullies, CEO Rahul Dwivedi admitted, “Yeah, of course! It depends on the camera, it depends on the light, it depends on the angle of the image it takes.”
How well-trained are the Delhi Police to use AI tools?
The Bureau of Police Research and Development (BPR&D) serves as the central nodal agency for police training across India. It is responsible for designing training modules and implementing capacity-building programmes for law enforcement personnel. At the state level, individual police departments also conduct their own training initiatives.
The Wire thoroughly examined various training manuals used by the Delhi Police. We found that the ‘National Syllabus for Directly Recruited Sub-Inspectors’ includes only a single mention of AI. Out of the 2,620 instructional periods, just one each is dedicated to AI and facial recognition technology, showing that emerging technologies in modern policing aren’t given much importance when it comes to training.
In 2024, the Specialised Training Centre in Rajendra Nagar conducted some specialised training sessions for Delhi Police personnel. All personnel from head constables to inspectors received only a half-day course on videography and photography at crime scenes, held on four occasions across the year. Sub-Inspectors (SIs) to Assistant Commissioners of Police (ACPs) were given merely a one-day training on “Social Media Investigation and Open Source Intelligence (OSINT)”, also conducted four times. One-day sessions on CCTV footage handling, DVR forensics and “emerging trends in forensic science and contemporary forensic techniques” were offered to SIs and inspectors on four separate dates each.
The Chanakyapuri-based Academy for SMART Policing offered specialised workshops on “handling CCTV footage,” “drone technology,” “social media investigations”, and “open-source intelligence”, each limited to just two half-day sessions across 2024. Also, training was provided exclusively to officers at the rank of ACP and above. This limited scope raises concerns about effectiveness, as frontline investigations are primarily conducted by inspector-level officers who were excluded from these sessions.
The Dwarka-based Cyber Training Division offered no training on “facial recognition technology” in 2024.
The Status of Policing in India Report 2019, by the NGO Common Cause, assessed police capacity and adequacy across states from 2012 to 2016. The report found, “In Delhi, in-service training is imparted to almost all the higher rank officers every year”, but training for Constables and Sub-Inspectors/Assistant Sub-Inspectors remained “very low”. Only 11.7% of police personnel received in-service training during the period, with just 2.49% of the total Delhi Police budget spent on training.
The Wire spoke to Aakansha Saxena, assistant professor at the Rashtriya Raksha University (RRU), a Gujarat-based institution offering specialised training to police forces in states including Gujarat, Punjab, Karnataka, Delhi and Odisha. Saxena, who heads RRU’s Centre for Artificial Intelligence, expressed uncertainty about the extent of facial recognition technology training within the Delhi Police, stating, “I don’t know whether all the police officers are trained or not, but yes, some of them have definitely been trained.” She also confirmed that Delhi Police’s facial recognition systems are linked to the “Aadhaar (UIDAI)” and “driving license” databases.
Surprisingly, despite her role in training Delhi Police personnel, Saxena was unaware of key details like the facial recognition system’s developer company, accuracy or other technical specifications.
Saxena explained that training curricula are jointly developed by the university’s dean and vice-chancellor, in collaboration with senior police officers. She noted, “Since police officers are not adaptive to this new technology [AI], they want to learn how to integrate it into their investigations…despite having other substitute methods.” When asked whether AI tools are used in nearly every case, she confirmed that they are, highlighting growing reliance on such technologies in policing.
Saxena stated that the training programme educates police officers on threat intelligence, AI-powered targeting systems, the development of facial recognition systems, identification of their vulnerabilities, methods that attackers may use to exploit them and defensive strategies. Officers are also trained on deepfakes – what they are, how they’re created and how to differentiate between deepfakes and authentic content.
When asked about allegations of wrong identification by Delhi Police’s facial recognition technology, Saxena acknowledged it was possible, noting officers “are well aware of how the accuracy of the FRT can depend on several things and there is no guarantee of correct identification in every case”.
When asked to share the training modules provided to the Delhi Police, the RRU administration did not respond.
Are the Delhi Police upholding their own guidelines?
The Standard Operating Procedure (SOP) outlined in the Bureau of Police Research and Development’s Compendium of Scenarios for Investigating Officers (2024) mandates that, in case of a riot, the investigating officer (IO) should run the accused’s photo, if obtained from CCTV, through facial recognition software to gather leads on their identity.

Screengrab from the Compendium of Scenarios for Investigating Officers (2024), detailing the mandatory procedures an Investigating Officer must follow while conducting a search for the accused.
This directive raises a critical question: if facial recognition is an integral part of the Delhi Police’s SOP, why does it receive so little emphasis in training? The gap between the technology’s prescribed role in investigations and its limited coverage in training suggests a shortfall in preparedness and effective implementation.
The SOP clearly states that “there should be no delay in the Test Identification of the accused,” as it is “an important aspect in the investigation”. However, The Wire’s analysis of court documents from cases involving at least 50 accused individuals found that Test Identification Parades (TIPs) were not conducted for any of them. Despite the clear directive, this vital procedure was largely absent, underlining the lack of adherence to investigative protocols. The Wire has asked the Delhi Police and the Union Ministry of Home Affairs about this lapse, but no response has been received so far.

Screengrab from the Compendium of Scenarios for Investigating Officers (2024), stating that there should be no delay in conducting the Test Identification Parade (TIP) of the accused.
The SOP also outlines additional investigative steps, stating that the IO may also weigh the possibility of advanced scientific tests like Gait Analysis. Despite this directive, none of the training programmes or workshops focused specifically on Gait Analysis or other advanced AI-based investigative tools.
CCTV + drones + facial recognition = mass surveillance?
Delhi is among the most surveilled cities in India. In August 2022, during a meeting at the Delhi Police headquarters, home minister Amit Shah said, “Surveillance is a major component of policing in preventing and investigating crime,” and recommended integrating all CCTV systems, including those in public spaces and by civil bodies, with the police control room.
However, studies suggest that CCTV surveillance is not distributed evenly across Delhi. The deployment of CCTV cameras, instances of over-policing and patterns of discriminatory targeting are significantly more concentrated in certain neighbourhoods.
An empirical study, conducted by Vidhi Centre for Legal Policy in 2021, revealed that the use of facial recognition technology by the Delhi Police “will almost inevitably disproportionately affect Muslims, particularly those living in over-policed areas like Old Delhi or Nizamuddin”, potentially increasing their likelihood of being targeted by law enforcement.
In her research, Jai Vipra, an AI policy scholar at Cornell University, highlights the structural inequalities embedded in surveillance practices in Delhi. Through her paper, she demonstrates that “two factors in particular – the uneven distribution of police stations across space, and the uneven distribution of CCTV cameras across space – are likely to result in a surveillance bias against certain sections of society more than others in Delhi.”
The study observes, “In Delhi, our data reveals that two kinds of areas are much more policed than others: (1) areas housing government and diplomatic offices, i.e., Central Delhi; and (2) areas with a proportionally higher Muslim population. These areas have a higher proportion of police stations compared to their relatively lower population. Technology, especially that of predictive policing, constitutes an intensification of policing in this form and can disproportionately target Muslims in Delhi.”
The paper further elaborates, “FRT in policing in Delhi is likely to employ data used from CCTV cameras across the city. This would mean that areas with relatively more CCTV cameras would be over-surveilled, over-policed and thus subject to more errors than other areas… The evident over-policing of Muslim areas can result in the use of FRT in policing in Delhi disproportionately targeting Muslims.”

Illustration: Pariplab Chakraborty
Statements by police officials also suggest heightened policing in Muslim-majority areas such as North East Delhi. Referring to the 2020 Delhi riots, Joy Tirkey, Deputy Commissioner of Police (North East Delhi), stated in February 2024, “If one area in Delhi needs actual ground-level policing, then it is the northeast district... Not only is the northeast district prone to crime, but it is also a communally sensitive area.” He added, “Since the 2020 riots, we have been closely associated with the northeast district.”
Kodali noted, “Surveillance is not limited to CCTVs. Facial recognition is also deployed through mobile phones [police stopping people at random and taking photos], particularly in cities like Hyderabad and Delhi.”
Do the police really care about privacy?
The Delhi Police have admitted that no privacy impact assessment was conducted before the deployment of facial recognition technology. They further stated, “While investigating any case, the investigating officer is empowered as per law to explore all possible information to identify and legally prosecute the offender.”
The police also refused to reveal the stage of an investigation at which facial recognition technology is generally brought in. Regarding the databases used in conjunction with facial recognition technology, the Delhi Police said, “Convict photographs and dossier photographs maintained with police under section 3 & 4 Identification of Prisoners Act 1920.” They refused to provide an exhaustive list of the databases linked to the system.
The Identification of Prisoners Act, 1920 has since been repealed and replaced by the Criminal Procedure (Identification) Act, 2022 (CPIA), which allows wider categories of data – fingerprints, palmprints, footprints, iris and retina scans, and other physical and biological samples – to be collected, analysed and stored from any arrested person, whether an undertrial or a convict: an incredibly large-scale collection of personal data.
ISO/IEC 27001 is the international standard for information security management, offering a structured framework for risk assessment and the implementation of security controls. When asked if their facial recognition technology complies with this standard, the Delhi Police responded to The Wire, “No such ISO/IEC 27001 was asked [for] in the tender.”
Anushka Jain told The Wire, “Since we are not aware of the level of protection and security provided to Delhi Police’s facial recognition technology data, we cannot know if the company(ies) supplying the technology have access to the Delhi Police’s data.”
What do experts recommend?
Concerns around the misuse of facial recognition technology are not specific to India. Globally, data privacy and civil liberties advocates have voiced serious objections about its unchecked deployment. The American Civil Liberties Union has cautioned that facial recognition can be used “in a passive way that doesn’t require the knowledge, consent, or participation of the subject.” Organisations such as the Electronic Frontier Foundation, Algorithmic Justice League and Amnesty International have called for a moratorium and even outright bans on the use of this technology.
A report by the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative listed some guardrails that people should demand when it comes to facial recognition software:
- There should be limits on how long data can be stored.
- Data sharing should be restricted.
- There should be a clear notification when facial capture is being done.
- Minimum accuracy standards need to be met.
- Third-party audits are also required.
- Collateral information collection (metadata) must be minimised.
These recommendations highlight the urgent need for comprehensive regulation and oversight – something that remains notably absent in many countries, including India, even as they race to use facial recognition in policing.
The Wire has sent the commissioner of police, Delhi and secretary, Union Ministry of Home Affairs, a list of detailed questions about the use of AI tools and facial recognition technology, the alleged inaccuracies and procedural lapses, and the opacity involved in the use of these technologies. The police commissioner’s office said that these queries have been forwarded to the Special Branch of the Delhi Police, which is handling the 2020 Delhi riots cases, but no further response has been received despite reminders.
*Names of accused persons and their lawyers have been changed to protect the identities of the accused.
Astha Savyasachi is an investigative journalist whose work focuses on socio-political and human rights issues in India.