Rasha Abdul Rahim, Palestinian tech justice activist, interviewed by Simona Levi, founder of Xnet, Institute for Democratic Digitalisation
We interviewed Rasha Abdul Rahim on the occasion of her participation in the opening session of the 4D Conference: Democratic Digitalisation and Digital Rights, held on October 28 in Barcelona, organised by Xnet and Accent Obert.
Abdul Rahim, who holds a BA in Modern and Medieval Languages from the University of Cambridge and an MA in International Relations and Diplomacy from the School of Oriental and African Studies (SOAS), has led global initiatives on surveillance, artificial intelligence, lethal autonomous weapon systems, and digital rights—first as Director of Amnesty Tech at Amnesty International, and later as Executive Director of People vs Big Tech. Rasha is currently focusing on the situation in Palestine.
Simona Levi: Thank you, Rasha, for taking the time to speak with me. I know you are extremely busy right now due to the ongoing dramatic situation. What are you working on at the moment?
Rasha Abdul Rahim: Right now I am working at the intersection of technology, human rights and social justice, with a particular focus on the role of Big Tech companies in the Gaza genocide. Until August this year I was leading People vs Big Tech and working to hold Big Tech companies accountable through enforcement of EU regulations such as the landmark Digital Services Act and Digital Markets Act.
Simona Levi: You have extensive knowledge both of digital rights and of the use of autonomous weaponry against people, as well as of the situation on the ground in Palestine. How is digital technology being used in Netanyahu’s war against Palestine and in the massacre of the civilian population?
Rasha Abdul Rahim: Israel is deploying a range of advanced digital and AI tools in its military operations. AI systems are being used by Israeli intelligence forces to analyse massive amounts of data – from intercepted communications to social media – to identify potential targets, assigning risk scores based on patterns deemed suspicious. Facial recognition technologies, such as Red Wolf and Blue Wolf, are widely used in the West Bank and East Jerusalem to monitor Palestinians, often restricting movement and enabling detentions.
Additionally, Israel tracks civilian movement through mobile phone location data, using this information to direct military strikes and operations. Commercial drones have been modified for surveillance and even to carry explosives, while cloud services like Microsoft’s Azure have supported intelligence processing of intercepted communications. Microsoft recently terminated Unit 8200’s access to certain cloud storage and AI services after it emerged that these tools were being used to operate an invasive and indiscriminate surveillance system — a clear violation of Microsoft’s terms of service. Amazon has reportedly stepped in to fill the gap.
AI systems called ‘Lavender’ and ‘The Gospel’ have also been used by Israeli military intelligence (Unit 8200 among others) to analyse large amounts of data (intercepted communications, prior intelligence, social media etc.) to identify people or places that may be targets for killing. These tools assign scores to individuals reflecting the likelihood of their involvement with militant groups, or analyse speech and data for certain “keywords” or patterns deemed suspicious.
What has become clear during the genocide is that all major tech companies have been providing critical infrastructure, services, or support to the Israeli military — including Google, Amazon, Microsoft, and Oracle. Meta, meanwhile, has been systematically removing or suppressing Palestinian voices on its platforms, a pattern that is now extensively documented. This reveals a broader structural alignment between Big Tech and state power, raising urgent questions about corporate responsibility, accountability under international law, and the role of private technology providers in enabling or facilitating genocide and other serious crimes and human rights violations.
Simona Levi: As an expert in EU legislation, digital policy, surveillance, artificial intelligence, and lethal autonomous weapon systems, how do you see the digital practices currently being developed by the State of Israel impacting the EU digital market and digital politics?
Rasha Abdul Rahim: Israel’s tech industry – particularly its export of AI-powered surveillance tools and military-grade tech – is quietly reshaping the EU’s digital environment, and not in ways that align with European values. These technologies and weapons are tested on Palestinians in the laboratory of occupation, warfare and now genocide, and they are now being integrated into the European security apparatus – from policing and border control to predictive analytics and biometric surveillance. They often raise serious legal and ethical concerns, particularly around mass surveillance, biometric profiling, and automated decision-making in policing and warfare.
On the one hand, Europe says it champions digital rights, transparency, and human rights. On the other, it is increasingly importing tools built for control, profiling, and repression, often without democratic debate, oversight, or ethical vetting. Israeli tech firms, many with close ties to military and intelligence units, have found willing customers in EU member states eager for “security solutions” but unwilling to confront the moral cost.
This undermines the EU’s credibility as a regulatory superpower, a reputation built through regulations like the GDPR, the AI Act, and the Digital Services Act. If the EU keeps outsourcing its security infrastructure to companies whose products are ‘battle-tested’ on Palestinians, those laws risk becoming empty shells. The uncomfortable truth is that Europe is talking about ethics while quietly buying tech from the frontline of digital authoritarianism.
The question now isn’t just whether the EU can regulate Big Tech — it’s whether it has the political will to draw a line when digital tools are born in contexts of systemic violence and exported as “solutions” to democratic societies. Right now, that line is dangerously blurry.
Simona Levi: More generally, what do you think of the digital policy the EU is pursuing at the moment?
Rasha Abdul Rahim: Europe currently holds the world’s most powerful regulatory arsenal for defending digital rights and reining in the immense power of Big Tech. The landmark Digital Services Act and Digital Markets Act give the EU institutions and member states the power to force Big Tech platforms to take down illegal content quickly, tackle disinformation, and be much more transparent about how their algorithms and moderation systems work. They also allow for audits and serious penalties for non-compliance, including multi-million-euro fines. And actually, in many ways Palestine is a litmus test for the effectiveness of the DSA, given the well-documented systematic censorship of Palestinian content on Meta platforms.
The DMA is all about fair competition and tackling the enormous market power and dominance of Big Tech companies. It targets so-called “gatekeepers” like major online platforms, preventing them from abusing their market dominance. That means banning things like self-preferencing, enforcing interoperability, and opening the door for smaller players to compete. Together, these laws are a big step towards making the digital space safer, fairer, and more accountable across the EU.
But the big question is whether the EU will actually enforce these regulations when the heat’s on, especially under heavy lobbying and diplomatic pressure from Washington. After all, the rules in these regulations directly challenge the business models of Silicon Valley giants like Google, Meta, and Apple – companies that wield enormous political power and clout in the U.S. What is at stake is not just tech policy; it’s whether the EU has the backbone to defend its digital sovereignty or whether it’ll blink when American interests push back hard.