Surveillance Capitalism
Surveillance capitalism is extraction without consent — behavioral data harvested from people whose agreement was buried in terms they couldn't negotiate and can't revoke. We exclude companies whose revenue depends on this, and hold a high bar for any tech company where surveillance is a meaningful input. Until users can control, audit, and delete their data, this stays a core exclusion.
Ad platforms provide targeting tools that make it trivially easy to find vulnerable populations — people susceptible to financial scams, emotional manipulation, and predatory lending. The platform may not say "here's how to rip these people off," but it certainly says "here's how to find them," and that distinction is not a defense.
We screen across two policy categories.
- Products (§ 2.5). Companies whose core product is surveillance: consumer tracking, privacy erosion tools, for-profit prisons. The test is centrality to the business model — not proximity to tech.
- Conduct (§ 3.8). Data exploitation and systematic privacy violations as a pattern. Being in tech is not disqualifying. What matters is governance: real deletion, encryption, anonymization, external audit.
For ad tech, we require strong, demonstrable privacy controls. DuckDuckGo, Signal, and Proton are useful reference points — business models that structurally prevent surveillance rather than merely promising not to.
Eventually, we hope this screen won't have to exist. That would require GDPR-equivalent governance in the U.S., external audit with real teeth, genuine data portability, and deletion that means deletion. None of those conditions exist today.
Surveillance infrastructure built for commercial purposes does not stay commercial. U.S. federal agencies have purchased data broker records to circumvent warrant requirements. The architecture is the same; the buyer changes.
In the short term, these business models are profitable. In the long term, the dominant platforms are destroying the organic discovery and trust that made them valuable. That is a structural liability, not merely an ethical one.
— Sloane Ortel, Founder & CIO
See § 2.5 / § 3.8 of our screening policy for the full criteria.
What we exclude
- Surveillance hardware manufacturers (cameras, sensors, screening systems)
- Intelligence and forensic software (mobile extraction, facial recognition, predictive analytics)
- Surveillance infrastructure built for deployment by governments or corporations
- Surveillance capitalism: deploying behavioral surveillance against their own users for commercial gain (§ 3.8)
- Data brokers packaging consumer profiles without meaningful consent (§ 3.8)
- Operating, financing, or materially supporting for-profit incarceration (§ 3.8)
The regulatory risk is real and growing
GDPR, CCPA, the EU AI Act, and accelerating FTC enforcement converge on one conclusion: the data extraction model has a limited runway. [Meta's 2023 GDPR fine was €1.2 billion](https://www.dataprotection.ie/en/news-media/press-releases/Data-Protection-Commission-announces-conclusion-of-inquiry-into-Meta-Ireland). [DMA enforcement](https://digital-markets-act.ec.europa.eu/index_en) is ongoing. Companies whose revenue depends on frictionless access to personal data are accumulating regulatory liability the market has not priced in.
Facial recognition sits at the intersection of legal, reputational, and political risk. [San Francisco](https://www.sanfranciscopolice.org/your-sfpd/policies/19b-surveillance-technology-policies) and [Boston](https://www.boston.gov/sites/default/files/file/2021/02/Boston-City-Council-face-surveillance-ban.pdf) have enacted bans. Any company with significant government surveillance contracts faces a policy cliff — structurally identical to what coal faced in 2015.
This is not a tradeoff between ethics and returns. It is a screen that identifies underpriced regulatory risk.
Excluded Companies (0 total)
No companies currently excluded under this screen.