I've published over 30 peer-reviewed papers in breast cancer and molecular diagnostics, so I've wrestled with exactly this problem during multiple systematic reviews and meta-analyses at Mayo, Pittsburgh, and Florida. The deduplication headache with Scopus/PubMed/WoS is real because each database exports metadata slightly differently--PubMed uses PMID, Scopus uses DOI plus EID, and WoS has its own accession numbers, so you end up with the same paper appearing three times under different formatting. In Zotero, I use the built-in **duplicate detection** (right-click library - "Duplicate Items") which matches on DOI first, then title/author/year fuzzy matching, and it catches about 60-70% of obvious duplicates. For the remainder, I install the **Zotero Deduplicator plugin** (available on GitHub) and run a second pass with stricter title-similarity thresholds--this usually nets another 15-20% that the native tool misses because of subtitle punctuation or author initials formatted differently. The single biggest gain came from **pre-cleaning before import**: I export all three databases to RIS format, then run them through a Python script using the **ASReview** deduplication module (it's open-source and designed for systematic reviews). That script normalizes DOIs, strips special characters from titles, and flags near-duplicates based on Levenshtein distance before I even open Zotero. When I did this for a BRCA2 variant review with ~1,800 initial records, the ASReview clean dropped duplicates from 47% down to under 5% residual manual cleanup--saved me literally two days. Bottom line: native Zotero duplicate detection + the Deduplicator plugin handles most cases, but if you're merging thousands of records, pre-process with ASReview's dedup function in Python before import and you'll thank yourself later.