One privacy-preserving technique I explored was synthetic EHR note generation for model prototyping. I began by removing direct identifiers such as names, medical record numbers, addresses, and dates, and by generalizing quasi-identifiers such as exact ages (binned into ranges) and infrequent conditions. Using this sanitized input, I generated synthetic clinical notes that preserved common clinical structures, including History of Present Illness, Assessment, and Plan, while avoiding any reuse of real patient narratives or identifiable details.

To evaluate utility, I trained the same downstream model, a diagnosis classifier, on both the de-identified notes and the synthetic dataset. I compared performance metrics, including precision and recall, as well as error patterns and feature importance, to confirm that the model learned clinically meaningful signals rather than surface-level text. Results were closely aligned across both datasets, with similar error distributions.

For privacy validation, I manually reviewed samples of the synthetic notes and ran basic overlap and string-matching checks to confirm that no source phrases or identifiable sequences were reproduced. This approach demonstrated that synthetic clinical text can support meaningful experimentation and model development while maintaining strong protections for patient privacy.
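The de-identification step described above can be sketched as a simple rule-based pass. The regex patterns, placeholder tokens, and 10-year age bands below are illustrative assumptions, not the exact rules used in the project; production pipelines typically rely on vetted clinical de-identification tools rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only: real de-identification needs far broader
# coverage (names, phone numbers, free-text dates, etc.).
PATTERNS = {
    r"\bMRN[:#]?\s*\d{6,10}\b": "[MRN]",            # medical record numbers
    r"\b\d{1,2}/\d{1,2}/\d{2,4}\b": "[DATE]",       # simple numeric dates
    r"\b\d{1,5}\s+\w+\s+(Street|St|Ave|Avenue|Rd|Road)\b": "[ADDRESS]",
}

def scrub(note: str) -> str:
    """Replace direct identifiers in a note with placeholder tokens."""
    for pattern, token in PATTERNS.items():
        note = re.sub(pattern, token, note, flags=re.IGNORECASE)
    return note

def generalize_age(age: int) -> str:
    """Generalize an exact age (a quasi-identifier) into a 10-year band."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"
```

For example, `scrub("MRN: 1234567 seen on 3/14/2021")` yields `"[MRN] seen on [DATE]"`, and `generalize_age(34)` yields `"30-39"`.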
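The utility comparison reduces to computing the same metrics for a model trained on each corpus and then comparing them side by side. A minimal sketch of the per-class precision/recall and error-distribution bookkeeping follows; the function names are my own for illustration, not from the original pipeline.

```python
from collections import Counter

def precision_recall(y_true, y_pred, positive):
    """Precision and recall for one diagnosis class from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def error_distribution(y_true, y_pred):
    """Count (true, predicted) pairs for misclassified cases, so that the
    error patterns of the de-identified-trained and synthetic-trained
    models can be compared directly."""
    return Counter((t, p) for t, p in zip(y_true, y_pred) if t != p)
```

Running both functions on each model's held-out predictions gives the paired metrics and error distributions whose close alignment is reported above.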
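The overlap check can be approximated as word n-gram containment: the fraction of synthetic n-grams that appear verbatim in the source corpus. The n-gram length and function name here are assumptions for illustration; the actual checks also included manual review and string matching.

```python
def ngram_overlap(source_notes, synthetic_notes, n=8):
    """Fraction of word n-grams in the synthetic corpus that also occur
    verbatim in the source corpus. Values near 0 suggest no phrase reuse;
    high values flag possible memorization of real patient text."""
    def ngrams(notes):
        grams = set()
        for note in notes:
            words = note.lower().split()
            grams.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
        return grams

    src, syn = ngrams(source_notes), ngrams(synthetic_notes)
    if not syn:
        return 0.0
    return len(syn & src) / len(syn)
```

A copied note scores 1.0 and an unrelated note scores 0.0; in practice one would flag any synthetic note whose overlap exceeds a small threshold for manual review.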