Last year, our backend application hit a significant performance bottleneck during peak traffic, causing delayed API responses. The cause wasn't immediately apparent, so I started by profiling the application with New Relic to pinpoint the problem areas. The profiler showed that a single database query was consuming a disproportionate share of processing time: it was aggregating large datasets without proper indexing.

I ran EXPLAIN ANALYZE in PostgreSQL to inspect the query execution plan and discovered that a missing composite index was forcing a full scan during filtering. After adding the index and optimizing the query logic, we cut query execution time by 65%.

To prevent similar issues, I set up performance monitoring alerts and made query benchmarking part of code review. This not only resolved the bottleneck but also improved overall system stability and team practices. My advice? Always pair monitoring tools with root-cause analysis for effective troubleshooting.
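The original table and query were not shown, so the sketch below uses a hypothetical `orders` table to illustrate the workflow: run EXPLAIN ANALYZE on the slow aggregation, and if the plan reports a sequential scan over the filtered columns, add a composite index covering those predicates.

```sql
-- Hypothetical example: `orders`, `status`, and `created_at` are
-- illustrative names, not the actual schema from the incident.
EXPLAIN ANALYZE
SELECT status, COUNT(*)
FROM orders
WHERE status = 'pending'
  AND created_at >= now() - interval '7 days'
GROUP BY status;

-- If the plan shows "Seq Scan on orders" with a Filter on both columns,
-- a composite index lets PostgreSQL satisfy the predicates via the index.
-- CONCURRENTLY avoids locking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_orders_status_created_at
    ON orders (status, created_at);
```

Re-running EXPLAIN ANALYZE afterwards should show an index scan (or bitmap index scan) replacing the sequential scan, which is how an improvement like the 65% reduction above can be verified rather than assumed.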