As part of an Inventory Management project, I built a machine learning system to predict product demand and optimise inventory levels for a network of stores. The system initially used a Random Forest model trained on basic sales, advertising, and weather data to forecast weekly demand and determine reorder points. Halfway through, the client extended the scope to add dynamic supplier lead times, seasonal demand trends, and explainable forecasts in order to improve usability and trust.

Accommodating these changes required a significant refactor of the codebase. I modularised the pipeline so that data pre-processing, feature engineering, modelling, and inventory logic could each be changed independently. To capture seasonality and the new dynamic inputs, I enriched the feature engineering with time-series features and moved to an XGBoost model, which improved accuracy, and added SHAP values for explainability.

SHAP (SHapley Additive exPlanations) is one of the most effective frameworks for explaining machine learning models. It is based on Shapley values from cooperative game theory, which assign each player in a game a value corresponding to their contribution to the final outcome. In a machine learning context, SHAP assigns a contribution value to each feature for a particular prediction, helping to explain how much each feature influenced the model's output. In the inventory forecasting system, SHAP highlighted the contributions of features such as "holiday season", "promotion", and "historical sales" to explain why the model forecast strong demand for a product in a particular week. This transparency makes it possible to spot biases or mistakes in the model and fosters trust with stakeholders.

I also streamlined the pipelines for scalability and consolidated the inventory logic into a single, easily modifiable module. The refactor decreased late orders by 22%, increased forecast accuracy by 15%, and made the system simpler to maintain and extend. The experience taught me how crucial it is to prioritise scalability and explainability from the beginning, to design systems for change, and to understand the problem domain thoroughly.
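To make the forecasting and explainability step concrete, here is a minimal sketch of the approach rather than the project's actual code. It trains an XGBoost regressor on illustrative weekly features (the names "historical_sales", "promotion", "holiday_season", and "avg_temperature", the synthetic data, and the lead-time and safety-stock figures are all assumptions for the example), uses SHAP's TreeExplainer to attribute a single week's forecast to those features, and derives a simple reorder point from the forecast and an assumed supplier lead time.

```python
# Minimal sketch: weekly demand forecasting with XGBoost plus SHAP
# attributions. Feature names and figures are illustrative, not the
# project's real data or parameters.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

# Hypothetical weekly training rows for one product/store.
train = pd.DataFrame({
    "historical_sales": np.random.poisson(120, 200),    # lagged demand
    "promotion": np.random.binomial(1, 0.3, 200),        # promo flag
    "holiday_season": np.random.binomial(1, 0.2, 200),   # seasonal flag
    "avg_temperature": np.random.normal(15, 8, 200),     # weather signal
})
# Assumed target: lagged sales with promo/holiday uplift plus noise.
y = (train["historical_sales"]
     * (1 + 0.4 * train["promotion"] + 0.3 * train["holiday_season"])
     + np.random.normal(0, 5, 200))

model = xgb.XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(train, y)

# SHAP assigns each feature a contribution to an individual forecast,
# so stakeholders can see why demand was predicted high for a given week.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(train.iloc[[0]])
for feature, contribution in zip(train.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.2f}")

# Simple reorder-point logic (illustrative): cover forecast demand over
# the supplier lead time plus safety stock for demand variability.
forecast_per_week = float(model.predict(train.iloc[[0]])[0])
lead_time_weeks, safety_stock = 2, 30
reorder_point = forecast_per_week * lead_time_weeks + safety_stock
print(f"Reorder point: {reorder_point:.0f} units")
```

Keeping the reorder-point calculation outside the model, as in this sketch, mirrors the modular split between forecasting and inventory logic described above: the inventory module can be adjusted for new supplier lead times without retraining the model.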
Throughout my career as a Principal Software Engineer, I have encountered numerous situations where shifting requirements necessitated significant adjustments mid-project. One of the most recent was leading the migration of half of a company's business architecture from on-prem Windows VMs to Kubernetes in the cloud, with a team of 30+ engineers. Early on, the project faced challenges because we had underestimated critical complexities: insufficient research and brainstorming, gaps in QA, immature SDLC processes, weak CI/CD pipelines, and the intricacies of legacy systems with extensive dependencies and integration points. These issues revealed gaps in our acceptance criteria and required major refactoring and strategic pivots under tight deadlines.

Lessons:
1. The most important lesson was the value of rigorous preparation: comprehensive research, pilot implementations, and iterative analysis to mitigate risk in large-scale initiatives.
2. Equally vital was fostering a collaborative culture with fast feedback loops, structured daily working sessions to resolve blockers or define clear next steps, strong ownership of tasks, and a willingness to challenge existing approaches.
3. The shift from reactive problem-solving to proactive, shared accountability led to rapid team growth, clearer planning, and far smoother execution in subsequent initiatives.
4. All of this underscored the principle that success in complex projects lies not only in technical excellence but also in creating synergy by aligning people, processes, and priorities behind a unified vision.