
Nidhi Sharma is a Quality Engineering Manager & Global AI Ambassador at EPAM Systems, with 15+ years of experience in AI-driven quality assurance, DevOps automation, and software testing. She has led QA transformations for Fortune 500 clients across finance, telecom, and e-commerce, integrating AI-powered testing, shift-left strategies, and CI/CD pipelines.
A thought leader in Gen AI, automation, and cloud computing, Nidhi has been featured in Observer Dawn, the Poland QE community, and AW Club, and has delivered expert sessions on 5G network slicing and NFV/SDN. She actively mentors QA teams across 10+ countries, driving innovation in Gen AI, cybersecurity, and cloud automation.
At test:fest 2025, she will showcase AI-powered code reviews, demonstrating how AI can reduce security risks, accelerate releases, and optimize maintainability in modern DevOps workflows.
LinkedIn: https://www.linkedin.com/in/nidhi-sharma-40195121/
Prelekcja/Presentation:
AI-Powered Code Reviews: Cutting Security Risks & Enhancing Code Maintainability
Intent & Purpose:
This lecture addresses the critical inefficiencies of manual code reviews in today’s fast-paced DevOps environments, where 92% of security breaches stem from code flaws and teams waste 30% of their time fixing preventable vulnerabilities. It introduces a transformative 6-phase AI framework proven to reduce security risks by 40% and boost code maintainability by 30%, as demonstrated in a real-world fintech case study.
The session aims to equip attendees with actionable strategies to:
Automate and Scale Security: Integrate AI tools like SonarQube, Codacy, and GitHub Actions into CI/CD pipelines for real-time vulnerability detection, replacing error-prone manual processes.
Enhance Accuracy and Efficiency: Leverage Retrieval-Augmented Generation (RAG) and AI-assisted triage to cut false positives by 25%, prioritizing critical threats like SQL injections (shown in the session's demos).
Optimize Code Quality: Use AI-guided automation to refactor code smells (e.g., long methods, dead code) with metrics-driven precision, reducing technical debt by 35%.
Navigate Ethical Challenges: Implement safeguards against AI biases and tool limitations to ensure responsible adoption.
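The RAG-assisted triage described above can be sketched in a few lines: embed each new finding, retrieve the most similar previously triaged findings, and score the new one by its resemblance to confirmed true positives. The bag-of-words "embedding" below is a deliberate toy; in a real pipeline an embedding model and a vector store would replace it, and the sample findings are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would use a model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Previously triaged findings act as the retrieval corpus.
triaged = [
    ("SQL injection via unsanitized user input in query builder", True),
    ("Unused import flagged in generated code", False),
    ("Hardcoded credentials in configuration file", True),
]

def triage_score(finding: str, k: int = 2) -> float:
    """Average similarity to the k nearest confirmed true positives."""
    fv = embed(finding)
    sims = sorted(
        (cosine(fv, embed(desc)) for desc, is_real in triaged if is_real),
        reverse=True,
    )
    top = sims[:k]
    return sum(top) / len(top) if top else 0.0

new_finding = "possible SQL injection where user input is concatenated into the query"
print(f"priority score: {triage_score(new_finding):.2f}")
```

Findings that score near zero against the confirmed-vulnerability corpus are candidates for the false-positive bucket, which is where the claimed triage savings come from.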
Why It Matters:
Attendees will gain hands-on insights through visual workflows, pre-recorded demos, and interactive diagrams showcasing AI-driven enforcement of security policies (e.g., GitHub Actions blocking insecure merges). Tailored for QA Engineers, DevOps Teams, and Engineering Managers, the lecture bridges the gap between AI theory and practice, empowering teams to bake security into deployments while accelerating release cycles (in line with Gartner's 2024 trends).
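The "blocking insecure merges" enforcement mentioned above typically amounts to a quality gate: a CI step parses the scanner's findings and fails the job when critical issues are present, and branch protection then blocks the merge until that check passes. A minimal sketch follows; the findings format is illustrative, not SonarQube's or Codacy's actual output schema.

```python
import json
import sys

# Severities that should block a merge; tune per team policy.
BLOCKING = {"CRITICAL", "BLOCKER"}

def gate(findings: list[dict]) -> list[dict]:
    """Return the findings that violate the gate policy."""
    return [f for f in findings if f.get("severity") in BLOCKING]

def main(path: str) -> int:
    with open(path) as fh:
        findings = json.load(fh)
    violations = gate(findings)
    for v in violations:
        print(f"BLOCKING {v['severity']}: {v.get('rule', '?')} in {v.get('file', '?')}")
    # A non-zero exit fails the CI job, which blocks the merge
    # when branch protection requires this check to pass.
    return 1 if violations else 0

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main(sys.argv[1]))
```

A GitHub Actions workflow would run this as a step (e.g., `python gate.py findings.json`) and the repository's branch protection rules would list that check as required.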
Outcome:
By merging technical depth with ethical considerations, this talk delivers a roadmap to transform code reviews from a bottleneck into a strategic asset—slashing risks, boosting maintainability, and freeing teams to innovate securely.
Język prezentacji/Language: EN
Czego uczestnicy nauczą się z Twojego wystąpienia? / Key takeaways:
- AI Integration Made Simple: Embed SonarQube, Codacy, and GitHub Actions into CI/CD pipelines for real-time vulnerability detection.
- Cut False Positives by 25%: Use Retrieval-Augmented Generation (RAG) and AI-assisted triage to prioritize critical threats.
- Refactor with Metrics-Driven Precision: Eliminate code smells like long methods and dead code using AI-guided automation.
- Ethical AI in Action: Mitigate biases in training data and navigate tool limitations.
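As one concrete illustration of the "long method" smell from the takeaways above, static analysis can flag oversized functions before an AI assistant is even asked to refactor them. The sketch below uses Python's standard ast module; the 30-line threshold is an arbitrary example, not a figure from the talk.

```python
import ast

def long_functions(source: str, max_lines: int = 30) -> list[tuple[str, int]]:
    """Return (name, line_count) for functions longer than max_lines."""
    tree = ast.parse(source)
    smells = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is populated by ast.parse on Python 3.8+.
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                smells.append((node.name, length))
    return smells

code = "def tiny():\n    return 1\n\ndef big():\n" + "    x = 1\n" * 40
print(long_functions(code))
```

In an AI-guided workflow, the flagged functions (with their line counts as a metric) become the prompt targets for refactoring suggestions, keeping the process metrics-driven rather than ad hoc.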