
Michal is a Software Quality Manager at Viessmann, where he focuses on quality processes in project deliveries. His responsibilities also include Quality KPIs and monitoring system design. Passionate about fast and efficient testing, Michal provides valuable insights into test harness design, solutions for industrialisation projects, scope definition, and efficient test reporting.
Beyond his role at Viessmann, Michal is dedicated to sharing his expertise with others and helping them grow both personally and professionally. He is actively involved in quality management, project consulting, and requirements analysis, and currently leverages this extensive experience as a Quality Management Consultant.
In his free time, Michal enjoys archery, fishing, and a good glass of rum.
Prelekcja/Presentation:
Critical Thinking Rules in AI-Enhanced Software Testing
In a rapidly evolving technological landscape, the integration of AI chat assistants, tools, and other AI-powered extensions into software development and testing processes presents unprecedented opportunities and challenges.
AI enthusiasts everywhere promise to revolutionise testing practices through AI-powered automation, predictive analytics, and intelligent test generation, yet the deeper we look into the details of the proposed solutions, the more they appear overpromised.
There is a need to approach these new technologies with objectivity rather than unchecked enthusiasm or excessive skepticism.
This presentation introduces a systematic approach: applying critical thinking principles to evaluate and integrate AI-powered testing tools and practices. We'll explore how testing professionals can leverage AI capabilities while maintaining rigorous testing standards and avoiding the common pitfalls of both over-reliance and under-utilisation. Attendees will learn practical strategies and examples, based on critical thinking rules, for assessing AI enhancement promises beyond marketing claims, understanding actual capabilities and limitations, and identifying appropriate AI-enhanced usage within their testing environments.
The session will cover the essential aspects of AI evaluation. We'll also address the human factor in AI integration, discussing how testing roles evolve rather than disappear in an AI-enhanced environment.
Special attention will be given to common biases that influence technology adoption decisions and how to overcome them through structured evaluation. Participants will be presented with practical rules and criteria for making informed decisions about AI integration in their testing practices, helping them ensure they can effectively expand testing capabilities while maintaining professional quality.
Język prezentacji/Language: EN