Quality of AI-enabled systems (Q4AI) is recognized as a difficult challenge in both research and practice. Many of these challenges are driven by the data-dependent nature of AI components, whose functionality is determined by characteristics (features) of training and operational data rather than by traditional component specifications from which test cases are often derived. This data dependency also causes AI components to drift as the characteristics of operational data change over time, making QA activities such as runtime monitoring essential components of AI-enabled systems.
A complementary aspect of Quality in the Age of AI is the use of AI to support quality activities and processes (AI4Q), such as applying AI techniques to test data and test case generation, fault localization in source code, and analysis of runtime log data to identify problems and courses of action. Challenges in this area stem from the scarcity of training data of sufficient quality and quantity, and of reliable oracles, both of which are critical for model performance and accuracy.
With the increase in complexity, size, and ubiquity of AI-enabled systems, as well as advances in AI including the growing popularity of large language models (LLMs), it is necessary to continue exploring Quality in the Age of AI. We therefore seek novel contributions investigating advances in both Q4AI and AI4Q.
Recent advances in artificial intelligence (AI), especially in machine learning (ML), deep learning (DL), and the underlying data engineering techniques, as well as their integration into software-based systems across all domains, raise new challenges for engineering modern AI-based systems. This makes the investigation of quality aspects in machine learning, AI, and data analytics an essential topic. AI-based systems are data-intensive, continuously evolving, and self-adapting, which calls for new constructive and analytical quality assurance approaches to guarantee their quality during development and operation in live environments. On the constructive side, for instance, new process models, requirements engineering approaches, and continuous integration and deployment models such as MLOps are needed. On the analytical side, for instance, new data, offline, and online testing approaches are needed for AI-based systems.
The scope of this track is Quality in the Age of AI. The topics of interest include, but are not limited to:
Quality of AI-enabled Systems:
Elicitation and specification of quality requirements for AI systems
Testing techniques for AI components and systems
Data quality processes
Tools to support software quality activities in AI systems
Runtime monitoring of AI systems
Certification processes for AI components and systems
Quality metrics for AI systems and components
AI Supporting Software Quality Processes:
AI for test case generation
AI for test data generation
AI for quality requirements generation
AI for runtime log analysis
AI for fault localization
Analytical and constructive quality assurance for AI-based systems
System and software architecture of AI-based systems
Data management and quality for AI-based systems
Data, offline and online testing approaches
Runtime monitoring, coverage and trace analysis of data, models and code
Development processes and organization for machine learning, AI and data analytics
Non-functional quality aspects of AI-based systems
Quality models, standards and guidelines for developing AI-based systems
Empirical studies on quality aspects in machine learning, AI, and data analytics
Chairs: Dr. Katerina Tzimourta (University of Ioannina, Greece) and Dr. Boni Garcia (Universidad Carlos III de Madrid, Spain)
Program Committee:
Julien Siebert, Fraunhofer-Institut für Experimentelles Software Engineering IESE, Germany
Neelofar Neelofar, RMIT University, Australia
Rachel Brower-Sinning, Carnegie Mellon University, USA
Xiao-Y Zhang, National Institute of Informatics, Japan
Zhenya Zhang, Kyushu University, Japan
I am an Associate Professor (with tenure) at Universidad Carlos III de Madrid (UC3M) in Spain. My main research interest is software engineering, with a special focus on automated testing. I am a tech lead at Selenium and the creator and maintainer of several projects belonging to its ecosystem, such as WebDriverManager, Selenium-Jupiter, and BrowserWatcher. I wrote the books Mastering Software Testing with JUnit 5 (Packt Publishing, 2017) and Hands-On Selenium WebDriver with Java (O'Reilly Media, 2022). Moreover, I am the author of more than 55 research papers in different journals, magazines, and conferences. Occasionally, I speak at workshops, meetups, and other international events. I tweet about test automation and open source.