As Artificial Intelligence (AI) continues to drive innovation, Large Language Models (LLMs) are transforming the landscape of Software Engineering (SE). Beyond enhancing automated code generation, these models are pivotal in addressing complex software development challenges such as bug detection and fixing, vulnerability detection and remediation, and automated testing. While these solutions have shown impressive results in the SE field, their application raises reliability and security concerns.
Addressing the reliability and security of AI-based solutions for SE requires careful attention to data quality, model performance, monitoring, privacy measures, security practices, and ongoing maintenance.
The workshop aims to bring together researchers and industrial practitioners from both the AI and SE communities to collaborate, share experiences, chart directions for future research, and encourage the adoption of reliable and secure AI solutions for challenges specific to software engineering. We encourage submissions targeting interdisciplinary research, in particular on the topics of interest.
We particularly invite submissions focusing on innovative uses of LLMs in software development, studies of their reliability and security, and explorations of the ethical considerations in their deployment.
ReSAISE 2024 is co-located with the 35th IEEE International Symposium on Software Reliability Engineering (ISSRE 2024).
This call for papers invites researchers and practitioners to explore the reliability and security aspects of AI in the Software Engineering (SE) field. The workshop will cover a wide range of topics, including but not limited to:
The workshop is partially supported by the University of Naples Federico II under the MUR PRIN 2022 program, project FLEGREA. For more information about the project, please visit https://flegrea.github.io/.