About ReSAISE
The integration of Artificial Intelligence (AI) into Software Engineering (SE) has advanced significantly with the adoption of Large Language Models (LLMs), which automate tasks such as code generation, bug detection, vulnerability patching, and testing. The recent emergence of multi-agent LLM systems, in which multiple specialized agents collaborate to perform complex tasks, has introduced new paradigms in software development. These systems promise enhanced robustness and scalability, but they also present unique challenges related to coordination, reliability, security, and ethical considerations.
ReSAISE’25 aims to bring together researchers and practitioners from the AI and SE communities to explore the development and deployment of reliable and secure AI solutions in software engineering. While the workshop continues to focus on a broad range of topics at the intersection of AI and SE, this edition acknowledges the growing interest in system-level LLMs and multi-agent LLM systems, reflecting their emerging significance in the field.
We welcome contributions that address the following areas:
- Innovative applications of LLMs and multi-agent LLM systems at the intersection of SE and key non-functional attributes, with a particular focus on reliability, maintainability, scalability, and security.
- Studies on the reliability and security of AI models, including multi-agent systems, examining the robustness, fault tolerance, and security vulnerabilities of AI-driven software engineering tools and methodologies.
- Benchmarking methodologies and frameworks for evaluating AI tools in SE, including reproducibility, standardization, and comparison across models and tasks.
- Explainability and accountability of AI-based decisions in software development workflows, including the role of explainable AI (XAI) techniques in increasing trust and usability of AI-generated outputs.
- Analysis of the ethical implications of integrating AI technologies into software engineering practices, including fairness, transparency, and accountability.
We also encourage submissions reporting negative results or unexpected findings, provided they offer valuable insights and lessons for the community. In a rapidly evolving field, understanding what does not work is as important as discovering what does.
ReSAISE 2025 is co-located with the 36th IEEE International Symposium on Software Reliability Engineering (ISSRE 2025).
Topics of Interest
This call for papers invites researchers and practitioners to explore the reliability and security aspects of AI in SE. The workshop covers a wide range of topics, including but not limited to:
AI Security and Privacy:
- Adversarial attacks and defenses
- Privacy-preserving AI techniques
- Security threats and countermeasures
- Secure model training
- Authentication and access control
- Compliance and regulatory requirements
- AI-generated offensive security code
- Security challenges in multi-agent LLM systems, including inter-agent communication vulnerabilities and coordination risks
AI System Integrity and Quality:
- Data quality and bias mitigation
- Evaluation approaches and benchmarks
- Robustness and resilience of AI systems
- Incident response and recovery
- System monitoring and maintenance
- Emergent behaviors in multi-agent LLM systems and their impact on system integrity
- Explainable AI techniques for understanding and validating software engineering tasks
- Benchmark design and evaluation metrics for AI-based software engineering tools
- Reproducibility and comparability of experimental results across AI systems in SE
AI Implementation and Operation:
- Secure deployment and integration
- AI-enabled threat detection and defect detection
- Secure code generation and program synthesis
- Automated testing
- Code analysis and refinement
- Coordination mechanisms and communication protocols in multi-agent LLM systems
Innovations in AI Applications:
- AI coding assistants
- Explainable AI in Software Engineering
- Program repair
- Green AI for sustainable software development
- Multi-agent LLM frameworks for collaborative software development tasks
Human-AI Collaboration and Code Understanding:
- Human vs AI-generated code: comparative evaluation, attribution, and trust
- Explainable AI for software engineering tasks and developer decision support
- Human-in-the-loop systems and co-development environments
- Perception and trust of AI-generated artifacts by developers
Experiences and Case Studies:
- Open issues
- Practical experiences
- Real-world case studies
- Case studies on deploying multi-agent LLM systems in software engineering projects