The SCL Workshop welcomes contributions on all aspects of learning in non-stationary and streaming environments, spanning both theoretical foundations and practical applications across different domains (e.g., IoT, robotics, finance, monitoring systems, basic sciences, and industrial applications), and across a wide range of data types (e.g., tabular data, time series, spatiotemporal data, graphs, logs, multimedia, and data streams).
Paper submission deadline: May 31st 2026
Notification to the authors: June 30th 2026
Camera ready submission: July 10th 2026
All deadlines are at 23:59, Anywhere on Earth (AoE).
We welcome submissions of completed research, preliminary results, and innovative ideas in the form of full and short papers. Submitted papers should not have been previously published or accepted for publication in substantially similar form in any peer-reviewed venue, such as a journal, conference, or workshop. Papers must be written in English and formatted in LaTeX, following the guidelines of the ECML-PKDD author kit.
We will consider two submission types:
Full papers (up to 12 pages) may include research contributions, reproducibility or replicability studies, as well as case studies. They should clearly situate the work within the state of the art and describe the proposed methodology, experimental setting, or real-world application in sufficient detail.
Short papers (up to 8 pages) include position papers, preliminary or ongoing work, and practice or experience reports. They may introduce new perspectives on the workshop topics or describe real-world scenarios and lessons learned.
Submissions must not exceed the indicated page limits, including all diagrams and references. All submissions will go through a double-blind review process and will be evaluated by at least two reviewers based on relevance to the workshop, novelty/originality, significance, technical quality and correctness, quality and clarity of presentation, quality of references, and reproducibility. Submitted papers will be rejected without review if they are not properly anonymized, do not comply with the template, or do not follow the above guidelines.
The submission link will be available soon.
Authors may have a (non-anonymous) preprint published online, but it must not be cited in the submitted paper, in order to preserve anonymity; reviewers will be asked not to search for it. Authors are also strongly encouraged to follow the best practices of Reproducible Research by making available the data and software tools needed to reproduce the results reported in their papers. Tools such as https://anonymous.4open.science are recommended for sharing code anonymously.
The accepted papers and the material generated during the meeting will be made available on the workshop website. The workshop proceedings will be published as a revised post-proceedings volume in Springer's Communications in Computer and Information Science (CCIS) series. Moreover, the authors of selected papers may be invited to submit an extended version to a special issue of a top-ranking journal.
Notice: In accordance with the ECML PKDD 2026 policy, the SCL Workshop is an in-person workshop, so each accepted paper must be presented in person. Each accepted workshop paper must be accompanied by at least one distinct full author registration, completed by the early registration cut-off date.
The workshop encourages submissions presenting novel algorithms, architectures, evaluation frameworks, benchmarks, and application-driven studies, as well as position papers and perspectives outlining open challenges and future research directions in SCL. Topics include, but are not limited to:
Continual and lifelong learning
Streaming Machine Learning
Online learning in non-stationary environments
Online continual learning
Streaming Continual Learning: Unified perspectives on Streaming Machine Learning and Continual Learning
Adaptation and Non-Stationarity
AutoML/Automated adaptation
Real and virtual drift detection
Real and virtual drift handling
Transfer learning in streaming settings
Domain adaptation and test-time adaptation under distribution shift
Time series analysis in streaming settings
Learning with temporal dependence
Sequential and temporally aware continual learning
Design of realistic Streaming Continual Learning benchmarks
Evaluation protocols and metrics for streaming and continual learning
Trade-offs between accuracy, adaptation speed, memory, and computation
Continuous learning and adaptation of foundation models
Streaming Continual Learning applied to Reinforcement Learning
Efficient updates without retraining from scratch
Resource-constrained and edge learning scenarios