Networks are entering an era where both classical ML and emerging agentic AI are transforming end-to-end networking—from intent capture to closed-loop control across RAN, Core, transport, and edge/cloud. AIxNET welcomes contributions that advance algorithms, architectures, protocols, evaluations, and safeguards for trustworthy, explainable, and safe-to-operate AI-driven networking. We particularly encourage rigorous comparative studies across control layers (SMO/intent vs near-RT vs lower-layer control), and the release of open datasets and artifacts to help the community build together.
AIxNET aims to provide a stimulating, open, dynamic, and friendly forum to co-create the future and spark collaborations across teams. The conference will be a unique opportunity to gather academic and industry research on this crucial topic for 2030 networks. Expect interactive sessions, demos, and ample time for discussion.
Main Topics of Interest include (but are not limited to):
- Agentic AI: from Human Intent to Action Autonomy
  - Networked “xLM” challenges: intent capture/parsing/policy synthesis at SMO and service layers; use of Large, Small, or Machine Language Models (LLMs, SLMs, MLMs)
  - Hierarchical/heterogeneous agents spanning non-RT and near-RT control (e.g., O-RAN RIC), Core CNFs, and edge resources
  - Agentic 6G functions
  - Interconnection and collaboration between AI agents
  - Tools and protocols for network-facing agents (e.g., MCP-enabled clients/servers), conflict resolution, safe rollbacks
- New paradigms for networking: from Classical ML to xLM-based Control at Scale
  - Supervised/unsupervised/self-supervised learning for prediction, anomaly detection, resource allocation, and QoE optimization
  - ML and LLM techniques for scheduling, slicing, mobility, and energy saving; cross-domain orchestration across RAN/Core/transport for B5G and 6G
  - Programmable data planes (P4/eBPF/SDN) with ML-in-the-loop; NWDAF-enabled analytics
  - Challenges for access networks and edge networking; use of alternative models (SLM, TRM)
  - Data collection and labeling
- Comparative Designs Across Layers: SMO/Intent vs Near-RT vs Lower-Layer Control
  - Side-by-side evaluations of top-down (intent-driven) vs bottom-up (local) autonomy
  - Responsibility split across SMO policies, RIC xApps/rApps, Core functions, and device/edge controllers
  - Stability, latency, and safety; arbitration under competing objectives (QoE, energy, cost, SLAs)
  - Cross-layer observability, auditability, and explainability methodologies
- Explainability and Trustworthiness: Bias and Functional Safety
  - Human-in-the-loop supervision and autonomy levels for safe operations
  - Explainability for operator oversight (pre-/post-hoc methods, rationales, provenance, accountability logs)
  - Security and governance for AI-operated changes (access control, authorization, verification, compliance-by-design)
  - Possible bias sources and mitigation (data, prompts, tools, policies); fairness in resource allocation and service admission
- Evaluation, Benchmarks, Open Datasets, and Experimentation
  - Public datasets/benchmarks for RAN/Core/transport/edge; simulated vs real testbeds
  - Evaluation methodology and construction of meaningful KPIs (e.g., MTTR, SLOs, energy–QoE trade-offs…)
  - Reproducible pipelines, artifact sharing, insightful negative results, and robustness to drift
  - Sustainability and cost modeling (e.g., compute budgets, edge vs cloud placement)
Submission types and guidelines
- Full Papers (up to 8 pages), Short Papers (up to 5 pages), Demos/Positions (2–4 pages). Page limits include references.
- Format: two-column IEEE conference style (PDF). Detailed instructions and the submission link will be posted on the AIxNET website.
- Open artifacts are encouraged: release code/data/measurement scripts when possible; otherwise provide high-fidelity synthetic surrogates or detailed reproduction recipes.
- Comparative studies must clearly state the targeted control layer(s) and report stability/latency/safety metrics alongside performance.
- Industrial case studies and live demos are welcome as short or demo papers.

