Theme
LLMTENA 2026 focuses on the emerging synergy between Large Language Models (LLMs) and tensor analysis. The central theme is a bidirectional research exchange: using LLMs to advance tensor-based data analysis, and using tensor methods to improve the efficiency, interpretability, robustness, and scalability of LLMs.
Goals
1. Bridge two research communities that have largely developed in parallel: tensor methods and LLMs.
2. Explore how LLMs can enhance tensor analysis, including tensor completion, decomposition, denoising, visualization, and structured data reasoning.
3. Investigate how tensor methods can improve LLMs, especially through compression, acceleration, interpretability, multi-modal fusion, and efficient fine-tuning.
4. Promote interdisciplinary research combining mathematical foundations, data mining, machine learning, and modern AI.
5. Create a platform for new collaborations and high-quality research on tensor–LLM integration within the ICDM community.
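As a small illustration of the compression direction in Goal 3, the sketch below factorizes a dense weight matrix with a truncated SVD, which is the rank-r matrix special case of the tensor decompositions (CP, Tucker, tensor-train) used to compress LLM layers. The layer sizes and rank here are purely illustrative assumptions, not drawn from any particular model.

```python
import numpy as np

# Hypothetical layer sizes and rank, chosen only for illustration.
rng = np.random.default_rng(0)
d_out, d_in, r = 256, 512, 16

W = rng.standard_normal((d_out, d_in))  # dense weight of a linear layer

# Truncated SVD: W ~= A @ B with A (d_out x r) and B (r x d_in).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]
B = Vt[:r, :]

# The compressed layer applies two thin matrices instead of one dense one.
x = rng.standard_normal(d_in)
y_full = W @ x          # original output
y_low = A @ (B @ x)     # low-rank approximation of the output

params_full = W.size                 # 256 * 512 = 131072 parameters
params_low = A.size + B.size         # 256*16 + 16*512 = 12288 parameters
print(f"params: {params_full} -> {params_low}")
```

Higher-order decompositions generalize this idea by factorizing weight tensors along several modes at once, trading a controlled approximation error for large reductions in parameters and compute.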
Audience
The workshop is intended for researchers, students, and practitioners working in:
- tensor decomposition, tensor networks, and multilinear algebra
- large language models and foundation models
- data mining and machine learning
- multi-modal learning and structured data reasoning
- model compression, acceleration, and efficient AI
- knowledge graphs, recommendation systems, neuroscience, spatio-temporal data, and multi-agent systems
Overall, the target audience includes both tensor-method researchers interested in LLMs and LLM researchers seeking mathematically grounded tools for efficiency, interpretability, and structured data analysis.