DOI | Title | Authors | Abstract | Section | Date |
|---|---|---|---|---|---|
https://doi.org/10.48550/arXiv.2503.18893 | xKV: Cross-Layer SVD for KV-Cache Compression | Chi-Chih Chang, Chien-Yu Lin, Yash Akhauri, Wei-Cheng Lin, Kai-Chiang Wu, Luis Ceze, Mohamed S. Abdelfattah | Large Language Models (LLMs) with long context windows enable powerful applications but come at the cost of high memory consumption to store the Key and Value states (KV-Cache). Recent studies attempted to merge KV-cache from multiple layers into shared representations, yet these approaches either require expensive pre... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18891 | AgentDropout: Dynamic Agent Elimination for Token-Efficient and High-Performance LLM-Based Multi-Agent Collaboration | Zhexuan Wang, Yutong Wang, Xuebo Liu, Liang Ding, Miao Zhang, Jie Liu, Min Zhang | Multi-agent systems (MAS) based on large language models (LLMs) have demonstrated significant potential in collaborative problem-solving. However, they still face substantial challenges of low communication efficiency and suboptimal task performance, making the careful design of the agents' communication topologies par... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18878 | I Have Covered All the Bases Here: Interpreting Reasoning Features in Large Language Models via Sparse Autoencoders | Andrey Galichin, Alexey Dontsov, Polina Druzhinina, Anton Razzhigaev, Oleg Y. Rogov, Elena Tutubalina, Ivan Oseledets | Large Language Models (LLMs) have achieved remarkable success in natural language processing. Recent advances have led to the development of a new class of reasoning LLMs; for example, open-source DeepSeek-R1 has achieved state-of-the-art performance by integrating deep thinking and complex reasoning. Despite these impr... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18769 | AlphaSpace: Enabling Robotic Actions through Semantic Tokenization and Symbolic Reasoning | Alan Dao (Gia Tuan Dao), Dinh Bach Vu, Bui Quang Huy | This paper presents AlphaSpace, a novel methodology designed to enhance the spatial reasoning capabilities of large language models (LLMs) for 3D Cartesian space navigation. AlphaSpace employs a semantics-based tokenization strategy, encoding height information through specialized semantic tokens, and integrates primar... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18760 | Synthetic Function Demonstrations Improve Generation in Low-Resource Programming Languages | Nick McKenna, Xinnuo Xu, Jack Williams, Nick Wilson, Benjamin Van Durme, Christian Poelitz | A key consideration when training an LLM is whether the target language is more or less resourced, whether this is English compared to Welsh, or Python compared to Excel. Typical training data for programming languages consist of real program demonstrations coupled with human-written comments. Here we present novel app... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18751 | Construction Identification and Disambiguation Using BERT: A Case Study of NPN | Wesley Scivetti, Nathan Schneider | Construction Grammar hypothesizes that knowledge of a language consists chiefly of knowledge of form-meaning pairs ("constructions") that include vocabulary, general grammar rules, and even idiosyncratic patterns. Recent work has shown that transformer language models represent at least some constructional patterns, ... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18730 | Predicting the Road Ahead: A Knowledge Graph based Foundation Model for Scene Understanding in Autonomous Driving | Hongkuan Zhou, Stefan Schmid, Yicong Li, Lavdim Halilaj, Xiangtong Yao, Wei Cao | The autonomous driving field has seen remarkable advancements in various topics, such as object recognition, trajectory prediction, and motion planning. However, current approaches face limitations in effectively comprehending the complex evolutions of driving scenes over time. This paper proposes FM4SU, a novel method... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18702 | Unsupervised Acquisition of Discrete Grammatical Categories | David Ph. Shakouri, Crit Cremers, Niels O. Schiller | This article presents experiments performed using a computational laboratory environment for language acquisition experiments. It implements a multi-agent system consisting of two agents: an adult language model and a daughter language model that aims to learn the mother language. Crucially, the daughter agent does not... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18681 | Commander-GPT: Fully Unleashing the Sarcasm Detection Capability of Multi-Modal Large Language Models | Yazhou Zhang, Chunwang Zou, Bo Wang, Jing Qin | Sarcasm detection, as a crucial research direction in the field of Natural Language Processing (NLP), has attracted widespread attention. Traditional sarcasm detection tasks have typically focused on single-modal approaches (e.g., text), but due to the implicit and subtle nature of sarcasm, such methods often fail to y... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18646 | ZeroLM: Data-Free Transformer Architecture Search for Language Models | Zhen-Song Chen, Hong-Wei Ding, Xian-Jia Wang, Witold Pedrycz | Neural architecture search (NAS) provides a systematic framework for automating the design of neural network architectures, yet its widespread adoption is hindered by prohibitive computational requirements. Existing zero-cost proxy methods, while reducing search overhead, demonstrate inadequate performance in architect... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18603 | LANGALIGN: Enhancing Non-English Language Models via Cross-Lingual Embedding Alignment | Jong Myoung Kim, Young-Jun Lee, Ho-Jin Choi, Sangkeun Jung | While Large Language Models have gained attention, many service developers still rely on embedding-based models due to practical constraints. In such cases, the quality of fine-tuning data directly impacts performance, and English datasets are often used as seed data for training non-English models. In this study, we p... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18596 | LinkAlign: Scalable Schema Linking for Real-World Large-Scale Multi-Database Text-to-SQL | Yihan Wang, Peiyu Liu, Xin Yang | Schema linking is a critical bottleneck in achieving human-level performance in Text-to-SQL tasks, particularly in real-world large-scale multi-database scenarios. Addressing schema linking faces two major challenges: (1) Database Retrieval: selecting the correct database from a large schema pool in multi-database sett... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18594 | ClinText-SP and RigoBERTa Clinical: a new set of open resources for Spanish Clinical NLP | Guillem García Subies, Álvaro Barbero Jiménez, Paloma Martínez Fernández | We present a novel contribution to Spanish clinical natural language processing by introducing the largest publicly available clinical corpus, ClinText-SP, along with a state-of-the-art clinical encoder language model, RigoBERTa Clinical. Our corpus was meticulously curated from diverse open sources, including clinical... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18562 | Self-Reported Confidence of Large Language Models in Gastroenterology: Analysis of Commercial, Open-Source, and Quantized Models | Nariman Naderi, Seyed Amir Ahmad Safavi-Naini, Thomas Savage, Zahra Atf, Peter Lewis, Girish Nadkarni, Ali Soroush | This study evaluated self-reported response certainty across several large language models (GPT, Claude, Llama, Phi, Mistral, Gemini, Gemma, and Qwen) using 300 gastroenterology board-style questions. The highest-performing models (GPT-o1 preview, GPT-4o, and Claude-3.5-Sonnet) achieved Brier scores of 0.15-0.2 and AUR... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18539 | Natural Language Processing for Electronic Health Records in Scandinavian Languages: Norwegian, Swedish, and Danish | Ashenafi Zebene Woldaregay, Jørgen Aarmo Lund, Phuong Dinh Ngo, Mariyam Tayefi, Joel Burman, Stine Hansen, Martin Hylleholt Sillesen, Hercules Dalianis, Robert Jenssen, Lindsetmo Rolf Ole, Karl Øyvind Mikalsen | Background: Clinical natural language processing (NLP) refers to the use of computational methods for extracting, processing, and analyzing unstructured clinical text data, and holds a huge potential to transform healthcare in various clinical tasks. Objective: The study aims to perform a systematic review to comprehen... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18526 | SciClaims: An End-to-End Generative System for Biomedical Claim Analysis | Raúl Ortega, José Manuel Gómez-Pérez | Validating key claims in scientific literature, particularly in biomedical research, is essential for ensuring accuracy and advancing knowledge. This process is critical in sectors like the pharmaceutical industry, where rapid scientific progress requires automation and deep domain expertise. However, current solutions... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18502 | Autoregressive Language Models for Knowledge Base Population: A case study in the space mission domain | Andrés García-Silva, José Manuel Gómez-Pérez | Knowledge base population (KBP) plays a crucial role in building and keeping knowledge bases up-to-date in organizations by leveraging domain corpora. Motivated by the increasingly large context windows supported by large language models, we propose to fine-tune an autoregressive language model for end-to-end KBP. O... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18491 | MAGIC-VQA: Multimodal And Grounded Inference with Commonsense Knowledge for Visual Question Answering | Shuo Yang, Siwen Luo, Soyeon Caren Han, Eduard Hovy | Visual Question Answering (VQA) requires reasoning across visual and textual modalities, yet Large Vision-Language Models (LVLMs) often lack integrated commonsense knowledge, limiting their robustness in real-world scenarios. To address this, we introduce MAGIC-VQA, a novel framework that enhances VQA by systematically... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18485 | Whispering in Amharic: Fine-tuning Whisper for Low-resource Language | Dawit Ketema Gete, Bedru Yimam Ahamed, Tadesse Destaw Belay, Yohannes Ayana Ejigu, Sukairaj Hafiz Imam, Alemu Belay Tessema, Mohammed Oumer Adem, Tadesse Amare Belay, Robert Geislinger, Umma Aliyu Musa, Martin Semmann, Shamsuddeen Hassan Muhammad, Henning Schreiber, Seid Muhie Yimam | This work explores fine-tuning OpenAI's Whisper automatic speech recognition (ASR) model for Amharic, a low-resource language, to improve transcription accuracy. While the foundational Whisper model struggles with Amharic due to limited representation in its training data, we fine-tune it using datasets like Mozilla Co... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18471 | Words as Bridges: Exploring Computational Support for Cross-Disciplinary Translation Work | Calvin Bao, Yow-Ting Shiue, Marine Carpuat, Joel Chan | Scholars often explore literature outside of their home community of study. This exploration process is frequently hampered by field-specific jargon. Past computational work often focuses on supporting translation work by removing jargon through simplification and summarization; here, we explore a different approach th... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18432 | Teaching LLMs for Step-Level Automatic Math Correction via Reinforcement Learning | Junsong Li, Jie Zhou, Yutao Yang, Bihao Zhan, Qianjun Pan, Yuyang Ding, Qin Chen, Jiang Bo, Xin Lin, Liang He | Automatic math correction aims to check students' solutions to mathematical problems via artificial intelligence technologies. Most existing studies focus on judging the final answer at the problem level, while they ignore detailed feedback on each step in a math problem-solving process, which requires abilities of sem... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18360 | J&H: Evaluating the Robustness of Large Language Models Under Knowledge-Injection Attacks in Legal Domain | Yiran Hu, Huanghai Liu, Qingjing Chen, Ning Zheng, Chong Wang, Yun Liu, Charles L.A. Clarke, Weixing Shen | As the scale and capabilities of Large Language Models (LLMs) increase, their applications in knowledge-intensive fields such as legal domain have garnered widespread attention. However, it remains doubtful whether these LLMs make judgments based on domain knowledge for reasoning. If LLMs base their judgments solely on... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18296 | Surgical Action Planning with Large Language Models | Mengya Xu, Zhongzhen Huang, Jie Zhang, Xiaofan Zhang, Qi Dou | In robot-assisted minimally invasive surgery, we introduce the Surgical Action Planning (SAP) task, which generates future action plans from visual inputs to address the absence of intraoperative predictive planning in current intelligent applications. SAP shows great potential for enhancing intraoperative guidance and... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18293 | Fact-checking AI-generated news reports: Can LLMs catch their own lies? | Jiayi Yao, Haibo Sun, Nianwen Xue | In this paper, we evaluate the ability of Large Language Models (LLMs) to assess the veracity of claims in "news reports" generated by themselves or other LLMs. Our goal is to determine whether LLMs can effectively fact-check their own content, using methods similar to those used to verify claims made by humans. Our ... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18290 | When is dataset cartography ineffective? Using training dynamics does not improve robustness against Adversarial SQuAD | Paul K. Mandal | In this paper, I investigate the effectiveness of dataset cartography for extractive question answering on the SQuAD dataset. I begin by analyzing annotation artifacts in SQuAD and evaluate the impact of two adversarial datasets, AddSent and AddOneSent, on an ELECTRA-small model. Using training dynamics, I partition SQ... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18288 | Sun-Shine: A Large Language Model for Tibetan Culture | Cheng Huang, Fan Gao, Nyima Tashi, Yutong Liu, Xiangxiang Wang, Thupten Tsering, Ban Ma-bao, Renzeg Duojie, Gadeng Luosang, Rinchen Dongrub, Dorje Tashi, Xiao Feng, Yongbin Yu | Tibetan, a minority language in China, features a highly intricate grammatical structure, characterized by four verb tenses and a tense system with frequent irregularities, contributing to its extensive inflectional diversity. Recently, advances in Large Language Models (LLMs) have transformed the paradigm in many doma... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18260 | Bridging Emotions and Architecture: Sentiment Analysis in Modern Distributed Systems | Mahak Shah, Akaash Vishal Hazarika, Meetu Malhotra, Sachin C. Patil, Joshit Mohanty | Sentiment analysis is a field within NLP that has gained importance because it is applied in various areas such as social media surveillance, customer feedback evaluation, and market research. At the same time, distributed systems allow for effective processing of large amounts of data. Therefore, this paper examines h... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18253 | Enhancing Multi-Label Emotion Analysis and Corresponding Intensities for Ethiopian Languages | Tadesse Destaw Belay, Dawit Ketema Gete, Abinew Ali Ayele, Olga Kolesnikova, Grigori Sidorov, Seid Muhie Yimam | In this digital world, people freely express their emotions using different social media platforms. As a result, modeling and integrating emotion-understanding models are vital for various human-computer interaction tasks such as decision-making, product and customer feedback analysis, political promotions, marketing r... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18250 | PAD: Towards Efficient Data Generation for Transfer Learning Using Phrase Alignment | Jong Myoung Kim, Young-Jun Lee, Ho-Jin Choi, Sangkeun Jung | Transfer learning leverages the abundance of English data to address the scarcity of resources in modeling non-English languages, such as Korean. In this study, we explore the potential of Phrase Aligned Data (PAD) from standardized Statistical Machine Translation (SMT) to enhance the efficiency of transfer learning. T... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18247 | AfroXLMR-Social: Adapting Pre-trained Language Models for African Languages Social Media Text | Tadesse Destaw Belay, Israel Abebe Azime, Ibrahim Said Ahmad, Idris Abdulmumin, Abinew Ali Ayele, Shamsuddeen Hassan Muhammad, Seid Muhie Yimam | Pretrained Language Models (PLMs) built from various sources are the foundation of today's NLP progress. Language representations learned by such models achieve strong performance across many tasks with datasets of varying sizes drawn from various sources. We present a thorough analysis of domain and task adaptive cont... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18242 | ShED-HD: A Shannon Entropy Distribution Framework for Lightweight Hallucination Detection on Edge Devices | Aneesh Vathul, Daniel Lee, Sheryl Chen, Arthi Tasmia | Large Language Models (LLMs) have demonstrated impressive capabilities on a broad array of NLP tasks, but their tendency to produce hallucinations (plausible-sounding but factually incorrect content) poses severe challenges in high-stakes domains. Existing hallucination detection methods e... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18226 | Mapping Hymns and Organizing Concepts in the Rigveda: Quantitatively Connecting the Vedic Suktas | Venkatesh Bollineni, Igor Crk, Eren Gultepe | Accessing and gaining insight into the Rigveda poses a non-trivial challenge due to its extremely ancient Sanskrit language, poetic structure, and large volume of text. By using NLP techniques, this study identified topics and semantic connections of hymns within the Rigveda that were corroborated by seven well-known g... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18212 | LakotaBERT: A Transformer-based Model for Low Resource Lakota Language | Kanishka Parankusham, Rodrigue Rizk, KC Santosh | Lakota, a critically endangered language of the Sioux people in North America, faces significant challenges due to declining fluency among younger generations. This paper introduces LakotaBERT, the first large language model (LLM) tailored for Lakota, aiming to support language revitalization efforts. Our research has ... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18182 | Exploring Topic Trends in COVID-19 Research Literature using Non-Negative Matrix Factorization | Divya Patel, Vansh Parikh, Om Patel, Agam Shah, Bhaskar Chaudhury | In this work, we apply topic modeling using Non-Negative Matrix Factorization (NMF) on the COVID-19 Open Research Dataset (CORD-19) to uncover the underlying thematic structure and its evolution within the extensive body of COVID-19 research literature. NMF factorizes the document-term matrix into two non-negative matr... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18174 | GINGER: Grounded Information Nugget-Based Generation of Responses | Weronika Łajewska, Krisztian Balog | Retrieval-augmented generation (RAG) faces challenges related to factual correctness, source attribution, and response completeness. To address them, we propose a modular pipeline for grounded response generation that operates on information nuggets-minimal, atomic units of relevant information extracted from retrieved... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18172 | Unmasking Deceptive Visuals: Benchmarking Multimodal Large Language Models on Misleading Chart Question Answering | Zixin Chen, Sicheng Song, Kashun Shum, Yanna Lin, Rui Sheng, Huamin Qu | Misleading chart visualizations, which intentionally manipulate data representations to support specific claims, can distort perceptions and lead to incorrect conclusions. Despite decades of research, misleading visualizations remain a widespread and pressing issue. Recent advances in multimodal large language models (... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18167 | Evaluating Negative Sampling Approaches for Neural Topic Models | Suman Adhya, Avishek Lahiri, Debarshi Kumar Sanyal, Partha Pratim Das | Negative sampling has emerged as an effective technique that enables deep learning models to learn better representations by introducing the paradigm of learn-to-compare. The goal of this approach is to make deep learning models more robust and help them learn better representations by comparing the positive samples against the n... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18132 | MathAgent: Leveraging a Mixture-of-Math-Agent Framework for Real-World Multimodal Mathematical Error Detection | Yibo Yan, Shen Wang, Jiahao Huo, Philip S. Yu, Xuming Hu, Qingsong Wen | Mathematical error detection in educational settings presents a significant challenge for Multimodal Large Language Models (MLLMs), requiring a sophisticated understanding of both visual and textual mathematical content along with complex reasoning capabilities. Though effective in mathematical problem-solving, MLLMs o... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18129 | GeoBenchX: Benchmarking LLMs for Multistep Geospatial Tasks | Varvara Krechetova, Denis Kochedykov | In this paper, we establish a benchmark for evaluating large language models (LLMs) on multi-step geospatial tasks relevant to commercial GIS practitioners. We assess seven leading commercial LLMs (Sonnet 3.5 and 3.7, Haiku 3.5, Gemini 2.0, GPT-4o, GPT-4o mini, and o3-mini) using a simple tool-calling agent equipped wi... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18117 | Detection of Somali-written Fake News and Toxic Messages on the Social Media Using Transformer-based Language Models | Muhidin A. Mohamed, Shuab D. Ahmed, Yahye A. Isse, Hanad M. Mohamed, Fuad M. Hassan, Houssein A. Assowe | The fact that everyone with a social media account can create and share content, and the increasing public reliance on social media platforms as a news and information source bring about significant challenges such as misinformation, fake news, harmful content, etc. Although human content moderation may be useful to an... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18095 | Clarifying Misconceptions in COVID-19 Vaccine Sentiment and Stance Analysis and Their Implications for Vaccine Hesitancy Mitigation: A Systematic Review | Lorena G Barberia, Belinda Lombard, Norton Trevisan Roman, Tatiane C. M. Sousa | Background Advances in machine learning (ML) models have increased the capability of researchers to detect vaccine hesitancy in social media using Natural Language Processing (NLP). A considerable volume of research has identified the persistence of COVID-19 vaccine hesitancy in discourse shared on various social media... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18089 | $D^2LoRA$: Data-Driven LoRA Initialization for Low Resource Tasks | Javad SeraJ, Mohammad Mahdi Mohajeri, Mohammad Javad Dousti | Tuning large language models is essential for optimizing their performance across diverse applications, and is especially important in scenarios with limited data availability, particularly given that the convergence speed of the LoRA method is lower than that of full ... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18085 | Temporal Relation Extraction in Clinical Texts: A Span-based Graph Transformer Approach | Rochana Chaturvedi, Peyman Baghershahi, Sourav Medya, Barbara Di Eugenio | Temporal information extraction from unstructured text is essential for contextualizing events and deriving actionable insights, particularly in the medical domain. We address the task of extracting clinical events and their temporal relations using the well-studied I2B2 2012 Temporal Relations Challenge corpus. This t... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18076 | A Multi-Model Adaptation of Speculative Decoding for Classification | Somnath Roy, Padharthi Sreekar, Srivatsa Narasimha, Anubhav Anand | The current study introduces a novel adaptation of speculative decoding, repurposed from generation to classification tasks. We propose a multi-model framework employing up to three lightweight worker models and a single, more robust judge model analogous to draft models and target model, respectively, in speculative d... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18072 | On the effectiveness of LLMs for automatic grading of open-ended questions in Spanish | Germán Capdehourat, Isabel Amigo, Brian Lorenzo, Joaquín Trigo | Grading is a time-consuming and laborious task that educators must face. It is an important task since it provides feedback signals to learners, and it has been demonstrated that timely feedback improves the learning process. In recent years, the irruption of LLMs has shed light on the effectiveness of automatic gradin... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18071 | Mind with Eyes: from Language Reasoning to Multimodal Reasoning | Zhiyu Lin, Yifei Gao, Xian Zhao, Yunfan Yang, Jitao Sang | Language models have recently advanced into the realm of reasoning, yet it is through multimodal reasoning that we can fully unlock the potential to achieve more comprehensive, human-like cognitive capabilities. This survey provides a systematic overview of the recent multimodal reasoning approaches, categorizing them ... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18069 | Long Is More Important Than Difficult for Training Reasoning Models | Si Shen, Fei Huang, Zhixiao Zhao, Chang Liu, Tiansheng Zheng, Danhao Zhu | Difficult problems, which often result in long reasoning traces, are widely recognized as key factors for enhancing the performance of reasoning models. However, such high-challenge problems are scarce, limiting the size of available datasets. In this paper, we propose a simple method to decouple the reliance on proble... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18063 | Dynamic Task Vector Grouping for Efficient Multi-Task Prompt Tuning | Pieyi Zhang, Richong Zhang, Zhijie Nie | Multi-task prompt tuning utilizes multiple high-resource source tasks to improve performance on low-source target tasks. Existing approaches transfer the soft prompt trained by combining all source tasks or a single "highly similar" source task, one time only. However, we find that the optimal transfer performance often... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18062 | Investigating Recent Large Language Models for Vietnamese Machine Reading Comprehension | Anh Duc Nguyen, Hieu Minh Phi, Anh Viet Ngo, Long Hai Trieu, Thai Phuong Nguyen | Large Language Models (LLMs) have shown remarkable proficiency in Machine Reading Comprehension (MRC) tasks; however, their effectiveness for low-resource languages like Vietnamese remains largely unexplored. In this paper, we fine-tune and evaluate two state-of-the-art LLMs: Llama 3 (8B parameters) and Gemma (7B param... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.18008 | Personalized Language Models via Privacy-Preserving Evolutionary Model Merging | Kyuyoung Kim, Jinwoo Shin, Jaehyung Kim | Personalization in large language models (LLMs) seeks to tailor models to individual user or user group preferences. Prompt-based methods augment queries with user preference information, whereas training-based methods directly encode preferences into model parameters for more effective personalization. Despite achievi... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17994 | Instructing the Architecture Search for Spatial-temporal Sequence Forecasting with LLM | Xin Xue, Haoyi Zhou, Tianyu Chen, Shuai Zhang, Yizhou Long, Jianxin Li | Spatial-temporal sequence forecasting (STSF) is a long-standing research problem with widespread real-world applications. Neural architecture search (NAS), which automates the neural network design, has been shown effective in tackling the STSF problem. However, the existing NAS methods for STSF focus on generating arc... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17965 | Understanding the Effects of RLHF on the Quality and Detectability of LLM-Generated Texts | Beining Xu, Arkaitz Zubiaga | Large Language Models (LLMs) have demonstrated exceptional performance on a range of downstream NLP tasks by generating text that closely resembles human writing. However, the ease of achieving this similarity raises concerns about potential malicious use at scale by bad actors, as LLM-generated text becomes increasing... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17963 | Won: Establishing Best Practices for Korean Financial NLP | Guijin Son, Hyunwoo Ko, Haneral Jung, Chami Hwang | In this work, we present the first open leaderboard for evaluating Korean large language models focused on finance. Operated for about eight weeks, the leaderboard evaluated 1,119 submissions on a closed benchmark covering five MCQA categories: finance and accounting, stock price prediction, domestic company analysis, ... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17952 | SLIDE: Sliding Localized Information for Document Extraction | Divyansh Singh, Manuel Nunez Martinez, Bonnie J. Dorr, Sonja Schmer Galunder | Constructing accurate knowledge graphs from long texts and low-resource languages is challenging, as large language models (LLMs) experience degraded performance with longer input chunks. This problem is amplified in low-resource settings where data scarcity hinders accurate entity and relationship extraction. Contextu... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17936 | An Empirical Study of the Role of Incompleteness and Ambiguity in Interactions with Large Language Models | Riya Naik, Ashwin Srinivasan, Estrid He, Swati Agarwal | Natural language as a medium for human-computer interaction has long been anticipated, and it has been undergoing a sea-change with the advent of Large Language Models (LLMs), which have startling capacities for processing and generating language. Many of us now treat LLMs as modern-day oracles, asking them almost any kind of question... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17933 | Experience Retrieval-Augmentation with Electronic Health Records Enables Accurate Discharge QA | Justice Ou, Tinglin Huang, Yilun Zhao, Ziyang Yu, Peiqing Lu, Rex Ying | To improve the reliability of Large Language Models (LLMs) in clinical applications, retrieval-augmented generation (RAG) is extensively applied to provide factual medical knowledge. However, beyond general medical knowledge from open-ended datasets, clinical case-based knowledge is also critical for effective medical ... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17932 | STShield: Single-Token Sentinel for Real-Time Jailbreak Detection in Large Language Models | Xunguang Wang, Wenxuan Wang, Zhenlan Ji, Zongjie Li, Pingchuan Ma, Daoyuan Wu, Shuai Wang | Large Language Models (LLMs) have become increasingly vulnerable to jailbreak attacks that circumvent their safety mechanisms. While existing defense methods either suffer from adaptive attacks or require computationally expensive auxiliary models, we present STShield, a lightweight framework for real-time jailbroken j... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17922 | WindowKV: Task-Adaptive Group-Wise KV Cache Window Selection for Efficient LLM Inference | Youhui Zuo, Sibo Wei, Chen Zhang, Zhuorui Liu, Wenpeng Lu, Dawei Song | With the advancements in long-context inference capabilities of large language models (LLMs), the KV cache has become one of the foundational components. However, its substantial GPU memory consumption makes KV cache compression a key technique for enabling efficient LLM inference in industrial scenarios. While recent ... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17900 | MedPlan: A Two-Stage RAG-Based System for Personalized Medical Plan Generation | Hsin-Ling Hsu, Cong-Tinh Dao, Luning Wang, Zitao Shuai, Thao Nguyen Minh Phan, Jun-En Ding, Chun-Chieh Liao, Pengfei Hu, Xiaoxue Han, Chih-Ho Hsu, Dongsheng Luo, Wen-Chih Peng, Feng Liu, Fang-Ming Hung, Chenwei Wu | Despite recent success in applying large language models (LLMs) to electronic health records (EHR), most systems focus primarily on assessment rather than treatment planning. We identify three critical limitations in current approaches: they generate treatment plans in a single pass rather than following the sequential... | CL | 23/03/2025 |
https://doi.org/10.48550/arXiv.2503.17882 | Think Before Refusal: Triggering Safety Reflection in LLMs to Mitigate False Refusal Behavior | Shengyun Si, Xinpeng Wang, Guangyao Zhai, Nassir Navab, Barbara Plank | Recent advancements in large language models (LLMs) have demonstrated that fine-tuning and human alignment can render LLMs harmless. In practice, such "harmlessness" behavior is mainly achieved by training models to reject harmful requests, such as "Explain how to burn down my neighbor's house", where the model appropr... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17876 | Satisfactory Medical Consultation based on Terminology-Enhanced Information Retrieval and Emotional In-Context Learning | Kaiwen Zuo, Jing Tang, Hanbing Qin, Binli Luo, Ligang He, Shiyan Tang | Recent advancements in Large Language Models (LLMs) have marked significant progress in understanding and responding to medical inquiries. However, their performance still falls short of the standards set by professional consultations. This paper introduces a novel framework for medical consultation, comprising two mai... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17860 | Enhancing Retrieval Systems with Inference-Time Logical Reasoning | Felix Faltings, Wei Wei, Yujia Bao | Traditional retrieval methods rely on transforming user queries into vector representations and retrieving documents based on cosine similarity within an embedding space. While efficient and scalable, this approach often fails to handle complex queries involving logical constructs such as negations, conjunctions, and d... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17811 | Feather-SQL: A Lightweight NL2SQL Framework with Dual-Model Collaboration Paradigm for Small Language Models | Wenqi Pei, Hailing Xu, Hengyuan Zhao, Shizheng Hou, Han Chen, Zining Zhang, Pingyi Luo, Bingsheng He | Natural Language to SQL (NL2SQL) has seen significant advancements with large language models (LLMs). However, these models often depend on closed-source systems and high computational resources, posing challenges in data privacy and deployment. In contrast, small language models (SLMs) struggle with NL2SQL tasks, exhi... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17810 | ParsiPy: NLP Toolkit for Historical Persian Texts in Python | Farhan Farsi, Parnian Fazel, Sepand Haghighi, Sadra Sabouri, Farzaneh Goshtasb, Nadia Hajipour, Ehsaneddin Asgari, Hossein Sameti | The study of historical languages presents unique challenges due to their complex orthographic systems, fragmentary textual evidence, and the absence of standardized digital representations of text in those languages. Tackling these challenges needs special NLP digital tools to handle phonetic transcriptions and analyz... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17799 | Relation Extraction with Instance-Adapted Predicate Descriptions | Yuhang Jiang, Ramakanth Kavuluru | Relation extraction (RE) is a standard information extraction task playing a major role in downstream applications such as knowledge discovery and question answering. Although decoder-only large language models are excelling in generative tasks, smaller encoder models are still the go-to architecture for RE. In this pa... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17755 | Improving Preference Extraction In LLMs By Identifying Latent Knowledge Through Classifying Probes | Sharan Maiya, Yinhong Liu, Ramit Debnath, Anna Korhonen | Large Language Models (LLMs) are often used as automated judges to evaluate text, but their effectiveness can be hindered by various unintentional biases. We propose using linear classifying probes, trained by leveraging differences between contrasting pairs of prompts, to directly access LLMs' latent knowledge and ext... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17753 | Building Resource-Constrained Language Agents: A Korean Case Study on Chemical Toxicity Information | Hojun Cho, Donghu Kim, Soyoung Yang, Chan Lee, Hunjoo Lee, Jaegul Choo | Language agents powered by large language models (LLMs) face significant deployment challenges in resource-constrained environments, particularly for specialized domains and less-common languages. This paper presents Tox-chat, a Korean chemical toxicity information agent devised within these limitations. We propose two... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17739 | Enhancing Arabic Automated Essay Scoring with Synthetic Data and Error Injection | Chatrine Qwaider, Bashar Alhafni, Kirill Chirkunov, Nizar Habash, Ted Briscoe | Automated Essay Scoring (AES) plays a crucial role in assessing language learners' writing quality, reducing grading workload, and providing real-time feedback. Arabic AES systems are particularly challenged by the lack of annotated essay datasets. This paper presents a novel framework leveraging Large Language Models ... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17684 | Can LLMs Automate Fact-Checking Article Writing? | Dhruv Sahnan, David Corney, Irene Larraz, Giovanni Zagni, Ruben Miguez, Zhuohan Xie, Iryna Gurevych, Elizabeth Churchill, Tanmoy Chakraborty, Preslav Nakov | Automatic fact-checking aims to support professional fact-checkers by offering tools that can help speed up manual fact-checking. Yet, existing frameworks fail to address the key step of producing output suitable for broader dissemination to the general public: while human fact-checkers communicate their findings throu... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17662 | Enhancing Persona Consistency for LLMs' Role-Playing using Persona-Aware Contrastive Learning | Ke Ji, Yixin Lian, Linxu Li, Jingsheng Gao, Weiyuan Li, Bin Dai | In recent years, large language models (LLMs) have achieved breakthrough progress in many dialogue generation tasks. However, their lack of emotion and fine-grained role awareness limits the model's ability to provide personalized and diverse interactions further. Current methods face high costs in collecting high-qual... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17599 | GPBench: A Comprehensive and Fine-Grained Benchmark for Evaluating Large Language Models as General Practitioners | Zheqing Li, Yiying Yang, Jiping Lang, Wenhao Jiang, Yuhang Zhao, Shuang Li, Dingqian Wang, Zhu Lin, Xuanna Li, Yuze Tang, Jiexian Qiu, Xiaolin Lu, Hongji Yu, Shuang Chen, Yuhua Bi, Xiaofei Zeng, Yixian Chen, Junrong Chen, Lin Yao | General practitioners (GPs) serve as the cornerstone of primary healthcare systems by providing continuous and comprehensive medical services. However, due to the community-oriented nature of their practice, uneven training, and resource gaps, the clinical proficiency among GPs can vary significantly across regions and heal... | CL | 22/03/2025 |
https://doi.org/10.48550/arXiv.2503.17579 | Leveraging Human Production-Interpretation Asymmetries to Test LLM Cognitive Plausibility | Suet-Ying Lam, Qingcheng Zeng, Jingyi Wu, Rob Voigt | Whether large language models (LLMs) process language similarly to humans has been the subject of much theoretical and practical debate. We examine this question through the lens of the production-interpretation distinction found in human sentence processing and evaluate the extent to which instruction-tuned LLMs repli... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17523 | Bayesian Teaching Enables Probabilistic Reasoning in Large Language Models | Linlu Qiu, Fei Sha, Kelsey Allen, Yoon Kim, Tal Linzen, Sjoerd van Steenkiste | Artificial intelligence systems based on large language models (LLMs) are increasingly used as agents that interact with users and with the world. To do so successfully, LLMs need to construct internal representations of the world and form probabilistic beliefs about those representations. To provide a user with person... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17514 | Language Models May Verbatim Complete Text They Were Not Explicitly Trained On | Ken Ziyu Liu, Christopher A. Choquette-Choo, Matthew Jagielski, Peter Kairouz, Sanmi Koyejo, Percy Liang, Nicolas Papernot | An important question today is whether a given text was used to train a large language model (LLM). A \emph{completion} test is often employed: check if the LLM completes a sufficiently complex text. This, however, requires a ground-truth definition of membership; most commonly, it is defined as a member based on the $... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17509 | Follow-up Question Generation For Enhanced Patient-Provider Conversations | Joseph Gatto, Parker Seegmiller, Timothy Burdick, Inas S. Khayal, Sarah DeLozier, Sarah M. Preum | Follow-up question generation is an essential feature of dialogue systems as it can reduce conversational ambiguity and enhance modeling complex interactions. Conversational contexts often pose core NLP challenges such as (i) extracting relevant information buried in fragmented data sources, and (ii) modeling parallel ... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17489 | Judge Anything: MLLM as a Judge Across Any Modality | Shu Pu, Yaochen Wang, Dongping Chen, Yuhang Chen, Guohao Wang, Qi Qin, Zhongyi Zhang, Zhiyuan Zhang, Zetong Zhou, Shuang Gong, Yi Gui, Yao Wan, Philip S. Yu | Evaluating generative foundation models on open-ended multimodal understanding (MMU) and generation (MMG) tasks across diverse modalities (e.g., images, audio, video) poses significant challenges due to the complexity of cross-modal interactions. To this end, the idea of utilizing Multimodal LLMs (MLLMs) as automated j... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17485 | SaudiCulture: A Benchmark for Evaluating Large Language Models Cultural Competence within Saudi Arabia | Lama Ayash, Hassan Alhuzali, Ashwag Alasmari, Sultan Aloufi | Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing; however, they often struggle to accurately capture and reflect cultural nuances. This research addresses this challenge by focusing on Saudi Arabia, a country characterized by diverse dialects and rich cultural tradit... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17460 | ConvoGen: Enhancing Conversational AI with Synthetic Data: A Multi-Agent Approach | Reem Gody, Mahmoud Goudy, Ahmed Y. Tawfik | In this paper, we present ConvoGen: an innovative framework for generating synthetic conversational data using multi-agent systems. Our method leverages few-shot learning and introduces iterative sampling from a dynamically updated few-shot hub to create diverse and realistic conversational scenarios. The generated dat... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17456 | Language-specific Neurons Do Not Facilitate Cross-Lingual Transfer | Soumen Kumar Mondal, Sayambhu Sen, Abhishek Singhania, Preethi Jyothi | Multilingual large language models (LLMs) aim towards robust natural language understanding across diverse languages, yet their performance significantly degrades on low-resource languages. This work explores whether existing techniques to identify language-specific neurons can be leveraged to enhance cross-lingual tas... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17425 | Beyond Negation Detection: Comprehensive Assertion Detection Models for Clinical NLP | Veysel Kocaman, Yigit Gul, M. Aytug Kaya, Hasham Ul Haq, Mehmet Butgul, Cabir Celik, David Talby | Assertion status detection is a critical yet often overlooked component of clinical NLP, essential for accurately attributing extracted medical facts. Past studies have narrowly focused on negation detection, leading to underperforming commercial solutions such as AWS Medical Comprehend, Azure AI Text Analytics, and GP... | CL | 21/03/2025 |
https://doi.org/10.48550/arXiv.2503.17407 | A Comprehensive Survey on Long Context Language Modeling | Jiaheng Liu, Dawei Zhu, Zhiqi Bai, Yancheng He, Huanxuan Liao, Haoran Que, Zekun Wang, Chenchen Zhang, Ge Zhang, Jiebin Zhang, Yuanxing Zhang, Zhuo Chen, Hangyu Guo, Shilong Li, Ziqiang Liu, Yong Shan, Yifan Song, Jiayi Tian, Wenhao Wu, Zhejian Zhou, Ruijie Zhu, Junlan Feng, Yang Gao, Shizhu He, Zhoujun Li, Tianyu Liu,... | Efficient processing of long contexts has been a persistent pursuit in Natural Language Processing. With the growing number of long documents, dialogues, and other textual data, it is important to develop Long Context Language Models (LCLMs) that can process and analyze extensive inputs in an effective and efficient wa... | CL | 20/03/2025 |
https://doi.org/10.48550/arXiv.2503.17403 | ChatGPT or A Silent Everywhere Helper: A Survey of Large Language Models | Azim Akhtarshenas, Afshin Dini, Navid Ayoobi | Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), with Chat Generative Pre-trained Transformer (ChatGPT) standing out as a notable example due to its advanced capabilities and widespread applications. This survey provides a comprehensive analysis of ChatGPT,... | CL | 19/03/2025 |
https://doi.org/10.48550/arXiv.2503.18941 | Exploring Training and Inference Scaling Laws in Generative Retrieval | Hongru Cai, Yongqi Li, Ruifeng Yuan, Wenjie Wang, Zhen Zhang, Wenjie Li, Tat-Seng Chua | Generative retrieval has emerged as a novel paradigm that leverages large language models (LLMs) to autoregressively generate document identifiers. Although promising, the mechanisms that underpin its performance and scalability remain largely unclear. We conduct a systematic investigation of training and inference sca... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18892 | SimpleRL-Zoo: Investigating and Taming Zero Reinforcement Learning for Open Base Models in the Wild | Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, Junxian He | DeepSeek-R1 has shown that long chain-of-thought (CoT) reasoning can naturally emerge through a simple reinforcement learning (RL) framework with rule-based rewards, where the training may directly start from the base models-a paradigm referred to as zero RL training. Most recent efforts to reproduce zero RL training h... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18888 | Toward building next-generation Geocoding systems: a systematic review | Zhengcong Yin, Daniel W. Goldberg, Binbin Lin, Bing Zhou, Diya Li, Andong Ma, Ziqian Ming, Heng Cai, Zhe Zhang, Shaohua Wang, Shanzhen Gao, Joey Ying Lee, Xiao Li, Da Huo | Geocoding systems are widely used in both scientific research for spatial analysis and everyday life through location-based services. The quality of geocoded data significantly impacts subsequent processes and applications, underscoring the need for next-generation systems. In response to this demand, this review first... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18866 | Reasoning to Learn from Latent Thoughts | Yangjun Ruan, Neil Band, Chris J. Maddison, Tatsunori Hashimoto | Compute scaling for language model (LM) pretraining has outpaced the growth of human-written texts, leading to concerns that data will become the bottleneck to LM scaling. To continue scaling pretraining in this data-constrained regime, we propose that explicitly modeling and inferring the latent thoughts that underlie... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18825 | EconEvals: Benchmarks and Litmus Tests for LLM Agents in Unknown Environments | Sara Fish, Julia Shephard, Minkai Li, Ran I. Shorrer, Yannai A. Gonczarowski | We develop benchmarks for LLM agents that act in, learn from, and strategize in unknown environments, the specifications of which the LLM agent must learn over time from deliberate exploration. Our benchmarks consist of decision-making tasks derived from key problems in economics. To forestall saturation, the benchmark... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18792 | REALM: A Dataset of Real-World LLM Use Cases | Jingwen Cheng, Kshitish Ghate, Wenyue Hua, William Yang Wang, Hong Shen, Fei Fang | Large Language Models, such as the GPT series, have driven significant industrial applications, leading to economic and societal transformations. However, a comprehensive understanding of their real-world applications remains limited. To address this, we introduce REALM, a dataset of over 94,000 LLM use cases collected... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18773 | BitDecoding: Unlocking Tensor Cores for Long-Context LLMs Decoding with Low-Bit KV Cache | Dayou Du, Shijie Cao, Jianyi Cheng, Ting Cao, Mao Yang | The growing adoption of long-context Large Language Models (LLMs) has introduced significant memory and computational challenges in autoregressive decoding due to the expanding Key-Value (KV) cache. KV cache quantization has emerged as a promising solution, with prior work showing that 4-bit or even 2-bit quantization ... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18680 | ArchSeek: Retrieving Architectural Case Studies Using Vision-Language Models | Danrui Li, Yichao Shi, Yaluo Wang, Ziying Shi, Mubbasir Kapadia | Efficiently searching for relevant case studies is critical in architectural design, as designers rely on precedent examples to guide or inspire their ongoing projects. However, traditional text-based search tools struggle to capture the inherently visual and complex nature of architectural knowledge, often leading to ... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18666 | AgentSpec: Customizable Runtime Enforcement for Safe and Reliable LLM Agents | Haoyu Wang, Christopher M. Poskitt, Jun Sun | Agents built on LLMs are increasingly deployed across diverse domains, automating complex decision-making and task execution. However, their autonomy introduces safety risks, including security vulnerabilities, legal violations, and unintended harmful actions. Existing mitigation methods, such as model-based safeguards... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18570 | Dense Retrieval for Low Resource Languages -- the Case of Amharic Language | Tilahun Yeshambel, Moncef Garouani, Serge Molina, Josiane Mothe | This paper reports some difficulties and some results when using dense retrievers on Amharic, a low-resource language spoken by some 120 million people. The efforts made and difficulties faced by Addis Ababa University toward Amharic Information Retrieval will be developed during the presentation. | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18565 | Distil-xLSTM: Learning Attention Mechanisms through Recurrent Structures | Abdoul Majid O. Thiombiano, Brahim Hnich, Ali Ben Mrad, Mohamed Wiem Mkaouer | The current era of Natural Language Processing (NLP) is dominated by Transformer models. However, novel architectures relying on recurrent mechanisms, such as xLSTM and Mamba, have been proposed as alternatives to attention-based models. Although computation is done differently than with the attention mechanism... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18556 | Instruction-Aligned Visual Attention for Mitigating Hallucinations in Large Vision-Language Models | Bin Li, Dehong Gao, Yeyuan Wang, Linbo Jin, Shanqing Yu, Xiaoyan Cai, Libin Yang | Despite the significant success of Large Vision-Language models(LVLMs), these models still suffer hallucinations when describing images, generating answers that include non-existent objects. It is reported that these models tend to over-focus on certain irrelevant image tokens that do not contain critical information f... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18494 | Verbal Process Supervision Elicits Better Coding Agents | Hao-Yuan Chen, Cheng-Pong Huang, Jui-Ming Yao | The emergence of large language models and their applications as AI agents have significantly advanced state-of-the-art code generation benchmarks, transforming modern software engineering tasks. However, even with test-time computed reasoning models, these systems still struggle with complex software engineering chall... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18492 | Safeguarding Mobile GUI Agent via Logic-based Action Verification | Jungjae Lee, Dongjae Lee, Chihun Choi, Youngmin Im, Jaeyoung Wi, Kihong Heo, Sangeun Oh, Sunjae Lee, Insik Shin | Large Foundation Models (LFMs) have unlocked new possibilities in human-computer interaction, particularly with the rise of mobile Graphical User Interface (GUI) Agents capable of interpreting GUIs. These agents promise to revolutionize mobile computing by allowing users to automate complex mobile tasks through simple ... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18484 | PM4Bench: A Parallel Multilingual Multi-Modal Multi-task Benchmark for Large Vision Language Model | Junyuan Gao, Jiahe Song, Jiang Wu, Runchuan Zhu, Guanlin Shen, Shasha Wang, Xingjian Wei, Haote Yang, Songyang Zhang, Weijia Li, Bin Wang, Dahua Lin, Lijun Wu, Conghui He | Existing multilingual benchmarks for Large Vision Language Models (LVLMs) suffer from limitations including language-specific content biases, disjointed multimodal input formats, and a lack of safety evaluation. To address these gaps, we propose PM4Bench, the first Parallel Multilingual Multi-Modal Multi-task Benchmark... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18476 | Global-Local Tree Search for Language Guided 3D Scene Generation | Wei Deng, Mengshi Qi, Huadong Ma | Large Vision-Language Models (VLMs), such as GPT-4, have achieved remarkable success across various fields. However, there are few studies on 3D indoor scene generation with VLMs. This paper considers this task as a planning problem subject to spatial and layout common sense constraints. To solve the problem with a VLM... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18458 | StableGS: A Floater-Free Framework for 3D Gaussian Splatting | Luchao Wang, Qian Ren, Kaiming He, Hua Wang, Zhi Chen, Yaohua Tang | Recent years have witnessed remarkable success of 3D Gaussian Splatting (3DGS) in novel view synthesis, surpassing prior differentiable rendering methods in both quality and efficiency. However, its training process suffers from coupled opacity-color optimization that frequently converges to local minima, producing flo... | CL | 24/03/2025 |
https://doi.org/10.48550/arXiv.2503.18435 | On the Perception Bottleneck of VLMs for Chart Understanding | Junteng Liu, Weihao Zeng, Xiwen Zhang, Yijun Wang, Zifei Shan, Junxian He | Chart understanding requires models to effectively analyze and reason about numerical data, textual elements, and complex visual components. Our observations reveal that the perception capabilities of existing large vision-language models (LVLMs) constitute a critical bottleneck in this process. In this study, we delve... | CL | 24/03/2025 |
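The rows above follow a fixed six-column, pipe-delimited schema: DOI | Title | Authors | Abstract | Section | Date, with dates in day/month/year form. A minimal parsing sketch is shown below; the field names and the sample row text are illustrative assumptions, not part of any published dataset API.

```python
from datetime import datetime

# Column names for the six-field schema used by the rows above (names are assumed).
FIELDS = ["doi", "title", "authors", "abstract", "section", "date"]

def parse_row(line: str) -> dict:
    """Split one pipe-delimited row into the dataset's six columns."""
    parts = [p.strip() for p in line.split(" | ")]
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} columns, got {len(parts)}")
    row = dict(zip(FIELDS, parts))
    # Dates are day/month/year, e.g. "24/03/2025".
    row["date"] = datetime.strptime(row["date"], "%d/%m/%Y").date()
    return row

# Sample row, abbreviated from the listing above.
row = parse_row(
    "https://doi.org/10.48550/arXiv.2503.18941 | Exploring Training and Inference "
    "Scaling Laws in Generative Retrieval | Hongru Cai, Yongqi Li | Generative "
    "retrieval has emerged... | CL | 24/03/2025"
)
print(row["section"], row["date"].isoformat())
```

Splitting on `" | "` rather than `"|"` keeps comma-separated author lists and abstract punctuation intact, since only the column separators are padded with spaces.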