[2510.25662] User Misconceptions of LLM-Based Conversational Programming Assistants

Dataemia




[Submitted on 29 Oct 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

By Gabrielle O'Brien and 5 other authors

Abstract: Programming assistants powered by large language models (LLMs) have become widely available, with conversational assistants like ChatGPT particularly accessible to novice programmers. However, varied tool capabilities and inconsistent availability of extensions (web search, code execution, retrieval-augmented generation) create opportunities for user misconceptions that may lead to over-reliance, unproductive practices, or insufficient quality control. We characterize misconceptions that users of conversational LLM-based assistants may have in programming contexts through a two-phase approach: first brainstorming and cataloging potential misconceptions, then conducting qualitative analysis of Python-programming conversations from the WildChat dataset. We find evidence that users have misplaced expectations about features like web access, code execution, and non-text outputs. We also note the potential for deeper conceptual issues around information requirements for debugging, validation, and optimization. Our findings reinforce the need for LLM-based tools to more clearly communicate their capabilities to users and empirically ground aspects that require clarification in programming contexts.
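To make the "code execution" misconception concrete: an assistant without an execution extension can only predict what a program would print, so users should verify output by running the code themselves. The sketch below (a hypothetical illustration, not drawn from the paper's data) shows one way to capture a snippet's actual output locally in Python:

```python
import contextlib
import io

# A snippet a user might paste into a chat, asking the assistant to "run" it.
# An assistant without code execution can only guess at this output.
snippet = "print(sum(range(10)))"

# Actually executing it locally and capturing stdout removes the guesswork.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(snippet)

actual_output = buf.getvalue().strip()
print(actual_output)  # 45
```

Running the code locally, rather than trusting an assistant's claimed output, is exactly the kind of validation step the paper argues users may skip when they assume the tool executed their program.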

Submission history
From: Gabrielle O'Brien
[v1] Wed, 29 Oct 2025 16:23:46 UTC (1,026 KB)
[v2] Thu, 26 Feb 2026 20:31:32 UTC (165 KB)




