Copilot Chat's DeepSeek Reasoning Bug: Impacting Multi-Turn AI Conversations and GitHub Activity

The integration of advanced AI models into developer tools like GitHub Copilot Chat promises significant productivity boosts and streamlined workflows. However, as a recent GitHub Community discussion highlights, even sophisticated setups can hit unexpected hurdles. This insight examines a bug that breaks multi-turn conversations when Copilot Chat is used with certain custom models, disrupting seamless GitHub activity and AI-assisted development.

Illustration of a developer facing a broken connection between a chat tool and an AI service.

The DeepSeek Reasoning Disconnect in Copilot Chat

A user, KureKaruna, reported a significant issue when configuring the Copilot Chat extension (v0.45.1) in VS Code (1.97.2) to use OpenRouter as a custom model provider, specifically with DeepSeek's "thinking mode" models like deepseek/deepseek-v4-flash. These models are designed to provide a reasoning_content field within their assistant messages, detailing their thought process before generating a final response. While single-turn interactions worked flawlessly, multi-turn conversations consistently failed.

The 400 Error: A Breakdown in Communication

The core of the problem lies in how Copilot Chat handles the conversation history. In multi-turn scenarios, particularly those involving agents like "Plan mode" which generate several internal requests, the second or subsequent API call to the DeepSeek model through OpenRouter would return a 400 error. The error message was explicit:

Request Failed: 400 {"error":{"message":"Provider returned error","code":400,"metadata":{"raw":"{\"error\":{\"message\":\"The reasoning_content in the thinking mode must be passed back to the API.\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":\"invalid_request_error\"}}","provider_name":"DeepSeek","is_byok":false}},"user_id":"..."}

This indicates that the reasoning_content field, crucial for DeepSeek's thinking models to maintain context and continue a coherent dialogue, was being dropped from the assistant's previous message when the conversation history was sent back to the API. The DeepSeek API, expecting this field for continuity, rejected the request.
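To make the failure concrete, here is a minimal sketch of the two request shapes involved, assuming OpenRouter's OpenAI-compatible chat message format. Only the `reasoning_content` field name comes from the error message; the conversation text and surrounding values are illustrative.

```python
# Turn 1: a thinking-mode model replies with both its answer and its reasoning.
assistant_reply = {
    "role": "assistant",
    "content": "Use a binary search here.",
    "reasoning_content": "The list is sorted, so an O(log n) lookup applies...",
}

# What Copilot Chat reportedly sends on turn 2: the reasoning field is dropped
# when the assistant message is rebuilt from its text content alone.
history_as_sent = [
    {"role": "user", "content": "How should I search this sorted list?"},
    {"role": "assistant", "content": assistant_reply["content"]},  # reasoning lost
    {"role": "user", "content": "Can you show the code?"},
]

# What DeepSeek's thinking mode expects: the field round-tripped intact.
history_expected = [
    history_as_sent[0],
    dict(assistant_reply),  # keeps reasoning_content
    history_as_sent[2],
]

assert "reasoning_content" not in history_as_sent[1]  # this shape triggers the 400
assert "reasoning_content" in history_expected[1]     # this shape is accepted
```

The difference is a single missing key on the assistant message, which is why single-turn requests (no history to rebuild) work while every subsequent turn fails.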

Impact on Developer Workflows and Goals

This bug directly hinders the effectiveness of AI in complex development tasks. For developers whose goals involve iterative problem-solving or multi-step code generation with AI, the inability to maintain a continuous, reasoning-aware conversation is a significant roadblock. It forces users to restart conversations, losing valuable context and negating the benefits of advanced AI reasoning capabilities. The seamless flow of GitHub activity that Copilot aims to provide is disrupted, leading to frustration and reduced efficiency.

Illustration of gears representing AI chat, code, and reasoning seamlessly integrating.

The Expected Fix: Preserving Context

The expected behavior is for the Copilot Chat extension to store and round-trip all relevant fields from assistant messages, including reasoning_content, when constructing subsequent requests in a multi-turn conversation. This would ensure compliance with the DeepSeek API's requirements and allow for uninterrupted, intelligent dialogues.

The suggested fix is straightforward: enhance the conversation history persistence mechanism within the extension to preserve all fields returned by custom model providers in assistant messages. By including these fields in the corresponding assistant message objects when sending the history back to the API, the 400 error can be avoided, and the full potential of reasoning models can be unlocked for more robust and productive AI-assisted development.
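The fix described above can be sketched as a small history-building helper. This is a hypothetical illustration, not the extension's actual code: the key idea is to copy assistant messages verbatim rather than reconstructing them from `content` alone.

```python
def build_history(stored_messages):
    """Rebuild the request history, round-tripping assistant messages
    verbatim so provider-specific fields like `reasoning_content`
    survive into the next API call."""
    history = []
    for msg in stored_messages:
        if msg["role"] == "assistant":
            history.append(dict(msg))  # preserve ALL returned fields
        else:
            history.append({"role": msg["role"], "content": msg["content"]})
    return history

stored = [
    {"role": "user", "content": "Plan the refactor."},
    {"role": "assistant", "content": "Step 1: extract the helper.",
     "reasoning_content": "The function mixes IO and logic, so..."},
]
rebuilt = build_history(stored)
assert "reasoning_content" in rebuilt[1]  # no longer dropped
```

Because the helper copies whatever fields the provider returned, it stays compatible with models that do not emit `reasoning_content` at all, while satisfying DeepSeek's requirement when the field is present.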

This incident underscores the importance of robust integration standards and careful handling of API-specific requirements when building developer tools that leverage diverse AI models. Ensuring that all relevant conversational context is maintained is paramount for truly intelligent and helpful AI assistants in our daily GitHub activity.
