Context Understanding
Discover how Cody leverages context from your codebase to provide accurate and relevant code assistance, making your coding workflow more efficient.
Cody's ability to understand and utilize context from your codebase sets it apart from other AI coding assistants. By analyzing your code, Cody can provide context-aware responses, generate code snippets, and even debug your code with precision.
Context Sources
Cody uses a variety of sources to retrieve context relevant to your input. These sources include:
- **Keyword Search**: Finds keywords matching your input, with automatic query rewriting for better results.
- **Sourcegraph Search**: Uses the Sourcegraph Search API to retrieve relevant documents from your codebase (see the sketch after this list).
- **Code Graph**: Analyzes the structure of your code to find context based on the relationships between code elements.
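For a concrete sense of the Sourcegraph Search source, here is a minimal sketch of querying the Sourcegraph GraphQL Search API from TypeScript. The instance URL, access token, and repository below are hypothetical placeholders, and the exact GraphQL fields can vary between Sourcegraph versions; Cody performs this kind of retrieval for you automatically.

```typescript
// Minimal sketch: retrieving candidate context files via the Sourcegraph
// GraphQL Search API. The URL, token, and repo below are placeholders.
const SOURCEGRAPH_URL = "https://sourcegraph.example.com"; // hypothetical instance
const ACCESS_TOKEN = process.env.SRC_ACCESS_TOKEN ?? "";   // hypothetical env var

interface FileMatch {
  repository: { name: string };
  file: { path: string };
}

async function searchCodebase(query: string): Promise<FileMatch[]> {
  const response = await fetch(`${SOURCEGRAPH_URL}/.api/graphql`, {
    method: "POST",
    headers: {
      Authorization: `token ${ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: `query ($query: String!) {
        search(query: $query, version: V3) {
          results {
            results {
              ... on FileMatch { repository { name } file { path } }
            }
          }
        }
      }`,
      variables: { query },
    }),
  });
  const { data } = await response.json();
  return data.search.results.results;
}

// Example: find files mentioning "parseToken" in a single (hypothetical) repo.
searchCodebase('repo:^github\\.com/acme/api$ parseToken').then(console.log);
```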
Context Fetching Features
Cody uses @-mentions to retrieve context from your codebase. You can click the @ icon in the chat window or type @ to open the context picker. The table below shows which @-mention context types are available on each tier.
| Tier | Client | Files | Symbols | Web URLs | Remote Files/Directories | OpenCtx |
|------|--------|-------|---------|----------|--------------------------|---------|
| Free/Pro | VS Code | ✅ | ✅ | ✅ | ❌ | ✅ |
| Enterprise | VS Code | ✅ | ✅ | ✅ | ✅ | ✅ |
Repo-Based Context
Cody supports repo-based context, allowing you to link single or multiple repositories based on your tier. Here's a breakdown of the number of repositories supported by each client:
| Tier | Client | Repositories |
|------|--------|--------------|
| Free/Pro | VS Code | 1 |
| Enterprise | VS Code | Multi |
Token Limits
Cody allows up to 4,000 tokens of output, which is approximately 500-600 lines of code. For Claude 3 Sonnet or Opus models, Cody tracks two separate token limits:
- **@-mention context**: Limited to 30,000 tokens (~4,000 lines of code). This context is explicitly defined by the user using the @-filename syntax.
- **Conversation context**: Limited to 15,000 tokens, including user questions, system responses, and automatically retrieved context items.
| Model | Conversation Context (tokens) | @-Mention Context (tokens) | Output (tokens) |
|-------|-------------------------------|----------------------------|-----------------|
| gpt-3.5-turbo | 7,000 | Shared | 4,000 |
| claude-3 Sonnet | 15,000 | 30,000 | 4,000 |
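To illustrate how the two limits interact, below is a minimal sketch of budget enforcement, assuming a hypothetical `countTokens` helper. Real clients use model-specific tokenizers and smarter ranking when deciding which context items to drop.

```typescript
// Minimal sketch: fitting context into two independent token budgets.
const MENTION_BUDGET = 30_000;      // explicit @-mentioned context (Claude 3 Sonnet/Opus)
const CONVERSATION_BUDGET = 15_000; // questions, responses, auto-retrieved context

// Hypothetical stand-in for a model-specific tokenizer (~4 characters/token).
function countTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Keep items in priority order until the budget would overflow.
function fitWithinBudget(items: string[], budget: number): string[] {
  const kept: string[] = [];
  let used = 0;
  for (const item of items) {
    const cost = countTokens(item);
    if (used + cost > budget) break;
    kept.push(item);
    used += cost;
  }
  return kept;
}

// Hypothetical inputs: one @-mentioned file and two auto-retrieved snippets.
const userQuestion = "How does our auth middleware validate sessions?";
const mentionedFiles = ["/* contents of @src/auth/middleware.ts */"];
const retrievedSnippets = ["/* snippet from session.ts */", "/* snippet from tokens.ts */"];

// The two budgets are tracked separately; the user's question itself counts
// against the conversation budget.
const mentionContext = fitWithinBudget(mentionedFiles, MENTION_BUDGET);
const conversationContext = fitWithinBudget(
  retrievedSnippets,
  CONVERSATION_BUDGET - countTokens(userQuestion)
);
```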
Manage Cody Context Window Size
While Cody aims to provide maximum context for each prompt, there are limits to ensure efficiency. Site administrators can update the maximum context window size to meet their specific requirements.
- **Context window too small**: Using too few tokens can cause errors, such as "You've selected too much code."
- **Balancing quality and cost**: Using more tokens usually produces higher-quality responses but also increases response times and costs.
Impact of Context: LLM vs Cody
When the same prompt is sent to a standard LLM, the response may lack specifics about your codebase. In contrast, Cody augments the prompt with context from relevant code snippets, making the answer far more specific to your codebase.
- **Standard LLM**: Provides generic responses without codebase-specific context.
- **Cody**: Uses context from your codebase to provide accurate and relevant answers.
How Context Works with Prompts
Cody works in conjunction with an LLM to provide codebase-aware answers. A typical prompt has three parts (a minimal assembly sketch follows the list):
- **Prefix**: An optional description of the desired output, often derived from predefined prompts.
- **User input**: The information you provide, including your code query or request.
- **Context**: Additional information that helps the LLM provide a relevant answer based on your codebase.
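As an illustration only (not Cody's actual internal prompt format), the three parts might be assembled like this:

```typescript
// Minimal sketch: assembling a prompt from prefix, user input, and context.
// The layout and wording are illustrative, not Cody's internal format.
interface PromptParts {
  prefix?: string;   // optional description of the desired output
  userInput: string; // the question or request
  context: string[]; // code snippets retrieved from the codebase
}

function assemblePrompt({ prefix, userInput, context }: PromptParts): string {
  const sections = [
    prefix ?? "",
    ...context.map((snippet, i) => `Context ${i + 1}:\n${snippet}`),
    `Question: ${userInput}`,
  ];
  return sections.filter(Boolean).join("\n\n");
}

// Hypothetical usage with one retrieved snippet.
const prompt = assemblePrompt({
  prefix: "Answer concisely and cite file paths where relevant.",
  userInput: "Where do we validate JWT signatures?",
  context: ["// src/auth/jwt.ts\nexport function verifyJwt(token: string) { /* ... */ }"],
});
console.log(prompt);
```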