Cody FAQs
Find answers to the most common questions about Cody.
General
Does Cody train on my code?
No. Sourcegraph does not train on Enterprise customers' code. For Free and Pro users, we do not train on your data without explicit permission, and our third-party LLM providers do not train on your code either.
Does Cody work with self-hosted Sourcegraph?
Yes, Cody works with self-hosted Sourcegraph instances. However, the instance needs outbound internet access to reach third-party LLM providers such as Anthropic or OpenAI.
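On a self-hosted instance, Cody is turned on in the site configuration. A minimal sketch, assuming a recent Sourcegraph release; exact keys can vary between versions, so check the docs for yours:

```jsonc
{
  // Enables Cody for the instance.
  "cody.enabled": true,
  // "sourcegraph" routes LLM traffic through the Sourcegraph Cody Gateway.
  "completions": {
    "provider": "sourcegraph"
  }
}
```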
What programming languages does Cody support?
Cody supports a wide range of languages, including JavaScript, Python, Java, C/C++, Go, and more. Response quality depends on the underlying LLM and language-specific optimizations.
Can Cody answer non-programming questions?
Cody is optimized for coding-related tasks, and using it for non-programming purposes is against our acceptable use policy.
Embeddings
Why were embeddings removed in v5.3?
We replaced embeddings with Sourcegraph Search, which improves security, scalability, and quality: code snippets are retrieved directly from your Sourcegraph instance without sending data to a third-party embeddings API, performance holds up on larger repositories, and there is no embeddings index to build and maintain.
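For readers curious what querying Sourcegraph Search looks like, here is a minimal sketch against the GraphQL search API. This is not Cody's internal retrieval code, and the instance URL, repository, environment variable, and search term are all placeholders:

```typescript
// Minimal sketch: run a code search against a Sourcegraph instance's
// GraphQL API. Endpoint, token variable, and repo are placeholders.
const ENDPOINT = "https://sourcegraph.example.com/.api/graphql";

const SEARCH = `
  query CodeSearch($query: String!) {
    search(query: $query) {
      results {
        matchCount
      }
    }
  }
`;

async function countMatches(queryString: string): Promise<number> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      Authorization: `token ${process.env.SRC_ACCESS_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query: SEARCH, variables: { query: queryString } }),
  });
  const json = await res.json();
  return json.data.search.results.matchCount;
}

// Example: how many matches for "retryWithBackoff" in one repository?
countMatches('repo:^github\\.com/example/app$ retryWithBackoff')
  .then((n) => console.log(`${n} matches`));
```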
Third-Party Dependencies
What third-party services does Cody use?
Cody primarily relies on Anthropic's Claude and OpenAI models. By default, these services are accessed through the Sourcegraph Cody Gateway.
Can I use my own API keys?
Yes. Bringing your own API keys is supported on the Enterprise plan and is available as an experimental feature on other tiers.
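As an illustrative sketch of bringing your own Anthropic key in the site configuration (field names follow older site-config conventions and the model identifier is a placeholder; consult your version's documentation before relying on this):

```jsonc
{
  "cody.enabled": true,
  "completions": {
    // Talk to Anthropic directly instead of the Cody Gateway.
    "provider": "anthropic",
    // Placeholder: your own Anthropic API key.
    "accessToken": "<YOUR_ANTHROPIC_API_KEY>",
    // Placeholder model identifier; supported models vary by version.
    "chatModel": "claude-2"
  }
}
```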
Can I use Cody with my Cloud IDE?
Yes, Cody supports cloud IDEs like GitHub Codespaces, vscode.dev, and editors compatible with the Open VSX Registry.
OpenAI o1
What are OpenAI o1 best practices?
Provide focused context, use concise prompts, and select the appropriate model (e.g., o1-preview for complex tasks, o1-mini for faster responses).
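For example, a tightly scoped prompt tends to work better with o1 than a broad, open-ended one (the function and parameter names here are hypothetical):

```
Using only the selected retryWithBackoff function, explain why it can
busy-wait when maxDelayMs is 0, and propose a minimal fix that keeps
the existing signature.
```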
What are the known limitations?
OpenAI o1 has a comparatively small context window: a 45k input token limit and a 4k output token limit. It also does not support streaming responses.
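As a rough rule of thumb (assuming around 4 characters per token, which varies by tokenizer and content), 45k input tokens corresponds to roughly 180 KB of text, so very large files or broad multi-file context may need to be narrowed or split across requests.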