Chat leaks text models #216577
Comments
Sorry. The triage bot closed it incorrectly. Will reopen as soon as the fix is in.
@jrieken It would be great if we could avoid this. Here's how these documents are used:
I'd love to just have the
I would not do any re-entry logic. Granted, the extension (chat) does not call open and is just a provider, but the duplicated models should only be around for a short time, e.g. the execution of a particular language feature like hover.
To support IntelliSense across code blocks, we need to make sure extensions have access to their contents. Having to keep duplicates open for every single document is not a good solution, but I can't think of any other way to get this working with our current APIs.
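One way to scope the lifetime of the duplicated models to a single language-feature execution (like the hover mentioned above) would be reference counting. This is a hedged, illustrative sketch in plain TypeScript, not the actual VS Code implementation; the class and method names are made up:

```typescript
// Hypothetical sketch: keep a duplicated code-block document alive only
// while some language feature (e.g. hover) is executing, via ref-counting.
class CodeBlockKeepAlive {
  private refCounts = new Map<string, number>();

  // Called when a language feature starts using the duplicate model.
  // Returns a release function; releasing twice is a safe no-op.
  acquire(uri: string): () => void {
    this.refCounts.set(uri, (this.refCounts.get(uri) ?? 0) + 1);
    let released = false;
    return () => {
      if (released) return;
      released = true;
      const n = (this.refCounts.get(uri) ?? 1) - 1;
      if (n <= 0) {
        this.refCounts.delete(uri); // safe to dispose the duplicate model now
      } else {
        this.refCounts.set(uri, n);
      }
    };
  }

  isOpen(uri: string): boolean {
    return this.refCounts.has(uri);
  }
}
```

With this shape, a duplicate exists only between `acquire` and the matching release, rather than for the whole chat session.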
Instead of opening all code blocks from a session as text documents (and fighting to keep them open), could we just register a file system for chat code blocks? With that, language extensions can simply read all documents/files at their pace and in their ways. I don't know how this plays with TypeScript specifically, but I would assume that's generally the better approach and is also more similar to scenarios like web. So, it would be
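The suggestion above could be sketched like this. In a real extension the registration would go through `vscode.workspace.registerFileSystemProvider("vscode-chat-code-block", provider)`; the minimal in-memory class below only models the idea in plain TypeScript so it stands alone, and its names are illustrative:

```typescript
// Hedged sketch of a virtual file system for chat code blocks: the chat
// extension writes block contents, language extensions read them on demand,
// and nothing has to be kept "open" as a duplicated text model.
class ChatCodeBlockFileSystem {
  private blocks = new Map<string, Uint8Array>();

  // The chat extension writes each code block under a session-scoped path.
  write(path: string, content: string): void {
    this.blocks.set(path, new TextEncoder().encode(content));
  }

  // Language extensions read at their own pace and in their own ways.
  read(path: string): string {
    const data = this.blocks.get(path);
    if (data === undefined) {
      throw new Error(`FileNotFound: ${path}`);
    }
    return new TextDecoder().decode(data);
  }

  // Dropping a block (or a whole session's blocks) frees it immediately.
  delete(path: string): void {
    this.blocks.delete(path);
  }
}
```

The design point is that the provider owns the content, so there is no per-document text model whose lifetime anyone has to fight over.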
When I get a code block as response, I see 2 text models being created with URIs such as:
When opening a new chat I only see the model of vscode-chat-code-block getting disposed. I think this could be the source of many listener leak warnings in error telemetry.
//cc @jrieken
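The imbalance described above (two models created, only one disposed) is the kind of thing a simple tally over create/dispose events would surface. This is an illustrative plain-TypeScript sketch, not VS Code internals or the actual telemetry code:

```typescript
// Hypothetical leak tracker: count live text models per URI scheme by
// pairing create and dispose events. A scheme whose count stays positive
// after a chat session ends points at leaked models like the one reported.
class ModelLeakTracker {
  private live = new Map<string, number>();

  onCreated(uri: string): void {
    const scheme = uri.split(":")[0];
    this.live.set(scheme, (this.live.get(scheme) ?? 0) + 1);
  }

  onDisposed(uri: string): void {
    const scheme = uri.split(":")[0];
    this.live.set(scheme, (this.live.get(scheme) ?? 0) - 1);
  }

  // Schemes with more creates than disposes, i.e. suspected leaks.
  leakedSchemes(): string[] {
    return [...this.live.entries()]
      .filter(([, n]) => n > 0)
      .map(([scheme]) => scheme);
  }
}
```

Feeding this from the two URIs observed per code block would show the non-disposed scheme staying above zero across sessions.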