mirror of https://github.com/larsbaunwall/vscode-copilot-bridge.git
synced 2025-10-05 22:22:59 +00:00

Merge pull request #2 from larsbaunwall/devin/1755020175-copilot-bridge

VS Code Copilot Bridge: local OpenAI-compatible chat endpoint (Desktop, SSE, health, models)

Commit 2067dd2e83: 6 changed files with 4897 additions and 0 deletions
.gitignore (vendored): Normal file, 4 lines

@@ -0,0 +1,4 @@
node_modules
out
.vscode-test
*.vsix
README.md: Normal file, 156 lines
@@ -0,0 +1,156 @@

# VS Code Copilot Bridge (Desktop, Inference-only)

A local OpenAI-compatible HTTP facade for GitHub Copilot, built on the VS Code Language Model API.

- Endpoints (local-only; default bind 127.0.0.1):
  - POST /v1/chat/completions (SSE streaming; use "stream": false for non-streaming)
  - GET /v1/models (dynamic listing of available Copilot models)
  - GET /healthz (ok/unavailable + vscode.version)

- Copilot pipe: vscode.lm.selectChatModels({ vendor: "copilot", family: "gpt-4o" }) → model.sendRequest(messages)

- Prompt normalization: the last system message plus the last N user/assistant turns (default 3), rendered as:

      [SYSTEM]
      …
      [DIALOG]
      user: …
      assistant: …

- Security: loopback-only binding by default; optional Authorization: Bearer <token>.
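The [SYSTEM]/[DIALOG] rendering described above can be sketched as a small standalone function. This is an illustrative sketch of the format only; the extension's actual `normalizeMessagesLM` builds `vscode.LanguageModelChatMessage` objects rather than a single string, and `renderPrompt` is a hypothetical name:

```typescript
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Render the last system message plus the last `historyWindow`
// user/assistant exchanges into the bridge's [SYSTEM]/[DIALOG] format.
function renderPrompt(messages: ChatMessage[], historyWindow = 3): string {
  const sys = [...messages].reverse().find((m) => m.role === 'system');
  const turns = messages
    .filter((m) => m.role === 'user' || m.role === 'assistant')
    .slice(-historyWindow * 2); // one exchange = user turn + assistant turn
  const dialog = turns.map((m) => `${m.role}: ${m.content}`).join('\n');
  return sys
    ? `[SYSTEM]\n${sys.content}\n\n[DIALOG]\n${dialog}`
    : `[DIALOG]\n${dialog}`;
}
```

Only the most recent exchanges survive the window, which keeps prompts bounded no matter how long the client-side history grows.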
## Install, Build, and Run

Prerequisites:

- VS Code Desktop (stable) with GitHub Copilot signed in
- Node.js 18+ (recommended)
- npm

Note on auto-recovery: the bridge re-requests Copilot access on each chat request if access is missing, so no restart is required after signing in. `/healthz` performs a best-effort recheck only when `bridge.verbose` is true.
Steps:

1) Install dependencies and compile:

       npm install
       npm run compile

2) Launch in VS Code (recommended for debugging):
   - Open this folder in VS Code
   - Press F5 to run the extension in a new Extension Development Host

3) Enable the bridge:
   - Command Palette → “Copilot Bridge: Enable”
   - Or set bridge.enabled = true in settings
   - “Copilot Bridge: Status” shows the bound address/port and whether a token is required

Optional: packaging a VSIX

- You can package with vsce (not included):

      npm i -g @vscode/vsce
      vsce package

- Then install the generated .vsix via “Extensions: Install from VSIX…”
## Enabling the Language Model API (if required)

If your VS Code build requires enabling proposed APIs for the Language Model API, start with:

- Stable VS Code: `code --enable-proposed-api thinkability.copilot-bridge`
- VS Code Insiders or the Extension Development Host (F5) also works.

When the API is not available, the Output channel (“Copilot Bridge”) will show:

    “VS Code Language Model API not available; update VS Code or enable proposed API.”

### Models and Selection

- The bridge lists available GitHub Copilot chat models using the VS Code Language Model API. Example:

      curl http://127.0.0.1:<port>/v1/models
      → { "data": [ { "id": "gpt-4o-copilot", ... }, ... ] }

- To target a specific model for inference, set the "model" field in your POST body. The bridge accepts:
  - IDs returned by /v1/models (e.g., "gpt-4o-copilot")
  - A Copilot family name (e.g., "gpt-4o")
  - "copilot" to allow default selection

Examples:

    curl -N -H "Content-Type: application/json" \
      -d '{"model":"gpt-4o-copilot","messages":[{"role":"user","content":"hello"}]}' \
      http://127.0.0.1:<port>/v1/chat/completions

    curl -N -H "Content-Type: application/json" \
      -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hello"}]}' \
      http://127.0.0.1:<port>/v1/chat/completions

- If a requested model is unavailable, the bridge returns 404 with { "error": { "code": "model_not_found", "reason": "not_found" } }.
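The id-to-family rule above can be isolated as a pure helper. This mirrors the suffix-stripping logic in src/extension.ts, though the function name here is illustrative:

```typescript
// Map an OpenAI-style "model" field to a Copilot family filter for
// vscode.lm.selectChatModels. undefined means "use default selection".
function resolveFamily(requested?: string): string | undefined {
  if (requested && /-copilot$/i.test(requested)) {
    return requested.replace(/-copilot$/i, ''); // "gpt-4o-copilot" -> family "gpt-4o"
  }
  // "copilot", a bare family name, or a missing model field all fall
  // through to the bridge's default selection (gpt-4o first, then any Copilot model).
  return undefined;
}
```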
## Troubleshooting

- /healthz reports `copilot: "unavailable"` together with a `reason`:
  - `missing_language_model_api`: Language Model API not available
  - `copilot_model_unavailable`: no Copilot models selectable
  - `consent_required`: user consent/sign-in required for Copilot models
  - `rate_limited`: provider throttling
  - `not_found`: requested model not found
  - `copilot_unavailable`: other provider errors
- POST /v1/chat/completions returns 503 with the same `reason` codes.
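On the client side, these reason codes lend themselves to a simple dispatch. The mapping below is an example policy for a hypothetical client, not part of the bridge itself:

```typescript
// Suggest a client-side action from a bridge error response.
// The status codes and reason strings are the bridge's; the mapping is an example.
function suggestAction(status: number, reason?: string): string {
  if (status === 429 || reason === 'rate_limited') return 'retry-later';
  if (reason === 'consent_required') return 'sign-in';
  if (reason === 'not_found') return 'pick-another-model';
  if (status === 503) return 'check-healthz';
  return 'inspect-logs';
}
```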
## Configuration (bridge.*)

- bridge.enabled (boolean; default false): auto-start on VS Code startup
- bridge.host (string; default "127.0.0.1"): bind address (keep on loopback)
- bridge.port (number; default 0): 0 = ephemeral port
- bridge.token (string; default ""): optional bearer token; empty disables auth
- bridge.historyWindow (number; default 3): number of user/assistant turns to keep
- bridge.maxConcurrent (number; default 1): max concurrent chat requests; excess → 429
- bridge.verbose (boolean; default false): verbose logs to the “Copilot Bridge” output channel

## Viewing logs

To see verbose logs:

1) Enable: Settings → search “Copilot Bridge” → enable “bridge.verbose”
2) Open: View → Output → select “Copilot Bridge” in the dropdown
3) Trigger a request (e.g., curl /v1/chat/completions). You’ll see:
   - HTTP request lines (method/path)
   - model selection attempts (“Copilot model selected.”)
   - SSE lifecycle (“SSE start …”, “SSE end …”)
   - health checks (a best-effort model check when verbose is on)
   - API diagnostics (e.g., missing Language Model API)
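Put together, a typical settings.json fragment for local testing might look like the following. The values are examples only; the keys are the settings listed above:

```jsonc
{
  "bridge.enabled": true,
  "bridge.host": "127.0.0.1",
  "bridge.port": 8099,            // example port; 0 picks an ephemeral one
  "bridge.token": "local-secret", // example token; empty string disables auth
  "bridge.historyWindow": 3,
  "bridge.maxConcurrent": 1,
  "bridge.verbose": true
}
```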
## Manual Testing (curl)

Replace <port> with the port shown in “Copilot Bridge: Status”.

Health:

    curl http://127.0.0.1:<port>/healthz

Models:

    curl http://127.0.0.1:<port>/v1/models

Streaming completion:

    curl -N -H "Content-Type: application/json" \
      -d '{"model":"gpt-4o-copilot","stream":true,"messages":[{"role":"user","content":"hello"}]}' \
      http://127.0.0.1:<port>/v1/chat/completions

Non-streaming:

    curl -H "Content-Type: application/json" \
      -d '{"model":"gpt-4o-copilot","stream":false,"messages":[{"role":"user","content":"hello"}]}' \
      http://127.0.0.1:<port>/v1/chat/completions

Auth (when bridge.token is set):

- A missing or incorrect Authorization: Bearer <token> header → 401

Copilot unavailable:

- Sign out of GitHub Copilot; /healthz reports unavailable, and POST returns the 503 error envelope

Concurrency:

- Fire 2+ concurrent requests; requests above bridge.maxConcurrent → 429 rate_limit_error
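Streaming responses are OpenAI-style SSE: `data: {json}` chunks terminated by `data: [DONE]`. A minimal parser for a fully buffered SSE body might look like this (an illustrative helper, not part of the bridge):

```typescript
// Extract the concatenated assistant text from an OpenAI-style SSE body.
function collectSseText(body: string): string {
  let text = '';
  for (const line of body.split('\n')) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length).trim();
    if (payload === '[DONE]') break;          // end-of-stream sentinel
    const chunk = JSON.parse(payload);
    text += chunk.choices?.[0]?.delta?.content ?? '';
  }
  return text;
}
```

A real client would feed partial network reads into an incremental parser instead of buffering the whole body, but the per-line format is the same.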
## Notes

- Desktop-only, in-process. No VS Code Server dependency.
- Single-user, local loopback. Do not expose the bridge on remote interfaces.
- Non-goals: tools/function-calling emulation, workspace file I/O, multi-tenant proxying.

## Development

- Build: npm run compile
- Watch: npm run watch
- Main: src/extension.ts
- Note: the previously used Chat API shims are no longer needed; the bridge now uses the Language Model API.
package-lock.json (generated): Normal file, 4264 lines (diff suppressed because it is too large)
package.json: Normal file, 92 lines
@@ -0,0 +1,92 @@
{
  "name": "copilot-bridge",
  "displayName": "Copilot Bridge",
  "description": "Local OpenAI-compatible chat endpoint bridging to GitHub Copilot via the VS Code Language Model API.",
  "version": "0.1.0",
  "publisher": "thinkability",
  "engines": {
    "vscode": "^1.90.0"
  },
  "extensionKind": [
    "ui"
  ],
  "categories": [
    "Other"
  ],
  "activationEvents": [
    "onStartupFinished",
    "onCommand:bridge.enable",
    "onCommand:bridge.disable",
    "onCommand:bridge.status"
  ],
  "main": "./out/extension.js",
  "contributes": {
    "commands": [
      {
        "command": "bridge.enable",
        "title": "Copilot Bridge: Enable"
      },
      {
        "command": "bridge.disable",
        "title": "Copilot Bridge: Disable"
      },
      {
        "command": "bridge.status",
        "title": "Copilot Bridge: Status"
      }
    ],
    "configuration": {
      "title": "Copilot Bridge",
      "properties": {
        "bridge.enabled": {
          "type": "boolean",
          "default": false,
          "description": "Start the Copilot Bridge automatically when VS Code starts."
        },
        "bridge.host": {
          "type": "string",
          "default": "127.0.0.1",
          "description": "Bind address for the local HTTP server. For security, keep this on loopback."
        },
        "bridge.port": {
          "type": "number",
          "default": 0,
          "description": "Port for the local HTTP server. 0 picks a random ephemeral port."
        },
        "bridge.token": {
          "type": "string",
          "default": "",
          "description": "Optional bearer token required in Authorization header. Leave empty to disable."
        },
        "bridge.historyWindow": {
          "type": "number",
          "default": 3,
          "minimum": 0,
          "description": "Number of user/assistant turns to include (system message is kept separately)."
        },
        "bridge.maxConcurrent": {
          "type": "number",
          "default": 1,
          "minimum": 1,
          "description": "Maximum concurrent /v1/chat/completions requests. Excess requests return 429."
        },
        "bridge.verbose": {
          "type": "boolean",
          "default": false,
          "description": "Verbose logging to the 'Copilot Bridge' output channel."
        }
      }
    }
  },
  "scripts": {
    "compile": "tsc -p .",
    "watch": "tsc -w -p .",
    "vscode:prepublish": "npm run compile"
  },
  "devDependencies": {
    "@types/node": "^20.10.0",
    "@types/vscode": "^1.90.0",
    "@vscode/vsce": "^3.6.0",
    "typescript": "^5.4.0"
  }
}
src/extension.ts: Normal file, 368 lines
@@ -0,0 +1,368 @@
import * as vscode from 'vscode';
import * as http from 'http';
import { AddressInfo } from 'net';

let server: http.Server | undefined;
let modelCache: any | undefined;
let statusItem: vscode.StatusBarItem | undefined;
let output: vscode.OutputChannel | undefined;
let running = false;
let activeRequests = 0;
let lastReason: string | undefined;

export async function activate(ctx: vscode.ExtensionContext) {
  output = vscode.window.createOutputChannel('Copilot Bridge');
  statusItem = vscode.window.createStatusBarItem(vscode.StatusBarAlignment.Left, 100);
  statusItem.text = 'Copilot Bridge: Disabled';
  statusItem.show();
  ctx.subscriptions.push(statusItem, output);

  ctx.subscriptions.push(vscode.commands.registerCommand('bridge.enable', async () => {
    await startBridge();
    await getModel(true);
  }));
  ctx.subscriptions.push(vscode.commands.registerCommand('bridge.disable', async () => {
    await stopBridge();
  }));
  ctx.subscriptions.push(vscode.commands.registerCommand('bridge.status', async () => {
    const info = server ? server.address() : undefined;
    const bound = info && typeof info === 'object' ? `${info.address}:${info.port}` : 'n/a';
    const needsToken = !!(vscode.workspace.getConfiguration('bridge').get<string>('token') || '').trim();
    vscode.window.showInformationMessage(`Copilot Bridge: ${running ? 'Enabled' : 'Disabled'} | Bound: ${bound} | Token: ${needsToken ? 'Set' : 'None'}`);
  }));

  const cfg = vscode.workspace.getConfiguration('bridge');
  if (cfg.get<boolean>('enabled')) {
    await startBridge();
  }
}

export async function deactivate() {
  await stopBridge();
}

async function startBridge() {
  if (running) return;
  running = true;

  const cfg = vscode.workspace.getConfiguration('bridge');
  const host = cfg.get<string>('host') ?? '127.0.0.1';
  const portCfg = cfg.get<number>('port') ?? 0;
  const token = (cfg.get<string>('token') ?? '').trim();
  const hist = cfg.get<number>('historyWindow') ?? 3;
  const verbose = cfg.get<boolean>('verbose') ?? false;
  const maxConc = cfg.get<number>('maxConcurrent') ?? 1;

  try {
    server = http.createServer(async (req: http.IncomingMessage, res: http.ServerResponse) => {
      try {
        if (verbose) output?.appendLine(`HTTP ${req.method} ${req.url}`);
        if (token && req.headers.authorization !== `Bearer ${token}`) {
          writeJson(res, 401, { error: { message: 'unauthorized', type: 'invalid_request_error', code: 'unauthorized' } });
          return;
        }
        if (req.method === 'POST' && req.url?.startsWith('/v1/chat/completions')) {
          if (activeRequests >= maxConc) {
            res.writeHead(429, { 'Content-Type': 'application/json', 'Retry-After': '1' });
            res.end(JSON.stringify({ error: { message: 'too many requests', type: 'rate_limit_error', code: 'rate_limit_exceeded' } }));
            if (verbose) output?.appendLine(`429 throttled (active=${activeRequests}, max=${maxConc})`);
            return;
          }
        }

        if (req.method === 'GET' && req.url === '/healthz') {
          const cfgNow = vscode.workspace.getConfiguration('bridge');
          const verboseNow = cfgNow.get<boolean>('verbose') ?? false;
          const hasLM = !!((vscode as any).lm && typeof (vscode as any).lm.selectChatModels === 'function');
          if (!modelCache && verboseNow) {
            output?.appendLine(`Healthz: model=${modelCache ? 'present' : 'missing'} lmApi=${hasLM ? 'ok' : 'missing'}`);
            await getModel();
          }
          const unavailableReason = modelCache ? undefined : (!hasLM ? 'missing_language_model_api' : (lastReason || 'copilot_model_unavailable'));
          writeJson(res, 200, { ok: true, copilot: modelCache ? 'ok' : 'unavailable', reason: unavailableReason, version: vscode.version });
          return;
        }

        if (req.method === 'GET' && req.url === '/v1/models') {
          try {
            const models = await listCopilotModels();
            writeJson(res, 200, { data: models.map((id: string) => ({ id, object: 'model', owned_by: 'vscode-bridge' })) });
          } catch (e: any) {
            writeJson(res, 200, { data: [{ id: 'copilot', object: 'model', owned_by: 'vscode-bridge' }] });
          }
          return;
        }

        if (req.method === 'POST' && req.url?.startsWith('/v1/chat/completions')) {
          activeRequests++;
          if (verbose) output?.appendLine(`Request started (active=${activeRequests})`);
          try {
            const body = await readJson(req);
            const requestedModel: string | undefined = typeof body?.model === 'string' ? body.model : undefined;
            let familyOverride: string | undefined = undefined;
            if (requestedModel && /-copilot$/i.test(requestedModel)) {
              familyOverride = requestedModel.replace(/-copilot$/i, '');
            } else if (requestedModel && requestedModel.toLowerCase() === 'copilot') {
              familyOverride = undefined;
            }

            let model = await getModel(false, familyOverride);
            const hasLM = !!((vscode as any).lm && typeof (vscode as any).lm.selectChatModels === 'function');
            if (!model && familyOverride && hasLM) {
              lastReason = 'not_found';
              writeJson(res, 404, { error: { message: 'model not found', type: 'invalid_request_error', code: 'model_not_found', reason: 'not_found' } });
              return;
            }
            if (!model) {
              const reason = !hasLM ? 'missing_language_model_api' : (lastReason || 'copilot_model_unavailable');
              writeJson(res, 503, { error: { message: 'Copilot unavailable', type: 'server_error', code: 'copilot_unavailable', reason } });
              return;
            }
            const messages = Array.isArray(body?.messages) ? body.messages : null;
            if (!messages || messages.length === 0 || !messages.every((m: any) =>
              m && typeof m.role === 'string' &&
              /^(system|user|assistant)$/.test(m.role) &&
              m.content !== undefined && m.content !== null
            )) {
              writeJson(res, 400, { error: { message: 'invalid request', type: 'invalid_request_error', code: 'invalid_payload' } });
              return;
            }
            const lmMessages = normalizeMessagesLM(messages, hist);
            const streamMode = body?.stream !== false;

            if (verbose) output?.appendLine('Sending request to Copilot via Language Model API...');
            const cts = new vscode.CancellationTokenSource();
            const response = await (model as any).sendRequest(lmMessages, {}, cts.token);

            if (streamMode) {
              res.writeHead(200, {
                'Content-Type': 'text/event-stream',
                'Cache-Control': 'no-cache',
                'Connection': 'keep-alive'
              });
              const id = `cmp_${Math.random().toString(36).slice(2)}`;
              if (verbose) output?.appendLine(`SSE start id=${id}`);
              for await (const fragment of response.text as AsyncIterable<string>) {
                const payload = {
                  id,
                  object: 'chat.completion.chunk',
                  choices: [{ index: 0, delta: { content: fragment } }]
                };
                res.write(`data: ${JSON.stringify(payload)}\n\n`);
              }
              if (verbose) output?.appendLine(`SSE end id=${id}`);
              res.write('data: [DONE]\n\n');
              res.end();
              return;
            } else {
              let buf = '';
              for await (const fragment of response.text as AsyncIterable<string>) {
                buf += fragment;
              }
              if (verbose) output?.appendLine(`Non-stream complete len=${buf.length}`);
              writeJson(res, 200, {
                id: `cmpl_${Math.random().toString(36).slice(2)}`,
                object: 'chat.completion',
                choices: [{ index: 0, message: { role: 'assistant', content: buf }, finish_reason: 'stop' }]
              });
              return;
            }
          } finally {
            activeRequests--;
            if (verbose) output?.appendLine(`Request complete (active=${activeRequests})`);
          }
        }

        res.writeHead(404).end();
      } catch (e: any) {
        output?.appendLine(`Error: ${e?.stack || e?.message || String(e)}`);
        modelCache = undefined;
        writeJson(res, 500, { error: { message: e?.message ?? 'internal_error', type: 'server_error', code: 'internal_error' } });
      }
    });

    await new Promise<void>((resolve, reject) => {
      server!.once('error', reject);
      server!.listen(portCfg, host, () => resolve());
    });

    const addr = server.address() as AddressInfo | null;
    const shown = addr ? `${addr.address}:${addr.port}` : `${host}:${portCfg}`;
    statusItem!.text = `Copilot Bridge: ${modelCache ? 'OK' : 'Unavailable'} @ ${shown}`;
    output?.appendLine(`Started at http://${shown} | Copilot: ${modelCache ? 'ok' : 'unavailable'}`);
    if (verbose) {
      const tokenSet = token ? 'set' : 'unset';
      output?.appendLine(`Config: host=${host} port=${addr?.port ?? portCfg} hist=${hist} maxConcurrent=${maxConc} token=${tokenSet}`);
    }
  } catch (e: any) {
    running = false;
    output?.appendLine(`Failed to start: ${e?.stack || e?.message || String(e)}`);
    statusItem!.text = 'Copilot Bridge: Error';
    throw e;
  }
}

async function stopBridge() {
  if (!running) return;
  running = false;
  try {
    await new Promise<void>((resolve) => {
      if (!server) return resolve();
      server.close(() => resolve());
    });
  } finally {
    server = undefined;
    modelCache = undefined;
    statusItem && (statusItem.text = 'Copilot Bridge: Disabled');
    output?.appendLine('Stopped');
  }
}

function toText(content: any): string {
  if (typeof content === 'string') return content;
  if (Array.isArray(content)) return content.map(toText).join('\n');
  if (content && typeof content === 'object' && typeof content.text === 'string') return content.text;
  try { return JSON.stringify(content); } catch (error) { return String(content); }
}

function normalizeMessagesLM(messages: any[], histWindow: number): any[] {
  const sys = messages.filter((m) => m && m.role === 'system').pop();
  const turns = messages.filter((m) => m && (m.role === 'user' || m.role === 'assistant')).slice(-histWindow * 2);

  const User = (vscode as any).LanguageModelChatMessage?.User;
  const Assistant = (vscode as any).LanguageModelChatMessage?.Assistant;

  const result: any[] = [];
  let firstUserSeen = false;

  for (const m of turns) {
    if (m.role === 'user') {
      let text = toText(m.content);
      if (!firstUserSeen && sys) {
        text = `[SYSTEM]\n${toText(sys.content)}\n\n[DIALOG]\nuser: ${text}`;
        firstUserSeen = true;
      }
      result.push(User ? User(text) : { role: 'user', content: text });
    } else if (m.role === 'assistant') {
      const text = toText(m.content);
      result.push(Assistant ? Assistant(text) : { role: 'assistant', content: text });
    }
  }

  if (!firstUserSeen && sys) {
    const text = `[SYSTEM]\n${toText(sys.content)}`;
    result.unshift(User ? User(text) : { role: 'user', content: text });
  }

  if (result.length === 0) {
    result.push(User ? User('') : { role: 'user', content: '' });
  }

  return result;
}

async function getModel(force = false, family?: string): Promise<any | undefined> {
  if (!force && modelCache && !family) return modelCache;
  const cfg = vscode.workspace.getConfiguration('bridge');
  const verbose = cfg.get<boolean>('verbose') ?? false;

  const hasLM = !!((vscode as any).lm && typeof (vscode as any).lm.selectChatModels === 'function');
  if (!hasLM) {
    if (!family) modelCache = undefined;
    lastReason = 'missing_language_model_api';
    const info = server ? server.address() : undefined;
    const bound = info && typeof info === 'object' ? `${info.address}:${info.port}` : '';
    statusItem && (statusItem.text = `Copilot Bridge: Unavailable ${bound ? `@ ${bound}` : ''}`);
    if (verbose) output?.appendLine('VS Code Language Model API not available; update VS Code or enable proposed API (Insiders/F5/--enable-proposed-api).');
    return undefined;
  }

  try {
    let models: any[] | undefined;
    if (family) {
      models = await (vscode as any).lm.selectChatModels({ vendor: 'copilot', family });
    } else {
      models = await (vscode as any).lm.selectChatModels({ vendor: 'copilot', family: 'gpt-4o' });
      if (!models || models.length === 0) {
        models = await (vscode as any).lm.selectChatModels({ vendor: 'copilot' });
      }
    }
    if (!models || models.length === 0) {
      if (!family) modelCache = undefined;
      lastReason = family ? 'not_found' : 'copilot_model_unavailable';
      const info = server ? server.address() : undefined;
      const bound = info && typeof info === 'object' ? `${info.address}:${info.port}` : '';
      statusItem && (statusItem.text = `Copilot Bridge: Unavailable ${bound ? `@ ${bound}` : ''}`);
      if (verbose) output?.appendLine(family ? `No Copilot language models available for family="${family}".` : 'No Copilot language models available.');
      return undefined;
    }
    const chosen = models[0];
    if (!family) modelCache = chosen;
    lastReason = undefined;
    const info = server ? server.address() : undefined;
    const bound = info && typeof info === 'object' ? `${info.address}:${info.port}` : '';
    statusItem && (statusItem.text = `Copilot Bridge: OK ${bound ? `@ ${bound}` : ''}`);
    if (verbose) output?.appendLine(`Copilot model selected${family ? ` (family=${family})` : ''}.`);
    return chosen;
  } catch (e: any) {
    if (!family) modelCache = undefined;
    const info = server ? server.address() : undefined;
    const bound = info && typeof info === 'object' ? `${info.address}:${info.port}` : '';
    statusItem && (statusItem.text = `Copilot Bridge: Unavailable ${bound ? `@ ${bound}` : ''}`);
    if ((vscode as any).LanguageModelError && e instanceof (vscode as any).LanguageModelError) {
      const code = (e as any).code || '';
      if (/consent/i.test(e.message) || code === 'UserNotSignedIn') lastReason = 'consent_required';
      else if (code === 'RateLimited') lastReason = 'rate_limited';
      else if (code === 'NotFound') lastReason = 'not_found';
      else lastReason = 'copilot_unavailable';
      if (verbose) output?.appendLine(`LM select error: ${e.message} code=${code}${family ? ` family=${family}` : ''}`);
    } else {
      lastReason = 'copilot_unavailable';
      if (verbose) output?.appendLine(`LM select error: ${e?.message || String(e)}${family ? ` family=${family}` : ''}`);
    }
    return undefined;
  }
}

async function listCopilotModels(): Promise<string[]> {
  const hasLM = !!((vscode as any).lm && typeof (vscode as any).lm.selectChatModels === 'function');
  if (!hasLM) return ['copilot'];
  try {
    const models: any[] = await (vscode as any).lm.selectChatModels({ vendor: 'copilot' });
    if (!models || models.length === 0) return ['copilot'];
    const ids = models.map((m: any, idx: number) => {
      const family = (m as any)?.family || (m as any)?.modelFamily || (m as any)?.name || '';
      const norm = typeof family === 'string' && family.trim() ? family.trim().toLowerCase() : `copilot-${idx + 1}`;
      return norm.endsWith('-copilot') ? norm : `${norm}-copilot`;
    });
    return Array.from(new Set(ids));
  } catch {
    return ['copilot'];
  }
}

function readJson(req: http.IncomingMessage): Promise<any> {
  return new Promise((resolve, reject) => {
    let data = '';
    req.on('data', (c: Buffer) => { data += c.toString(); });
    req.on('end', () => {
      if (!data) return resolve({});
      try {
        resolve(JSON.parse(data));
      } catch (e: any) {
        const snippet = data.length > 200 ? data.slice(0, 200) + '...' : data;
        reject(new Error(`Failed to parse JSON: ${e && e.message ? e.message : String(e)}. Data: "${snippet}"`));
      }
    });
    req.on('error', reject);
  });
}

function writeJson(res: http.ServerResponse, status: number, obj: any) {
  res.writeHead(status, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(obj));
}
tsconfig.json: Normal file, 13 lines
@@ -0,0 +1,13 @@
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "commonjs",
    "lib": ["ES2020"],
    "outDir": "out",
    "rootDir": "src",
    "strict": true,
    "sourceMap": true,
    "types": ["node", "vscode"]
  },
  "include": ["src/**/*.ts"]
}