Add icon and refresh the readme

Lars Baunwall 2025-08-13 00:01:17 +02:00
parent 5e42e4594e
commit 4e29217d99
No known key found for this signature in database
5 changed files with 162 additions and 452 deletions

AGENTS.md (deleted, 336 lines)

@@ -1,336 +0,0 @@
# Inference-Only Copilot Bridge (VS Code Desktop)
## Scope & Goals
Expose a **local, OpenAI-compatible chat endpoint** inside a **running VS Code Desktop** session that forwards requests to **GitHub Copilot Chat** via the **VS Code Chat provider**. No workspace tools (no search/edit), no VS Code Server.
- Endpoints:
- `POST /v1/chat/completions` (supports streaming via SSE)
- `GET /v1/models` (synthetic listing)
- `GET /healthz` (status)
- Local only (`127.0.0.1`), single user, opt-in via VS Code settings/command.
- Minimal state: one Copilot session per request; no history persisted by the bridge.
**Non-goals:** multi-tenant proxying, private endpoint scraping, file I/O tools, function/tool calling emulation.
---
## Architecture (Desktop-only, in-process)
```
VS Code Desktop (running)
┌──────────────────────────────────────────────────────────────┐
│ Bridge Extension (TypeScript, Extension Host) │
│ - HTTP server on 127.0.0.1:<port> │
│ - POST /v1/chat/completions → Copilot Chat provider │
│ - GET /v1/models (synthetic) │
│ - GET /healthz │
│ Copilot pipe: │
│ vscode.chat.requestChatAccess('copilot') │
│ → access.startSession().sendRequest({ prompt, ... }) │
└──────────────────────────────────────────────────────────────┘
```
### Data flow
Client (OpenAI API shape) → Bridge HTTP → normalize messages → `requestChatAccess('copilot')` → `startSession().sendRequest` → stream chunks → SSE to client.
---
## API Contract (subset, OpenAI-compatible)
### POST `/v1/chat/completions`
**Accepted fields**
- `model`: string (ignored internally, echoed back as synthetic id).
- `messages`: array of `{role, content}`; roles: `system`, `user`, `assistant`.
- `stream`: boolean (default `true`). If `false`, return a single JSON completion.
**Ignored fields**
`tools`, `function_call/tool_choice`, `temperature`, `top_p`, `logprobs`, `seed`, penalties, `response_format`, `stop`, `n`.
**Prompt normalization**
- Keep the last **system** message and the last **N** user/assistant turns (configurable, default 3) to bound prompt size.
- Render into a single text prompt:
```
[SYSTEM]
<system text>
[DIALOG]
user: ...
assistant: ...
user: ...
```
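For example (values are illustrative): with `historyWindow = 1` and the messages `system: "Be brief."`, `user: "Hi"`, `assistant: "Hello!"`, `user: "What's new?"`, the rendered prompt is:
```
[SYSTEM]
Be brief.

[DIALOG]
assistant: Hello!
user: What's new?
```
Older turns fall outside the window; the last system message is kept regardless.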
**Streaming response (SSE)**
- For each Copilot content chunk:
```
data: {"id":"cmp_<uuid>","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"<chunk>"}}]}
```
- Terminate with:
```
data: [DONE]
```
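For reference, a minimal consumer sketch for this stream (assumes Node 18+ and its built-in `fetch`; the port is a placeholder):
```ts
// Illustrative SSE consumer for the bridge (port/payload are assumptions).
async function main(): Promise<void> {
  const res = await fetch('http://127.0.0.1:3939/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ stream: true, messages: [{ role: 'user', content: 'hello' }] }),
  });
  const decoder = new TextDecoder();
  let buf = '';
  for await (const chunk of res.body as AsyncIterable<Uint8Array>) {
    buf += decoder.decode(chunk, { stream: true });
    let sep: number;
    // Events are separated by a blank line; each carries one "data:" payload.
    while ((sep = buf.indexOf('\n\n')) !== -1) {
      const data = buf.slice(0, sep).replace(/^data: /, '');
      buf = buf.slice(sep + 2);
      if (data === '[DONE]') return;
      process.stdout.write(JSON.parse(data).choices[0].delta.content ?? '');
    }
  }
}
main().catch(console.error);
```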
**Non-streaming response**
```json
{
  "id": "cmpl_<uuid>",
  "object": "chat.completion",
  "choices": [
    { "index": 0, "message": { "role": "assistant", "content": "<full text>" }, "finish_reason": "stop" }
  ]
}
```
### GET `/v1/models`
```json
{
  "data": [
    { "id": "gpt-4o-copilot", "object": "model", "owned_by": "vscode-bridge" }
  ]
}
```
### GET `/healthz`
```json
{ "ok": true, "copilot": "ok", "version": "<vscode.version>" }
```
### Error envelope (OpenAI-style)
```json
{ "error": { "message": "Copilot unavailable", "type": "server_error", "code": "copilot_unavailable" } }
```
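A small helper can keep this envelope consistent across handlers (a sketch; `sendError` is an illustrative name, not part of the spec):
```ts
import type { ServerResponse } from 'http';

// Emit the OpenAI-style error envelope shown above.
function sendError(res: ServerResponse, status: number, message: string, code: string): void {
  res.writeHead(status, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ error: { message, type: 'server_error', code } }));
}

// Usage: sendError(res, 503, 'Copilot unavailable', 'copilot_unavailable');
```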
---
## Extension Design
### `package.json` (relevant)
- `activationEvents`: `onStartupFinished` (and commands).
- `contributes.commands`: `bridge.enable`, `bridge.disable`, `bridge.status`.
- `contributes.configuration` (under `bridge.*`):
- `enabled` (bool; default `false`)
- `host` (string; default `"127.0.0.1"`)
- `port` (int; default `0` = random ephemeral)
- `token` (string; optional bearer; empty means no auth, still loopback only)
- `historyWindow` (int; default `3`)
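A matching `contributes.configuration` fragment could look like the sketch below (only two properties shown; descriptions are illustrative):
```json
{
  "contributes": {
    "configuration": {
      "title": "Copilot Bridge",
      "properties": {
        "bridge.enabled": {
          "type": "boolean",
          "default": false,
          "description": "Start the local bridge server."
        },
        "bridge.port": {
          "type": "number",
          "default": 0,
          "description": "0 picks a random ephemeral port."
        }
      }
    }
  }
}
```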
### Lifecycle
- On activate:
1. Check `bridge.enabled`; if false, return.
2. Attempt `vscode.chat.requestChatAccess('copilot')`; cache access if granted.
3. Start HTTP server bound to loopback.
4. Status bar item: `Copilot Bridge: OK/Unavailable @ <host>:<port>`.
- On deactivate/disable: close server, dispose listeners.
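Command wiring might look like this minimal sketch (`startServer`/`stopServer` are assumed helpers around the HTTP lifecycle; they are not defined in this spec):
```ts
import * as vscode from 'vscode';

// Assumed helpers (hypothetical): wrap the HTTP server lifecycle below.
declare function startServer(): Promise<void>;
declare function stopServer(): void;
declare let server: import('http').Server | undefined;

export function registerCommands(ctx: vscode.ExtensionContext): void {
  ctx.subscriptions.push(
    vscode.commands.registerCommand('bridge.enable', () => startServer()),
    vscode.commands.registerCommand('bridge.disable', () => stopServer()),
    vscode.commands.registerCommand('bridge.status', () =>
      vscode.window.showInformationMessage(server ? 'Copilot Bridge: listening' : 'Copilot Bridge: stopped')
    )
  );
}
```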
### Copilot Hook
```ts
const access = await vscode.chat.requestChatAccess('copilot'); // per enable or per request
const session = await access.startSession();
const stream = await session.sendRequest({ prompt, attachments: [] });
// stream.onDidProduceContent(text => ...)
// stream.onDidEnd(() => ...)
```
---
## Implementation Notes
- **HTTP server:** Node `http` or a tiny `express` router. Keep it minimal to reduce dependencies.
- **Auth:** optional `Authorization: Bearer <token>`; recommended for local automation. Reject mismatches with 401.
- **Backpressure:** serialize requests or cap concurrency (configurable); see the sketch after this list. If Copilot throttles, return 429 with `Retry-After`.
- **Message normalization:**
- Coerce content variants (`string`, arrays, objects with `text`) into plain strings.
- Join multipart content with `\n`.
- **Streaming:**
- Set headers: `Content-Type: text/event-stream`, `Cache-Control: no-cache`, `Connection: keep-alive`.
- Flush after each chunk; handle client disconnect by disposing stream subscriptions.
- **Non-stream:** buffer chunks; return a single completion object.
- **Errors:** `503` when Copilot access unavailable; `400` for invalid payloads; `500` for unexpected failures.
- **Logging:** VS Code Output channel: start/stop, port, errors (no prompt bodies unless user enables verbose logging).
- **UX:** `bridge.status` shows availability, bound address/port, and whether a token is required; status bar indicator toggles on availability.
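The backpressure cap mentioned above can be a simple in-flight counter; a sketch (names are illustrative, not part of the spec):
```ts
import type { ServerResponse } from 'http';

let inFlight = 0;

// Reject with 429 + Retry-After once the configured cap is reached.
async function withSlot<T>(max: number, res: ServerResponse, work: () => Promise<T>): Promise<T | undefined> {
  if (inFlight >= max) {
    res.writeHead(429, { 'Content-Type': 'application/json', 'Retry-After': '1' });
    res.end(JSON.stringify({ error: { message: 'too many concurrent requests', type: 'rate_limit_error', code: 'rate_limit_exceeded' } }));
    return undefined;
  }
  inFlight++;
  try { return await work(); } finally { inFlight--; }
}
```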
---
## Security & Compliance
- Local only: default bind to `127.0.0.1`; no remote exposure.
- Single user: relies on the user's authenticated VS Code Copilot session; the bridge does not handle tokens.
- No scraping/private endpoints: all calls go through the VS Code Chat provider.
- No multi-tenant proxying: do not expose to others; treat as a personal developer convenience.
---
## Testing Plan
1. **Health**
```bash
curl http://127.0.0.1:<port>/healthz
```
Expect `{ ok: true, copilot: "ok" }` when signed in.
2. **Streaming completion**
```bash
curl -N -H "Content-Type: application/json" \
-d '{"model":"gpt-4o-copilot","stream":true,"messages":[{"role":"user","content":"hello"}]}' \
http://127.0.0.1:<port>/v1/chat/completions
```
Expect multiple `data:` chunks and `[DONE]`.
3. **Non-stream** (`"stream": false`) → single JSON completion.
4. **Bearer** (when configured): missing/incorrect token → `401`.
5. **Unavailable**: sign out of Copilot → `/healthz` shows `unavailable`; POST returns `503`.
6. **Concurrency/throttle**: fire two requests; verify cap or serialized handling.
---
## Minimal Code Skeleton
### `src/extension.ts`
```ts
import * as vscode from 'vscode';
import * as http from 'http';

let server: http.Server | undefined;
let access: vscode.ChatAccess | undefined;

export async function activate(ctx: vscode.ExtensionContext) {
  const cfg = vscode.workspace.getConfiguration('bridge');
  if (!cfg.get<boolean>('enabled')) return;

  try { access = await vscode.chat.requestChatAccess('copilot'); }
  catch { access = undefined; }

  const host = cfg.get<string>('host') ?? '127.0.0.1';
  const portCfg = cfg.get<number>('port') ?? 0;
  const token = (cfg.get<string>('token') ?? '').trim();
  const hist = cfg.get<number>('historyWindow') ?? 3;

  server = http.createServer(async (req, res) => {
    try {
      if (token && req.headers.authorization !== `Bearer ${token}`) {
        res.writeHead(401, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ error: { message: 'unauthorized' } }));
        return;
      }
      if (req.method === 'GET' && req.url === '/healthz') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ ok: !!access, copilot: access ? 'ok' : 'unavailable', version: vscode.version }));
        return;
      }
      if (req.method === 'GET' && req.url === '/v1/models') {
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ data: [{ id: 'gpt-4o-copilot', object: 'model', owned_by: 'vscode-bridge' }] }));
        return;
      }
      if (req.method === 'POST' && req.url?.startsWith('/v1/chat/completions')) {
        if (!access) {
          res.writeHead(503, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify({ error: { message: 'Copilot unavailable', type: 'server_error', code: 'copilot_unavailable' } }));
          return;
        }
        const body = await readJson(req);
        const prompt = normalizeMessages(body?.messages ?? [], hist);
        const streamMode = body?.stream !== false; // default=true
        const session = await access.startSession();
        const chatStream = await session.sendRequest({ prompt, attachments: [] });
        if (streamMode) {
          res.writeHead(200, {
            'Content-Type': 'text/event-stream',
            'Cache-Control': 'no-cache',
            'Connection': 'keep-alive'
          });
          const id = `cmp_${Math.random().toString(36).slice(2)}`;
          const h1 = chatStream.onDidProduceContent((chunk) => {
            res.write(`data: ${JSON.stringify({
              id, object: 'chat.completion.chunk',
              choices: [{ index: 0, delta: { content: chunk } }]
            })}\n\n`);
          });
          let done = false;
          const endAll = () => {
            if (done) return; // onDidEnd and 'close' can both fire
            done = true;
            res.write('data: [DONE]\n\n'); res.end();
            h1.dispose(); h2.dispose();
          };
          const h2 = chatStream.onDidEnd(endAll);
          req.on('close', endAll);
          return;
        } else {
          let buf = '';
          const h1 = chatStream.onDidProduceContent((chunk) => { buf += chunk; });
          await new Promise<void>(resolve => {
            const h2 = chatStream.onDidEnd(() => { h1.dispose(); h2.dispose(); resolve(); });
          });
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify({
            id: `cmpl_${Math.random().toString(36).slice(2)}`,
            object: 'chat.completion',
            choices: [{ index: 0, message: { role: 'assistant', content: buf }, finish_reason: 'stop' }]
          }));
          return;
        }
      }
      res.writeHead(404).end();
    } catch (e: any) {
      res.writeHead(500, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ error: { message: e?.message ?? 'internal_error', type: 'server_error', code: 'internal_error' } }));
    }
  });

  server.listen(portCfg, host, () => {
    const addr = server!.address();
    const shown = typeof addr === 'object' && addr ? `${addr.address}:${addr.port}` : `${host}:${portCfg}`;
    vscode.window.setStatusBarMessage(`Copilot Bridge: ${access ? 'OK' : 'Unavailable'} @ ${shown}`);
  });

  ctx.subscriptions.push({ dispose: () => server?.close() });
}

export function deactivate() {
  server?.close();
}

function readJson(req: http.IncomingMessage): Promise<any> {
  return new Promise((resolve, reject) => {
    let data = '';
    req.on('data', c => data += c);
    req.on('end', () => { try { resolve(data ? JSON.parse(data) : {}); } catch (e) { reject(e); } });
    req.on('error', reject);
  });
}

function normalizeMessages(messages: any[], histWindow: number): string {
  const system = messages.filter((m: any) => m.role === 'system').pop()?.content;
  const turns = messages.filter((m: any) => m.role === 'user' || m.role === 'assistant').slice(-histWindow * 2);
  const dialog = turns.map((m: any) => `${m.role}: ${asText(m.content)}`).join('\n');
  return `${system ? `[SYSTEM]\n${asText(system)}\n\n` : ''}[DIALOG]\n${dialog}`;
}

function asText(content: any): string {
  if (typeof content === 'string') return content;
  if (Array.isArray(content)) return content.map(asText).join('\n');
  if ((content as any)?.text) return (content as any).text;
  try { return JSON.stringify(content); } catch { return String(content); }
}
```
---
## Delivery Checklist
- Extension skeleton with settings + commands.
- HTTP server (loopback), `/healthz`, `/v1/models`, `/v1/chat/completions`.
- Copilot access + session streaming.
- Prompt normalization (system + last N turns).
- SSE mapping and non-stream fallback.
- Optional bearer token check.
- Status bar + Output channel diagnostics.
- Tests: health, streaming, non-stream, auth, unavailability.

README.md (258 lines changed)

@@ -1,156 +1,192 @@
-# VS Code Copilot Bridge (Desktop, Inference-only)
+<img src="images/icon.png" width="100" />
-Local OpenAI-compatible HTTP facade to GitHub Copilot via the VS Code Language Model API.
+# Copilot Bridge (VS Code Extension)
-- Endpoints (local-only, default bind 127.0.0.1):
--  POST /v1/chat/completions (SSE streaming; use "stream": false for non-streaming)
--  GET /v1/models (dynamic listing of available Copilot models)
--  GET /healthz (ok/unavailable + vscode.version)
+[![Visual Studio Marketplace](https://vsmarketplacebadges.dev/version/thinkability.copilot-bridge.svg)](https://marketplace.visualstudio.com/items?itemName=thinkability.copilot-bridge)
-- Copilot pipe: vscode.lm.selectChatModels({ vendor: "copilot", family: "gpt-4o" }) → model.sendRequest(messages)
+Expose GitHub Copilot as a local, OpenAI-compatible HTTP endpoint running inside VS Code. The bridge forwards chat requests to Copilot using the VS Code Language Model API and streams results back to you.
-- Prompt normalization: last system message + last N user/assistant turns (default 3) rendered as:
-  [SYSTEM]
-  [DIALOG]
-  user: …
-  assistant: …
+What you get:
-- Security: loopback-only binding by default; optional Authorization: Bearer <token>.
+- Local HTTP server (loopback-only by default)
+- OpenAI-style endpoints and payloads
+- Server-Sent Events (SSE) streaming for chat completions
+- Dynamic listing of available Copilot chat models
-## Install, Build, and Run
+Endpoints exposed:
-Prerequisites:
-- VS Code Desktop (stable), GitHub Copilot signed in
-- Node.js 18+ (recommended)
-- npm
+- POST /v1/chat/completions — OpenAI-style chat API (streaming by default)
+- GET /v1/models — lists available Copilot models
+- GET /healthz — health + VS Code version
-- Auto-recovery: the bridge re-requests Copilot access on each chat request if missing; no restart required after signing in. `/healthz` will best-effort recheck only when `bridge.verbose` is true.
-The extension will autostart and requires VS Code to be running.
+### Don't break your Copilot license
+This extension allows you to use your Copilot outside of VS Code **for your own personal use only**, obeying the same terms as set forth in the VS Code and GitHub Copilot terms of service.
+## Quick start
-Steps:
-1) Install deps and compile:
-   npm install
-   npm run compile
+Requirements:
-2) Launch in VS Code (recommended for debugging):
-- Open this folder in VS Code
-- Press F5 to run the extension in a new Extension Development Host
+- VS Code Desktop with GitHub Copilot signed in
+- If building locally: Node.js 18+ and npm
-3) Enable the bridge:
-- Command Palette → “Copilot Bridge: Enable”
-- Or set in settings: bridge.enabled = true
-- “Copilot Bridge: Status” shows the bound address/port and token requirement
+Steps (dev run):
-Optional: Packaging a VSIX
-- You can package with vsce (not included):
-  npm i -g @vscode/vsce
-  vsce package
-- Then install the generated .vsix via “Extensions: Install from VSIX…”
-## Enabling the Language Model API (if required)
+1. Install and compile
-If your VS Code build requires enabling proposed APIs for the Language Model API, start with:
-### Models and Selection
+```bash
+npm install
+npm run compile
+```
-- The bridge lists available GitHub Copilot chat models using the VS Code Language Model API. Example:
-  curl http://127.0.0.1:<port>/v1/models
-  → { "data": [ { "id": "gpt-4o-copilot", ... }, ... ] }
+1. Press F5 in VS Code to launch an Extension Development Host
-- To target a specific model for inference, set the "model" field in your POST body. The bridge accepts:
-  - IDs returned by /v1/models (e.g., "gpt-4o-copilot")
-  - A Copilot family name (e.g., "gpt-4o")
-  - "copilot" to allow default selection
+1. In the Dev Host, enable the bridge
-Examples:
+   - Command Palette → “Copilot Bridge: Enable”
+   - Or set setting bridge.enabled = true
+1. Check status
+   - Command Palette → “Copilot Bridge: Status” (shows bound address/port and whether a token is required)
+Optional: package a VSIX
+```bash
+npm run package
+```
+Then install the generated .vsix via “Extensions: Install from VSIX…”.
+## Use it
+Replace PORT with what “Copilot Bridge: Status” shows.
+- List models
+```bash
+curl http://127.0.0.1:$PORT/v1/models
+```
+- Stream a completion
 ```bash
 curl -N -H "Content-Type: application/json" \
   -d '{"model":"gpt-4o-copilot","messages":[{"role":"user","content":"hello"}]}' \
-  http://127.0.0.1:<port>/v1/chat/completions
+  http://127.0.0.1:$PORT/v1/chat/completions
 ```
-curl -N -H "Content-Type: application/json" \
-  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hello"}]}' \
-  http://127.0.0.1:<port>/v1/chat/completions
+- Non-streaming
-- If a requested model is unavailable, the bridge returns:
-  404 with { "error": { "code": "model_not_found", "reason": "not_found" } }.
-- Stable VS Code: `code --enable-proposed-api thinkability.copilot-bridge`
-- VS Code Insiders or Extension Development Host (F5) also works.
+```bash
+curl -H "Content-Type: application/json" \
+  -d '{"model":"gpt-4o-copilot","stream":false,"messages":[{"role":"user","content":"hello"}]}' \
+  http://127.0.0.1:$PORT/v1/chat/completions
+```
-When the API is not available, the Output (“Copilot Bridge”) will show:
-“VS Code Language Model API not available; update VS Code or enable proposed API.”
+Tip: You can also pass a family like "gpt-4o" as model. If unavailable you’ll get 404 with code model_not_found.
-## Troubleshooting
+### Using OpenAI SDK (Node.js)
-- /healthz shows `copilot: "unavailable"` with a `reason`:
-  - `missing_language_model_api`: Language Model API not available
-  - `copilot_model_unavailable`: No Copilot models selectable
-  - `consent_required`: User consent/sign-in required for Copilot models
-  - `rate_limited`: Provider throttling
-  - `not_found`: Requested model not found
-  - `copilot_unavailable`: Other provider errors
-- POST /v1/chat/completions returns 503 with the same `reason` codes.
+Point your client to the bridge and use your token (if set) as apiKey.
+```ts
+import OpenAI from "openai";
+const client = new OpenAI({
+  baseURL: `http://127.0.0.1:${process.env.PORT}/v1`,
+  apiKey: process.env.BRIDGE_TOKEN || "not-used-when-empty",
+});
+const rsp = await client.chat.completions.create({
+  model: "gpt-4o-copilot",
+  messages: [{ role: "user", content: "hello" }],
+  stream: false,
+});
+console.log(rsp.choices[0].message?.content);
+```
+## How it works
+The extension uses VS Code’s Language Model API to select a GitHub Copilot chat model and forward your conversation. Messages are normalized to preserve the last system prompt and the most recent user/assistant turns (configurable window). Responses are streamed back via SSE or returned as a single JSON payload.
 ## Configuration (bridge.*)
-- bridge.enabled (boolean; default false): auto-start on VS Code startup
-- bridge.host (string; default "127.0.0.1"): bind address (keep on loopback)
-- bridge.port (number; default 0): 0 = ephemeral port
-- bridge.token (string; default ""): optional bearer token; empty disables auth
-- bridge.historyWindow (number; default 3): number of user/assistant turns to keep
-- bridge.maxConcurrent (number; default 1): max concurrent chat requests; excess → 429
-## Viewing logs
+Settings live under “Copilot Bridge” in VS Code settings:
-To see verbose logs:
-1) Enable: Settings → search “Copilot Bridge” → enable “bridge.verbose”
-2) Open: View → Output → select “Copilot Bridge” in the dropdown
-3) Trigger a request (e.g., curl /v1/chat/completions). You’ll see:
-  - HTTP request lines (method/path)
-  - Model selection attempts (“Copilot model selected.”)
-  - SSE lifecycle (“SSE start …”, “SSE end …”)
-  - Health checks (best-effort model check when verbose is on)
-  - API diagnostics (e.g., missing Language Model API)
-- bridge.verbose (boolean; default false): verbose logs to “Copilot Bridge” output channel
+| Setting | Default | Description |
+|---|---|---|
+| bridge.enabled | false | Start the bridge automatically when VS Code launches. |
+| bridge.host | 127.0.0.1 | Bind address. Keep on loopback for safety. |
+| bridge.port | 0 | Port for the HTTP server. 0 picks an ephemeral port. |
+| bridge.token | "" | Optional bearer token. If set, requests must include `Authorization: Bearer <token>`. |
+| bridge.historyWindow | 3 | Number of user/assistant turns kept (system message is tracked separately). |
+| bridge.maxConcurrent | 1 | Maximum concurrent /v1/chat/completions; excess return 429. |
+| bridge.verbose | false | Verbose logs in the “Copilot Bridge” Output channel. |
-## Manual Testing (curl)
+Status bar: Shows availability and bound address (e.g., “Copilot Bridge: OK @ 127.0.0.1:12345”).
-Replace <port> with the port shown in “Copilot Bridge: Status”.
+## Endpoints
-Health:
-curl http://127.0.0.1:<port>/healthz
+### GET /healthz
-Models:
-curl http://127.0.0.1:<port>/v1/models
+Returns `{ ok: true, copilot: "ok" | "unavailable", reason?: string, version: <vscode.version> }`.
-Streaming completion:
-curl -N -H "Content-Type: application/json" \
-  -d '{"model":"gpt-4o-copilot","stream":true,"messages":[{"role":"user","content":"hello"}]}' \
-  http://127.0.0.1:<port>/v1/chat/completions
+### GET /v1/models
-Non-stream:
-curl -H "Content-Type: application/json" \
-  -d '{"model":"gpt-4o-copilot","stream":false,"messages":[{"role":"user","content":"hello"}]}' \
-  http://127.0.0.1:<port>/v1/chat/completions
+Returns `{ data: [{ id, object: "model", owned_by: "vscode-bridge" }] }`.
-Auth (when bridge.token is set):
-- Missing/incorrect Authorization: Bearer <token> → 401
+### POST /v1/chat/completions
-Copilot unavailable:
-- Sign out of GitHub Copilot; /healthz shows unavailable; POST returns 503 envelope
+OpenAI-style body with `messages` and optional `model` and `stream`. Streaming uses SSE with `data: { ... }` events and a final `data: [DONE]`.
-Concurrency:
-- Fire 2+ concurrent requests; above bridge.maxConcurrent → 429 rate_limit_error
+Accepted model values:
-## Notes
+- IDs from /v1/models (e.g., `gpt-4o-copilot`)
+- Copilot family names (e.g., `gpt-4o`)
+- `copilot` to allow default selection
-- Desktop-only, in-process. No VS Code Server dependency.
-- Single-user, local loopback. Do not expose to remote interfaces.
-- Non-goals: tools/function calling emulation, workspace file I/O, multi-tenant proxying.
+Common errors:
+- 401 unauthorized when token is set but header is missing/incorrect
+- 404 model_not_found when the requested family/ID isn’t available
+- 429 rate_limit_exceeded when above bridge.maxConcurrent
+- 503 copilot_unavailable when the Language Model API or Copilot model isn’t available
+## Logs and diagnostics
+To view logs:
+1. Enable “bridge.verbose” (optional)
+1. View → Output → “Copilot Bridge”
+1. Trigger requests to see HTTP lines, model selection, SSE lifecycle, and health messages
+If the Language Model API is missing or your VS Code build doesn’t support it, you’ll see a message in the Output channel. Use a recent VS Code build and make sure GitHub Copilot is signed in.
+## Security notes
+- Binds to 127.0.0.1 by default. Do not expose to remote interfaces.
+- Set `bridge.token` to require `Authorization: Bearer <token>` on every request.
+- Single-user, local process; intended for local tooling and experiments.
+## Troubleshooting
+The `/healthz` endpoint may report `copilot: "unavailable"` for reasons like:
+- missing_language_model_api — VS Code API not available
+- copilot_model_unavailable — No Copilot models selectable
+- not_found — Requested model/family not found
+- consent_required, rate_limited, copilot_unavailable — provider-specific or transient issues
+POST /v1/chat/completions returns 503 with similar reason codes when Copilot isn’t usable.
 ## Development
-- Build: npm run compile
-- Watch: npm run watch
-- Main: src/extension.ts
-- Note: Previously used Chat API shims are no longer needed; the bridge now uses the Language Model API.
+- Build: `npm run compile`
+- Watch: `npm run watch`
+- Entry point: `src/extension.ts`
 ## License
 Apache-2.0

images/icon.png (new binary file, 16 KiB; not shown)

package-lock.json (generated, 5 lines changed)

@@ -1,12 +1,13 @@
 {
   "name": "copilot-bridge",
-  "version": "0.1.2",
+  "version": "0.1.5",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
       "name": "copilot-bridge",
-      "version": "0.1.2",
+      "version": "0.1.5",
       "license": "Apache License 2.0",
       "dependencies": {
         "polka": "^0.5.2"
       },

package.json

@@ -1,12 +1,20 @@
 {
   "private": false,
+  "icon": "images/icon.png",
   "name": "copilot-bridge",
   "displayName": "Copilot Bridge",
-  "description": "Local OpenAI-compatible chat endpoint bridging to GitHub Copilot via the VS Code Language Model API.",
-  "version": "0.1.5",
+  "description": "Local OpenAI-compatible chat endpoint (inference) bridging to GitHub Copilot via the VS Code Language Model API.",
+  "version": "0.2.0",
   "publisher": "thinkability",
+  "repository": {
+    "type": "git",
+    "url": "https://github.com/larsbaunwall/vscode-copilot-bridge.git"
+  },
+  "author": "larsbaunwall",
   "engines": {
     "vscode": "^1.90.0"
   },
+  "license": "Apache License 2.0",
   "extensionKind": [
     "ui"
   ],
@@ -81,7 +89,8 @@
"scripts": {
"compile": "tsc -p .",
"watch": "tsc -w -p .",
"vscode:prepublish": "npm run compile"
"vscode:prepublish": "npm run compile",
"package": "npx vsce package"
},
"dependencies": {
"polka": "^0.5.2"