Refresh the readme a bit

Lars Baunwall 2025-10-04 14:51:31 +02:00
parent 38fe95b5f6
commit 3649adfca1


[![Visual Studio Marketplace Version](https://img.shields.io/visual-studio-marketplace/v/thinkability.copilot-bridge)](https://marketplace.visualstudio.com/items?itemName=thinkability.copilot-bridge)
[![Visual Studio Marketplace Installs](https://img.shields.io/visual-studio-marketplace/d/thinkability.copilot-bridge?label=installs)](https://marketplace.visualstudio.com/items?itemName=thinkability.copilot-bridge)
## 🛠️ Bring your Copilot subscription to your local toolchain
Copilot Bridge turns Visual Studio Code into a local OpenAI-compatible gateway to the GitHub Copilot access you already pay for. Point your favorite terminals, scripts, and desktop apps at the bridge and keep the chat experience you rely on inside the editor—without routing traffic to another vendor.
- **Reuse existing OpenAI integrations** — Swap the base URL and keep your Copilot responses flowing into the same workflows.
- **Stay in control** — Keep latency low, keep traffic loopback-only, and gate access with an optional bearer token.
## 💡 How developers use Copilot Bridge
- Script Copilot answers into local build helpers, documentation generators, or commit bots.
- Experiment with agents and prompts while keeping requests on-device.
- Trigger Copilot completions from Raycast, Alfred, or custom UI shells without leaving VS Code.
## Feature highlights
- Local HTTP server bound to 127.0.0.1 by default
- Fully OpenAI-style `/v1/chat/completions`, `/v1/models`, and `/health` endpoints
- Concurrency guard with early 429 handling to keep your IDE responsive
- Starts automatically with VS Code once enabled, so the bridge is always ready when you are
> [!IMPORTANT]
> ⚖️ **Usage note**: Copilot Bridge extends your personal GitHub Copilot subscription. Use it in accordance with the existing Copilot and VS Code terms of service; you are responsible for staying compliant.
## 🚀 Get started in minutes
### Prerequisites
- Visual Studio Code Desktop with GitHub Copilot signed in
- (Optional for local builds) Node.js 18+ and npm
### ▶️ Install & launch
1. Install the extension from the [Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=thinkability.copilot-bridge) or side-load the latest `.vsix`.
2. In VS Code, press **F5** to open an Extension Development Host if you're iterating locally.
- Set the `bridge.enabled` setting to `true`.
5. Check the status anytime via “Copilot Bridge: Status” to see the bound address, port, and auth requirements.
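The toggle above (and the optional bearer token described under the security notes) can also be set directly in `settings.json`; for example, with a placeholder token value:

```jsonc
{
  // Start the local server with VS Code
  "bridge.enabled": true,
  // Optional: require "Authorization: Bearer <token>" on every request
  "bridge.token": "replace-with-a-long-random-string"
}
```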
> [!NOTE]
> Copilot Bridge stops listening as soon as VS Code closes. Keep the editor running whenever you need the endpoint available.
### 🏗️ Build from source (optional)
```bash
npm install
npm run compile
```

Package a VSIX when you need to distribute a build:

```bash
npm run package
```
## 📡 First requests
Replace `PORT` with what “Copilot Bridge: Status” reports.
```bash
curl -H "Content-Type: application/json" \
  -d '{"model": "copilot", "messages": [{"role": "user", "content": "Hello"}]}' \
  http://127.0.0.1:$PORT/v1/chat/completions
```
> [!TIP]
> You can also pass a family such as `gpt-4o`. If unavailable, the bridge returns `404 model_not_found`.
### 💻 Use the OpenAI SDK (Node.js)
```ts
import OpenAI from "openai";

// Point the SDK at the bridge; replace PORT with what “Copilot Bridge: Status” reports.
// The apiKey is only checked when bridge.token is set.
const client = new OpenAI({
  baseURL: "http://127.0.0.1:PORT/v1",
  apiKey: "copilot-bridge",
});

const rsp = await client.chat.completions.create({
  model: "copilot",
  messages: [{ role: "user", content: "Hello from Copilot Bridge!" }],
});

console.log(rsp.choices[0].message?.content);
```
## 🧠 Architecture at a glance
The extension invokes VS Code's Language Model API to select an available GitHub Copilot chat model, normalizes your recent conversation turns, and forwards the request from a local Polka HTTP server. Responses are streamed back via SSE (or buffered when `stream: false`). The server respects concurrency limits so multiple calls won't stall the editor.
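The concurrency limit amounts to an in-flight counter that rejects calls once full. A minimal sketch (class and method names are hypothetical, not the extension's internals):

```ts
// Illustrative concurrency guard: once `limit` requests are in flight,
// further calls are rejected so the server can answer 429 early.
class ConcurrencyGuard {
  private active = 0;
  constructor(private readonly limit: number) {}

  // Returns false when the limit is reached; the caller should reply 429.
  tryAcquire(): boolean {
    if (this.active >= this.limit) return false;
    this.active += 1;
    return true;
  }

  // Call when a request finishes (success or failure) to free the slot.
  release(): void {
    this.active = Math.max(0, this.active - 1);
  }
}
```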
## 📚 Technical reference
### 🔌 Endpoints
- `GET /health` — Reports IDE version, Copilot availability, and reason codes like `missing_language_model_api`.
- `GET /v1/models` — Lists Copilot model IDs the bridge can access.
Supported `model` values include IDs returned by `/v1/models`, Copilot families such as `gpt-4o`, or the keyword `copilot` for default selection. Common error responses: `401 unauthorized`, `404 model_not_found`, `429 rate_limit_exceeded`, `503 copilot_unavailable`.
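Client code can branch on those statuses; a small sketch (the helper name is hypothetical, not part of the extension):

```ts
// Map the bridge's documented HTTP statuses to its error codes.
function bridgeErrorCode(status: number): string | null {
  switch (status) {
    case 401: return "unauthorized";
    case 404: return "model_not_found";
    case 429: return "rate_limit_exceeded";
    case 503: return "copilot_unavailable";
    default:  return null; // success or an undocumented status
  }
}
```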
#### OpenAI compatibility notes
- Always returns a single choice (`n = 1`) and omits fields such as `usage`, `service_tier`, and `system_fingerprint`.
- Treats `tool_choice: "required"` the same as `"auto"`; `parallel_tool_calls` is ignored because the VS Code API lacks those hooks.
- Extra request options (`logprobs`, `response_format`, `seed`, `metadata`, `store`, etc.) are accepted but currently no-ops.
- Streaming tool call deltas send complete JSON fragments; clients should replace previously received argument snippets.
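The replace-not-append rule for tool call arguments can be sketched as follows (types and names are illustrative, not the bridge's wire format verbatim):

```ts
// Each streamed delta carries a COMPLETE arguments fragment, so a client
// should overwrite what it stored before rather than concatenating.
type ToolCallDelta = { index: number; name?: string; arguments?: string };

function applyToolCallDeltas(deltas: ToolCallDelta[]) {
  const calls = new Map<number, { name?: string; arguments?: string }>();
  for (const d of deltas) {
    const call = calls.get(d.index) ?? {};
    if (d.name !== undefined) call.name = d.name;
    if (d.arguments !== undefined) call.arguments = d.arguments; // replace, not append
    calls.set(d.index, call);
  }
  return calls;
}
```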
### ⚙️ Configuration (bridge.*)
Settings appear under **Copilot Bridge** in VS Code settings:
Status bar: `Copilot Bridge: OK @ 127.0.0.1:12345` (or similar) shows when the server is ready.
### 🧾 Logs & diagnostics
1. (Optional) Enable `bridge.verbose`.
2. Open **View → Output → “Copilot Bridge”**.
If Copilot or the Language Model API isn't available, the output channel explains the reason along with the health status code.
### 🔒 Security posture
> [!WARNING]
> Never expose the bridge to remote interfaces. It is intended for single-user, local workflows only.
- Binds to `127.0.0.1` by default; do not expose it to remote interfaces.
- Set `bridge.token` to require `Authorization: Bearer <token>` on each request.
- Designed for single-user, local workflows and experiments.
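The bearer-token gate described above comes down to a single comparison; a sketch (the function name is hypothetical):

```ts
// When bridge.token is unset the bridge accepts local requests as-is;
// when set, the Authorization header must match exactly.
function isAuthorized(authHeader: string | undefined, configuredToken: string | null): boolean {
  if (!configuredToken) return true; // no token configured
  return authHeader === `Bearer ${configuredToken}`;
}
```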
### 🚑 Troubleshooting
`/health` may report `copilot: "unavailable"` with reason codes such as:
`POST /v1/chat/completions` returns `503` with similar reasons when Copilot cannot handle the request.
### 🛠️ Development workflow
- Build once: `npm run compile`
- Watch mode: `npm run watch`
- Entry point: `src/extension.ts`
## 🗂️ Changelog
- **v1.1.0** — Simplified architecture with emphasis on faster inference (20–30% improvement).
- **v1.0.0** — Modular architecture refactor, OpenAI typings, and tool-calling support.
- **v0.1.3** — Migration to the Language Model API with robust guards and reason codes.
- **v0.1.0** — Initial OpenAI-compatible bridge with SSE streaming.
## 📄 License
Apache-2.0