Update VNC functionality for Proxmox 7 (#148)

Chown `targets`, Add run and kill scripts

Lol Joe figured it out

* Dude it works holy shit

We still need to fix some logistical bugs, probably, and also remove some dead
code lol

* Open VNC session on the node that the VM belongs to

Figured out why I couldn't open a session on anything but 01. It was
because I was making the API call on proxmox01-nrh. So that's where the
session opened. I hope that by doing this, it will balance the load
(what little there is) from VNC sessions.
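The fix above boils down to deriving the API host from the VM's own node instead of always hitting proxmox01-nrh. A minimal sketch (helper name and example hostnames are illustrative; the real request is made in `open_vnc_session` later in this commit):

```python
def vncproxy_url(vmid: int, node: str) -> str:
    """Hypothetical helper: build the vncproxy API URL from the VM's own
    node, so Proxmox spawns the VNC session on that node rather than on
    whichever host the API client happens to point at."""
    return f'https://{node}.csh.rit.edu:8006/api2/json/nodes/{node}/qemu/{vmid}/vncproxy'

print(vncproxy_url(105, 'proxmox05-nrh'))
```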

* Update websockify-related tasks

* Remove SSH key from build

* Add option to specify VNC port.

Should be 443 for OKD, probably 8081 for development.
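For example, using the environment variable this commit adds to config.py:

```shell
# Behind OKD's router VNC traffic rides on 443; for local development,
# point noVNC clients at websockify directly on 8081.
export PROXSTAR_VNC_PORT=8081
echo "$PROXSTAR_VNC_PORT"
```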

This hosts a smattering of fixes, actually uses gunicorn properly(?),
launches websockify correctly, and introduces MORE DEAD CODE!

TODO: Fix the scheduling system

* Make things not crash as much :)

* Remove obviously dead code

There's still some code in here that may require more careful
extraction, testing, and review, so I'm saving that for another PR.

* Fix Joe's complaints

* Replace hardcoded URL
Willard Nilges 2022-07-29 20:56:00 -04:00 committed by GitHub
parent 3bad0f003c
commit 2c17d6988f
15 changed files with 213 additions and 144 deletions


@@ -7,5 +7,5 @@ COPY start_worker.sh start_scheduler.sh .
COPY .git ./.git
COPY *.py .
COPY proxstar ./proxstar
RUN touch proxmox_ssh_key && chmod a+w proxmox_ssh_key # This is some OKD shit.
ENTRYPOINT ddtrace-run python3 wsgi.py
RUN touch proxmox_ssh_key targets && chmod a+w proxmox_ssh_key targets # This is some OKD shit.
ENTRYPOINT ddtrace-run gunicorn proxstar:app --bind=0.0.0.0:8080

HACKING/.gitignore vendored

@@ -1,3 +1,4 @@
volume/
volume/*
.env
.env
ssh_key


@@ -1,3 +1,43 @@
# Contributing
1. [Fork](https://help.github.com/en/articles/fork-a-repo) this repository
- Optionally create a new [git branch](https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell) if your change is more than a small tweak (`git checkout -b BRANCH-NAME-HERE`)
2. Follow the _Podman Environment Instructions_ to set up a Podman dev environment. If you'd like to run Proxstar entirely on your own hardware, check out _Setting up a full dev environment_
3. Create a Virtualenv to do your linting in
```
mkdir venv
python3.8 -m venv venv
source venv/bin/activate
```
4. Make your changes locally, commit, and push to your fork
- If you want to test locally, you should copy `HACKING/.env.sample` to `HACKING/.env`, and talk to an RTP about filling in secrets.
- Lint and format your local changes with `pylint proxstar` and `black proxstar`
- You'll need dependencies installed locally to do this. You should do that in a [venv](https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments) of some sort to keep your system clean. All the dependencies are listed in [requirements.txt](./requirements.txt), so you can install everything with `pip install -r requirements.txt`. You'll need python 3.6 at minimum, though things should work up to python 3.8.
5. Create a [Pull Request](https://help.github.com/en/articles/about-pull-requests) on this repo for our Webmasters to review
### Podman Environment Instructions
1. Build your containers. The `proxstar` container serves as proxstar, rq, rq-scheduler, and VNC. The `proxstar-postgres` container sets up the database schema.
`mkdir HACKING/proxstar-postgres/volume`
`podman build . --tag=proxstar`
`podman build HACKING/proxstar-postgres --tag=proxstar-postgres`
2. Configure your environment variables. I'd recommend setting up a .env file and passing that into your container. Check `.env.template` for more info.
3. Run it. This sets up redis, postgres, rq, and proxstar.
`./HACKING/launch_env.sh`
4. To stop all containers, use the provided script
`./HACKING/stop_env.sh`
## Setting up a full dev environment
If you want to work on Proxstar using a 1:1 development setup, there are a couple things you're going to need
@@ -6,9 +46,11 @@ If you want to work on Proxstar using a 1:1 development setup, there are a coupl
- SSH into
- With portforwarding (see `man ssh` for info on the `-L` option)
- and run
- Podman
- Flask
- Redis
- Docker
- Redis
- Postgres
- RQ
- At least one (1) Proxmox host running Proxmox >6.3
- A CSH account
- An RTP (to tell you secrets)
@@ -25,24 +67,4 @@ If you're trying to run this all on a VM without a graphical web browser, you ca
```
ssh example@dev-server.csh.rit.edu -L 8000:localhost:8000
```
# New Deployment Instructions
1. Build your containers. The `proxstar` container serves as proxstar, rq, rq-scheduler, and VNC. The `proxstar-postgres` container sets up the database schema.
`mkdir HACKING/proxstar-postgres/volume`
`podman build . --tag=proxstar`
`podman build HACKING/proxstar-postgres --tag=proxstar-postgres`
2. Configure your environment variables. I'd recommend setting up a .env file and passing that into your container. Check `.env.template` for more info.
3. Run it. This sets up redis, postgres, rq, and proxstar.
```
podman run --rm -d --network=proxstar --name=proxstar-redis redis:alpine
podman run --rm -d --network=proxstar --name=proxstar-postgres -e POSTGRES_PASSWORD=changeme -v ./HACKING/proxstar-postgres/volume:/var/lib/postgresql/data:Z proxstar-postgres
podman run --rm -d --network=proxstar --name=proxstar-rq-scheduler --env-file=HACKING/.env --entrypoint ./start_scheduler.sh proxstar
podman run --rm -d --network=proxstar --name=proxstar-rq --env-file=HACKING/.env --entrypoint ./start_worker.sh proxstar
podman run --rm -d --network=proxstar --name=proxstar -p 8000:8000 --env-file=HACKING/.env proxstar
```

HACKING/launch_env.sh Executable file

@@ -0,0 +1,6 @@
#!/bin/bash
podman run --rm -d --network=proxstar --name=proxstar-redis redis:alpine
podman run --rm -d --network=proxstar --name=proxstar-postgres -e POSTGRES_PASSWORD=changeme -v ./HACKING/proxstar-postgres/volume:/var/lib/postgresql/data:Z proxstar-postgres
podman run --rm -d --network=proxstar --name=proxstar-rq-scheduler --env-file=HACKING/.env --entrypoint ./start_scheduler.sh proxstar
podman run --rm -d --network=proxstar --name=proxstar-rq --env-file=HACKING/.env --entrypoint ./start_worker.sh proxstar
podman run --rm -it --network=proxstar --name=proxstar -p 8000:8000 -p 8081:8081 --env-file=HACKING/.env --entrypoint='["gunicorn", "proxstar:app", "--bind=0.0.0.0:8000"]' proxstar

HACKING/stop_env.sh Executable file

@@ -0,0 +1,6 @@
#!/bin/bash
podman kill proxstar
podman kill proxstar-rq
podman kill proxstar-rq-scheduler
podman stop proxstar-redis
podman stop proxstar-postgres


@@ -17,13 +17,7 @@ It is available to house members at [proxstar.csh.rit.edu](https://proxstar.csh.
## Contributing
1. [Fork](https://help.github.com/en/articles/fork-a-repo) this repository
- Optionally create a new [git branch](https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell) if your change is more than a small tweak (`git checkout -b BRANCH-NAME-HERE`)
3. Make your changes locally, commit, and push to your fork
- If you want to test locally, you should copy `config.py` to `config_local.py`, and talk to an RTP about filling in secrets.
- Lint and format your local changes with `pylint proxstar` and `black proxstar`
- You'll need dependencies installed locally to do this. You should do that in a [venv](https://packaging.python.org/tutorials/installing-packages/#creating-virtual-environments) of some sort to keep your system clean. All the dependencies are listed in [requirements.txt](./requirements.txt), so you can install everything with `pip install -r requirements.txt`. You'll need python 3.6 at minimum, though things should work up to python 3.8.
4. Create a [Pull Request](https://help.github.com/en/articles/about-pull-requests) on this repo for our Webmasters to review
Check out `HACKING/` for more info.
## Questions/Concerns



@@ -62,8 +62,10 @@ RQ_DASHBOARD_REDIS_HOST = environ.get('PROXSTAR_REDIS_HOST', 'localhost')
REDIS_PORT = int(environ.get('PROXSTAR_REDIS_PORT', '6379'))
# VNC
WEBSOCKIFY_PATH = environ.get('PROXSTAR_WEBSOCKIFY_PATH', '/opt/app-root/bin/websockify')
WEBSOCKIFY_TARGET_FILE = environ.get('PROXSTAR_WEBSOCKIFY_TARGET_FILE', '/opt/app-root/src/targets')
WEBSOCKIFY_PATH = environ.get('PROXSTAR_WEBSOCKIFY_PATH', '/usr/local/bin/websockify')
WEBSOCKIFY_TARGET_FILE = environ.get('PROXSTAR_WEBSOCKIFY_TARGET_FILE', '/opt/proxstar/targets')
VNC_HOST = environ.get('PROXSTAR_VNC_HOST', 'proxstar-vnc.csh.rit.edu')
VNC_PORT = environ.get('PROXSTAR_VNC_PORT', '443')
# SENTRY
# If you set the sentry dsn locally, make sure you use the local-dev or some


@@ -2,7 +2,6 @@ import os
import subprocess
from flask import Flask
app = Flask(__name__)
if os.path.exists(os.path.join(app.config.get('ROOT_DIR', os.getcwd()), "config_local.py")):
config = os.path.join(app.config.get('ROOT_DIR', os.getcwd()), "config_local.py")
@@ -16,6 +15,7 @@ timeout = app.config['TIMEOUT']
def start_websockify(websockify_path, target_file):
result = subprocess.run(['pgrep', 'websockify'], stdout=subprocess.PIPE)
if not result.stdout:
print("Websockify is stopped. Starting websockify.")
subprocess.call(
[
websockify_path,
@@ -28,7 +28,10 @@ def start_websockify(websockify_path, target_file):
],
stdout=subprocess.PIPE,
)
else:
print("Websockify started.")
def on_starting(server):
print("Booting Websockify server in daemon mode...")
start_websockify(app.config['WEBSOCKIFY_PATH'], app.config['WEBSOCKIFY_TARGET_FILE'])


@@ -6,13 +6,25 @@ import logging
import subprocess
import psutil
import psycopg2
# from gunicorn_conf import start_websockify
import rq_dashboard
from rq import Queue
from redis import Redis
from rq_scheduler import Scheduler
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from flask import Flask, render_template, request, redirect, session, abort, url_for, jsonify
from flask import (
Flask,
render_template,
request,
redirect,
session,
abort,
url_for,
jsonify,
Response,
)
import sentry_sdk
from sentry_sdk.integrations.flask import FlaskIntegration
from sentry_sdk.integrations.rq import RqIntegration
@@ -34,13 +46,11 @@ from proxstar.db import (
set_template_info,
)
from proxstar.vnc import (
send_stop_ssh_tunnel,
stop_ssh_tunnel,
add_vnc_target,
start_ssh_tunnel,
get_vnc_targets,
delete_vnc_target,
stop_websockify,
open_vnc_session,
)
from proxstar.auth import get_auth
from proxstar.util import gen_password
@@ -67,8 +77,9 @@ sentry_sdk.init(
environment=app.config['SENTRY_ENV'],
)
with open('proxmox_ssh_key', 'w') as ssh_key_file:
ssh_key_file.write(app.config['PROXMOX_SSH_KEY'])
if not os.path.exists('proxmox_ssh_key'):
with open('proxmox_ssh_key', 'w') as ssh_key_file:
ssh_key_file.write(app.config['PROXMOX_SSH_KEY'])
ssh_tunnels = []
@@ -142,14 +153,22 @@ app.register_blueprint(rq_dashboard_blueprint, url_prefix='/rq')
@app.errorhandler(404)
def not_found(e):
user = User(session['userinfo']['preferred_username'])
return render_template('404.html', user=user, e=e), 404
try:
user = User(session['userinfo']['preferred_username'])
return render_template('404.html', user=user, e=e), 404
except KeyError as exception:
print(exception)
return render_template('404.html', user='chom', e=e), 404
@app.errorhandler(403)
def forbidden(e):
user = User(session['userinfo']['preferred_username'])
return render_template('403.html', user=user, e=e), 403
try:
user = User(session['userinfo']['preferred_username'])
return render_template('403.html', user=user, e=e), 403
except KeyError as exception:
print(exception)
return render_template('403.html', user='chom', e=e), 403
@app.route('/')
@@ -247,15 +266,16 @@ def vm_power(vmid, action):
vm.start()
elif action == 'stop':
vm.stop()
send_stop_ssh_tunnel(vmid)
# TODO (willnilges): Replace with remove target function or something
# send_stop_ssh_tunnel(vmid)
elif action == 'shutdown':
vm.shutdown()
send_stop_ssh_tunnel(vmid)
# send_stop_ssh_tunnel(vmid)
elif action == 'reset':
vm.reset()
elif action == 'suspend':
vm.suspend()
send_stop_ssh_tunnel(vmid)
# send_stop_ssh_tunnel(vmid)
elif action == 'resume':
vm.resume()
return '', 200
@@ -263,31 +283,26 @@ def vm_power(vmid, action):
return '', 403
@app.route('/console/vm/<string:vmid>/stop', methods=['POST'])
def vm_console_stop(vmid):
if request.form['token'] == app.config['VNC_CLEANUP_TOKEN']:
stop_ssh_tunnel(vmid, ssh_tunnels)
return '', 200
else:
return '', 403
@app.route('/console/vm/<string:vmid>', methods=['POST'])
@auth.oidc_auth
def vm_console(vmid):
user = User(session['userinfo']['preferred_username'])
connect_proxmox()
if user.rtp or int(vmid) in user.allowed_vms:
# import pdb; pdb.set_trace()
vm = VM(vmid)
stop_ssh_tunnel(vm.id, ssh_tunnels)
port = str(5900 + int(vmid))
token = add_vnc_target(port)
node = '{}.csh.rit.edu'.format(vm.node)
logging.info('creating SSH tunnel to %s for VM %s', node, vm.id)
tunnel = start_ssh_tunnel(node, port)
ssh_tunnels.append(tunnel)
vm.start_vnc(port)
return token, 200
vnc_ticket, vnc_port = open_vnc_session(
vmid, vm.node, app.config['PROXMOX_USER'], app.config['PROXMOX_PASS']
)
node = f'{vm.node}.csh.rit.edu'
token = add_vnc_target(node, vnc_port)
return {
'host': app.config['VNC_HOST'],
'port': app.config['VNC_PORT'],
'token': token,
'password': vnc_ticket,
}, 200
else:
return '', 403
@@ -399,7 +414,7 @@ def delete(vmid):
user = User(session['userinfo']['preferred_username'])
connect_proxmox()
if user.rtp or int(vmid) in user.allowed_vms:
send_stop_ssh_tunnel(vmid)
# send_stop_ssh_tunnel(vmid)
# Submit the delete VM task to RQ
q.enqueue(delete_vm_task, vmid)
return '', 200
@@ -571,29 +586,12 @@ def allowed_users(user):
@app.route('/console/cleanup', methods=['POST'])
def cleanup_vnc():
if request.form['token'] == app.config['VNC_CLEANUP_TOKEN']:
for target in get_vnc_targets():
tunnel = next(
(tunnel for tunnel in ssh_tunnels if tunnel.local_bind_port == int(target['port'])),
None,
)
if tunnel:
if not next(
(
conn
for conn in psutil.net_connections()
if conn.laddr[1] == int(target['port']) and conn.status == 'ESTABLISHED'
),
None,
):
try:
tunnel.stop()
except:
pass
ssh_tunnels.remove(tunnel)
delete_vnc_target(target['port'])
return '', 200
else:
return '', 403
print('Cleaning up targets file...')
with open(app.config['WEBSOCKIFY_TARGET_FILE'], 'w') as targets:
targets.truncate()
return '', 200
print('Got bad cleanup request')
return '', 403
@app.route('/template/<string:template_id>/disk')


@@ -650,9 +650,11 @@ $("#console-vm").click(function(){
credentials: 'same-origin',
method: 'post'
}).then((response) => {
return response.text()
}).then((token) => {
window.open(`/static/noVNC/vnc.html?autoconnect=true&encrypt=true&host=proxstar-vnc.csh.rit.edu&port=443&path=path?token=${token}`, '_blank');
return response.json()
}).then((vnc_params) => {
// TODO (willnilges): encrypt=true
// TODO (willnilges): set host and port to an env variable
window.open(`/static/noVNC/vnc.html?autoconnect=true&password=${vnc_params.password}&host=${vnc_params.host}&port=${vnc_params.port}&path=path?token=${vnc_params.token}`, '_blank');
}).catch(err => {
if (err) {
swal("Uh oh...", `Unable to start console for ${vmname}. Please try again later.`, "error");


@@ -22,7 +22,6 @@ from proxstar.proxmox import connect_proxmox, get_pools
from proxstar.starrs import get_next_ip, register_starrs, delete_starrs
from proxstar.user import User, get_vms_for_rtp
from proxstar.vm import VM, clone_vm, create_vm
from proxstar.vnc import send_stop_ssh_tunnel
logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.INFO)
@@ -151,7 +150,7 @@ def process_expiring_vms_task():
vm.name, vm.id
)
)
send_stop_ssh_tunnel(vm.id)
# send_stop_ssh_tunnel(vm.id) # TODO (willnilges): Remove target from targets file
delete_vm_task(vm.id)
if expiring_vms:
send_vm_expire_email(pool, expiring_vms)
@@ -227,8 +226,23 @@ def setup_template_task(template_id, name, user, ssh_key, cores, memory):
def cleanup_vnc_task():
requests.post(
'https://{}/console/cleanup'.format(app.config['SERVER_NAME']),
data={'token': app.config['VNC_CLEANUP_TOKEN']},
verify=False,
)
"""Removes all open VNC sessions. This runs in the RQ worker, and so
needs to be routed properly via the Proxstar API
TODO (willnilges): Use API, track the task IDs, and kill only the finished
ones every couple of minutes
https://github.com/ComputerScienceHouse/proxstar/issues/153
"""
print('Clearing vnc targets')
with open(app.config['WEBSOCKIFY_TARGET_FILE'], 'w') as targets:
targets.truncate()
# FIXME (willnilges): This... might be working...?
try:
requests.post(
'https://{}/console/cleanup'.format(app.config['SERVER_NAME']),
data={'token': app.config['VNC_CLEANUP_TOKEN']},
verify=False,
)
except Exception as e: # pylint: disable=W0703
print(e)


@@ -262,13 +262,6 @@ class VM:
iso = 'None'
return iso
def start_vnc(self, port):
proxmox = connect_proxmox()
port = str(int(port) - 5900)
proxmox.nodes(self.node).qemu(self.id).monitor.post(
command='change vnc 127.0.0.1:{}'.format(port)
)
@retry(wait=wait_fixed(2), stop=stop_after_attempt(5))
def eject_iso(self):
proxmox = connect_proxmox()


@@ -1,7 +1,9 @@
import os
import subprocess
import time
import urllib.parse
from deprecated import deprecated
import requests
from flask import current_app as app
from sshtunnel import SSHTunnelForwarder
@@ -13,14 +15,18 @@ from proxstar.util import gen_password
def stop_websockify():
result = subprocess.run(['pgrep', 'websockify'], stdout=subprocess.PIPE, check=False)
if result.stdout:
pid = result.stdout.strip()
subprocess.run(['kill', pid], stdout=subprocess.PIPE, check=False)
time.sleep(3)
if subprocess.run(['pgrep', 'websockify'], stdout=subprocess.PIPE, check=False).stdout:
time.sleep(10)
pids = result.stdout.splitlines()
for pid in pids:
subprocess.run(['kill', pid], stdout=subprocess.PIPE, check=False)
# FIXME (willnilges): Willard is lazy.
time.sleep(1)
if subprocess.run(['pgrep', 'websockify'], stdout=subprocess.PIPE, check=False).stdout:
logging.info("websockify didn't stop, killing forcefully")
subprocess.run(['kill', '-9', pid], stdout=subprocess.PIPE, check=False)
time.sleep(5)
if subprocess.run(
['pgrep', 'websockify'], stdout=subprocess.PIPE, check=False
).stdout:
logging.info("websockify didn't stop, killing forcefully")
subprocess.run(['kill', '-9', pid], stdout=subprocess.PIPE, check=False)
def get_vnc_targets():
@@ -31,38 +37,81 @@ def get_vnc_targets():
target_dict = {}
values = line.strip().split(':')
target_dict['token'] = values[0]
target_dict['port'] = values[2]
target_dict['host'] = f'{values[1].strip()}:{values[2]}'
targets.append(target_dict)
target_file.close()
return targets
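The targets file read here uses websockify's token-file shape, one `token: host:port` entry per line (the same format `add_vnc_target` writes). A self-contained sketch of the parsing; note this variant splits host and port apart, while `get_vnc_targets` keeps them joined in a single `host` field:

```python
def parse_targets(lines):
    """Parse websockify token-file lines shaped like 'token: host:port'."""
    targets = []
    for raw in lines:
        line = raw.strip()
        if not line:
            continue  # skip blank lines
        token, _, hostport = line.partition(':')
        # rpartition handles the second colon, between host and port
        host, _, port = hostport.strip().rpartition(':')
        targets.append({'token': token, 'host': host, 'port': port})
    return targets

print(parse_targets(['abc123: proxmox05-nrh.csh.rit.edu:5923\n']))
```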
def add_vnc_target(port):
def add_vnc_target(node, port):
# TODO (willnilges): This doesn't throw an error if the target file is wrong.
# TODO (willnilges): This will duplicate targets
targets = get_vnc_targets()
target = next((target for target in targets if target['port'] == port), None)
target = next((target for target in targets if target['host'] == f'{node}:{port}'), None)
if target:
print('Host is already in the targets file')
return target['token']
else:
target_file = open(app.config['WEBSOCKIFY_TARGET_FILE'], 'a')
token = gen_password(32, 'abcdefghijklmnopqrstuvwxyz0123456789')
target_file.write('{}: 127.0.0.1:{}\n'.format(token, str(port)))
target_file.write(f'{token}: {node}:{port}\n')
target_file.close()
return token
def delete_vnc_target(port):
def delete_vnc_target(node, port):
targets = get_vnc_targets()
target = next((target for target in targets if target['port'] == str(port)), None)
target = next((target for target in targets if target['host'] == f'{node}:{port}'), None)
if target:
targets.remove(target)
target_file = open(app.config['WEBSOCKIFY_TARGET_FILE'], 'w')
for target in targets:
target_file.write('{}: 127.0.0.1:{}\n'.format(target['token'], target['port']))
target_file.write(f"{target['token']}: {target['host']}\n")
target_file.close()
def open_vnc_session(vmid, node, proxmox_user, proxmox_pass):
"""Pings the Proxmox API to request a VNC Proxy connection. Authenticates
against the API using a Uname/Pass, gets a few tokens back, then uses those
tokens to open the VNC Proxy. Use these to connect to the VM's host with
websockify proxy.
Returns: Ticket to use as the noVNC password, and a port.
"""
# Get Proxmox API ticket and CSRF_Prevention_Token
# TODO (willnilges): Use Proxmoxer to get this information
# TODO (willnilges): Report errors
data = {'username': proxmox_user, 'password': proxmox_pass}
response_data = requests.post(
f'https://{node}.csh.rit.edu:8006/api2/json/access/ticket',
verify=False,
data=data,
).json()['data']
if response_data is None:
raise requests.AuthenticationError(
'Could not authenticate against `ticket` endpoint! Check uname/password'
)
csrf_prevention_token = response_data['CSRFPreventionToken']
ticket = response_data['ticket']
proxy_params = {'node': node, 'vmid': str(vmid), 'websocket': '1', 'generate-password': '0'}
vncproxy_response_data = requests.post(
f'https://{node}.csh.rit.edu:8006/api2/json/nodes/{node}/qemu/{vmid}/vncproxy',
verify=False,
timeout=5,
params=proxy_params,
headers={'CSRFPreventionToken': csrf_prevention_token},
cookies={'PVEAuthCookie': ticket},
).json()['data']
return urllib.parse.quote_plus(vncproxy_response_data['ticket']), vncproxy_response_data['port']
@deprecated('No longer in use')
def start_ssh_tunnel(node, port):
"""Forwards a port on a node
to the proxstar container
"""
port = int(port)
server = SSHTunnelForwarder(
node,
ssh_username=app.config['PROXMOX_SSH_USER'],
@@ -73,25 +122,3 @@ )
)
server.start()
return server
def stop_ssh_tunnel(vmid, ssh_tunnels):
# Tear down the SSH tunnel and VNC target entry for a given VM
port = 5900 + int(vmid)
tunnel = next((tunnel for tunnel in ssh_tunnels if tunnel.local_bind_port == port), None)
if tunnel:
logging.info('tearing down SSH tunnel for VM %s', vmid)
try:
tunnel.stop()
except:
pass
ssh_tunnels.remove(tunnel)
delete_vnc_target(port)
def send_stop_ssh_tunnel(vmid):
requests.post(
'https://{}/console/vm/{}/stop'.format(app.config['SERVER_NAME'], vmid),
data={'token': app.config['VNC_CLEANUP_TOKEN']},
verify=False,
)


@@ -2,12 +2,13 @@ black~=21.9b0
csh-ldap==2.4.0
click~=7.1.2
ddtrace~=1.2.1
deprecated==1.2.13
flask==1.1.4
jinja2==2.11.3
flask-pyoidc==1.3.0
gunicorn==20.0.4
markupsafe==2.0.1
paramiko==2.7.2
paramiko==2.11.0
proxmoxer==1.1.1
psutil==5.8.0
psycopg2-binary==2.9.3