Installation
Viseron runs exclusively in Docker.
First, choose the appropriate Docker container for your machine. Builds are published to Docker Hub; have a look at the supported architectures below.
Supported architectures
Viseron's images support multiple architectures such as `amd64`, `aarch64` and `armhf`.
Pulling `roflcoopter/viseron:latest` should automatically pull the correct image for your machine.
The exception is if you need a specific container, e.g. the CUDA version; in that case you have to specify the desired image explicitly.
The available images are:
| Image | Architecture | Description |
| --- | --- | --- |
| `roflcoopter/viseron` | multiarch | Multiarch image |
| `roflcoopter/aarch64-viseron` | aarch64 | Generic aarch64 image, with RPi4 hardware accelerated decoding/encoding |
| `roflcoopter/amd64-viseron` | amd64 | Generic image |
| `roflcoopter/amd64-cuda-viseron` | amd64 | Image with CUDA support |
| `roflcoopter/rpi3-viseron` | armhf | Built specifically for the RPi3 with hardware accelerated decoding/encoding |
| `roflcoopter/jetson-nano-viseron` | aarch64 | Built specifically for the Jetson Nano with GStreamer hardware accelerated decoding, FFmpeg hardware accelerated decoding and CUDA support |
Running Viseron
Below are a few examples of how to run Viseron.
Both `docker` and `docker-compose` examples are given.
You have to change the values between the curly brackets `{}` to match your setup.
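For instance, `{config path}` could be replaced with a host directory such as `/opt/viseron/config`. A minimal sketch with purely illustrative paths (the remaining volumes follow the same pattern):

```bash
# Hypothetical host paths; adjust to your own setup
docker run --rm \
-v /opt/viseron/recordings:/recordings \
-v /opt/viseron/config:/config \
-p 8888:8888 \
--name viseron \
roflcoopter/viseron:latest
```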
64-bit Linux machine
Docker:

```bash
docker run --rm \
-v {recordings path}:/recordings \
-v {recordings path}:/segments \
-v {recordings path}:/snapshots \
-v {recordings path}:/thumbnails \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
roflcoopter/viseron:latest
```
Docker Compose:

```yaml
version: "2.4"
services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {recordings path}:/segments
      - {recordings path}:/snapshots
      - {recordings path}:/thumbnails
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
```
64-bit Linux machine with VAAPI (Intel NUC for example)
Docker:

```bash
docker run --rm \
-v {recordings path}:/recordings \
-v {recordings path}:/segments \
-v {recordings path}:/snapshots \
-v {recordings path}:/thumbnails \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--device /dev/dri \
roflcoopter/viseron:latest
```
Docker Compose:

```yaml
version: "2.4"
services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {recordings path}:/segments
      - {recordings path}:/snapshots
      - {recordings path}:/thumbnails
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
    devices:
      - /dev/dri
```
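If you are unsure whether your machine exposes a VAAPI-capable device, a quick sanity check is to list the DRI render nodes on the host before starting the container:

```bash
# These are the devices passed through via --device /dev/dri
ls -l /dev/dri
# Expect entries such as card0 and renderD128
```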
64-bit Linux machine with NVIDIA GPU
Docker:

```bash
docker run --rm \
-v {recordings path}:/recordings \
-v {recordings path}:/segments \
-v {recordings path}:/snapshots \
-v {recordings path}:/thumbnails \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--runtime=nvidia \
roflcoopter/amd64-cuda-viseron:latest
```
Docker Compose:

```yaml
version: "2.4"
services:
  viseron:
    image: roflcoopter/amd64-cuda-viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {recordings path}:/segments
      - {recordings path}:/snapshots
      - {recordings path}:/thumbnails
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
    runtime: nvidia
```
Make sure the NVIDIA Container Toolkit is installed.
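If you are unsure whether the toolkit is set up correctly, one way to check is to confirm that Docker has registered the `nvidia` runtime:

```bash
# The nvidia runtime should appear in the list of available runtimes
docker info | grep -i runtimes
```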
On a Jetson Nano
Docker:

```bash
docker run --rm \
-v {recordings path}:/recordings \
-v {recordings path}:/segments \
-v {recordings path}:/snapshots \
-v {recordings path}:/thumbnails \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
--runtime=nvidia \
--privileged \
roflcoopter/jetson-nano-viseron:latest
```
You must run with `--privileged` so the container gets access to all the devices needed for hardware acceleration.
You can probably get around this by manually mounting all the needed devices, but this is not something I have looked into.
Docker Compose:

```yaml
version: "2.4"
services:
  viseron:
    image: roflcoopter/jetson-nano-viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {recordings path}:/segments
      - {recordings path}:/snapshots
      - {recordings path}:/thumbnails
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
    runtime: nvidia
    privileged: true
```
The same requirement applies here: `privileged: true` is needed so the container gets access to all the devices needed for hardware acceleration.
On a RaspberryPi 4
Docker:

```bash
docker run --rm \
--privileged \
-v {recordings path}:/recordings \
-v {recordings path}:/segments \
-v {recordings path}:/snapshots \
-v {recordings path}:/thumbnails \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /dev/bus/usb:/dev/bus/usb \
-v /opt/vc/lib:/opt/vc/lib \
-p 8888:8888 \
--name viseron \
--device=/dev/video10:/dev/video10 \
--device=/dev/video11:/dev/video11 \
--device=/dev/video12:/dev/video12 \
--device /dev/bus/usb:/dev/bus/usb \
roflcoopter/viseron:latest
```
Docker Compose:

```yaml
version: "2.4"
services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {recordings path}:/segments
      - {recordings path}:/snapshots
      - {recordings path}:/thumbnails
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    devices:
      - /dev/video10:/dev/video10
      - /dev/video11:/dev/video11
      - /dev/video12:/dev/video12
      - /dev/bus/usb:/dev/bus/usb
    ports:
      - 8888:8888
    privileged: true
```
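Before starting the container, you can verify that the hardware codec devices mapped above actually exist on the host (they are provided by the Raspberry Pi's V4L2 memory-to-memory driver):

```bash
# These device nodes are mapped into the container with --device
ls -l /dev/video10 /dev/video11 /dev/video12
```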
Viseron is quite RAM intensive, mostly because of the object detection.
I do not recommend using an RPi unless you have a Google Coral EdgeTPU.
The CPU is not fast enough and you might run out of memory.
Configure a substream if you plan on running Viseron on an RPi.
On a RaspberryPi 3b+
Docker:

```bash
docker run --rm \
--privileged \
-v {recordings path}:/recordings \
-v {recordings path}:/segments \
-v {recordings path}:/snapshots \
-v {recordings path}:/thumbnails \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /opt/vc/lib:/opt/vc/lib \
-p 8888:8888 \
--name viseron \
--device /dev/vchiq:/dev/vchiq \
--device /dev/vcsm:/dev/vcsm \
--device /dev/bus/usb:/dev/bus/usb \
roflcoopter/viseron:latest
```
Docker Compose:

```yaml
version: "2.4"
services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {recordings path}:/segments
      - {recordings path}:/snapshots
      - {recordings path}:/thumbnails
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
      - /opt/vc/lib:/opt/vc/lib
    devices:
      - /dev/vchiq:/dev/vchiq
      - /dev/vcsm:/dev/vcsm
      - /dev/bus/usb:/dev/bus/usb
    ports:
      - 8888:8888
    privileged: true
```
Viseron is quite RAM intensive, mostly because of the object detection.
I do not recommend using an RPi unless you have a Google Coral EdgeTPU.
The CPU is not fast enough and you might run out of memory.
To make use of hardware accelerated decoding/encoding you might have to increase the allocated GPU memory.
To do this, edit `/boot/config.txt`, set `gpu_mem=256` and then reboot.
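A minimal sketch of that change, assuming the default Raspberry Pi OS boot configuration path:

```bash
# Append the GPU memory setting and reboot.
# If a gpu_mem line already exists in /boot/config.txt, edit it instead of appending.
echo "gpu_mem=256" | sudo tee -a /boot/config.txt
sudo reboot
```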
Configure a substream if you plan on running Viseron on an RPi.
Viseron will start up immediately and serve the Web UI on port `8888`.
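To verify that the container came up, you can request the Web UI from the host it runs on (assuming the default port mapping above):

```bash
# Any HTTP response means the server is listening
curl -I http://localhost:8888
```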
Please proceed to the next chapter on how to configure Viseron.
VAAPI hardware acceleration support is built into every `amd64` container.
To utilize it you need to add `--device /dev/dri` to your docker command.
EdgeTPU support is also included in all containers.
To use it, add `-v /dev/bus/usb:/dev/bus/usb --privileged` to your docker command.
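As an illustration, a 64-bit Linux command that enables both VAAPI and a USB EdgeTPU could look like this (volumes other than the config are omitted for brevity; a sketch, not a definitive command):

```bash
docker run --rm \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-v /dev/bus/usb:/dev/bus/usb \
--device /dev/dri \
--privileged \
-p 8888:8888 \
--name viseron \
roflcoopter/viseron:latest
```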
User and Group Identifiers
When using volumes (`-v` flags), permission issues can occur between the host and the container.
To solve this, you can specify the user `PUID` and group `PGID` as environment variables to the container.
Docker command
```bash
docker run --rm \
-v {recordings path}:/recordings \
-v {recordings path}:/segments \
-v {recordings path}:/snapshots \
-v {recordings path}:/thumbnails \
-v {config path}:/config \
-v /etc/localtime:/etc/localtime:ro \
-p 8888:8888 \
--name viseron \
-e PUID=1000 \
-e PGID=1000 \
roflcoopter/viseron:latest
```
Docker Compose
```yaml
version: "2.4"
services:
  viseron:
    image: roflcoopter/viseron:latest
    container_name: viseron
    volumes:
      - {recordings path}:/recordings
      - {recordings path}:/segments
      - {recordings path}:/snapshots
      - {recordings path}:/thumbnails
      - {config path}:/config
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8888:8888
    environment:
      - PUID=1000
      - PGID=1000
```
Ensure the volumes are owned on the host by the user you specify.
In this example `PUID=1000` and `PGID=1000`.
To find the UID and GID of your current user you can run this command on the host:

```bash
id your_username_here
```
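The output will look something like this, where the numbers are the values to use (the username and IDs below are only examples):

```bash
$ id myuser
uid=1000(myuser) gid=1000(myuser) groups=1000(myuser)
```

If the host directories are owned by a different user, you can hand them over to the IDs you pass as `PUID`/`PGID`, for example:

```bash
sudo chown -R 1000:1000 {recordings path} {config path}
```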
Viseron runs as `root` (`PUID=0` and `PGID=0`) by default.
This is because it can be problematic to get hardware acceleration and/or EdgeTPUs to work properly for everyone.
The `s6-overlay` init scripts do a good job of fixing permissions for other users, but you may still face some issues if you choose not to run as `root`.
If you do have issues, please open an issue and I will do my best to fix them.