
Centreon Audible Alerts with Ollama and Piper AI
In this post we are going to combine several elements to get voice notifications out of our monitoring system. We will of course use local and secure Artificial Intelligence (Ollama) to turn the Centreon alert into a friendly, human-sounding text, and then play it on our 7-inch screen thanks to Piper and a voice in perfect Spanish.
So, as I said, if you have a monitoring system based on Centreon you will know that you can receive alerts when there is a problem in a monitored Service or Host; this is how you find out about any problems in your datacenter. In a past post we saw how to make the messages that Centreon generates more 'human' thanks to Ollama's AI; on that occasion the alert was sent in text format via Telegram. The difference today is that the message will be read aloud by a human voice (thanks to Piper) and blasted through the speakers of the small Raspberry Pi that I use on my 7-inch monitoring screen.
This would be an example of audio:
Ah, and of course, remember that everything we will see in this post is open source, local and secure, with no internet required.
Piper
We start with Piper which, as they say on its official website, is a 'fast, local neural text-to-speech system that sounds great'. It is a TTS (text-to-speech) engine that we can install on the Raspberry Pi to convert any text into audio, with an immense variety of voices in different languages that will come in handy!
The easiest way to install Piper on the Raspberry Pi is to download the binary compiled for our CPU architecture, which we can find out by running 'cat /proc/cpuinfo' (or with the quick check shown after the list below):
- For a 64-bit Linux desktop: amd64
- For the Raspberry Pi 4 64-bit: arm64
- For the Raspberry Pi 3/4 32-bit: armv7
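A slightly quicker way to map the hardware to the right build name, assuming a standard Linux install, is 'uname -m'; this is just a convenience sketch, not part of the original steps:
# Map the output of 'uname -m' to the Piper build to download
case "$(uname -m)" in
  x86_64)  echo "piper_amd64" ;;   # 64-bit Linux desktop
  aarch64) echo "piper_arm64" ;;   # Raspberry Pi 4 with a 64-bit OS
  armv7l)  echo "piper_armv7" ;;   # Raspberry Pi 3/4 with a 32-bit OS
  *)       echo "Unrecognized architecture: $(uname -m)" ;;
esac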
Download the latest version you find on their GitHub, since direct links become obsolete. We download it, unzip it, and enter its directory:
wget https://github.com/rhasspy/piper/releases/download/v1.2.0/piper_armv7.tar.gz
tar zxfv piper_armv7.tar.gz
cd piper
We choose a trained voice model from its repository: https://github.com/rhasspy/piper/#voices and, once we have picked the one we like the most, we download the ONNX file and its JSON. By the way, if you want to listen to them to see how they sound, use this website: https://rhasspy.github.io/piper-samples/. Come on, let's download them:
wget https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/es/es_ES/sharvard/medium/es_ES-sharvard-medium.onnx?download=true -O es_ES-sharvard-medium.onnx
wget https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/es/es_ES/sharvard/medium/es_ES-sharvard-medium.onnx.json?download=true -O es_ES-sharvard-medium.onnx.json
And we can try it now. We have two options: create the audio stream and listen to it directly through the Raspberry Pi's speakers, or generate a .wav file for whatever we need. Run:
echo 'Hola, esto es un ejemplo de una frase.' | ./piper --model es_ES-sharvard-medium.onnx --output-raw | aplay -r 22050 -f S16_LE -t raw -
And it will sound something like this:
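If you prefer the second option and want a .wav file instead, a minimal variant of the same command (the output file name here is just an example) would be:
# Generate a .wav file with Piper and play it afterwards with aplay
echo 'Hola, esto es un ejemplo de una frase.' | ./piper --model es_ES-sharvard-medium.onnx --output_file ejemplo.wav
aplay ejemplo.wav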
Integration with Centreon
If we want to call Piper from a notification alert, these are the steps I followed. First, on the Raspberry Pi we create a script that Centreon will call whenever there is a notification; this script announces the generated alert through the speakers. It receives the text to be spoken as its first and only argument. I ended up with something like this (/home/pi/piper/alerta_piper.sh):
#!/bin/bash
# Called remotely by Centreon; $1 is the text to speak
cd /home/pi/piper
rm alerta.wav
echo "$1" | ./piper --model es_ES-sharvard-medium.onnx --length_scale 1.1 --output_file alerta.wav
# Pause the ambient music, play the alert with intro/outro sounds, then resume
systemctl stop snapclient
aplay alerta_inicio.wav
aplay alerta.wav
aplay alerta_fin.wav
systemctl start snapclient
A couple of details you will notice: I slow the voice down a little, and I stop some Snapclient services, since ambient music usually plays through this screen too and I want to pause it temporarily while the message plays.
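Before wiring it into Centreon, the script has to be executable, and since it is later invoked with sudo over SSH (it calls systemctl), the pi user may need a passwordless sudo rule for it. A minimal sketch, assuming the path used above; the sudoers line is my assumption and is not spelled out in the original steps:
# On the Raspberry Pi: make the script executable and test it by hand
chmod +x /home/pi/piper/alerta_piper.sh
sudo /home/pi/piper/alerta_piper.sh "Esto es una prueba de alerta"
# Optional (assumption): let 'pi' run only this script with sudo and no password,
# added via 'sudo visudo':
# pi ALL=(ALL) NOPASSWD: /home/pi/piper/alerta_piper.sh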
Once the script is executable, we have to call it from the Centreon Central server, or from a Poller if applicable. To do this, we need to exchange SSH keys between the Centreon server and the Raspberry Pi, so that we can log in via SSH without being asked for credentials and run the script remotely.
su centreon-engine
ssh-keygen -t rsa
cat /var/lib/centreon-engine/.ssh/id_rsa.pub
ssh-rsa AAA... ySHsfu0= centreon-engine@os-poller-osit
So, on the Centreon side we switch to the user that runs things in Centreon, generate a private and public key pair if we do not have one already, and then with cat we display the newly created public key and copy it.
Paste the public key from the Centreon Central or Poller into the /home/pi/.ssh/authorized_keys file on the Raspberry Pi.
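If password authentication is still enabled on the Pi at this point, ssh-copy-id can do the pasting for us; this is just a convenience alternative, not the method described above:
# Run on the Centreon Central/Poller as centreon-engine
ssh-copy-id -i /var/lib/centreon-engine/.ssh/id_rsa.pub pi@DIRECCION_IP_RASPBERRY_PI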
And we test whether we can connect from Centreon as centreon-engine to the Raspberry Pi over SSH:
ssh pi@DIRECCION_IP_RASPBERRY_PI
Now it only remains to register the commands in Centreon that will be used to notify when a Host or a Service has problems. If we use this previous post as a reference, where we already created the scripts that send the message generated by Ollama's AI to Telegram, we simply have to add a line at the end that calls the script created on the Raspberry Pi, and that's it. This way we receive the notification both on Telegram and through the speakers. I leave you a summary of the script that reports Host problems; as I said, the important thing is the last line:
#!/bin/bash
# Ollama's AI generates the message to taste
text="Please, generate a text for a notification that will be sent to the user's smartphone with useful information. You are a helpful personal agent who generates text for IT technicians. Your answers can be technical. Don't offer yourself as help, don't give recommendations. The message you have to convey is about a problem of type: "$1", on the host "$2", since it is in the state "$3". Don't say hello, don't say goodbye, and don't thank anyone for anything. The answer should be in Spanish and the message should be brief."
texto_generado=$(/usr/bin/curl http://DIRECCION_IP_OLLAMA:11434/api/generate -H "Content-Type: application/json" -d '{ "model": "llama3", "prompt": "'"$text"'", "temperature": 0.1, "stream": false, "max_length": 50 }' | jq -r '.response')
texto_generado_sin_comillas=$(echo "$texto_generado" | sed 's/\"//g')
# Send to Telegram
/usr/bin/curl -X POST -H "Content-Type: application/json" -d '{ "chat_id": "ID_CHAT_TELEGRAM", "text": "'"$texto_generado_sin_comillas"'" }' "https://api.telegram.org/botTOKEN_BOT_TELEGRAM/sendMessage"
# Alert on the Raspberry Pi with Piper through the speakers
ssh pi@DIRECCION_IP_RASPBERRYPI "sudo /home/pi/piper/alerta_piper.sh \"$texto_generado_sin_comillas\""
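For context, the notification Command registered in Centreon (Configuration > Commands > Notifications) would end up calling that script with the notification macros as its three arguments; the script path below is hypothetical and the macros are the standard ones, so treat this as a sketch rather than the exact command from the previous post:
# Hypothetical command line for a Host notification command in Centreon
/usr/lib/centreon/plugins/notifica_host_ollama.sh "$NOTIFICATIONTYPE$" "$HOSTNAME$" "$HOSTSTATE$"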
And that's that! We have it ready: script created, and anyone with doubts will find in the previous posts everything they need to create the Command, register it in Centreon, set up Ollama's secure local AI, set up Centreon…
As always, and without claiming to be very original, I hope you found it interesting and that you can apply it to other kinds of technologies and ideas… The truth is that Piper and Ollama at home with Home Assistant have become essential for me; they give it an original touch, and little by little we can start calling it a smart home… But that we will see in other posts!
Hugs, and behave yourselves!