This module provides [Wyoming Protocol](https://www.home-assistant.io/integrations/wyoming/) support:

- any desktop/server microphone/speaker can be used as a two-way audio source
- any OS is supported via FFmpeg or similar software
- Linux is supported via the `alsa` source
- you can change the behavior using the built-in scripting engine

## Typical Voice Pipeline

Select one or multiple wake words:

```yaml
wake_uri: tcp://192.168.1.23:10400?name=alexa_v0.1&name=hey_jarvis_v0.1&name=hey_mycroft_v0.1&name=hey_rhasspy_v0.1&name=ok_nabu_v0.1
```

## Events

You can add Wyoming event handling using the [expr](https://github.com/AlexxIT/go2rtc/blob/master/internal/expr/README.md) language. For example, to play TTS on some media player from Home Assistant.

Turn on the logs to see which events happen.

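A minimal sketch of raising this module's log level so every event is printed; the `wyoming` key under `log` is an assumption based on go2rtc's per-module log settings:

```yaml
log:
  level: info    # default level for the other modules
  wyoming: trace # assumption: trace prints every Wyoming event the satellite sends and receives
```
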
This is what the default scripts look like:

```yaml
wyoming:
  script_example:
    event:
      run-satellite: Detect()
      pause-satellite: Stop()
      voice-stopped: Pause()
      # play the TTS answer, report it as played, then start detection again
      audio-stop: PlayAudio() && WriteEvent("played") && Detect()
      error: Detect()
      # no WAKE service available: run the pipeline from the wake stage
      internal-run: WriteEvent("run-pipeline", '{"start_stage":"wake","end_stage":"tts"}') && Stream()
      # wake word detected: run the pipeline from the asr stage
      internal-detection: WriteEvent("run-pipeline", '{"start_stage":"asr","end_stage":"tts"}') && Stream()
```

If you write a script for an event, the default action is no longer executed. You need to repeat the necessary steps yourself.

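For example, a minimal sketch that overrides `voice-stopped` but repeats the default `Pause()` step so the satellite keeps working; the webhook URL is hypothetical, and the `fetch` call reuses the options shape from Example 2 below:

```yaml
wyoming:
  script_example:
    event:
      # Pause() repeats the default action; fetch(...) then pings a hypothetical webhook
      voice-stopped: Pause() && fetch('http://localhost:8080/voice-stopped', {method: 'POST'}).ok
```
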
In addition to the standard events, there are two extra events:

- `internal-run` - called after `Detect()` when VAD detects voice but the WAKE service is unavailable
- `internal-detection` - called after `Detect()` when a WAKE word is detected

**Example 1.** You want to play a sound file when a wake word is detected (only `wav` files are supported):

- the `PlayFile` and `PlayAudio` functions are executed synchronously; the following steps run only after they complete

```yaml
wyoming:
  script_example:
    event:
      # play a local beep, then run the pipeline from the asr stage
      internal-detection: PlayFile('/media/beep.wav') && WriteEvent("run-pipeline", '{"start_stage":"asr","end_stage":"tts"}') && Stream()
```

**Example 2.** You want to play TTS on a Home Assistant media player:

Each event has a `Type` and a JSON `Data` payload. You can use their values in scripts.

- in the `synthesize` event, we get the value of `text` and call the HA REST API
- in the `audio-stop` event, we get the duration of the TTS in seconds, wait for that time, and start the pipeline again

```yaml
wyoming:
  script_example:
    event:
      # take the text to be spoken from the event data and send it to the HA TTS service
      synthesize: |
        let text = fromJSON(Data).text;
        let token = 'eyJhbGci...';
        fetch('http://localhost:8123/api/services/tts/speak', {
          method: 'POST',
          headers: {'Authorization': 'Bearer '+token, 'Content-Type': 'application/json'},
          body: toJSON({
            entity_id: 'tts.google_translate_com',
            media_player_entity_id: 'media_player.google_nest',
            message: text,
            language: 'en',
          }),
        }).ok
      # timestamp holds the TTS duration in seconds: wait it out, then resume detection
      audio-stop: |
        let timestamp = fromJSON(Data).timestamp;
        let delay = string(timestamp)+'s';
        Sleep(delay) && WriteEvent("played") && Detect()
```

## Config examples

Satellite on Windows server using FFmpeg and FFplay.