**packages/audiodocs/docs/analysis/analyser-node.mdx** (8 additions, 8 deletions)
import { ReadOnly } from '@site/src/components/Badges';
# AnalyserNode
The `AnalyserNode` interface represents a node providing two core functionalities: extracting time-domain and frequency-domain data from audio signals.
It is an [`AudioNode`](/docs/core/audio-node) that passes the audio data unchanged from input to output, but also lets you capture and process that data.
|`fftSize`|`number`| Integer value representing the size of the [Fast Fourier Transform](https://en.wikipedia.org/wiki/Fast_Fourier_transform) used to determine the frequency domain. In general, it equals the size of the returned time-domain data. |
|`minDecibels`|`number`| Float value representing the minimum value for the range of results from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata). |
|`maxDecibels`|`number`| Float value representing the maximum value for the range of results from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata). |
|`smoothingTimeConstant`|`number`| Float value representing the averaging constant with the last analysis frame. In general, the higher the value, the smoother the transition between values over time. |
|`window`|[`WindowType`](/docs/types/window-type)| Enumerated value that specifies the type of window function applied when extracting frequency data. |
|`frequencyBinCount`|`number`| Integer value representing the amount of data obtained in the frequency domain; equal to half of the `fftSize` property. | <ReadOnly /> |
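To illustrate `smoothingTimeConstant`, here is a minimal sketch of the smoothing-over-time step described by the Web Audio specification: each new frequency frame is blended with the previous smoothed frame. The helper name `smoothFrame` is hypothetical and not part of this API.

```typescript
// Illustrative smoothing step:
//   smoothed[k] = tau * previous[k] + (1 - tau) * current[k]
// where tau is the smoothingTimeConstant (between 0 and 1).
function smoothFrame(
  previous: number[],
  current: number[],
  tau: number,
): number[] {
  return current.map((value, k) => tau * previous[k] + (1 - tau) * value);
}

// With tau = 0.5, the output moves halfway toward each new frame,
// smoothing transitions between analysis frames over time.
console.log(smoothFrame([1, 0], [0, 1], 0.5)); // [0.5, 0.5]
```

A higher `tau` weights the previous frame more heavily, so rapidly fluctuating bins change more gradually.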
:::caution
Each value in the array is within the range 0 to 255, where a value of 127 indicates …
#### `minDecibels`
- Default value is -100 dB.
- 0 dB ([decibel](https://en.wikipedia.org/wiki/Decibel)) is the loudest possible sound; -10 dB is a tenth of that.
- When getting data from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata), any frequency with amplitude lower than `minDecibels` will be returned as 0.
- Throws `IndexSizeError` if the set value is greater than or equal to `maxDecibels`.
#### `maxDecibels`
- Default value is -30 dB.
- 0 dB ([decibel](https://en.wikipedia.org/wiki/Decibel)) is the loudest possible sound; -10 dB is a tenth of that.
- When getting data from [`getByteFrequencyData()`](/docs/analysis/analyser-node#getbytefrequencydata), any frequency with amplitude higher than `maxDecibels` will be returned as 255.
- Throws `IndexSizeError` if the set value is less than or equal to `minDecibels`.
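To make the roles of `minDecibels` and `maxDecibels` concrete, the sketch below shows how a frequency bin's amplitude in dB could be mapped to the 0-255 byte range, approximating the linear mapping used by `getByteFrequencyData()` in the Web Audio specification. The helper `dbToByte` is illustrative and not part of this API.

```typescript
// Map an amplitude in dB onto the 0..255 byte range.
// Values at or below minDecibels clamp to 0; at or above maxDecibels, to 255.
function dbToByte(db: number, minDecibels = -100, maxDecibels = -30): number {
  const normalized = (db - minDecibels) / (maxDecibels - minDecibels);
  const clamped = Math.min(1, Math.max(0, normalized));
  return Math.round(255 * clamped);
}

console.log(dbToByte(-120)); // 0   (below minDecibels)
console.log(dbToByte(-65));  // 128 (halfway between -100 and -30)
console.log(dbToByte(-10));  // 255 (above maxDecibels)
```

Narrowing the `[minDecibels, maxDecibels]` window therefore increases the byte resolution for amplitudes inside that range.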
**packages/audiodocs/docs/core/audio-node.mdx** (6 additions, 6 deletions)
We usually represent the channels with the standard abbreviations detailed in the …
#### Mixing
When a node has more than one input, or the number of input channels differs from the number of output channels, up-mixing or down-mixing must be performed.
There are three properties involved in the mixing process: `channelCount`, [`ChannelCountMode`](/docs/types/channel-count-mode), [`ChannelInterpretation`](/docs/types/channel-interpretation).
Based on them, we can determine the output's channel count and the mixing strategy.
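As an illustration of mixing, the sketch below shows two common cases under the default `speakers` interpretation, following the Web Audio mixing rules: mono up-mixed to stereo copies the single channel to both outputs, and stereo down-mixed to mono averages the two channels. These helpers are illustrative and not part of this API.

```typescript
// Up-mix mono -> stereo: the single channel is copied to left and right.
function upMixMonoToStereo(mono: number[]): [number[], number[]] {
  return [[...mono], [...mono]];
}

// Down-mix stereo -> mono: output = 0.5 * (left + right).
function downMixStereoToMono(left: number[], right: number[]): number[] {
  return left.map((sample, i) => 0.5 * (sample + right[i]));
}

const [left, right] = upMixMonoToStereo([0.25, -0.5]);
console.log(left, right); // [0.25, -0.5] [0.25, -0.5]
console.log(downMixStereoToMono([1, 0], [0, 0.5])); // [0.5, 0.25]
```

Under the `discrete` interpretation, by contrast, extra channels would simply be dropped and missing ones filled with silence.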
|`numberOfInputs`|`number`| Integer value representing the number of input connections for the node. | <ReadOnly /> |
|`numberOfOutputs`|`number`| Integer value representing the number of output connections for the node. | <ReadOnly /> |
|`channelCount`|`number`| Integer used to determine how many channels are used when up-mixing or down-mixing node's inputs. | <ReadOnly /> |
|`channelCountMode`|[`ChannelCountMode`](/docs/types/channel-count-mode)| Enumerated value that specifies the method by which channels are mixed between the node's inputs and outputs. | <ReadOnly /> |
|`channelInterpretation`|[`ChannelInterpretation`](/docs/types/channel-interpretation)| Enumerated value that specifies how input channels are mapped to output channels when their numbers differ. | <ReadOnly /> |
## Examples
The above method lets you connect one of the node's outputs to a destination.
| Parameters | Type | Description |
| :---: | :---: | :---- |
|`destination`|[`AudioNode`](/docs/core/audio-node) or [`AudioParam`](/docs/core/audio-param)|`AudioNode` or `AudioParam` to which to connect. |
#### Errors:
The above method lets you disconnect one or more nodes from the node.
| Parameters | Type | Description |
| :---: | :---: | :---- |
|`destination` <Optional /> |[`AudioNode`](/docs/core/audio-node) or [`AudioParam`](/docs/core/audio-param)|`AudioNode` or `AudioParam` from which to disconnect. |
If no arguments are provided, the node disconnects from all outgoing connections.
The `BaseAudioContext` interface acts as a supervisor of audio-processing graphs. It provides key processing parameters such as the current time, output destination, or sample rate.
Additionally, it is responsible for node creation and the audio-processing graph's lifecycle management.
However, `BaseAudioContext` itself cannot be used directly; instead, its functionality must be accessed through one of its derived interfaces: [`AudioContext`](/docs/core/audio-context), `OfflineAudioContext`.
#### Audio graph
An audio graph is a structured representation of audio processing elements and their connections within an audio context.
The graph consists of various types of nodes, each performing specific audio operations, connected in a network that defines the audio signal flow.
In general, we can distinguish four types of nodes:
Audio graph rendering is done in blocks of sample-frames. The number of sample-frames in a block is called render quantum size, and the block itself is called a render quantum.
By default, the render quantum size is 128, and it is constant.
The [`AudioContext`](/docs/core/audio-context) rendering thread is driven by a system-level audio callback.
Each call has a system-level audio callback buffer size, which is a varying number of sample-frames that needs to be computed on time before the next system-level audio callback arrives,
but render quantum size does not have to be a divisor of the system-level audio callback buffer size.
The concept of a system-level audio callback does not apply to `OfflineAudioContext`.
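To make the relationship between render quanta and callback buffers concrete: since the render quantum size (128 by default) need not divide the callback buffer size, the renderer must produce enough whole quanta to cover each callback. A minimal sketch of that arithmetic (the function name is illustrative, not part of this API):

```typescript
// Number of whole render quanta needed to cover one audio callback buffer.
function quantaPerCallback(
  callbackBufferSize: number,
  renderQuantumSize = 128,
): number {
  return Math.ceil(callbackBufferSize / renderQuantumSize);
}

console.log(quantaPerCallback(256)); // 2 (divides evenly)
console.log(quantaPerCallback(192)); // 2 (part of the second quantum is carried over)
console.log(quantaPerCallback(441)); // 4 (an unusual buffer size still works)
```

When the division is not exact, the leftover samples of the final quantum are buffered and consumed by the next callback.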
|`currentTime`|`number`| Double value representing an ever-increasing hardware time in seconds, starting from 0. | <ReadOnly /> |
|`destination`|`AudioDestinationNode`| Final output destination associated with the context. | <ReadOnly /> |
|`sampleRate`|`number`| Float value representing the sample rate (in samples per second) used by all nodes in this context. | <ReadOnly /> |
|`state`|[`ContextState`](/docs/core/base-audio-context#contextstate)| Enumerated value representing the current state of the context. | <ReadOnly /> |
## Methods
### `createAnalyser`
The above method lets you create [`AnalyserNode`](/docs/analysis/analyser-node).
#### Returns `AnalyserNode`.
The above method lets you create `BiquadFilterNode`.
### `createBuffer`
The above method lets you create [`AudioBuffer`](/docs/sources/audio-buffer).
| Parameters | Type | Description |
| :---: | :---: | :---- |
### `createBufferSource`
The above method lets you create [`AudioBufferSourceNode`](/docs/sources/audio-buffer-source-node).
| Parameters | Type | Description |
| :---: | :---: | :---- |
|`pitchCorrection` <Optional /> |[`AudioBufferSourceNodeOptions`](/docs/core/base-audio-context#audiobuffersourcenodeoptions)| Dictionary object that specifies whether pitch correction should be available. |
#### Returns `AudioBufferSourceNode`.
### `createBufferQueueSource` <MobileOnly />
The above method lets you create [`AudioBufferQueueSourceNode`](/docs/sources/audio-buffer-queue-source-node).
#### Returns `AudioBufferQueueSourceNode`.
### `createGain`
The above method lets you create [`GainNode`](/docs/effects/gain-node).
#### Returns `GainNode`.
### `createOscillator`
The above method lets you create [`OscillatorNode`](/docs/sources/oscillator-node).
#### Returns `OscillatorNode`.
The above method lets you create `PeriodicWave`.

| Parameters | Type | Description |
| :---: | :---: | :---- |
|`real`|`Float32Array`| An array of cosine terms. |
|`imag`|`Float32Array`| An array of sine terms. |
|`constraints` <Optional /> |[`PeriodicWaveConstraints`](/docs/core/base-audio-context#periodicwaveconstraints)| An object that specifies if normalization is disabled. |
#### Errors
The above method lets you create `StereoPannerNode`.
**packages/audiodocs/docs/effects/gain-node.mdx** (5 additions, 5 deletions)
import { ReadOnly } from '@site/src/components/Badges';
# GainNode
The `GainNode` interface represents the change in volume (amplitude) of the audio signal.
It is an [`AudioNode`](/docs/core/audio-node) that applies a given gain to the audio data passing through it.
Direct gain modification often results in unpleasant 'clicks'. To avoid this effect, utilize exponential interpolation methods from the [`AudioParam`](/docs/core/audio-param).
| `gain` | [`AudioParam`](/docs/core/audio-param) | [`a-rate`](/docs/core/audio-param#a-rate-vs-k-rate) `AudioParam` representing the value of gain to apply. | <ReadOnly /> |
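The exponential interpolation mentioned above can be sketched with the ramp formula from the Web Audio specification: between two scheduled points `(t0, v0)` and `(t1, v1)`, the value at time `t` is `v0 * (v1 / v0) ** ((t - t0) / (t1 - t0))`. Note that exponential ramps require non-zero values of the same sign. The helper below is illustrative, not this library's implementation.

```typescript
// Exponential ramp between (t0, v0) and (t1, v1), per the Web Audio formula.
// v0 and v1 must be non-zero and share the same sign.
function exponentialRamp(
  t: number,
  t0: number,
  v0: number,
  t1: number,
  v1: number,
): number {
  return v0 * Math.pow(v1 / v0, (t - t0) / (t1 - t0));
}

// Ramping gain from 1 down to 0.25 over one second:
console.log(exponentialRamp(0.0, 0, 1, 1, 0.25)); // 1
console.log(exponentialRamp(0.5, 0, 1, 1, 0.25)); // 0.5 (geometric midpoint)
console.log(exponentialRamp(1.0, 0, 1, 1, 0.25)); // 0.25
```

Because the curve is geometric rather than linear, it matches how loudness is perceived, which is why it avoids the audible 'clicks' of an instantaneous gain change.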
**packages/audiodocs/docs/fundamentals/introduction.mdx** (1 addition, 1 deletion)
sidebar_position: 1
React Native Audio API is an imperative, high-level API for processing and synthesizing audio in React Native Applications. React Native Audio API follows the [Web Audio Specification](https://www.w3.org/TR/webaudio-1.1/) making it easier to write audio-heavy applications for iOS, Android and Web with just one codebase.