Commit 63292b2

Windows 10 RTM Release - September 2015 Update 2
1 parent cbeb103 commit 63292b2

File tree: 155 files changed (+6060, −1663 lines)


Samples/AudioCreation/README.md

Lines changed: 16 additions & 52 deletions
@@ -3,62 +3,34 @@
 samplefwlink: http://go.microsoft.com/fwlink/p/?LinkId=619481&clcid=0x409
 --->

-# Audio graphs sample
+# AudioGraph sample

+This sample shows how to use the APIs in the **Windows.Media.Audio** namespace to create audio graphs for audio routing, mixing, and processing scenarios. An audio graph is a set of interconnected audio nodes through which audio data flows. Audio input nodes supply audio data to the graph from audio input devices, audio files, or from custom code. Audio output nodes are the destination for audio processed by the graph. Audio can be routed out of the graph to audio output devices, audio files, or custom code. The last type of node is a submix node, which takes audio from one or more nodes and combines them into a single output that can be routed to other nodes in the graph. After all of the nodes have been created and the connections between them set up, you simply start the audio graph and the audio data flows from the input nodes, through any submix nodes, to the output nodes. This model makes scenarios like recording from a device's microphone to an audio file, playing audio from a file to a device's speaker, or mixing audio from multiple sources quick and easy to implement.

+Additional scenarios are enabled with the addition of audio effects to the audio graph. Every node in an audio graph can be populated with zero or more audio effects that perform audio processing on the audio passing through the node. There are several built-in effects such as echo, equalizer, limiting, and reverb that can be attached to an audio node with just a few lines of code. You can also create your own custom audio effects that work exactly the same as the built-in effects.

 This sample demonstrates several common scenarios for routing and processing audio with an audio graph:

 **Scenario 1: File Playback:**
+Press the *Load File* button and a file picker is shown that lets you pick an audio file to play. After selecting a file, press *Start Graph* to start playback of the file. The *Loop* toggle switch lets you toggle looping of the file on and off. The *Playback Speed* slider lets you adjust the playback speed.

+In the code-behind for this scenario, an **AudioGraph** is created and two nodes are added: an **AudioDeviceOutputNode**, which represents the default audio output device, and an **AudioFileInputNode**, which represents the audio file that you select. A connection is added from the input node to the output node. When the audio graph is started, the audio data flows from the file input node to the device output node.
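In outline, that playback graph can be assembled like this (a minimal sketch against the Windows.Media.Audio API; `file` stands for the StorageFile returned by the picker, and error handling beyond the creation-status check is omitted):

```csharp
// Sketch: file playback through an audio graph (UWP C#).
// Assumes: using Windows.Media.Audio; using Windows.Media.Render; using Windows.Storage;
// and that 'file' is the StorageFile chosen in the file picker.
var settings = new AudioGraphSettings(AudioRenderCategory.Media);
CreateAudioGraphResult result = await AudioGraph.CreateAsync(settings);
if (result.Status != AudioGraphCreationStatus.Success) return;
AudioGraph graph = result.Graph;

// Output node for the default audio render device.
var outputResult = await graph.CreateDeviceOutputNodeAsync();
AudioDeviceOutputNode deviceOutput = outputResult.DeviceOutputNode;

// Input node for the selected audio file.
var inputResult = await graph.CreateFileInputNodeAsync(file);
AudioFileInputNode fileInput = inputResult.FileInputNode;

// Route the file input to the device output, then start the graph.
fileInput.AddOutgoingConnection(deviceOutput);
fileInput.PlaybackSpeedFactor = 1.0;  // driven by the Playback Speed slider
graph.Start();
```

The *Loop* toggle maps to the file input node's **LoopCount** property, and the slider to **PlaybackSpeedFactor**, so the UI handlers only need to update those two properties on the running node.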

 **Scenario 2: Device Capture:**
+In this scenario, audio is recorded from an input device to an audio file while being monitored on an output device. First, select the audio device that will be used for monitoring from the list box. Press the *Pick Output File* button to launch a file save picker that lets you choose the file to which audio will be recorded. Press *Create Graph* to create the audio graph with the selected inputs and outputs. Press the *Record* button to begin recording from the device input to the audio file. Press the *Stop* button to stop recording.

+In the code-behind for this scenario, an **AudioGraph** is created with three nodes: an **AudioDeviceInputNode**, an **AudioDeviceOutputNode**, and an **AudioFileOutputNode**. A connection is added from the **AudioDeviceInputNode** to the **AudioDeviceOutputNode**, and another connection is added from the **AudioDeviceInputNode** to the **AudioFileOutputNode**. When the *Record* button is clicked, audio flows from the input node to the two output nodes. Also of note in this scenario is that the quantum size, or the amount of audio data that is processed at one time, is set to **LowestLatency**, which means that the audio processing will have the lowest latency possible on the device. This also means that device disconnection errors must be handled.
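A sketch of that capture graph, including the low-latency quantum setting, might look like this (`outputFile` is assumed to be the StorageFile from the save picker, and the WAV encoding profile is an illustrative choice, not the only one the sample supports):

```csharp
// Sketch: record from the default input device to a file while monitoring.
// Assumes: using Windows.Media.Audio; using Windows.Media.Capture;
// using Windows.Media.MediaProperties; using Windows.Media.Render;
var settings = new AudioGraphSettings(AudioRenderCategory.Media)
{
    // Smallest quantum the device supports; disconnection errors must be handled.
    QuantumSizeSelectionMode = QuantumSizeSelectionMode.LowestLatency
};
var graphResult = await AudioGraph.CreateAsync(settings);
AudioGraph graph = graphResult.Graph;

var inResult   = await graph.CreateDeviceInputNodeAsync(MediaCategory.Other);
var outResult  = await graph.CreateDeviceOutputNodeAsync();
var fileResult = await graph.CreateFileOutputNodeAsync(
    outputFile, MediaEncodingProfile.CreateWav(AudioEncodingQuality.High));

// One input feeding two outputs: the monitor device and the file.
inResult.DeviceInputNode.AddOutgoingConnection(outResult.DeviceOutputNode);
inResult.DeviceInputNode.AddOutgoingConnection(fileResult.FileOutputNode);

graph.Start();                                    // Record
// ...later:
graph.Stop();                                     // Stop
await fileResult.FileOutputNode.FinalizeAsync();  // flush and close the file
```

Note the final **FinalizeAsync** call: a file output node must be finalized after the graph stops, or the recorded file's header is never written.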

 **Scenario 3: Frame Input Node:**
 This scenario shows how to generate and route audio data into an audio graph from custom code. Press the *Generate Audio* button to start generating audio from custom code.
 Press *Stop* to stop the audio.

+In the code-behind for this scenario, an **AudioGraph** is created with two nodes: an **AudioFrameInputNode** that represents the custom audio generation code and which is connected to an **AudioDeviceOutputNode** representing the default output device. The **AudioFrameInputNode** is created with the same encoding as the audio graph so that the generated audio data has the same format as the graph. Once the audio graph is started, the **QuantumStarted** event is raised by the audio graph whenever the custom code needs to provide more audio data. The custom code creates a new **AudioFrame** object in which the audio data is stored. The example accesses the underlying buffer of the **AudioFrame**, which requires an **unsafe** code block, and inserts values from a sine wave into the buffer. The **AudioFrame** containing the audio data is then added to the **AudioFrameInputNode**'s list of frames ready to be processed, which is then consumed by the audio graph and passed to the audio device output node.
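The event flow can be sketched roughly as follows. This is a simplified mono-float version for illustration (a real graph is often stereo, in which case one value per channel is written per sample); `graph` and `deviceOutput` are assumed to exist as in the earlier scenarios, and `IMemoryBufferByteAccess` is the standard COM interface the sample declares to reach the raw bytes:

```csharp
// Sketch: generating a sine tone in code via an AudioFrameInputNode.
AudioEncodingProperties props = graph.EncodingProperties; // match the graph's format
AudioFrameInputNode frameInput = graph.CreateFrameInputNode(props);
frameInput.AddOutgoingConnection(deviceOutput);

double theta = 0;
frameInput.QuantumStarted += (sender, args) =>
{
    int samples = args.RequiredSamples;   // how much data the graph needs now
    if (samples == 0) return;

    var frame = new AudioFrame((uint)(samples * sizeof(float)));
    using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Write))
    using (IMemoryBufferReference reference = buffer.CreateReference())
    {
        unsafe
        {
            ((IMemoryBufferByteAccess)reference).GetBuffer(out byte* raw, out uint capacity);
            float* data = (float*)raw;
            for (int i = 0; i < samples; i++)
            {
                data[i] = (float)(0.3 * Math.Sin(theta));          // 440 Hz tone
                theta += 2 * Math.PI * 440 / props.SampleRate;
            }
        }
    }
    sender.AddFrame(frame); // queue the frame for the graph to consume
};
graph.Start();
```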

 **Scenario 4: Submix Nodes:**
+In this scenario, audio from two audio files is mixed together and an audio echo effect is added to the mix before the audio is routed to the output device. Press *Load File 1* to select the first audio file and *Load File 2* to select the second file. Press *Start Graph* to start the flow of audio through the graph. Use the *Echo* button to toggle the echo effect on and off while the graph is running.

+The **AudioGraph** for this scenario has four nodes: two **AudioFileInputNode** objects, an **AudioDeviceOutputNode**, and an **AudioSubmixNode**. The file input nodes are connected to the submix node, which is connected to the audio device output node. An **EchoEffectDefinition** object representing the built-in echo effect is created and added to the **EffectDefinitions** list of the submix node. The **EnableEffectsByDefinition** and **DisableEffectsByDefinition** methods are used to toggle the effect on and off. The effect model shown in this example is implemented by all audio node types, so you can add effects at any point in the audio graph. Also, submix nodes can be chained together to easily create mixes with complex layering of effects.
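The wiring described above reduces to a few lines (a sketch; `fileInput1`, `fileInput2`, and `deviceOutput` are assumed to have been created on `graph` already, and the echo parameter values are illustrative):

```csharp
// Sketch: mixing two file inputs through a submix node with an echo effect.
AudioSubmixNode submix = graph.CreateSubmixNode();
fileInput1.AddOutgoingConnection(submix);
fileInput2.AddOutgoingConnection(submix);
submix.AddOutgoingConnection(deviceOutput);

// Built-in echo effect, attached to the submix node.
var echo = new EchoEffectDefinition(graph)
{
    Delay = 500,       // milliseconds
    Feedback = 0.5,
    WetDryMix = 0.7
};
submix.EffectDefinitions.Add(echo);

// The Echo button toggles the effect on the running graph:
submix.DisableEffectsByDefinition(echo);
submix.EnableEffectsByDefinition(echo);
```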

 **Scenario 5: In-box Effects:**
 This example demonstrates each of the effects that are built into the platform. These include:
@@ -68,22 +40,14 @@ This example demonstrates each of the effects that are built-in to the platform.
 * Limiter
 The UI for this scenario lets you load an audio file to play back and then toggle these effects on and off and adjust their parameters.

+The code-behind for this scenario uses just two nodes: an **AudioFileInputNode** and an **AudioDeviceOutputNode**. The effects are initialized and then added to the **EffectDefinitions** list of the file input node.
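The built-in definitions are all attached the same way as the echo effect in Scenario 4; for example, reverb and limiter might be added to the file input node like this (a sketch; `graph` and `fileInput` are assumed to exist as in Scenario 1, and the parameter values are illustrative, not the sample's):

```csharp
// Sketch: attaching built-in effect definitions to a file input node.
var reverb  = new ReverbEffectDefinition(graph);
var limiter = new LimiterEffectDefinition(graph)
{
    Loudness = 1000,  // target loudness
    Release  = 10     // release time
};
fileInput.EffectDefinitions.Add(reverb);
fileInput.EffectDefinitions.Add(limiter);
```

Each definition can still be toggled individually at run time with **EnableEffectsByDefinition** and **DisableEffectsByDefinition** on the node.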

 **Scenario 6: Custom Effects:**
+This scenario demonstrates how to create a custom audio effect and then use it in an audio graph. Press the *Load File* button to select an audio file to play. Press *Start Graph* to begin playback of the file with the custom effect.

+The custom effect for this scenario is defined in a single file, CustomEffect.cs, that is included in its own project. The class implemented in this file, AudioEchoEffect, implements the **IBasicAudioEffect** interface, which allows it to be used in an audio graph. The actual audio processing is implemented in the **ProcessFrame** method. The audio graph calls this method and passes in a **ProcessAudioFrameContext** object, which provides access to **AudioFrame** objects representing the input to the effect and the output from the effect. The effect implements a simple echo by storing samples from the input frame in a buffer, adding the samples previously stored in the buffer to the current input samples, and then inserting those values into the output frame buffer.

+The custom effect has a property set that can be modified by calling the **SetProperties** method. The custom effect exposes a *Mix* property through the property set that is used to control the amount of echo that is added back into the original signal.
 Related topics
 --------------

Samples/BasicInput/README.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 <!---
 category: CustomUserInteractions
 samplefwlink: http://go.microsoft.com/fwlink/p/?LinkId=620514&clcid=0x409
----!>
+--->

 # Basic input sample

Samples/BluetoothRfcommChat/README.md

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 <!---
 category: Communications
+samplefwlink: http://go.microsoft.com/fwlink/p/?LinkId=626688&clcid=0x409
 --->

 # Bluetooth RFCOMM chat sample

Samples/CameraFaceDetection/cs/CameraFaceDetection.csproj

Lines changed: 3 additions & 1 deletion
@@ -102,7 +102,9 @@
     <Compile Include="MainPage.xaml.cs">
       <DependentUpon>MainPage.xaml</DependentUpon>
     </Compile>
-    <Compile Include="Properties\AssemblyInfo.cs" />
+    <Compile Include="..\..\..\SharedContent\cs\AssemblyInfo.cs">
+      <Link>Properties\AssemblyInfo.cs</Link>
+    </Compile>
   </ItemGroup>
   <ItemGroup>
     <AppxManifest Include="Package.appxmanifest">

Samples/CameraFaceDetection/cs/Properties/AssemblyInfo.cs

Lines changed: 0 additions & 29 deletions
This file was deleted.

Samples/CameraGetPreviewFrame/cs/CameraGetPreviewFrame.csproj

Lines changed: 3 additions & 1 deletion
@@ -103,7 +103,9 @@
     <Compile Include="MainPage.xaml.cs">
       <DependentUpon>MainPage.xaml</DependentUpon>
     </Compile>
-    <Compile Include="Properties\AssemblyInfo.cs" />
+    <Compile Include="..\..\..\SharedContent\cs\AssemblyInfo.cs">
+      <Link>Properties\AssemblyInfo.cs</Link>
+    </Compile>
   </ItemGroup>
   <ItemGroup>
     <AppxManifest Include="Package.appxmanifest">

Samples/CameraGetPreviewFrame/cs/Properties/AssemblyInfo.cs

Lines changed: 0 additions & 29 deletions
This file was deleted.

Samples/CameraHdr/README.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@

 # High dynamic range sample

-This sample applies an end-to-end approach to demonstrate how to write a video recording camera application using the Windows.Media.Capture API in conjunction with orientation sensors to cover the functions that most camera apps will require. It will also use the Windows.Media.Core.SceneAnalysisEffect API to get information about the preview scene and give a recommendation on how beneficial an HDR capture would be. In addition, it will show a simple way to use the Windows.Media.Capture.AdvancedCapture API, which enables High Dynamic Range (HDR) captures, included in Windows. This sample is based on the CameraStarterKit.
+This sample applies an end-to-end approach to demonstrate how to write a camera application using the Windows.Media.Capture API in conjunction with orientation sensors to cover the functions that most camera apps will require. It will also use the Windows.Media.Core.SceneAnalysisEffect API to get information about the preview scene and give a recommendation on how beneficial an HDR capture would be. In addition, it will show a simple way to use the Windows.Media.Capture.AdvancedCapture API, which enables High Dynamic Range (HDR) captures, included in Windows. This sample is based on the CameraStarterKit.

 Specifically, this sample will cover how to:
