// this sample is just meant to be extremely simple
// for example, your text could be an array of text that you have OCR'ed
// from some photos

const query = "My Search Query";
const ragQuery = await doRAGSearch([text], query);
console.log(ragQuery);
});
```

### Transcribe Audio File

```javascript
// assumes the toolkit exposes a transcribeAudioFile function and that
// audioFile is an audio Blob from your app (for example, from a file input)
const text = await transcribeAudioFile(audioFile);
console.log(text);
```

## Technical Details

The Web AI Toolkit utilizes the [transformers.js project](https://huggingface.co/docs/transformers.js/index) to run AI workloads. All AI processing is performed locally on the device, ensuring data privacy and reducing latency. Workloads run on the [WebNN API](https://learn.microsoft.com/en-us/windows/ai/directml/webnn-overview) when it is available, falling back to the WebGPU API and, failing that, to the CPU via WebAssembly. The library handles choosing the correct hardware to target for you.
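
For readers curious what that fallback order looks like in practice, below is a minimal, illustrative sketch of the feature detection involved. It is not the toolkit's actual internals (backend selection happens inside the library and transformers.js); it only shows how WebNN, WebGPU, and WebAssembly availability can be probed in the browser:

```javascript
// Illustrative sketch only: prefer WebNN, then WebGPU, then fall back
// to WebAssembly on the CPU. You do not need to write this yourself;
// the library does the equivalent work for you.
async function detectBackend() {
  // WebNN is exposed through navigator.ml
  if ('ml' in navigator) {
    return 'webnn';
  }

  // WebGPU is exposed through navigator.gpu; requestAdapter() resolves
  // to null when no suitable GPU is available
  if ('gpu' in navigator && (await navigator.gpu.requestAdapter()) !== null) {
    return 'webgpu';
  }

  // last resort: CPU via WebAssembly
  return 'wasm';
}

console.log(`AI workloads would run on: ${await detectBackend()}`);
```

In practice you can ignore all of this and simply call the toolkit's functions; the best available backend is picked automatically.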
## Contribution
We welcome contributions to the Web AI Toolkit. Please fork the repository and submit a pull request with your changes. For major changes, please open an issue first to discuss what you would like to change.