Mirror of https://github.com/Picovoice/porcupine.git (synced 2022-01-28 03:27:53 +03:00)
doc fix (#565)
commit 891f9cf9f5
parent a417c77bc8
committed by GitHub
@@ -233,7 +233,7 @@ For example usage refer to the

 ## Resource Usage

-The following profile graph was captured running the [Porcupine Activity demo](/demo/android/Activity/) on a Google Pixel 3:
+The following profile graph was captured running the [Porcupine Activity demo](/demo/android/Activity) on a Google Pixel 3:

 

@@ -2,7 +2,7 @@

 ## Porcupine

-Porcupine is is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.
+Porcupine is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.

 Porcupine is:

@@ -168,7 +168,7 @@ Once the app is done with using an instance of PorcupineManager, be sure you exp
 await _porcupineManager.delete();
 ```

-NOTE: Avoid calling `delete()` from the `paused` state unless you have overidden the back button functionality on Android with [WillPopScope](https://api.flutter.dev/flutter/widgets/WillPopScope-class.html).
+NOTE: Avoid calling `delete()` from the `paused` state unless you have overridden the back button functionality on Android with [WillPopScope](https://api.flutter.dev/flutter/widgets/WillPopScope-class.html).

 There is no need to deal with audio capture to enable wake word detection with PorcupineManager.
 This is because it uses our
@@ -177,7 +177,7 @@ Flutter plugin to capture frames of audio and automatically pass it to the wake

 ### Low-Level API

-[Porcupine](/binding/flutter/lib/porcupine.dart) provides low-level access to the wake word engine for those who want to incorporate wake word detection into a already existing audio processing pipeline.
+[Porcupine](/binding/flutter/lib/porcupine.dart) provides low-level access to the wake word engine for those who want to incorporate wake word detection into an already existing audio processing pipeline.

 `Porcupine` also has `fromBuiltInKeywords` and `fromKeywordPaths` static constructors.

@@ -40,7 +40,7 @@ To obtain your `AccessKey`:

 ## Usage

-To create an instance of the engine you first creat a Porcupine struct with the configuration parameters for the wake word engine and then make a call to `.Init()`:
+To create an instance of the engine you first create a Porcupine struct with the configuration parameters for the wake word engine and then make a call to `.Init()`:

 ```go
 import . "github.com/Picovoice/porcupine/binding/go"
@@ -53,7 +53,7 @@ if err != nil {
 // handle init fail
 }
 ```
-In the above example, we've initialzed the engine to detect the built-in wake word "Picovoice". Built-in keywords are constants in the package with the BuiltInKeyword type.
+In the above example, we've initialized the engine to detect the built-in wake word "Picovoice". Built-in keywords are constants in the package with the BuiltInKeyword type.

 Porcupine can detect multiple keywords concurrently:
 ```go
@@ -108,7 +108,7 @@ When done resources have to be released explicitly.
 porcupine.Delete()
 ```

-Using a defer call to `Delete()` after `Init()` is also a good way to ensure cleanup.
+Using a `defer` call to `Delete()` after `Init()` is also a good way to ensure cleanup.

 ## Non-English Wake Words

@@ -2,7 +2,7 @@

 ## Porcupine

-Porcupine is is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.
+Porcupine is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.

 Porcupine is:

@@ -135,7 +135,7 @@ porcupineManager.delete()

 ### Low-Level API

-[Porcupine](/binding/ios/Porcupine.swift) provides low-level access to the wake word engine for those who want to incorporate wake word detection into a already existing audio processing pipeline.
+[Porcupine](/binding/ios/Porcupine.swift) provides low-level access to the wake word engine for those who want to incorporate wake word detection into an already existing audio processing pipeline.

 To construct an instance of Porcupine, pass it a keyword.

@@ -123,6 +123,6 @@ const handle = new Porcupine(

 ## Unit Tests

-Run `yarn` (or`npm install`) from the [binding/nodejs](https://github.com/Picovoice/porcupine/tree/master/binding/nodejs) directory to install project dependencies. This will also run a script to copy all of the necessary shared resources from the Porcupine repository into the package directory.
+Run `yarn` (or`npm install`) from the [binding/nodejs](https://github.com/Picovoice/porcupine/tree/master/binding/nodejs) directory to install project dependencies. This will also run a script to copy all the necessary shared resources from the Porcupine repository into the package directory.

 Run `yarn test` (or `npm run test`) from the [binding/nodejs](https://github.com/Picovoice/porcupine/tree/master/binding/nodejs) directory to execute the test suite.

@@ -2,7 +2,7 @@

 ## Porcupine

-Porcupine is is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.
+Porcupine is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.

 Porcupine is:

@@ -48,7 +48,7 @@ Link the iOS package
 cd ios && pod install && cd ..
 ```

-**NOTE**: Due to a limitation in React Native CLI autolinking, these two native modules cannot be included as transitive depedencies. If you are creating a module that depends on porcupine-react-native and/or react-native-voice-processor, you will have to list these as peer dependencies and require developers to install them alongside.
+**NOTE**: Due to a limitation in React Native CLI auto-linking, these two native modules cannot be included as transitive dependencies. If you are creating a module that depends on porcupine-react-native and/or react-native-voice-processor, you will have to list these as peer dependencies and require developers to install them alongside.

 ## AccessKey

@@ -116,7 +116,7 @@ The module provides you with two levels of API to choose from depending on your

 #### High-Level API

-[PorcupineManager](/binding/react-native/src/porcupinemanager.tsx) provides a high-level API that takes care of
+[PorcupineManager](/binding/react-native/src/porcupine_manager.tsx) provides a high-level API that takes care of
 audio recording. This class is the quickest way to get started.

 Using the constructor `PorcupineManager.fromBuiltInKeywords` will create an instance of the PorcupineManager
@@ -215,7 +215,7 @@ module to capture frames of audio and automatically pass it to the wake word eng
 #### Low-Level API

 [Porcupine](/binding/react-native/src/porcupine.tsx) provides low-level access to the wake word engine for those
-who want to incorporate wake word detection into a already existing audio processing pipeline.
+who want to incorporate wake word detection into an already existing audio processing pipeline.

 `Porcupine` also has `fromBuiltInKeywords` and `fromKeywordPaths` static constructors.

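To make the low-level flow above concrete, here is a rough TypeScript sketch of driving the engine from an app that already captures its own audio. Only the `fromBuiltInKeywords`/`fromKeywordPaths` constructors are named by the README text in this hunk; the argument shapes, the `process()` and `delete()` calls, and the frame type are assumptions, so treat this as an illustration rather than the binding's documented signature.

```typescript
// Sketch only: argument shapes and method names beyond the two constructors are assumptions.
import { Porcupine } from '@picovoice/porcupine-react-native';

async function runWakeWordLoop(
  accessKey: string,
  nextFrame: () => Promise<number[]> // one frame of 16-bit PCM from your own audio pipeline
): Promise<void> {
  // fromKeywordPaths works the same way but takes paths to custom .ppn files.
  const porcupine = await Porcupine.fromBuiltInKeywords(accessKey, ['porcupine']);
  try {
    for (;;) {
      const keywordIndex = await porcupine.process(await nextFrame());
      if (keywordIndex >= 0) {
        console.log(`Keyword #${keywordIndex} detected`);
        break;
      }
    }
  } finally {
    await porcupine.delete(); // release native resources when finished
  }
}
```
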
@@ -261,17 +261,17 @@ To add a custom wake word to your React Native application you'll need to add th

 ### Adding Android Models

-Android custom models and keywords must be added to [`./android/app/src/main/assets/`](android/app/src/main/assets/).
+Android custom models and keywords must be added to `./android/app/src/main/assets/`

 ### Adding iOS Models

-iOS models can be added anywhere under [`./ios`](ios), but it must be included as a bundled resource.
+iOS models can be added anywhere under `./ios`, but it must be included as a bundled resource.
 The easiest way to include a bundled resource in the iOS project is to:

 1. Open XCode.
 2. Either:
 - Drag and Drop the model/keyword file to the navigation tab.
-- Right click on the navigation tab, and click `Add Files To ...`.
+- Right-click on the navigation tab, and click `Add Files To ...`.

 This will bundle your models together when the app is built.

@@ -1,6 +1,6 @@
 # porcupine-web-react

-React hook for Porcupine for Web.
+React Hook for Porcupine for Web.

 ## Porcupine

@@ -22,7 +22,7 @@ The Picovoice SDKs for Web are powered by WebAssembly (WASM), the Web Audio API,

 All modern browsers (Chrome/Edge/Opera, Firefox, Safari) are supported, including on mobile. Internet Explorer is _not_ supported.

-Using the Web Audio API requires a secure context (HTTPS connection), with the exception of `localhost`, for local development.
+Using the Web Audio API requires a secure context (HTTPS connection), except for `localhost`, for local development.

 ## Installation

@@ -56,7 +56,7 @@ Make sure you handle the possibility of errors with the `isError` and `errorMess

 ### Static Import

-Using static imports for the `@picovoice/porcupine-web-xx-worker` packages is straightforward, but will impact your initial bundle size with an additional ~2MB. Depending on your requirements, this may or may not be feasible. If you require a small bundle size, see dynamic importing below.
+Using static imports for the `@picovoice/porcupine-web-xx-worker` packages is straightforward, but will impact your initial bundle size with an additional ~2 MB. Depending on your requirements, this may or may not be feasible. If you require a small bundle size, see dynamic importing below.

 ```javascript
 import React, { useState } from 'react';
@@ -93,7 +93,7 @@ The `keywordEventHandler` will log the keyword detections to the browser's JavaS

 ### Dynamic Import / Code Splitting

-If you are shipping Porcupine for Web and wish to avoid adding its ~2MB to your application's initial bundle, you can use dynamic imports. These will split off the porcupine-web-xx-worker packages into separate bundles and load them asynchronously. This means we need additional logic.
+If you are shipping Porcupine for Web and wish to avoid adding its ~2 MB to your application's initial bundle, you can use dynamic imports. These will split off the porcupine-web-xx-worker packages into separate bundles and load them asynchronously. This means we need additional logic.

 We add a `useEffect` hook to kick off the dynamic import. We store the result of the dynamically loaded worker chunk into a `useState` hook. When `usePorcupine` receives a non-null/undefined value for the worker factory, it will start up Porcupine.

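A minimal sketch of that pattern is shown below, assuming the hook is exported by `@picovoice/porcupine-web-react` and that `en` is one of the per-language worker packages; the option object's shape and the keyword format are assumptions, not the package's documented API.

```typescript
// Sketch only: option names and keyword format are assumptions.
import { useEffect, useState } from 'react';
import { usePorcupine } from '@picovoice/porcupine-web-react';

export function useDynamicPorcupine(
  accessKey: string,
  keywordEventHandler: (detection: unknown) => void
) {
  // Holds the dynamically loaded worker factory; null until the chunk arrives.
  const [workerFactory, setWorkerFactory] = useState<any>(null);

  useEffect(() => {
    // Dynamic import keeps the ~2 MB worker package out of the initial bundle.
    import('@picovoice/porcupine-web-en-worker').then((module) =>
      // Wrap in an updater so React stores the factory itself rather than calling it.
      setWorkerFactory(() => module.PorcupineWorkerFactory)
    );
  }, []);

  // usePorcupine waits for a non-null worker factory, then starts up Porcupine.
  return usePorcupine(
    workerFactory,
    { accessKey, keywords: ['Picovoice'] },
    keywordEventHandler
  );
}
```

The calling component can then check fields such as `isError` and `errorMessage` on the returned object, as recommended earlier in this README.
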
@@ -157,7 +157,7 @@ Each language includes a set of built-in keywords. The quickest way to get start

 Custom wake words are generated using [Picovoice Console](https://picovoice.ai/console/). They are trained from text using transfer learning into bespoke Porcupine keyword files with a `.ppn` extension. The target platform is WebAssembly (WASM), as that is what backs the React library.

-The `.zip` file containes a `.ppn` file and a `_b64.txt` file which containes the binary model encoded with Base64. Copy the base64 and provide it as an argument to Porcupine as below. You will need to also provide a label so that Porcupine can tell you which keyword occurred ("Deep Sky Blue", in this case):
+The `.zip` file contains a `.ppn` file and a `_b64.txt` file which contains the binary model encoded with Base64. Copy the base64 and provide it as an argument to Porcupine as below. You will need to also provide a label so that Porcupine can tell you which keyword occurred ("Deep Sky Blue", in this case):

 ```typescript
 const DEEP_SKY_BLUE_PPN_64 = /* Base64 representation of deep_sky_blue.ppn */
@@ -2,7 +2,7 @@

 ## Porcupine

-Porcupine is is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.
+Porcupine is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.

 Porcupine is:

@@ -33,9 +33,9 @@ This binding is for running Porcupine on **Unity 2017.4+** on the following plat

 ## Installation

-The easiest way to install the Porcupine Unity SDK is to import [porcupine.unitypackage](/binding/unity/porcupine.unitypackage) into your Unity project by either dropping it into the Unity editor or going to _Assets>Import Package>Custom Package..._
+The easiest way to install the Porcupine Unity SDK is to import [porcupine-2.0.0.unitypackage](/binding/unity/porcupine-2.0.0.unitypackage) into your Unity projects by either dropping it into the Unity editor or going to _Assets>Import Package>Custom Package..._

-**NOTE:** On macOS, the Porcupine library may get flagged as having come from an unverified source if you've downloaded the `.unitypackage` directly from github. This should only come up when running your project in the Editor. To disable this warning, go to Security & Preferences and choose to allow pv_porcupine.dylib to run.
+**NOTE:** On macOS, the Porcupine library may get flagged as having come from an unverified source if you've downloaded the `.unitypackage` directly from GitHub. This should only come up when running your project in the Editor. To disable this warning, go to Security & Preferences and choose to allow pv_porcupine.dylib to run.

 ## AccessKey

@@ -47,14 +47,14 @@ To obtain your `AccessKey`:
 2. Once logged in, go to the [`AccessKey` tab](https://console.picovoice.ai/access_key) to create one or use an existing `AccessKey`.

 ## Packaging
-To build the package from source, you have first have to clone the repo with submodules:
+To build the package from source, first you have to clone the repo with submodules:
 ```console
 git clone --recurse-submodules git@github.com:Picovoice/porcupine.git
 # or
 git clone --recurse-submodules https://github.com/Picovoice/porcupine.git
 ```

-You then have to run the `copy.sh` file to copy the package resources from various locations in the repo to the Unity project located at [/binding/unity](/binding/unity) (**NOTE:** on Windows, Git Bash or another bash shell is required, or you will have to manually copy the resources into the project.). Then, open the Unity project, right click the Assets folder and select Export Package. The resulting Unity package can be imported into other Unity projects as desired.
+You then have to run the `copy.sh` file to copy the package resources from various locations in the repo to the Unity project located at [/binding/unity](/binding/unity) (**NOTE:** on Windows, Git Bash or another bash shell is required, or you will have to manually copy the resources into the project.). Then, open the Unity project, right-click the Assets folder and select Export Package. The resulting Unity package can be imported into other Unity projects as desired.

 ## Usage

@@ -163,9 +163,9 @@ Unity package to capture frames of audio and automatically pass it to the wake w

 #### Low-Level API

-[Porcupine](/binding/unity/Assets/Porcupine/Porcupine.cs) provides low-level access to the wake word engine for those who want to incorporate wake word detection into a already existing audio processing pipeline.
+[Porcupine](/binding/unity/Assets/Porcupine/Porcupine.cs) provides low-level access to the wake word engine for those who want to incorporate wake word detection into an already existing audio processing pipeline.

-To create an instance of `Porcupine`, use the `.Create` static constructor. You can pass a list of built-in keywords as its `keywords` argument or a list or paths to custom keywords using its `keywordPaths` arguement.
+To create an instance of `Porcupine`, use the `.Create` static constructor. You can pass a list of built-in keywords as its `keywords` argument or a list or paths to custom keywords using its `keywordPaths` argument.

 ```csharp
 using Pv.Unity;
@@ -205,7 +205,7 @@ catch (Exception ex)
 For process to work correctly, the audio data must be in the audio format required by Picovoice.
 The required sample rate is specified by the `SampleRate` property and the required number of audio samples in each frame is specified by the `FrameLength` property. Audio must be single-channel and 16-bit linearly-encoded.

-Porcupine implements the `IDisposable` interface, so you can use Porcupine in a `using` block. If you don't use a `using` block, resources will be released by the garbage collector automatically or you can explicitly release the resources like so:
+Porcupine implements the `IDisposable` interface, so you can use Porcupine in a `using` block. If you don't use a `using` block, resources will be released by the garbage collector automatically, or you can explicitly release the resources like so:

 ```csharp
 _porcupine.Dispose();

@@ -18,7 +18,7 @@ The Picovoice SDKs for Web are powered by WebAssembly (WASM), the Web Audio API,

 All modern browsers (Chrome/Edge/Opera, Firefox, Safari) are supported, including on mobile. Internet Explorer is _not_ supported.

-Using the Web Audio API requires a secure context (HTTPS connection), with the exception of `localhost`, for local development.
+Using the Web Audio API requires a secure context (HTTPS connection), except for `localhost`, for local development.

 ## Installation

@@ -120,7 +120,7 @@ Each language includes a set of built-in keywords. The quickest way to get start

 Custom wake words are generated using [Picovoice Console](https://picovoice.ai/console/). They are trained from text using transfer learning into bespoke Porcupine keyword files with a `.ppn` extension. The target platform is WebAssembly (WASM), as that is what backs the Vue library.

-The `.zip` file containes a `.ppn` file and a `_b64.txt` file which containes the binary model encoded with Base64. Copy the base64 and provide it as an argument to Porcupine as below. You will need to also provide a label so that Porcupine can tell you which keyword occurred ("Deep Sky Blue", in this case):
+The `.zip` file contains a `.ppn` file and a `_b64.txt` file which contains the binary model encoded with Base64. Copy the base64 and provide it as an argument to Porcupine as below. You will need to also provide a label so that Porcupine can tell you which keyword occurred ("Deep Sky Blue", in this case):

 ```html
 <template>
@@ -20,7 +20,7 @@ If you are using this library with the [@picovoice/web-voice-processor](https://

 The Porcupine SDK for Web is split into multiple packages due to each language including the entire Voice AI model, which is of nontrivial size. There are separate worker and factory packages as well, due to the complexities with bundling an "all-in-one" web workers without bloating bundle sizes. Import each as required.

-Any Porcupine keyword files (`.ppn` files) generated from [Picovoice Console](https://picovoice.ai/console/) must be trained for the WebAssembly (WASM) platform and match the language of the instance you create. The `.zip` file containes a `.ppn` file and a `_b64.txt` file which containes the binary model encoded with Base64. The Base64 encoded models can then be passed into the Porcupine `create` function as arguments.
+Any Porcupine keyword files (`.ppn` files) generated from [Picovoice Console](https://picovoice.ai/console/) must be trained for the WebAssembly (WASM) platform and match the language of the instance you create. The `.zip` file contains a `.ppn` file and a `_b64.txt` file which contains the binary model encoded with Base64. The Base64 encoded models can then be passed into the Porcupine `create` function as arguments.

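As a concrete illustration of that last sentence, a sketch of passing a Base64-encoded custom keyword and its label into `create` might look as follows; the worker package name (`en` being one of the per-language packages), the factory class, and the keyword-object field names are assumptions based only on the paragraph above.

```typescript
// Sketch only: the factory class and keyword-object fields are assumptions.
import { PorcupineWorkerFactory } from '@picovoice/porcupine-web-en-worker';

// Paste the contents of the _b64.txt file from the trained keyword's .zip here.
const DEEP_SKY_BLUE_PPN_64 = '...';

async function startCustomKeyword(accessKey: string) {
  // The keyword entry pairs the Base64 model with a label so detections can be identified.
  return PorcupineWorkerFactory.create(
    accessKey,
    [{ custom: 'Deep Sky Blue', base64: DEEP_SKY_BLUE_PPN_64 }],
    (detection: unknown) => console.log('Keyword detected:', detection)
  );
}
```
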
 ### Workers

@@ -118,7 +118,7 @@ if (done) {

 ```

-**Important Note**: Because the workers are all-in-one packages that run an entire machine learning inference model in WebAssembly, they are approximately 1-2MB in size. While this is tiny for a speech recognition model, it's large for web delivery. Because of this, you likely will want to use dynamic `import()` instead of static `import {}` to reduce your app's starting bundle size. See e.g. https://webpack.js.org/guides/code-splitting/ for more information.
+**Important Note**: Because the workers are all-in-one packages that run an entire machine learning inference model in WebAssembly, they are approximately 1-2 MB in size. While this is tiny for a speech recognition model, it's large for web delivery. Because of this, you likely will want to use dynamic `import()` instead of static `import {}` to reduce your app's starting bundle size. See e.g. https://webpack.js.org/guides/code-splitting/ for more information.

 ### Factory

@@ -147,7 +147,7 @@ async function startPorcupine() {
 startPorcupine()
 ```

-**Important Note**: Because the factories are all-in-one packages that run an entire machine learning inference model in WebAssembly, they are approximately 1-2MB in size. While this is tiny for a speech recognition, it's nontrivial for web delivery. Because of this, you likely will want to use dynamic `import()` instead of static `import {}` to reduce your app's starting bundle size. See e.g. https://webpack.js.org/guides/code-splitting/ for more information.
+**Important Note**: Because the factories are all-in-one packages that run an entire machine learning inference model in WebAssembly, they are approximately 1-2 MB in size. While this is tiny for a speech recognition, it's nontrivial for web delivery. Because of this, you likely will want to use dynamic `import()` instead of static `import {}` to reduce your app's starting bundle size. See e.g. https://webpack.js.org/guides/code-splitting/ for more information.

 ## Build from source (IIFE + ESM outputs)

@@ -2,7 +2,7 @@

 This demo application includes a sample `VoiceWidget` Angular component which uses the `PorcupineService` Angular service to allow listening for keywords. Porcupine keyword detections are handled via the `keyword$` event. Our VoiceWidget subscribes to this event and displays the results.

-The demo uses dynamic imports to split the PorcupineService away from the main application bundle. This means that the initial download size of the Angular app will not be impacted by the ~1-2MB requirement of Porcupine. While small for all-in-one offline Voice AI, the size is large for an initial web app load.
+The demo uses dynamic imports to split the PorcupineService away from the main application bundle. This means that the initial download size of the Angular app will not be impacted by the ~1-2 MB requirement of Porcupine. While small for all-in-one offline Voice AI, the size is large for an initial web app load.

 If you decline microphone permission in the browser, or another such issue prevents Porcupine from starting, the error will be displayed.

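For orientation, subscribing to the `keyword$` event described above might look roughly like this inside a component; only `PorcupineService` and `keyword$` come from the demo description, while the import path, injection setup, and emitted value type are assumptions.

```typescript
// Sketch only: the import path and the type emitted by keyword$ are assumptions.
import { Component, OnDestroy } from '@angular/core';
import { Subscription } from 'rxjs';
import { PorcupineService } from '@picovoice/porcupine-web-angular';

@Component({
  selector: 'app-voice-widget',
  template: `<p>Last keyword: {{ lastKeyword }}</p>`,
})
export class VoiceWidgetComponent implements OnDestroy {
  lastKeyword = '';
  private readonly keywordSubscription: Subscription;

  constructor(private readonly porcupineService: PorcupineService) {
    // Display each detection emitted on the keyword$ event.
    this.keywordSubscription = this.porcupineService.keyword$.subscribe(
      (keyword: unknown) => (this.lastKeyword = String(keyword))
    );
  }

  ngOnDestroy(): void {
    this.keywordSubscription.unsubscribe();
  }
}
```
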
@@ -1,6 +1,6 @@
 # AvaloniaVUI Demo

-A voice-enabled .NET Core desktop application that uses the cross-plaform GUI framework [Avalonia](https://github.com/AvaloniaUI/Avalonia). This code was part of a developer tutorial article that can be found [here](https://medium.com/picovoice/from-gui-to-vui-voice-enabling-a-cross-platform-net-desktop-application-aac44e470790).
+A voice-enabled .NET Core desktop application that uses the cross-platform GUI framework [Avalonia](https://github.com/AvaloniaUI/Avalonia). This code was part of a developer tutorial article that can be found [here](https://medium.com/picovoice/from-gui-to-vui-voice-enabling-a-cross-platform-net-desktop-application-aac44e470790).

 Video tutorial:<br/>
 [](https://youtu.be/AU87_4GpIzo)

@@ -159,7 +159,7 @@ index: 0, device name: USB Audio Device
 index: 1, device name: MacBook Air Microphone
 ```

-You can use the device index to specify which microphone to use for the demo. For instance, if you want to use the USB Auio Device in the above example, you can invoke the demo application as below:
+You can use the device index to specify which microphone to use for the demo. For instance, if you want to use the USB Audio Device in the above example, you can invoke the demo application as below:

 ```console
 dotnet run -c MicDemo.Release -- \

@@ -19,7 +19,7 @@ final String accessKey = "{YOUR_ACCESS_KEY_HERE}"; // AccessKey obtained from Pi

 ## Usage

-Run the following command from [demo/flutter](/demo/flutter/) to build and deploy the demo to your device:
+Run the following command from [demo/flutter](/demo/flutter) to build and deploy the demo to your device:
 ```console
 flutter run
 ```

@@ -56,7 +56,7 @@ The `keywords` argument refers to the built-in keyword files that are shipped wi
 go run filedemo/porcupine_file_demo.go -h
 ```

-To detect multiple phrases concurrently provide them as comma-seperated values. If the wake word is more than a single word, surround the argument in quotation marks:
+To detect multiple phrases concurrently provide them as comma-separated values. If the wake word is more than a single word, surround the argument in quotation marks:

 ```console
 go run filedemo/porcupine_file_demo.go \
@@ -105,7 +105,7 @@ The `keywords` argument refers to the built-in keyword files that are shipped wi
 go run micdemo/porcupine_mic_demo.go -h
 ```

-To detect multiple phrases concurrently provide them as comma-seperated values. If the wake word is more than a single word, surround the argument in quotation marks:
+To detect multiple phrases concurrently provide them as comma-separated values. If the wake word is more than a single word, surround the argument in quotation marks:

 ```console
 go run filedemo/porcupine_file_demo.go \

@@ -4,7 +4,7 @@ This package provides two demonstration command-line applications for Porcupine:

 ## Introduction to Porcupine

-Porcupine is is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.
+Porcupine is a highly accurate and lightweight wake word engine. It enables building always-listening voice-enabled applications using cutting edge voice AI.

 Porcupine is:

@@ -1,5 +1,5 @@
 # Porcupine Demo App
-To run the React Native Porcupine demo app you'll first need to setup your React Native environment. For this,
+To run the React Native Porcupine demo app you'll first need to set up your React Native environment. For this,
 please refer to [React Native's documentation](https://reactnative.dev/docs/environment-setup). Once your environment has been set up, you can run the following commands from this repo location.

 ## AccessKey

@@ -1,15 +1,15 @@
 # Porcupine Unity Demo

-The Porcupine demo for Unity is is a multi-platform demo that runs on:
+The Porcupine demo for Unity is a multi-platform demo that runs on:

 - Desktop Standalone (Windows, macOS and Linux x86_64)
 - iOS 9.0+
 - Android 4.1+

-Additionally you will need a version of Unity that is 2017.4 or higher.
+Additionally, you will need a version of Unity that is 2017.4 or higher.

 ## Usage
-The easiest way to run the demo is to simply import the [Porcupine Unity package](/binding/unity/porcupine.unitypackage) into your project, open the PorcupineDemo scene and hit play. To run on other platforms or in the player, go to _File > Build Settings_, choose your platform and hit the `Build and Run` button.
+The easiest way to run the demo is to simply import the [Porcupine Unity package](/binding/unity/porcupine-2.0.0.unitypackage) into your project, open the PorcupineDemo scene and hit play. To run on other platforms or in the player, go to _File > Build Settings_, choose your platform and hit the `Build and Run` button.

 Once the demo launches, press the `Start Listening` button and try out any of the listed wake words.

@@ -22,7 +22,7 @@ To obtain your `AccessKey`:
 1. Login or Signup for a free account on the [Picovoice Console](https://picovoice.ai/console/).
 2. Once logged in, go to the [`AccessKey` tab](https://console.picovoice.ai/access_key) to create one or use an existing `AccessKey`.

-Replace the `AccessKey` in [`PorcupineDemo.cs`](PorcupineDemo.cs) with the `AccessKey` obatined above:
+Replace the `AccessKey` in [`PorcupineDemo.cs`](PorcupineDemo.cs) with the `AccessKey` obtained above:

 ```csharp
 private const string ACCESS_KEY = "${YOUR_ACCESS_KEY_HERE}"; // AccessKey obtained from Picovoice Console (https://picovoice.ai/console/)

@@ -2,7 +2,7 @@

 This demo application includes a sample `VoiceWidget` Vue component which uses the `Porcupine` renderless Vue component service to allow listening for keywords. Porcupine keyword detections are handled via the `ppn-keyword` event. Our VoiceWidget subscribes to this event and displays the results.

-The demo uses dynamic imports to split the VoiceWidget away from the main application bundle. This means that the initial download size of the Vue app will not be impacted by the ~1-2MB requirement of Porcupine. While small for all-in-one offline Voice AI, the size is large for an initial web app load.
+The demo uses dynamic imports to split the VoiceWidget away from the main application bundle. This means that the initial download size of the Vue app will not be impacted by the ~1-2 MB requirement of Porcupine. While small for all-in-one offline Voice AI, the size is large for an initial web app load.

 If you decline microphone permission in the browser, or another such issue prevents Porcupine from starting, the error will be displayed.
