
v0.1.6
disoul committed Apr 19, 2020
1 parent 9b0d007 commit 968ffaa
Showing 9 changed files with 45 additions and 49 deletions.
2 changes: 1 addition & 1 deletion Docs/cn/custom_audio.md
@@ -38,7 +38,7 @@ navigator.mediaDevices.getUserMedia({ video: false, audio: true })

> The `MediaStreamTrack` object refers to the `MediaStreamTrack` object natively supported by the browser. For detailed usage and browser support, see the [MediaStreamTrack API documentation](https://developer.mozilla.org/zh-CN/docs/Web/API/MediaStreamTrack).
- Similarly, you can also use the powerful Web Audio API to obtain a `MediaStreamTrack` and implement customized audio processing.
+ Similarly, you can also use the powerful [Web Audio API](https://developer.mozilla.org/zh-CN/docs/Web/API/Web_Audio_API) to obtain a `MediaStreamTrack` and implement customized audio processing.

### API reference

4 changes: 2 additions & 2 deletions Docs/cn/setup.md
@@ -43,7 +43,7 @@ const client: IAgoraRTCClient = AgoraRTC.createClient({ mode: "live", codec: "vp8" });
This method does not require downloading an installation package. Add the following code to the html file of your project:

```html
<script src="https://download.agora.io/sdk/web/AgoraRTC_N-0.1.5.js"></script>
<script src="https://download.agora.io/sdk/web/AgoraRTC_N-0.1.6.js"></script>
```

### Method 3: Manually download the SDK
@@ -54,7 +54,7 @@ const client: IAgoraRTCClient = AgoraRTC.createClient({ mode: "live", codec: "vp8" });
3. In your project files, add the following code to the html:

```html
<script src="./AgoraRTC_N-0.1.5.js"></script>
<script src="./AgoraRTC_N-0.1.6.js"></script>
```

> - In methods 2 and 3, the SDK exports a global `AgoraRTC` object. Access this object directly to operate the SDK.
50 changes: 25 additions & 25 deletions Docs/en/audio_effect_mixing.md
@@ -8,17 +8,17 @@ sidebar_label: Audio Effects/Mixing

In a call or live broadcast, you may need to play custom sound or music to all the users in the channel. For example, adding sound effects in a game, or playing background music.

- Agora Web SDK NG supports publishing multiple audio tracks and mixing them automatically. You can create and publish multiple audio tracks to play custom sound or music.
+ Agora Web SDK NG supports publishing and automatically mixing multiple audio tracks to create and play custom sound or music.

- Before proceeding, ensure that you have implemented the basic real-time communication function in your project. See [Implement a Basic Video Call](basic_call.md).
+ Before you start, ensure you have implemented real-time communication in your project. See [Implement a Basic Video Call](basic_call.md).

## Implementation

- Both sound effects and background music are essentially local or online audio files. To play a sound effect or background music, you only need to create an audio track from the audio file, and publish it together with the microphone audio track.
+ To play a sound effect or background music, create an audio track from a local or online audio file and publish it together with the audio track from the microphone.

### Create audio track from audio file

- The SDK provides the `createBufferSourceAudioTrack` method to read a local or online audio file and create an audio track object (`BufferSourceAudioTrack`).
+ The SDK provides `createBufferSourceAudioTrack` to read a local or online audio file and create an audio track object (`BufferSourceAudioTrack`).

```js
// Create an audio track from an online music file
const audioFileTrack = await AgoraRTC.createBufferSourceAudioTrack({
  // ... (the `source` options are elided in this diff)
});
console.log("create audio file track success");
```
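The `source` option also accepts a local audio file, which must be a [`File`](https://developer.mozilla.org/en-US/docs/Web/API/File) object (see the considerations below). A minimal sketch, assuming a hypothetical `<input type="file" id="audio-file">` element on the page:

```js
const input = document.getElementById("audio-file"); // hypothetical file input
input.onchange = async () => {
  // Create an audio track from a local file; local files must be File objects
  const localFileTrack = await AgoraRTC.createBufferSourceAudioTrack({
    source: input.files[0],
  });
  console.log("create local file track success");
};
```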

- After creating the audio track, if you directly call `audioFileTrack.play()` or `client.publish([audioFileTrack])`, you will find that neither the local nor remote users can hear the music. This is because the SDK processes the audio track created from an audio file differently from the microphone audio track (`MicrophoneAudioTrack`).
+ If you call `audioFileTrack.play()` or `client.publish([audioFileTrack])` immediately after creating the audio track, your users will not hear anything. This is because the SDK processes the audio track created from an audio file differently from the microphone audio track (`MicrophoneAudioTrack`).

**MicrophoneAudioTrack**
![](assets/microphone_audio_track.png)

For the microphone audio track, the SDK keeps sampling the latest audio data (`AudioBuffer`) from the microphone.

- - When you call `play()`, the SDK sends the audio data to the local playback module (`LocalPlayback`), then the local user can hear the sound.
- - When you call `publish()`, the SDK sends the audio data to Agora SD-RTN, then the remote users can hear the sound.
+ - When you call `play()`, the SDK sends the audio data to the local playback module (`LocalPlayback`), then the local user can hear the audio.
+ - When you call `publish()`, the SDK sends the audio data to Agora SD-RTN, then the remote users can hear the audio.

- Once the microphone audio track is created, the sampling keeps going on until `close()` is called. Then the audio track becomes unavailable.
+ Once the microphone audio track is created, the sampling continues until `close()` is called, and then the audio track becomes unavailable.
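In code, that lifecycle might look like this (a minimal sketch, assuming a `client` that has already joined a channel):

```js
const micTrack = await AgoraRTC.createMicrophoneAudioTrack();
micTrack.play();                   // local playback: the local user hears the sound
await client.publish([micTrack]);  // sent to Agora SD-RTN: remote users hear it
// ...
micTrack.close();                  // stops the sampling; the track becomes unavailable
```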

**BufferSourceAudioTrack**
![](assets/buffer_source_audio_track.png)

- For an audio file, the SDK cannot sample its audio data, but read the file to achieve similar effects instead, namely the `processing` phase in the above figure.
+ For an audio file, the SDK cannot sample the audio data directly, and instead reads the file to achieve similar effects, such as the `processing` phase in the previous figure.

- Sampling and file reading are different:
+ Sampling is different from file reading:

- Sampling cannot be paused, because only the latest data can be sampled.
- - File reading can be controlled. We can pause reading to pause the playback, seek a reading position to jump the playback, loop reading to loop the playback, and so on. These are the core functions of `BufferSourceAudioTrack`. See [Control the playback](#control-the-playback) for details.
+ - Reading an audio file enables more control over playback, including pausing, jumping to a different position, looping, and more. These are the core functions of `BufferSourceAudioTrack`. See [Control the playback](#control-the-playback) for details.

- For the audio track created from an audio file, the SDK does not read the file by default, so you need to call `BufferSourceAudioTrack.startProcessAudioBuffer()` to start reading and processing the audio data, and then call `play()` and `publish()` for the local and remote users to hear the sound.
+ For the audio track created from an audio file, the SDK does not read the file by default. Call `BufferSourceAudioTrack.startProcessAudioBuffer()` to start reading and processing the audio data, and then call `play()` and `publish()` for the local and remote users to hear the audio.
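Put together, the expected call sequence is (a sketch, assuming `audioFileTrack` and a joined `client` from the snippets above):

```js
// Start reading the file and processing the audio data
audioFileTrack.startProcessAudioBuffer();
// Only now do play() and publish() produce audible audio
audioFileTrack.play();
await client.publish([audioFileTrack]);
```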

### Publish multiple audio tracks

@@ -75,13 +75,13 @@ await client.unpublish([audioFileTrack]);
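The body of this section is elided in the diff; publishing the file track alongside a microphone track typically looks like the following sketch (the `microphoneTrack` variable is an assumption):

```js
// Publish both tracks; the SDK automatically mixes all published audio tracks
await client.publish([microphoneTrack, audioFileTrack]);
// Stop sending the music to remote users, keeping the microphone published
await client.unpublish([audioFileTrack]);
```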

`BufferSourceAudioTrack` provides the following methods to control the playback of the audio file:

- - [`startProcessAudioBuffer`](/api/cn/interfaces/ibuffersourceaudiotrack.html#startprocessaudiobuffer): Starts reading the audio file and processing data. This method also supports setting loop times and the playback starting position.
- - [`pauseProcessAudioBuffer`](/api/cn/interfaces/ibuffersourceaudiotrack.html#pauseprocessaudiobuffer): Pauses processing the audio data to pause the playback.
- - [`resumeProcessAudioBuffer`](/api/cn/interfaces/ibuffersourceaudiotrack.html#resumeprocessaudiobuffer): Resumes processing the audio data to resume the playback.
- - [`stopProcessAudioBuffer`](/api/cn/interfaces/ibuffersourceaudiotrack.html#stopprocessaudiobuffer): Stops processing the audio data to stop the playback.
- - [`seekAudioBuffer`](/api/cn/interfaces/ibuffersourceaudiotrack.html#seekaudiobuffer): Jumps to a specified position.
+ - [`startProcessAudioBuffer`](/api/en/interfaces/ibuffersourceaudiotrack.html#startprocessaudiobuffer): Starts reading the audio file and processing data. This method also supports setting loop times and the playback starting position.
+ - [`pauseProcessAudioBuffer`](/api/en/interfaces/ibuffersourceaudiotrack.html#pauseprocessaudiobuffer): Pauses processing the audio data to pause the playback.
+ - [`resumeProcessAudioBuffer`](/api/en/interfaces/ibuffersourceaudiotrack.html#resumeprocessaudiobuffer): Resumes processing the audio data to resume the playback.
+ - [`stopProcessAudioBuffer`](/api/en/interfaces/ibuffersourceaudiotrack.html#stopprocessaudiobuffer): Stops processing the audio data to stop the playback.
+ - [`seekAudioBuffer`](/api/en/interfaces/ibuffersourceaudiotrack.html#seekaudiobuffer): Jumps to a specified position.

After the processing starts, if you have called `play()` and `publish()`, calling the above methods affects both the local and remote users.

```js
// Pause processing the audio data
audioFileTrack.pauseProcessAudioBuffer();
// ... (lines elided in this diff)
// The duration of the audio file, in seconds
audioFileTrack.duration;
// Jump to the 50th second
audioFileTrack.seekAudioBuffer(50);
```
- If the local user does not need to hear the audio file, you can call `stop()` to stop the local playback, which does not affect the remote users.
+ If the local user does not need to hear the audio file, call `stop()` to stop the local playback, which does not affect the remote users.
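For example:

```js
// Stops only the local playback; remote users still hear the audio file
audioFileTrack.stop();
```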

### API reference

- - [`createBufferSourceAudioTrack`](/api/cn/interfaces/iagorartc.html#createbuffersourceaudiotrack)
- - [`BufferSourceAudioTrack`](/api/cn/interfaces/ibuffersourceaudiotrack.html)
- - [`publish`](/api/cn/interfaces/iagorartcclient.html#publish)
+ - [`createBufferSourceAudioTrack`](/api/en/interfaces/iagorartc.html#createbuffersourceaudiotrack)
+ - [`BufferSourceAudioTrack`](/api/en/interfaces/ibuffersourceaudiotrack.html)
+ - [`publish`](/api/en/interfaces/iagorartcclient.html#publish)

## Considerations
- - Ensure that you configure [CORS](https://developer.mozilla.org/zh-CN/docs/Web/HTTP/Access_control_CORS) if you use online audio files.
- - The supported audio formats include MP3, AAC and other formats that the browser supports.
+ - Ensure that you configure [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS) if you use online audio files.
+ - The supported audio formats include MP3, AAC and other audio formats that the browser supports.
- The local audio files must be [`File`](https://developer.mozilla.org/en-US/docs/Web/API/File) objects.
- - Safari does not support publishing multiple audio track on versions earlier than Safari 12.
+ - Safari does not support publishing multiple audio tracks on versions earlier than Safari 12.
- No matter how many audio tracks are published, the SDK automatically mixes them into one audio track; therefore, the remote users only get one `RemoteAudioTrack` object.
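On the receiving side this means a subscriber always handles a single audio track, as in this sketch of the standard `user-published` flow (the event-handler shape is assumed from the basic call guide):

```js
client.on("user-published", async (user, mediaType) => {
  if (mediaType === "audio") {
    await client.subscribe(user, "audio");
    // One mixed RemoteAudioTrack, however many audio tracks the sender published
    user.audioTrack.play();
  }
});
```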
6 changes: 1 addition & 5 deletions Docs/en/call_quality.md
@@ -85,8 +85,4 @@ Each exception event has a corresponding recovery event. See the table below for
![](assets/exception-event-en.png)

## Considerations
All the above methods must be called after joining the channel.
2 changes: 1 addition & 1 deletion Docs/en/cloud_proxy.md
@@ -88,5 +88,5 @@ The following figure shows the working principles of the Agora cloud proxy.

## Considerations
- `startProxyServer` must be called before joining the channel, and `stopProxyServer` must be called after leaving the channel.
- - The Agora Web SDK NG also provides t`setProxyServer` `setTurnServer` methods for you to deploy the proxy. The `setProxyServer` `setTurnServer` methods cannot be used with the `startProxyServer` method at the same time, else an error occurs.
+ - The Agora Web SDK NG also provides the `setProxyServer` and `setTurnServer` methods for you to deploy the proxy. The `setProxyServer` and `setTurnServer` methods cannot be used with the `startProxyServer` method at the same time; otherwise, an error occurs.
- `stopProxyServer` disables all proxy settings, including those set by the `setProxyServer` and `setTurnServer` methods.
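A minimal usage sketch, assuming `client` is an already-created `IAgoraRTCClient` and `appId`/`channel`/`token` are your own values:

```js
client.startProxyServer();                      // call before joining the channel
await client.join(appId, channel, token, null);
// ... in-call logic ...
await client.leave();
client.stopProxyServer();                       // call after leaving the channel
```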
14 changes: 7 additions & 7 deletions Docs/en/custom_audio.md
@@ -6,23 +6,23 @@ sidebar_label: Custom Audio Source

## Introduction

- By default, the Agora SDK uses the default audio module for sampling and rendering in real-time communications.
+ The Agora Web SDK NG uses the default audio module for sampling and rendering in real-time communications.

- However, the default module might not meet your development requirements, such as in the following situations:
+ However, the default module may not meet your requirements, such as in the following situations:

- - Your app has its own audio module.
+ - Your app already has an audio module.
- You want to use a non-microphone source.
- You need to process the sampled audio with a pre-processing library for functions such as a voice changer.

- This document describes how to use the Agora Web SDK NG to customize audio source.
+ This article describes how to use the Agora Web SDK NG to customize audio source.

## Implementation

- Before proceeding, ensure that you have implemented the basic real-time communication function in your project. See [Implement a Basic Video Call](basic_call.md).
+ Before you start, ensure that you have implemented real-time communication in your project. See [Implement a Basic Video Call](/docs/en/basic_call).

The SDK provides the [`createCustomAudioTrack`](/api/en/interfaces/iagorartc.html#createcustomaudiotrack) method to support creating an audio track from a [`MediaStreamTrack`](https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack) object. You can use this method to customize the audio source.

- For example, you can call `getUserMedia` to get a `MediaStreamTrack` object, and then pass this object to `createCustomAudioTrack` to create an audio track that can be used in the SDK.
+ For example, call `getUserMedia` to get a `MediaStreamTrack` object, and then pass this object to `createCustomAudioTrack` to create a customized audio track.

```js
navigator.mediaDevices.getUserMedia({ video: false, audio: true })
  .then((mediaStream) => {
    // ... (elided in this diff: the sample passes
    // mediaStream.getAudioTracks()[0] to AgoraRTC.createCustomAudioTrack)
  });
```

> `MediaStreamTrack` refers to the `MediaStreamTrack` object supported by the browser. See [MediaStreamTrack API](https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack) for details.
- Similarly, you can use the WebAudio API to get the `MediaStreamTrack` object for customization.
+ Alternatively, use the [Web Audio API](https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_API) to get the `MediaStreamTrack` object for customization.
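For instance, a generated tone can be wrapped into a custom track (a sketch; the oscillator is purely illustrative, and the exact return type of `createCustomAudioTrack` may vary by SDK version):

```js
const audioCtx = new AudioContext();
const dest = audioCtx.createMediaStreamDestination(); // exposes a MediaStream
const osc = audioCtx.createOscillator();
osc.connect(dest); // route the generated tone into the destination stream
osc.start();

const customTrack = AgoraRTC.createCustomAudioTrack({
  mediaStreamTrack: dest.stream.getAudioTracks()[0],
});
```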

### API reference

4 changes: 2 additions & 2 deletions Docs/en/setup.md
@@ -44,7 +44,7 @@ const client: IAgoraRTCClient = AgoraRTC.createClient({ mode: "live", codec: "vp8" });
Add the following code to the line before `<style>` in your project.

```html
<script src="https://download.agora.io/sdk/web/AgoraRTC_N-0.1.5.js"></script>
<script src="https://download.agora.io/sdk/web/AgoraRTC_N-0.1.6.js"></script>
```

### Method 3: Through the Agora website
@@ -55,7 +55,7 @@ Add the following code to the line before `<style>` in your project.

3. Add the following code to the line before `<style>` in your project.
```html
<script src="./AgoraRTC_N-0.1.5.js"></script>
<script src="./AgoraRTC_N-0.1.6.js"></script>
```

> - In methods 2 and 3, the SDK exports a global `AgoraRTC` object. You can access this object to operate the Agora Web SDK NG.
6 changes: 0 additions & 6 deletions Release/AgoraRTC_N-0.1.5.js

This file was deleted.

6 changes: 6 additions & 0 deletions Release/AgoraRTC_N-0.1.6.js

Large diffs are not rendered by default.
