
Array buffer allocation failed while trying to decode large files #75

Open
sorokinvj opened this issue May 17, 2020 · 1 comment

sorokinvj commented May 17, 2020

First of all, thanks for the lib; it saved me a lot of effort.

I have an AWS Lambda function that extracts peaks from audio files. Whenever I upload a file to S3, this Lambda is triggered and returns an array of numbers, which I then send to waveplayer.js in my UI to draw the waveform.

Everything works great for small files, but I'm stuck with large ones (about 1 hour long, around 200 MB): the Lambda either exceeds its timeout or fails with this error:

{
  "errorType": "RangeError",
  "errorMessage": "Array buffer allocation failed",
  "trace": [
    "RangeError: Array buffer allocation failed",
    "    at new ArrayBuffer (<anonymous>)",
    "    at new Float32Array (<anonymous>)",
    "    at Asset.<anonymous> (/var/task/node_modules/av/src/asset.js:118:15)",
    "    at Asset.cb (/var/task/node_modules/av/src/core/events.js:51:19)",
    "    at Asset.EventEmitter.emit (/var/task/node_modules/av/src/core/events.js:64:12)",
    "    at Class.<anonymous> (/var/task/node_modules/av/src/asset.js:202:24)",
    "    at Class.EventEmitter.emit (/var/task/node_modules/av/src/core/events.js:64:12)",
    "    at Class.Decoder.decode (/var/task/node_modules/av/src/decoder.js:90:14)",
    "    at M4ADemuxer.<anonymous> (/var/task/node_modules/av/src/decoder.js:54:26)",
    "    at M4ADemuxer.EventEmitter.emit (/var/task/node_modules/av/src/core/events.js:64:12)"
  ]
}
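For what it's worth, the numbers make this RangeError plausible: the stack trace shows the av decoder allocating a single Float32Array for the entire decoded file, and (assuming typical 44.1 kHz stereo, which the actual file may differ from) an hour of PCM is over a gigabyte before any of your own code runs:

```javascript
// Rough memory needed to hold one hour of decoded audio as Float32 PCM
const seconds = 60 * 60       // 1 hour
const sampleRate = 44100      // assumed typical rate; the real file may differ
const channels = 2            // assumed stereo
const bytesPerSample = 4      // Float32Array element size

const bytes = seconds * sampleRate * channels * bytesPerSample
console.log((bytes / 1024 ** 3).toFixed(2) + ' GiB') // about 1.18 GiB
```

That is well beyond the default Lambda memory allocation, so the allocation fails (or the decode simply runs past the timeout first).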

Could you guys help me with this, please? I've run out of ideas about where to look and what to try.

This is my code:

// helper function to filter data
const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0) // We only need to work with one channel of data
  const samples = 50 // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples) // the number of samples in each subdivision
  const filteredData = []
  for (let i = 0; i < samples; i++) {
    let blockStart = blockSize * i // the location of the first sample in the block
    let sum = 0
    for (let j = 0; j < blockSize; j++) {
      sum = sum + Math.abs(rawData[blockStart + j]) // find the sum of all the samples in the block
    }
    filteredData.push(sum / blockSize) // divide the sum by the block size to get the average
  }
  return filteredData
}

// helper for normalizing values
const normalizeData = filteredData => {
  const multiplier = Math.pow(Math.max(...filteredData), -1)
  return filteredData.map(n => n * multiplier)
}

// main function
async function getPeaks(fileBuffer) {
  console.log('>>>> starting decode...')
  const AudioContext = require('web-audio-api').AudioContext
  const context = new AudioContext()
  const peaksPromise = new Promise((resolve, reject) => {
    try {
      context.decodeAudioData(
        fileBuffer,
        audioBuffer => {
          const peaks = normalizeData(filterData(audioBuffer))
          console.log('>>>> finished decoding...', peaks)
          resolve(peaks)
        },
        error => {
          // async decode failures land here, not in the catch below
          console.log('error on decoding', error)
          reject(error)
        }
      )
    } catch (error) {
      console.log('error on decoding', error)
      reject(error)
    }
  })
  return peaksPromise
}

module.exports = getPeaks

@anandarpit

+1
