iOS audio recorder visualization in Swift

Tags: ios swift avaudioplayer audio-recording

I want to visualize the recording the way the stock Voice Memos app does:

[screenshot: waveform view in the stock Voice Memos app]

I know I can get the levels via -updateMeters together with -peakPowerForChannel: and -averagePowerForChannel:.
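For reference, polling those meters usually looks something like the sketch below, assuming an AVAudioRecorder (the function name, the 0.05 s interval and the print are placeholders; creating and configuring the recorder is omitted):

import AVFoundation

// Starts recording with metering enabled and polls the meter values on a timer.
// Both readings are in decibels, where 0 dBFS is full scale.
func startMeteredRecording(with recorder: AVAudioRecorder) -> Timer {
    recorder.isMeteringEnabled = true
    recorder.record()
    return Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
        recorder.updateMeters()
        let average = recorder.averagePower(forChannel: 0)
        let peak = recorder.peakPower(forChannel: 0)
        print(average, peak)   // feed these into whatever draws the bars
    }
}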

But how do I draw the graph? Should I do it with custom drawing, or is there a free/paid library I can use?

Best answer

I ran into the same problem. I wanted to create a Voice Memos clone, recently found a solution, and wrote an article about it on Medium.

I created a subclass of UIView and drew the bars with Core Graphics inside draw(_ rect: CGRect).

import UIKit

class AudioVisualizerView: UIView {

// Bar width
var barWidth: CGFloat = 4.0
// Indicate that waveform should draw active/inactive state
var active = false {
    didSet {
        if self.active {
            self.color = UIColor.red.cgColor
        }
        else {
            self.color = UIColor.gray.cgColor
        }
    }
}
// Color for bars
var color = UIColor.gray.cgColor
// Given waveforms
var waveforms: [Int] = Array(repeating: 0, count: 100)

// MARK: - Init
override init (frame : CGRect) {
    super.init(frame : frame)
    self.backgroundColor = UIColor.clear
}

required init?(coder decoder: NSCoder) {
    super.init(coder: decoder)
    self.backgroundColor = UIColor.clear
}

// MARK: - Draw bars
override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else {
        return
    }
    context.clear(rect)
    context.setFillColor(red: 0, green: 0, blue: 0, alpha: 0)
    context.fill(rect)
    context.setLineWidth(1)
    context.setStrokeColor(self.color)
    let w = rect.size.width                   // drawable width
    let h = rect.size.height                  // drawable height
    let t = Int(w / self.barWidth)            // number of bars that fit in the view
    let s = max(0, self.waveforms.count - t)  // first sample index to draw (only the most recent bars)
    let m = h / 2                             // vertical midline
    let r = self.barWidth / 2                 // half bar width, used as the cap radius
    let x = m - r                             // maximum bar extent above/below the midline
    var bar: CGFloat = 0                      // index of the bar currently being drawn
    for i in s ..< self.waveforms.count {
        var v = h * CGFloat(self.waveforms[i]) / 50.0
        if v > x {
            v = x
        }
        else if v < 3 {
            v = 3
        }
        let oneX = bar * self.barWidth
        var oneY: CGFloat = 0
        let twoX = oneX + r
        var twoY: CGFloat = 0
        var twoS: CGFloat = 0
        var twoE: CGFloat = 0
        var twoC: Bool = false
        let threeX = twoX + r
        let threeY = m
        // Odd-indexed bars extend above the midline, even-indexed bars below it;
        // the arc between the two vertical strokes gives each bar a rounded cap.
        if i % 2 == 1 {
            oneY = m - v
            twoY = m - v
            twoS = -180.degreesToRadians
            twoE = 0.degreesToRadians
            twoC = false
        }
        else {
            oneY = m + v
            twoY = m + v
            twoS = 180.degreesToRadians
            twoE = 0.degreesToRadians
            twoC = true
        }
        context.move(to: CGPoint(x: oneX, y: m))
        context.addLine(to: CGPoint(x: oneX, y: oneY))
        context.addArc(center: CGPoint(x: twoX, y: twoY), radius: r, startAngle: twoS, endAngle: twoE, clockwise: twoC)
        context.addLine(to: CGPoint(x: threeX, y: threeY))
        context.strokePath()
        bar += 1
    }
  }

}

// The angle math in draw(_:) relies on a degrees-to-radians helper on Int; the answer
// doesn't show its definition, but something along these lines is needed to compile:
extension Int {
    var degreesToRadians: CGFloat {
        return CGFloat(self) * CGFloat.pi / 180
    }
}
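If it helps to see the view in isolation, here is one minimal way to put it on screen and push a level into it (the frame, the value 27 and the sliding-window update are just for illustration; my recording code below refills the whole waveforms array on every render pass instead):

// Somewhere in a view controller:
let audioView = AudioVisualizerView(frame: CGRect(x: 0, y: 100, width: 320, height: 120))
view.addSubview(audioView)

// Whenever a new level (0...49) arrives, slide the window and redraw:
audioView.waveforms.removeFirst()
audioView.waveforms.append(27)
audioView.active = true
audioView.setNeedsDisplay()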

For the recording part, I used the installTap(onBus:bufferSize:format:block:) instance method to record, monitor, and observe the output of the input node.

// Note: this fragment needs `import AVFoundation` and `import Accelerate` (for vDSP_meamgv).
// self.audioEngine, self.format(), self.renderTs, self.recordingTs, self.timeLabel,
// self.audioView and the toTimeString extension are properties/helpers from the linked project.
let inputNode = self.audioEngine.inputNode
guard let format = self.format() else {
    return
}

inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { (buffer, time) in
    let level: Float = -50                 // silence threshold in decibels
    let length: UInt32 = 1024
    buffer.frameLength = length
    let channels = UnsafeBufferPointer(start: buffer.floatChannelData, count: Int(buffer.format.channelCount))
    var value: Float = 0
    // vDSP_meamgv computes the mean of the magnitudes of the first channel's samples.
    vDSP_meamgv(channels[0], 1, &value, vDSP_Length(length))
    // Convert that mean to decibels; the next lines clamp it to the -100...0 dB range.
    var average: Float = ((value == 0) ? -100 : 20.0 * log10f(value))
    if average > 0 {
        average = 0
    } else if average < -100 {
        average = -100
    }
    let silent = average < level
    let ts = NSDate().timeIntervalSince1970
    // Re-render at most every 0.1 seconds.
    if ts - self.renderTs > 0.1 {
        let floats = UnsafeBufferPointer(start: channels[0], count: Int(buffer.frameLength))
        let frame = floats.map({ (f) -> Int in
            return Int(f * Float(Int16.max))
        })
        DispatchQueue.main.async {
            let seconds = (ts - self.recordingTs)
            self.timeLabel.text = seconds.toTimeString
            self.renderTs = ts
            let len = self.audioView.waveforms.count
            // Downsample the captured frame into the view's waveform slots (each clamped to 0...49).
            for i in 0 ..< len {
                let idx = ((frame.count - 1) * i) / len
                let f: Float = sqrt(1.5 * abs(Float(frame[idx])) / Float(Int16.max))
                self.audioView.waveforms[i] = min(49, Int(f * 50))
            }
            self.audioView.active = !silent
            self.audioView.setNeedsDisplay()
        }
    }
}
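Note that the tap only delivers buffers once the engine is running (and the app has microphone permission). After installing the tap, the engine still has to be started, roughly like this (error handling simplified):

self.audioEngine.prepare()
do {
    try self.audioEngine.start()
} catch {
    print("Could not start the audio engine: \(error)")
}

// When recording stops:
self.audioEngine.inputNode.removeTap(onBus: 0)
self.audioEngine.stop()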
Here is the article I wrote; I hope you'll find what you're looking for there: https://medium.com/flawless-app-stories/how-i-created-apples-voice-memos-clone-b6cd6d65f580

The project is also available on GitHub: https://github.com/HassanElDesouky/VoiceMemosClone

Please note that I'm still a beginner, so I'm sorry my code doesn't look very clean!

The original question about iOS audio recorder visualization in Swift is on Stack Overflow: https://stackoverflow.com/questions/29369874/
