audio - Receiving an audio stream with GStreamer results in a "not-negotiated" error

Tags: audio gstreamer multicast gstreamer-1.0

I want to stream audio data from a MIC using GStreamer.
However, I cannot play the MIC audio with the rx pipeline.
How can I play the audio stream coming from the MIC input?

tx: gst-launch-1.0 -v alsasrc device="hw:0" ! decodebin ! audioconvert ! rtpL16pay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004

rx: gst-launch-1.0 udpsrc multicast-group=239.0.0.1 port=5004 caps="application/x-rtp" ! rtpL16depay ! alsasink

rx result:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstUDPSrc:udpsrc0: Internal data flow error.
Additional debug info:
../../../../gstreamer-1.8.1/libs/gst/base/gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstUDPSrc:udpsrc0:
streaming task paused, reason not-negotiated (-4)
Execution ended after 0:00:00.009364000
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
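
As a diagnostic aid, the point where negotiation fails can usually be narrowed down by raising the GStreamer debug level on the receiver; a minimal sketch (GST_DEBUG=3 prints warnings, higher values print more detail):

GST_DEBUG=3 gst-launch-1.0 udpsrc multicast-group=239.0.0.1 port=5004 caps="application/x-rtp" ! rtpL16depay ! alsasink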



The tx result is as follows:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstAudioSrcClock
/GstPipeline:pipeline0/GstAlsaSrc:alsasrc0: actual-buffer-time = 200000
/GstPipeline:pipeline0/GstAlsaSrc:alsasrc0: actual-latency-time = 10000
/GstPipeline:pipeline0/GstAlsaSrc:alsasrc0.GstPad:src: caps = "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstGhostPad:sink.GstProxyPad:proxypad0: caps = "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:src: caps = "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
Redistribute latency...
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:src: caps = "audio/x-raw\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ format\=(string)S16BE\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:src: caps = "application/x-rtp\,\ media\=(string)audio\,\ clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\ encoding-params\=(string)2\,\ channels\=(int)2\,\ payload\=(int)96\,\ ssrc\=(uint)3961155089\,\ timestamp-offset\=(uint)725507323\,\ seqnum-offset\=(uint)20783"
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = "application/x-rtp\,\ media\=(string)audio\,\ clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\ encoding-params\=(string)2\,\ channels\=(int)2\,\ payload\=(int)96\,\ ssrc\=(uint)3961155089\,\ timestamp-offset\=(uint)725507323\,\ seqnum-offset\=(uint)20783"
/GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = "application/x-rtp\,\ media\=(string)audio\,\ clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\ encoding-params\=(string)2\,\ channels\=(int)2\,\ payload\=(int)96\,\ ssrc\=(uint)3961155089\,\ timestamp-offset\=(uint)725507323\,\ seqnum-offset\=(uint)20783"
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = "application/x-rtp\,\ media\=(string)audio\,\ clock-rate\=(int)44100\,\ encoding-name\=(string)L16\,\ encoding-params\=(string)2\,\ channels\=(int)2\,\ payload\=(int)96\,\ ssrc\=(uint)3961155089\,\ timestamp-offset\=(uint)725507323\,\ seqnum-offset\=(uint)20783"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0.GstPad:sink: caps = "audio/x-raw\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ format\=(string)S16BE\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstAudioConvert:audioconvert0.GstPad:sink: caps = "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstDecodePad:src_0.GstProxyPad:proxypad1: caps = "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0/GstTypeFindElement:typefind.GstPad:sink: caps = "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstDecodeBin:decodebin0.GstGhostPad:sink: caps = "audio/x-raw\,\ format\=(string)S16LE\,\ layout\=(string)interleaved\,\ rate\=(int)44100\,\ channels\=(int)2\,\ channel-mask\=(bitmask)0x0000000000000003"
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: timestamp = 725507323
/GstPipeline:pipeline0/GstRtpL16Pay:rtpl16pay0: seqnum = 20783



I think the rx pipeline is wrong, but I cannot find a solution.
Please tell me how to build the pipeline.

PS:
I tried the following commands, and then the rx side played the microphone audio! Does this mean the receiver device cannot play L16 audio?

tx: gst-launch-1.0 -v alsasrc device="hw:0" ! decodebin ! audioconvert ! audioresample ! alawenc ! rtppcmapay ! queue ! udpsink host=239.0.0.1 auto-multicast=true port=5004

rx: gst-launch-1.0 udpsrc multicast-group=239.0.0.1 port=5004 caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMA, encoding-params=(string)2, channels=(int)1, payload=(int)8" ! rtppcmadepay ! alawdec ! alsasink

Best answer

You need to specify the full caps on the receiving side. Try the following pipeline:

gst-launch-1.0 udpsrc multicast-group=239.0.0.1 port=5004 caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)2, channels=(int)2, payload=(int)96' ! rtpL16depay ! audioconvert ! alsasink
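
If the stream then negotiates but playback stutters, a slightly more tolerant receiver can be sketched by adding a jitter buffer and a resampler (rtpjitterbuffer and audioresample are standard GStreamer elements; the caps string is copied from the sender's -v output above, so adjust it if your sender prints different values):

gst-launch-1.0 udpsrc multicast-group=239.0.0.1 port=5004 caps='application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, encoding-params=(string)2, channels=(int)2, payload=(int)96' ! rtpjitterbuffer ! rtpL16depay ! audioconvert ! audioresample ! alsasink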

Regarding "audio - Receiving an audio stream with GStreamer results in a not-negotiated error", a similar question was found on Stack Overflow: https://stackoverflow.com/questions/48221472/
