python - Decoding and displaying a chunked H.264 video sequence from the Pi camera with Python

Tags: python opencv raspberry-pi h.264 picamera

I want to decode an H.264 video sequence and display it on the screen. The video sequence comes from the Pi camera, and I capture it with the following code:

import io
import picamera

stream = io.BytesIO()
while True:
    with picamera.PiCamera() as camera:
        camera.resolution = (640, 480)
        camera.start_recording(stream, format='h264', quality=23)
        camera.wait_recording(15)
        camera.stop_recording()

Is there a way to decode the `stream` data and display it with OpenCV or another Python library?

Best Answer

I found a solution using ffmpeg-python.
I could not verify the solution on a Raspberry Pi, so I am not sure whether it will work for you.

Assumptions:

  • stream holds the entire captured H.264 stream in a memory buffer.
  • You do not want to write the stream to a file.

The solution does the following:

  • Executes FFmpeg in a subprocess with stdin as the input pipe and stdout as the output pipe.
    The input is the video stream (memory buffer).
    The output format is raw video frames in BGR pixel format.
  • Writes the stream content to the pipe (to stdin).
  • Reads the decoded video frame by frame, and displays each frame (using cv2.imshow).

Here is the code:

import ffmpeg
import numpy as np
import cv2
import io

width, height = 640, 480


# 'stream' is the io.BytesIO buffer filled by the capture code in the question
# Seek to the beginning of the stream
stream.seek(0)

# Execute FFmpeg in a subprocess with stdin as input pipe and stdout as output pipe
# The input is going to be the video stream (memory buffer)
# The output format is raw video frames in BGR pixel format.
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# https://github.com/kkroening/ffmpeg-python/issues/156
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
process = (
    ffmpeg
    .input('pipe:')
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)


# https://stackoverflow.com/questions/20321116/can-i-pipe-a-io-bytesio-stream-to-subprocess-popen-in-python
# https://gist.github.com/waylan/2353749
process.stdin.write(stream.getvalue())  # Write stream content to the pipe
process.stdin.close()  # close stdin (flush and send EOF)


# Read the decoded video (frame by frame), and display each frame (using cv2.imshow)
while True:
    # Read raw video frame from stdout as bytes array.
    in_bytes = process.stdout.read(width * height * 3)

    if not in_bytes:
        break

    # Transform the bytes read into a NumPy array
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )

    # Display the frame
    cv2.imshow('in_frame', in_frame)

    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

process.wait()
cv2.destroyAllWindows()

Note: I used stdin and stdout as pipes (rather than named pipes), because I want the code to work on Windows as well.
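One caveat worth noting: writing the whole buffer to stdin before reading any of stdout can deadlock once the video is large enough to fill the OS pipe buffer, because FFmpeg blocks on writing decoded frames that nobody is reading yet. A minimal sketch of the usual workaround, feeding stdin from a background thread while the main thread drains stdout (using `cat` as a stand-in for the FFmpeg process, since the plumbing pattern is identical):

```python
import subprocess
import threading


def pipe_through(data, cmd):
    """Feed `data` to a subprocess on a background thread while the main
    thread drains stdout, so neither pipe buffer can fill up and deadlock."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def writer():
        proc.stdin.write(data)
        proc.stdin.close()  # flush and send EOF so the subprocess can finish

    t = threading.Thread(target=writer)
    t.start()
    out = proc.stdout.read()  # keep the output pipe drained on the main thread
    t.join()
    proc.wait()
    return out


# 'cat' simply echoes stdin to stdout; with the FFmpeg process you would
# instead read frame-sized chunks in a loop, exactly as in the code above.
# (Note: 'cat' is a POSIX tool, so this demo line will not run on Windows.)
payload = b'\x00' * (1 << 20)  # 1 MiB, larger than a typical 64 KiB pipe buffer
assert pipe_through(payload, ['cat']) == payload
```

For the 15-second 640x480 clip in the question the single `write()` will usually fit, but the threaded writer makes the code safe for arbitrarily long recordings.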

---

To test the solution, I created a sample video file and read it into a memory buffer (encoded as H.264).
I used the memory buffer as input to the code above (replacing your stream).

Here is the complete code, including the test code:

import ffmpeg
import numpy as np
import cv2
import io

in_filename = 'in.avi'

# Build a synthetic video for testing:
###############################################
# Equivalent command line: ffmpeg -y -r 10 -f lavfi -i testsrc=size=160x120:rate=1 -c:v libx264 -t 5 in.avi
width, height = 160, 120

(
    ffmpeg
    .input('testsrc=size={}x{}:rate=1'.format(width, height), r=10, f='lavfi')
    .output(in_filename, vcodec='libx264', crf=23, t=5)
    .overwrite_output()
    .run()
)
###############################################


# Use ffprobe to get video frames resolution
###############################################
p = ffmpeg.probe(in_filename, select_streams='v')
width = p['streams'][0]['width']
height = p['streams'][0]['height']
n_frames = int(p['streams'][0]['nb_frames'])
###############################################


# Stream the entire video as one large array of bytes
###############################################
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
in_bytes, _ = (
    ffmpeg
    .input(in_filename)
    .video # Video only (no audio).
    .output('pipe:', format='h264', crf=23)
    .run(capture_stdout=True) # Run synchronously, and capture stdout
)
###############################################


# Open In-memory binary streams
stream = io.BytesIO(in_bytes)

# Execute FFmpeg in a subprocess with stdin as input pipe and stdout as output pipe
# The input is going to be the video stream (memory buffer)
# The output format is raw video frames in BGR pixel format.
# https://github.com/kkroening/ffmpeg-python/blob/master/examples/README.md
# https://github.com/kkroening/ffmpeg-python/issues/156
# http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
process = (
    ffmpeg
    .input('pipe:')
    .video
    .output('pipe:', format='rawvideo', pix_fmt='bgr24')
    .run_async(pipe_stdin=True, pipe_stdout=True)
)


# https://stackoverflow.com/questions/20321116/can-i-pipe-a-io-bytesio-stream-to-subprocess-popen-in-python
# https://gist.github.com/waylan/2353749
process.stdin.write(stream.getvalue())  # Write stream content to the pipe
process.stdin.close()  # close stdin (flush and send EOF)


# Read the decoded video (frame by frame), and display each frame (using cv2.imshow)
while True:
    # Read raw video frame from stdout as bytes array.
    in_bytes = process.stdout.read(width * height * 3)

    if not in_bytes:
        break

    # Transform the bytes read into a NumPy array
    in_frame = (
        np
        .frombuffer(in_bytes, np.uint8)
        .reshape([height, width, 3])
    )

    # Display the frame
    cv2.imshow('in_frame', in_frame)

    if cv2.waitKey(100) & 0xFF == ord('q'):
        break

process.wait()
cv2.destroyAllWindows()
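For reference, the ffprobe step in the listing above relies on the shape of the dictionary that `ffmpeg.probe` returns. A hedged sketch of that shape (field names follow ffprobe's JSON output, but the values here are illustrative, not probed from a real file):

```python
# Hedged sketch: the dict ffmpeg.probe() returns for a file with one
# video stream. Values are illustrative, not taken from a real probe.
probe = {
    'streams': [
        {'codec_type': 'video', 'width': 160, 'height': 120, 'nb_frames': '50'},
    ],
}

# Selecting the stream by codec_type is a slightly more defensive
# alternative to passing select_streams='v' to ffmpeg.probe().
video = next(s for s in probe['streams'] if s.get('codec_type') == 'video')
width, height = video['width'], video['height']
n_frames = int(video['nb_frames'])  # ffprobe reports frame counts as strings
assert (width, height, n_frames) == (160, 120, 50)
```

Note that `nb_frames` is parsed with `int()` because ffprobe serializes it as a string; `width` and `height` arrive as integers already.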

Regarding "python - Decoding and displaying a chunked H.264 video sequence from the Pi camera with Python", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/59998641/
