I have two separate docker stacks, one for HBase and one for Spark. I need to get the HBase jars onto the Spark classpath. One way to do this without modifying the Spark containers is to use a volume. In my HBase docker-compose.yml I define a volume pointing to the HBase home directory (which happens to be /opt/hbase-1.2.6). Is it possible to share that volume with the Spark stack?
Right now, because the two docker-compose files belong to different projects, Compose prefixes the volume name with each project name (hbase_hbasehome and spark_hbasehome), so the two stacks end up with two distinct volumes and sharing fails. A sketch of the situation follows.
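To make the failure concrete, here is a minimal sketch of the two stacks; the file paths, service names, and image names are illustrative assumptions, not taken from the actual setup:

# hbase/docker-compose.yml -- Compose prefixes the volume with the
# project name (the directory name by default), creating "hbase_hbasehome"
version: '2'
services:
  hbase:
    image: my-hbase-image          # placeholder image
    volumes:
      - hbasehome:/opt/hbase-1.2.6
volumes:
  hbasehome:

# spark/docker-compose.yml -- the same declaration here creates a
# separate volume, "spark_hbasehome", so nothing is actually shared
version: '2'
services:
  spark:
    image: my-spark-image          # placeholder image
    volumes:
      - hbasehome:/opt/hbase-1.2.6
volumes:
  hbasehome: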
Best Answer
You can use an external volume. See the official documentation here:
If set to true, specifies that this volume has been created outside of Compose. docker-compose up does not attempt to create it, and raises an error if it doesn't exist.
external cannot be used in conjunction with other volume configuration keys (driver, driver_opts).
In the example below, instead of attempting to create a volume called [projectname]_data, Compose looks for an existing volume simply called data and mounts it into the db service's containers.
For example:
version: '2'

services:
  db:
    image: postgres
    volumes:
      - data:/var/lib/postgresql/data

volumes:
  data:
    external: true
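One caveat worth noting: since Compose will not create an external volume, it has to exist before either stack is brought up, e.g.:

# Create the volume once, outside of any Compose project; after this,
# docker-compose up finds it instead of raising an error.
docker volume create data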
You can also specify the name of the volume separately from the name used to refer to it within the Compose file:
volumes:
  data:
    external:
      name: actual-name-of-volume
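Applied to the question, both stacks can declare the same pre-created volume as external. A minimal sketch, assuming the volume is named hbase-home; the service and image names are placeholders:

# Create the shared volume once:  docker volume create hbase-home

# hbase/docker-compose.yml
version: '2'
services:
  hbase:
    image: my-hbase-image          # placeholder image
    volumes:
      - hbasehome:/opt/hbase-1.2.6
volumes:
  hbasehome:
    external:
      name: hbase-home

# spark/docker-compose.yml -- mounts the same volume read-only so the
# Spark containers can see the HBase jars without modifying them
version: '2'
services:
  spark:
    image: my-spark-image          # placeholder image
    volumes:
      - hbasehome:/opt/hbase-1.2.6:ro
volumes:
  hbasehome:
    external:
      name: hbase-home

Note that Docker populates an empty named volume from the image's content the first time it is mounted, so starting the HBase container first fills the volume with the contents of /opt/hbase-1.2.6, and the Spark containers then see the same jars.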
Regarding "apache-spark - Share a volume between docker stacks?", a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/50476275/