I'm running Docker in WSL2 mode. I found the condaforge/mambaforge image on Docker Hub and pulled it, but when preparing to deploy I noticed the image exposes no ports, as shown: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20250109/09c747e99d14296fce0f1defde203f9d.png) — it says "No ports exposed in this image". After creating a container, it won't start. How can I fix this?
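Two things may help here, sketched below under the assumption that the container exits immediately after starting. `EXPOSE` in an image is only documentation metadata; a missing `EXPOSE` never prevents you from publishing ports with `-p`. The container most likely stops because the mambaforge image's default command is not a long-running process:

```shell
# EXPOSE is metadata only; you can still publish any port with -p.
# Give the container a long-running command (an interactive shell here):
docker run -it --name mamba -p 8888:8888 condaforge/mambaforge bash

# Or keep it alive in the background so you can `docker exec` into it later:
docker run -d --name mamba condaforge/mambaforge tail -f /dev/null
```

The port number 8888 above is just an example; map whatever port your application actually listens on.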
Hi all. While building the Answer Q&A community, every docker build fails, both in Ubuntu under WSL and directly on Windows. After several rounds of changes the error became: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20241231/dad886f6a31534c9c55aeeb429e8f0d4.png)

Here is my Dockerfile:

# Stage 1: build the application
FROM golang:1.19-alpine AS golang-builder
LABEL maintainer="aichy@sf.com"

ARG GOPROXY
# ENV GOPROXY ${GOPROXY:-direct}
ENV GOPROXY=https://proxy.golang.com.cn,direct

ENV GOPATH /go
ENV GOROOT /usr/local/go
ENV PACKAGE github.com/answerdev/answer
ENV BUILD_DIR ${GOPATH}/src/${PACKAGE}
ENV ANSWER_MODULE ${BUILD_DIR}

ARG TAGS="sqlite sqlite_unlock_notify"
ENV TAGS "bindata timetzdata $TAGS"
ARG CGO_EXTRA_CFLAGS

COPY . ${BUILD_DIR}
WORKDIR ${BUILD_DIR}
RUN apk --no-cache add build-base git bash nodejs npm && npm install -g pnpm corepack \
    && pnpm config set registry https://registry.npm.taobao.org \
    && pnpm config set proxy http://your-proxy-url:port  # set pnpm's proxy; replace with your own proxy details
RUN make install-ui-packages clean build
RUN chmod 755 answer
RUN ["/bin/bash","-c","script/build_plugin.sh"]
RUN cp answer /usr/bin/answer
RUN mkdir -p /data/uploads && chmod 777 /data/uploads \
    && mkdir -p /data/i18n && cp -r i18n/*.yaml /data/i18n

# Stage 2: runtime image
FROM alpine
LABEL maintainer="maintainers@sf.com"
ENV TZ "Asia/Shanghai"
RUN apk update \
    && apk --no-cache add \
        bash \
        ca-certificates \
        curl \
        dumb-init \
        gettext \
        openssh \
        sqlite \
        gnupg \
    && echo "Asia/Shanghai" > /etc/timezone
COPY --from=golang-builder /usr/bin/answer /usr/bin/answer
COPY --from=golang-builder /data /data
COPY /script/entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
VOLUME /data
EXPOSE 80
ENTRYPOINT ["/entrypoint.sh"]
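One hedged guess, since the screenshot isn't readable here: the pnpm registry configured in the Dockerfile, registry.npm.taobao.org, has been decommissioned (its TLS certificate expired), and its official successor is registry.npmmirror.com. If the build fails at the pnpm/UI step, a sketch of the corrected RUN line under that assumption — and drop the proxy line entirely unless you really run a proxy at that address:

```dockerfile
# Assumption: the failure happens at the pnpm step. registry.npm.taobao.org
# was retired; registry.npmmirror.com is the official replacement mirror.
RUN apk --no-cache add build-base git bash nodejs npm && npm install -g pnpm corepack \
    && pnpm config set registry https://registry.npmmirror.com
```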
Our production environment is a multi-tenant SaaS shared by several customers. The application consists of a front-end UI plus two Java applications, and at runtime it needs to connect to databases on the host machine or elsewhere. How should I write Dockerfiles to package this into images? Ideas or rough approaches are welcome — what, concretely, should the Dockerfiles look like? Many thanks!
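A common approach for three separate processes is one image per component, composed together, with all database connection details injected via environment variables so the same images serve every tenant. A sketch, not a definitive layout — the service names, ports, and `DB_URL` variables below are assumptions to adapt to your project:

```yaml
# docker-compose sketch: one image per component, config via environment.
services:
  ui:
    build: ./ui            # e.g. an nginx image serving the built front end
    ports: ["80:80"]
  app-a:
    build: ./app-a         # e.g. FROM eclipse-temurin:17-jre + the app jar
    environment:
      - DB_URL=jdbc:mysql://host.docker.internal:3306/tenant_db
    extra_hosts:
      - "host.docker.internal:host-gateway"   # reach a DB on the Docker host (Linux)
  app-b:
    build: ./app-b
    environment:
      - DB_URL=${APP_B_DB_URL}   # per-deployment value supplied at startup
```

Keeping tenant-specific values out of the images and in per-deployment environment files is what makes the same build reusable across customers.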
After installing AMH into an Ubuntu container, it only starts when I run /etc/init.d/amh-start. If I put /etc/init.d/amh-start into the Dockerfile's CMD, the container shuts down by itself as soon as the startup script finishes.
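This is expected container behavior: a container lives exactly as long as its PID 1 process. Init-style scripts such as amh-start launch daemons in the background and then return, so PID 1 exits and the container stops with it. A minimal sketch of a CMD that keeps a foreground process alive after the script runs:

```dockerfile
# A container stays up only while its PID 1 runs. amh-start daemonizes and
# returns, so run it and then keep a trivial foreground process as PID 1:
CMD ["/bin/bash", "-c", "/etc/init.d/amh-start && tail -f /dev/null"]
```

A cleaner long-term fix is to run the actual service binary in the foreground as CMD, but the `tail -f /dev/null` pattern is the quickest way to keep an init-script-based image alive.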
Why is the Docker image for my Next.js project so much larger than the one for my Go server? The Next.js image is almost three times the size of the Go project's image. ("me" is the Next.js project; "ucalendar_service" is the Go server.) "4a27cf8c5ef4a617a9a0b21f2d74632.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20241230/9fe71776e194d9106803de491c136929.png)
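The size gap is structural: a Go service compiles to a single static binary that can ship in a tiny base image, while a Next.js image typically carries the whole Node.js runtime plus node_modules. You can shrink the Next.js image considerably with Next's standalone output and a multi-stage build — a sketch, assuming `output: 'standalone'` is set in next.config.js:

```dockerfile
# Sketch: multi-stage build using Next.js standalone output.
FROM node:20-alpine AS builder
WORKDIR /app
COPY . .
RUN npm ci && npm run build          # standalone output lands in .next/standalone

FROM node:20-alpine
WORKDIR /app
# Copy only the pruned server bundle and static assets, not all of node_modules:
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
CMD ["node", "server.js"]
```

Even so, the Node runtime alone is far larger than a Go binary, so some gap will remain.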
I pulled ZooKeeper 3.8.2 with Docker. I created a Docker network:

docker network create --driver bridge --subnet=172.168.0.0/16 --gateway=172.168.1.1 zoo-net

Container start commands — I plan to run 3 ZooKeeper containers:

docker run -d --name zoo-master --restart=always --network zoo-net --ip 172.168.0.2 \
  -p 2181:2181 -p 2888:2888 -p 3888:3888 -e ZOO_MY_ID=1 --privileged \
  -v /data/kafka_cluster/zookeeper/master/conf/zoo.cfg:/conf/zoo.cfg \
  -v /data/kafka_cluster/zookeeper/master/data:/data \
  -v /data/kafka_cluster/zookeeper/master/datalog:/datalog \
  -v /data/kafka_cluster/zookeeper/master/logs:/logs \
  zookeeper:3.8.2

docker run -d --name zoo-node1 --restart=always --network zoo-net --ip 172.168.0.3 \
  -p 2182:2181 -p 2887:2888 -p 3887:3888 -e ZOO_MY_ID=2 --privileged \
  -v /data/kafka_cluster/zookeeper/master/conf/zoo.cfg:/conf/zoo.cfg \
  -v /data/kafka_cluster/zookeeper/master/data:/data \
  -v /data/kafka_cluster/zookeeper/master/datalog:/datalog \
  -v /data/kafka_cluster/zookeeper/master/logs:/logs \
  zookeeper:3.8.2

docker run -d --name zoo-node2 --restart=always --network zoo-net --ip 172.168.0.4 \
  -p 2183:2181 -p 2886:2888 -p 3886:3888 -e ZOO_MY_ID=3 --privileged \
  -v /data/kafka_cluster/zookeeper/master/conf/zoo.cfg:/conf/zoo.cfg \
  -v /data/kafka_cluster/zookeeper/master/data:/data \
  -v /data/kafka_cluster/zookeeper/master/datalog:/datalog \
  -v /data/kafka_cluster/zookeeper/master/logs:/logs \
  zookeeper:3.8.2

This is the config file, identical for all three containers:

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=false
admin.enableServer=false
4lw.commands.whitelist=*
server.1=zoo-master:2888:3888;2181
server.2=zoo-node1:2887:3887;2182
server.3=zoo-node2:2886:3886;2183

The result looks like this: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20241225/9495fbe04d81bf66e6741cd01a8c343c.png)

zoo-master log: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20241225/d6de5e0605a91af7e81003a392ed8ac8.png)

Output of running "./zkServer.sh status" inside the container: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20241225/6c5eafbb00ae9ce027620e7c8a7a48ce.png)

Could someone please help?
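Two likely problems stand out in the commands and config above — a diagnostic sketch, hedged since the log screenshots aren't readable here:

```
# 1) All three containers mount the SAME host directories
#    (/data/kafka_cluster/zookeeper/master/...), so they share one data dir
#    and fight over the same myid file. Each node needs its own paths, e.g.:
#      zoo-node1:  -v /data/kafka_cluster/zookeeper/node1/data:/data   (etc.)
#
# 2) Inside the containers every instance listens on client port 2181; the
#    2182/2183 numbers only exist as host-side -p mappings. The trailing
#    ";port" in each server line is the in-container client port, so all
#    three should end in ;2181 (and, on a shared bridge network, the peers
#    can simply all use 2888:3888):
server.1=zoo-master:2888:3888;2181
server.2=zoo-node1:2888:3888;2181
server.3=zoo-node2:2888:3888;2181
```

With a user-defined bridge network the containers reach each other by name directly, so the per-node 2887/3887-style remappings are only needed for access from the host, not between peers.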
Here is my docker-compose.yml:

services:
  node:
    image: node
    env_file:
      - .dev.env
    environment:
      - MY_NAME=${MY_NAME}

When I start the service with the following command, the MY_NAME variable cannot be read:

docker-compose up

But with the "--env-file" option, the environment file works:

docker-compose --env-file=.dev.env up

What's going on here?
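These are two different mechanisms, which explains the behavior. `env_file:` injects variables into the *container* at runtime, whereas `${MY_NAME}` substitution in the YAML itself is resolved by Compose *while parsing the file*, from its own environment: the shell, a file named `.env`, or one passed via `--env-file`. A sketch annotating the same file:

```yaml
# Sketch (assumes the .dev.env file from the question defines MY_NAME).
services:
  node:
    image: node
    env_file:
      - .dev.env              # puts MY_NAME INSIDE the container at runtime
    environment:
      - MY_NAME=${MY_NAME}    # ${...} is expanded by Compose itself, which
                              # only reads the shell env / .env / --env-file,
                              # hence `--env-file=.dev.env` makes it work
```

So either rename the file to `.env`, always pass `--env-file=.dev.env`, or drop the redundant `environment:` entry and rely on `env_file:` alone.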
My Dockerfile:

FROM ghcr.io/puppeteer/puppeteer:latest
EXPOSE 4000

# set the working directory
WORKDIR /yice
RUN chmod -R 777 /yice

# copy the sources
COPY ./dist /yice/dist
COPY ./scripts /yice/scripts
COPY ./.env /yice/
COPY ./package.json /yice
COPY ./static /yice/static
COPY ./tsconfig.json /yice/
COPY ./tsconfig.build.json /yice/
COPY ./node_modules /yice/node_modules

CMD npm run start:prod

Error: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20250103/aa4df9d9d8f7e3ee94f62825f2191e56.png)

I have already tried the following:

* Adding "RUN chmod -R 777 /yice" to the Dockerfile.
* Adding "USER root" to the Dockerfile — then it reports: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20250103/add04e0ca3e42672921842849a67a692.png) that Chrome cannot be found. Since the ghcr.io/puppeteer/puppeteer:latest image switches to the user "pptruser", I manually configured puppeteer in code with "executablePath: '/home/pptruser/node_modules/chrome'", but then it again reports not found: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20250103/dc8b3c6e1d2edae81be0900d53ff74ed.png)
* Adding "USER pptruser" to the Dockerfile — it still reports: "image.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20250103/aa4df9d9d8f7e3ee94f62825f2191e56.png)

Could anyone point me in the right direction? Thanks 🙏
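A hedged observation on the Dockerfile above: the puppeteer image runs as `pptruser`, but plain `COPY` creates files owned by root — and the `RUN chmod -R 777 /yice` runs *before* the COPY lines, so it never touches the copied files. Copying the host's `node_modules` can also clobber or mismatch the Chrome install the image ships with. A sketch of a permissions-safe variant (the `npm install` step is an assumption, replacing the copied node_modules):

```dockerfile
FROM ghcr.io/puppeteer/puppeteer:latest
EXPOSE 4000

USER root
WORKDIR /yice
RUN chown pptruser:pptruser /yice
USER pptruser

# Copy with ownership matching the runtime user:
COPY --chown=pptruser:pptruser ./dist ./dist
COPY --chown=pptruser:pptruser ./package.json ./

# Reinstall dependencies inside the image instead of copying host node_modules,
# so puppeteer keeps pointing at the Chrome bundled with the base image:
RUN npm install --omit=dev

CMD ["npm", "run", "start:prod"]
```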
My Dockerfile starts with:

FROM python:3.11.5-bookworm
RUN echo "deb http://mirrors.aliyun.com/debian/ bookworm main non-free contrib" > /etc/apt/sources.list

For the "python:3.11.5-bookworm" image, it seems that every run of "docker build -t "ponponon/svddb_api:2023.09.08.1" ." goes to Docker Hub to check whether "python:3.11.5-bookworm" needs updating. Since connecting to Docker Hub is slow, I'd like the build to skip the online check whenever "python:3.11.5-bookworm" already exists locally.

╰─➤ docker images | grep python
python    3.10.10-bullseye    2b8b079d7548    5 months ago    912MB

From the output above I assumed I already had "python:3.11.5-bookworm" locally.

─➤ make build
docker build -t "ponponon/svddb_api:2023.09.08.1" .
[+] Building 256.2s (4/18)  docker:default
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 914B  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2.31kB  0.0s
 => [internal] load metadata for docker.io/library/python:3.11.5-bookworm  36.3s
 => [internal] load build context  0.1s
 => => transferring context: 176.86kB  0.1s
 => [ 1/14] FROM docker.io/library/python:3.11.5-bookworm@sha256:3d10a95a05674b7e42ac53267774880b255949e5e2aed  219.9s
 => => resolve docker.io/library/python:3.11.5-bookworm@sha256:3d10a95a05674b7e42ac53267774880b255949e5e2aed9be5  0.0s
 => => sha256:22c957c35e37bdd688c2bdda50dc72612477d6c9c393802163b14d197a568bff  7.53kB / 7.53kB  0.0s
 => => sha256:012c0b3e998c1a0c0bedcf712eaaafb188580529dd026a04aa1ce13fdb39e42b  49.56MB / 49.56MB  80.9s
 => => sha256:00046d1e755ea94fa55a700ca9a10597e4fac7c47be19d970a359b0267a51fbf  24.03MB / 24.03MB  43.2s
 => => sha256:9f13f5a53d118643c1f1ff294867c09f224d00edca21f56caa71c2321f8ca004  64.11MB / 64.11MB  69.5s
 => => sha256:3d10a95a05674b7e42ac53267774880b255949e5e2aed9be590143df33f95c64  1.65kB / 1.65kB  0.0s
 => => sha256:8a164692c20c8f51986d25c16caa6bf03bde14e4b6e6a4c06b5437d5620cc96c  2.01kB / 2.01kB  0.0s
 => => sha256:e13e76ad6279c3d69aa6842a935288c7db66878ec3b7815edd3bb34647bd7ed0  137.36MB / 210.99MB  219.9s
 => => sha256:ad4c837a72f8d2d63d64bf7f9d7c43fe9e67f3d82af7ac47e977a06b95ff7b3a  6.39MB / 6.39MB  92.7s
 => => extracting sha256:012c0b3e998c1a0c0bedcf712eaaafb188580529dd026a04aa1ce13fdb39e42b  0.6s
 => => sha256:0f546edb7ae0f7fecbac92a156849e2479dbf591ed0be9ac68e873da28c2a7a7  19.78MB / 19.78MB  120.8s
 => => extracting sha256:00046d1e755ea94fa55a700ca9a10597e4fac7c47be19d970a359b0267a51fbf  0.2s
 => => extracting sha256:9f13f5a53d118643c1f1ff294867c09f224d00edca21f56caa71c2321f8ca004  0.8s
 => => sha256:e2f1160974087f047a90d64ce50bd95d279c89309f32caeaa0b3503c253cab45  244B / 244B  107.9s
 => => sha256:a0d3c67a6b6b0b67a9e4735c18ae78ca68e2252e0bed7e6fc3912dd5e0e8f042  3.11MB / 3.11MB  124.2s

This part takes quite a while: "图片.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20250105/8f1969d4664b0f21e178aacf77082ba9.png)

Update: this may have been a false alarm on my part — I had never explicitly run "docker pull python:3.11.5-bookworm" before.
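As the update suggests, the image really wasn't local: the `docker images` listing shows 3.10.10-bullseye, not 3.11.5-bookworm, so the build had to download it. A sketch of how to verify and avoid the slow path next time:

```shell
# Check the exact tag locally (no output means it isn't there):
docker images python:3.11.5-bookworm

# Pull once explicitly; subsequent builds then reuse the local layers:
docker pull python:3.11.5-bookworm
```

Note that even with the image cached, BuildKit's "[internal] load metadata" step still contacts the registry to resolve the tag; pinning the FROM line to a digest (visible via `docker images --digests`) avoids that lookup.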
I started a MySQL docker container:

version: "3"
services:
  mysql8:
    container_name: mysql8
    image: mysql:8
    restart: always
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=123456
    volumes:
      - ./volumes/:/var/lib/mysql
      - ./my-custom.cnf:/etc/mysql/conf.d/my-custom.cnf

htop shows the mysql8 container using 556 MB of RES memory: "图片.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20241218/f88888161b2058236e8c1e3757b0fd3a.png)

docker stats shows a memory usage of 739.5 MB, which doesn't match htop: "图片.png" (https://wmprod.oss-cn-shanghai.aliyuncs.com/images/20241218/3214dec44fe20fbba35a236bd351f0a9.png)

I then read the mysql8 container's info with "docker's python sdk" (https://link.segmentfault.com/?enc=ksnkIxkjQttNM5iwRCd84g%3D%3D.l2VSPz5X9oZ8W8VtJAxCifOaRz2LnKFoeCxmCEyXDBAiaTpEMV3TWtA9iy%2BoTw8X). The code:

import json

import docker
from docker.models.containers import Container


def get_container_info(container: Container) -> None:
    short_id: str = container.short_id
    status: str = container.attrs['State']['Status']
    name: str = container.attrs['Name']
    image = container.image
    image_name = image.tags[0] if image.tags else image.short_id
    # stats() returns a generator of samples; take the first sample
    container_stats: dict = next(container.stats(decode=True))
    print(json.dumps(container_stats, indent=4))


client = docker.from_env()
container = client.containers.get('c50ca07d3d41')
get_container_info(container)

Output:

{
    "read": "2023-10-03T03:27:20.913142192Z",
    "preread": "0001-01-01T00:00:00Z",
    "pids_stats": { "current": 47, "limit": 33438 },
    "blkio_stats": {
        "io_service_bytes_recursive": [
            { "major": 259, "minor": 0, "op": "read", "value": 301391872 },
            { "major": 259, "minor": 0, "op": "write", "value": 1556733952 }
        ],
        "io_serviced_recursive": null,
        "io_queue_recursive": null,
        "io_service_time_recursive": null,
        "io_wait_time_recursive": null,
        "io_merged_recursive": null,
        "io_time_recursive": null,
        "sectors_recursive": null
    },
    "num_procs": 0,
    "storage_stats": {},
    "cpu_stats": {
        "cpu_usage": {
            "total_usage": 114408426000,
            "usage_in_kernelmode": 37930579000,
            "usage_in_usermode": 76477847000
        },
        "system_cpu_usage": 105739260000000,
        "online_cpus": 16,
        "throttling_data": { "periods": 0, "throttled_periods": 0, "throttled_time": 0 }
    },
    "precpu_stats": {
        "cpu_usage": {
            "total_usage": 0,
            "usage_in_kernelmode": 0,
            "usage_in_usermode": 0
        },
        "throttling_data": { "periods": 0, "throttled_periods": 0, "throttled_time": 0 }
    },
    "memory_stats": {
        "usage": 1346535424,
        "stats": {
            "active_anon": 4096,
            "active_file": 208646144,
            "anon": 546168832,
            "anon_thp": 0,
            "file": 779718656,
            "file_dirty": 0,
            "file_mapped": 36380672,
            "file_writeback": 0,
            "inactive_anon": 546164736,
            "inactive_file": 571072512,
            "kernel_stack": 770048,
            "pgactivate": 50248,
            "pgdeactivate": 0,
            "pgfault": 213604,
            "pglazyfree": 0,
            "pglazyfreed": 0,
            "pgmajfault": 453,
            "pgrefill": 0,
            "pgscan": 0,
            "pgsteal": 0,
            "shmem": 0,
            "slab": 17898776,
            "slab_reclaimable": 17314312,
            "slab_unreclaimable": 584464,
            "sock": 0,
            "thp_collapse_alloc": 0,
            "thp_fault_alloc": 0,
            "unevictable": 0,
            "workingset_activate": 0,
            "workingset_nodereclaim": 0,
            "workingset_refault": 0
        },
        "limit": 29296934912
    },
    "name": "/mysql8",
    "id": "c50ca07d3d41b26b9fca21bb90b942919066352c0f1866750afb7f2ab611d5ea",
    "networks": {
        "eth0": {
            "rx_bytes": 169903746,
            "rx_packets": 423759,
            "rx_errors": 0,
            "rx_dropped": 0,
            "tx_bytes": 18029112,
            "tx_packets": 228139,
            "tx_errors": 0,
            "tx_dropped": 0
        }
    }
}

Here memory_stats.usage is 1346535424, which works out to 1346535424/1024/1024/1024 ≈ 1.25 GB — a figure that matches neither htop nor docker stats.

So I have two questions:

Question 1: it looks like some of the fields under memory_stats.stats need to be combined to reproduce the htop and docker stats numbers, but I don't know which ones.

Question 2: why do htop and docker stats themselves disagree?