RFE: create multi-arch manifests #1136
For completeness, I also tried copying the different style of OCI list generated in containers/buildah#2858, but got the same error message.
Thanks for your report. The copy pipeline in skopeo (containers/image) operates on a single image at a time, so it cannot currently push an OCI index like this to a registry as a manifest list.
Looking further, if the destination is a registry, you can copy one image at a time from the OCI layout, currently only if the images have different tags; containers/image#1072 will add a way to reference unnamed images in an OCI index.
@mtrmac my goal is to make a multi-architecture image. I expect it to appear as such on Docker Hub. As far as I can tell, there is no way to create a proper multi-architecture image with skopeo, though the tooling gets very close.
Both Buildah and Podman have a manifest subcommand for creating and pushing manifest lists.
Both of those require user namespace support and a fully set-up local container storage. Skopeo already generates the OCI directory merged from multiple images; it just generates the wrong type of JSON manifest.
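For reference, a rough sketch of how such a merged OCI layout can be produced today (the destination path and tags here are made up):

```sh
# Copy each per-architecture instance into the same OCI layout under its own tag.
skopeo --override-arch amd64 copy docker://docker.io/library/ubuntu:21.04 oci:merged:amd64
skopeo --override-arch arm64 --override-variant v8 copy docker://docker.io/library/ubuntu:21.04 oci:merged:arm64

# index.json now lists both images, but as separately tagged entries in an OCI
# image index, not as a manifest list that can be pushed as one multi-arch image.
cat merged/index.json
```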
I wanted to be able to copy an image from Docker Hub and import it into my private repo, including all multi-arch references. However, in line with this issue, it seems skopeo (I am running the latest version, 1.2.0) cannot read or write Docker Hub multi-arch manifests. I first started by trying to just read the manifest but ran into the following (see Observations below). Perhaps I am doing something wrong? It would be great if skopeo did get multi-arch support, with ARM-based processors becoming more prominent now and even more so in the future.

Observations

    skopeo inspect docker://ubuntu:21.04

fails with:

    FATA[0003] Error parsing manifest for image: Error choosing image instance: no image found in manifest list for architecture amd64, variant "", OS darwin

However, when you add the --raw flag:

    skopeo inspect --raw docker://ubuntu:21.04 | jq

Output:

{
"manifests": [
{
"digest": "sha256:eb9086d472747453ad2d5cfa10f80986d9b0afb9ae9c4256fe2887b029566d06",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "amd64",
"os": "linux"
},
"size": 943
},
{
"digest": "sha256:017b74c5d97855021c7bde7e0d5ecd31bd78cad301dc7c701bb99ae2ea903857",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "arm",
"os": "linux",
"variant": "v7"
},
"size": 943
},
{
"digest": "sha256:bb48336f1dd075aa11f9e819fbaa642208d7d92b7ebe38cb202b0187e1df8ed4",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "arm64",
"os": "linux",
"variant": "v8"
},
"size": 943
},
{
"digest": "sha256:29c2f09290253a0883690761f411cbe5195cd65a4f23ff40bf66d7586d72ebb7",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "ppc64le",
"os": "linux"
},
"size": 943
},
{
"digest": "sha256:e8e0c3580fc5948141d8f60c062e0640d4c7e02d10877a19a433573555eda25b",
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"platform": {
"architecture": "s390x",
"os": "linux"
},
"size": 943
}
],
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"schemaVersion": 2
}

If you inspect an image that does not support multi-arch, the returned manifest is presented differently.

    skopeo inspect --raw docker://jenkins/jenkins:latest | jq

Output:

{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 16488,
"digest": "sha256:f98e5f96106f5484d49bd725e1bac1fa92974ec2688783b153ef815a33680f70"
},
"layers": [
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 45380658,
"digest": "sha256:3192219afd04f93d90f0af7f89cb527d1af2a16975ea391ea8517c602ad6ddb6"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 10797219,
"digest": "sha256:17c160265e75550c2ed099aa7d3906b3fef0bf046a2aeead136f8e587a015159"
},
{
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
"size": 4340216,
"digest": "sha256:cc4fe40d0e618e3319afb689c3570bb87e8e8cf51bca080364d1552317bc66c2"
},
... redacted for brevity
]
}
@darktempla It does know how to read the manifest list from the registry, but it resolves it to a specific OS and architecture: by default your local one. Ubuntu doesn't have an amd64/darwin build, so there's no image that matches your local platform. Try using the global --override-os linux option; I can't remember offhand whether --override-arch is also needed in this case.
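For example, something along these lines (a sketch using skopeo's global platform-override flags):

```sh
# Ask for the linux/amd64 instance explicitly instead of the local darwin platform.
skopeo --override-os linux --override-arch amd64 inspect docker://ubuntu:21.04

# Or bypass platform selection entirely and look at the manifest list itself.
skopeo inspect --raw docker://ubuntu:21.04 | jq .
```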
@darktempla Yeah, you’re basically running into an artifact of how skopeo inspect resolves a manifest list to a single per-platform image. It’s certainly plausible to have a multi-arch inspect/format/… feature, but that does not exist today. Returning to your original request to copy a multi-arch image, skopeo copy --all should copy all of the per-architecture instances together with the manifest list.
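A minimal sketch of that copy (the destination registry name is a placeholder):

```sh
# --all copies every instance referenced by the manifest list, plus the list itself.
skopeo copy --all \
    docker://docker.io/library/ubuntu:21.04 \
    docker://registry.example.com/mirror/ubuntu:21.04
```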
A friendly reminder that this issue had no activity for 30 days.
Is there any resolution for this?
@rverma-nsl The current recommendation is to use the manifest subcommands in Podman or Buildah (podman manifest / buildah manifest). I can certainly imagine exceptions (and it would be interesting to hear about them), so this issue is not closed yet, but it’s not something we are too likely to work on soon. (I’d certainly not want for Skopeo to have and maintain an independent reimplementation of the manifest building logic; the existing one should be shared, possibly after being made more general.)
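For reference, a minimal sketch of that recommended flow with podman manifest (image names are placeholders):

```sh
# Assemble a manifest list from per-arch images that are already in the registry.
podman manifest create registry.example.com/app:1.0
podman manifest add registry.example.com/app:1.0 registry.example.com/app:1.0-amd64
podman manifest add registry.example.com/app:1.0 registry.example.com/app:1.0-arm64

# Push the manifest list to the registry.
podman manifest push registry.example.com/app:1.0 docker://registry.example.com/app:1.0
```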
Yeah, we are facing the same issue right now. We likewise build in GitLab CI using Kaniko and afterwards want to assemble the results into a multi-arch image. And we do not want to / are not able to provide the extended permissions needed to use e.g. docker, buildah, or podman.
Same here. Our images are built on specifically protected infrastructure, and the resulting images end up in an intermediate registry. We are now looking for a viable solution to generate a multi-arch image from the separate per-arch images, which are even built on physically separated machines. The general CI/CD infrastructure uses containers, as they are perfectly suited for task isolation, and it does not allow privileged execution.
I've got a similar use case: two archives with per-architecture images, built separately.
As a final step, one has to create the manifest list and push it:
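Something along these lines works today (a sketch, not the original snippet; archive names, registry, and tags are placeholders, and oci-archive inputs are assumed):

```sh
# Push each per-arch archive to the registry under its own tag.
skopeo copy oci-archive:app-amd64.tar docker://registry.example.com/app:1.0-amd64
skopeo copy oci-archive:app-arm64.tar docker://registry.example.com/app:1.0-arm64

# Final step: create the manifest list and push it (still needs buildah or podman).
buildah manifest create app-list
buildah manifest add app-list docker://registry.example.com/app:1.0-amd64
buildah manifest add app-list docker://registry.example.com/app:1.0-arm64
buildah manifest push app-list docker://registry.example.com/app:1.0
```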
Would be nice if skopeo could do this directly.
Maybe https://github.com/estesp/manifest-tool#createpush is an option for this use case.
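For reference, a sketch of what that looks like with manifest-tool (image names are placeholders; see the tool's README for the exact options):

```sh
# Assemble and push a manifest list from per-arch tags that already exist in the registry.
manifest-tool push from-args \
    --platforms linux/amd64,linux/arm64 \
    --template registry.example.com/app:1.0-ARCH \
    --target registry.example.com/app:1.0
```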
You can also use buildah, as mentioned way up thread.
Thanks, I'm aware of those options.
A friendly reminder that this issue had no activity for 30 days.
A friendly reminder that ‘stale issues’ is a misnomer.
A friendly reminder that this issue had no activity for 30 days.
Because it’s still an issue. Can somebody please disable this annoying bot?
This annoying bot is the only thing that keeps the issue in the foreground; without it, I doubt anyone will pay attention to the issue and it will get lost in the forest. If you have an issue, it is always best to step forward and open a PR to solve it.
A friendly reminder that this issue had no activity for 30 days.
@mtrmac, any updates on this please?
A friendly reminder that this issue had no activity for 30 days.
Adding a vote for this feature. Our use case:
vote +1
vote +1
We could also make use of this. Currently we have a pipeline that builds the arch-specific images; the final job in the pipeline then gathers the per-arch containers and creates the multi-arch manifest.
I tried buildah again but it still requires syscalls commonly blocked by seccomp filters, for what should be an entirely non-privileged operation. See containers/buildah#1901 (comment)
vote +1
Fedora now has the same use case as @terinjokes, see containers/buildah#5750. We build container images as part of the Fedora compose process. We then want to create multi-arch manifests for those images, and push those manifests and images to various registries. Currently we use a forest of bash scripts to do this, which run on the compose hosts directly. We would like to replace them with a proper tool, but we want to deploy that tool in Fedora openshift, which uses a very restrictive seccomp profile. Both buildah and podman force all operations, even ones that don't involve any container building (like pushing images and manifests, and even authentication), through an unshare operation, which doesn't work due to the seccomp restrictions. Since buildah and podman are the only things we're aware of which our tool can use to do the manifest creation and pushing, we're a bit stuck. It would be very nice if we had a tool that could do the 'create a manifest, push it and the images' workflow without that unshare overhead.
The use case makes sense superficially, but I would say it is weird to build containers in a non-container-native system (koji) and only then publish them in a container-oriented system. Most other people are building containers and pushing them from the same infrastructure that runs as containers.
In general, I think treating disk images and container images symmetrically (as this tool is doing, as Koji is doing, and as other build tools in Fedora are doing) is generally wrong - the containers should come first and be more integrated/central. It's worth calling out that with e.g. Konflux we have opinionated, dedicated pipelines for containers that handle this stuff.

Anyway, indeed, buildah and podman today are pretty oriented around containers/storage, which holds unpacked images - but this use case is just "stitching together" metadata about extant images. Constructing a manifest list doesn't even need to fetch the other images. I don't think we ship a CLI tool for it, but it would be pretty trivial to stick somewhere - skopeo (being a tool that doesn't hard-require c/storage in general) could make sense. That said, this topic intersects with the whole "OCI artifacts" topic (see e.g. containers/buildah#5091) - a manifest list is like a special case of an artifact.
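To illustrate the "only metadata" point, here is a rough sketch (not an official workflow; the repository name is a placeholder and registry authentication is omitted) that stitches a Docker manifest list from two already-pushed per-arch tags and uploads it through the plain registry API:

```sh
REG=registry.example.com
REPO=myorg/app

# Only the per-arch manifest digests and sizes are needed; no layers are fetched.
digest() { skopeo inspect "docker://$REG/$REPO:$1" | jq -r .Digest; }
msize()  { skopeo inspect --raw "docker://$REG/$REPO:$1" | wc -c; }

cat > list.json <<EOF
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    { "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "$(digest 1.0-amd64)", "size": $(msize 1.0-amd64),
      "platform": { "architecture": "amd64", "os": "linux" } },
    { "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "$(digest 1.0-arm64)", "size": $(msize 1.0-arm64),
      "platform": { "architecture": "arm64", "os": "linux", "variant": "v8" } }
  ]
}
EOF

# Upload the list as the multi-arch tag (add auth headers as required by the registry).
curl -X PUT \
  -H "Content-Type: application/vnd.docker.distribution.manifest.list.v2+json" \
  --data-binary @list.json \
  "https://$REG/v2/$REPO/manifests/1.0"
```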
One major use case of managing manifests has since been handled by other tooling.

“Stitching together” manifests is not hard in principle, but I’d really want to avoid inventing that from scratch, to avoid the duplicated effort. And the current c/common implementation is somewhat tied to c/storage, so that would need to be untangled.
Well, look, if you want to build us a whole new container-oriented buildsystem by Tuesday week, that would be great. Meanwhile, we are trying to work with what we have. The ELN folks want their container images in the registries. I really do not want to add yet another to the pile of hard-to-maintain bash scripts (which can't even be a straight copy of any of the others, because ELN's requirements are slightly different). It's great to pontificate about what the ideal design is, but we also need to build and ship real products in the real world right now.

btw, the take is also a bit weird because the fact we want to deploy this tool in openshift really has nothing much to do with the compose process itself at all. The tool is intentionally written to be (in theory) deployable however you like, and it is also indifferent to how the compose happened - it's just a message consumer. All it needs to know is "a compose happened, and this is where it is". We just want to deploy it in openshift because, you know, that's the Way We Do Things these days, right? VMs are old and boring, everything should be in openshift now. It makes the maintenance easier - we had a whole debate about the best way to deploy the tool on a VM, if we wound up having to do it that way - and it means Jeremy can get updates to the upstream project deployed without needing infra ansible permissions.
I'd like to share some more context from my own work on why this is a useful feature. Hopefully this adds some additional context and perspective to the ask.

When our products build, there are many different build artifacts that are part of a release: shasums, release tarballs, zip archives, rpms, debs, msis, and multi-os/arch containers (as N images). We treat them as one large set of outputs throughout our validation process. We do load the rpms, debs, or images into their respective repositories for staging so that we can test them just the way users might, but the artifacts themselves are what we are promoting between stages of our release process. They pass or fail together throughout the promotion process. Treating the container image (manifest) as just another file type is integral to our process.

Where we have a bit of trouble is the handling of multi-arch/os images, because the current tooling doesn't really support images as a singular manifest file. Since our build process is a large distributed matrix of independent builds and we don't have a good way to support a multi-arch image, we end up needing tooling to stitch all the images back together manually to publish during promotion. We ended up having to build tooling using the containers libraries directly to handle the gap. But it just seems like a behavior skopeo could support. Thank you!
Thanks, @dekimsey. That sounds a lot like the Fedora process. I was kinda informal in my message because I know Colin and Miloslav are already pretty familiar with that process. I think Colin is saying that in his ideal world, we would not build things the way we do, we would build containers "first" (presumably after code/packages, so we have something to put in the containers), using some kinda completely container-native...thing...like podman farm, which would also take care of pushing them to registries and stuff, and then we would do all the other boring things like cloud images and ISOs (us) or MSIs and packages (you) that are way less cool or whatever. Which...we could do! At least Fedora could, I don't know if you could.

But then what if we push out the updated container images, but it turns out something is broken badly enough in all the boring things we do after that that we don't want to push those out? Now we have one thing that is "the current Fedora X" (or whatever thing it is @dekimsey is building) with one set of contents - the updated container images - and a bunch of other things that are "the current Fedora X" but with a different set of contents - everything else. Which is a situation we usually want to avoid. For you it sounds like it'd be even worse, as this sounds like it's part of your release process, and you clearly don't want to have '5.0' containers pushed to registries but the 5.0 tarballs and RPMs and debs and MSIs blocked by CI that happened after the container build phase.

We could, I guess, take the container image builds out of our current process (again, don't know if you could) and do them at the end, after everything else is done - if it succeeded - in a whizzy container-native pipeline that does the builds and the manifests and the pushes. That's a thing we could do. But again, it's not done yet, and unless anyone wants to contribute it quickly...we have to use the thing we have.
That seems mostly unrelated to me. Sure, there should be a staging mechanism where all things get built and only finally tagged/published at the end. That’s always going to be the case regardless of whether containers, disk images, RPMs, or anything else come first. It’s also something Skopeo/Podman is probably not going to need to specifically support — just tell it to write to a staging environment. I primarily wanted to have a note of it here, for the record, alongside the other use cases in this thread.
vote +1
I created a multi-image OCI path, and can see the images have been properly merged in the index.
I am, however, unable to push this to Docker Hub to use as a multi-arch image.