
docs: editorial improvements, typo fixes #4712

Merged
merged 1 commit into from
Dec 19, 2023
79 changes: 38 additions & 41 deletions docs/reference/commandline/attach.md
@@ -22,13 +22,14 @@ Attach local standard input, output, and error streams to a running container

Use `docker attach` to attach your terminal's standard input, output, and error
(or any combination of the three) to a running container using the container's
ID or name. This allows you to view its ongoing output or to control it
interactively, as though the commands were running directly in your terminal.
ID or name. This lets you view its output or control it interactively, as
though the commands were running directly in your terminal.

> **Note:**
> The `attach` command will display the output of the `ENTRYPOINT/CMD` process. This
> can appear as if the attach command is hung when in fact the process may simply
> not be interacting with the terminal at that time.
> **Note**
>
> The `attach` command displays the output of the container's `ENTRYPOINT` and
> `CMD` process. This can appear as if the attach command is hung when in fact
> the process may simply not be writing any output at that time.

You can attach to the same contained process multiple times simultaneously,
from different sessions on the Docker host.
@@ -38,44 +39,41 @@ container. If `--sig-proxy` is true (the default), `CTRL-c` sends a `SIGINT` to
the container. If the container was run with `-i` and `-t`, you can detach from
a container and leave it running using the `CTRL-p CTRL-q` key sequence.

> **Note:**
> **Note**
>
> A process running as PID 1 inside a container is treated specially by
> Linux: it ignores any signal with the default action. So, the process
> will not terminate on `SIGINT` or `SIGTERM` unless it is coded to do
> so.
> doesn't terminate on `SIGINT` or `SIGTERM` unless it's coded to do so.

It is forbidden to redirect the standard input of a `docker attach` command
while attaching to a TTY-enabled container (using the `-i` and `-t` options).
You can't redirect the standard input of a `docker attach` command while
attaching to a TTY-enabled container (using the `-i` and `-t` options).

While a client is connected to container's `stdio` using `docker attach`, Docker
uses a ~1MB memory buffer to maximize the throughput of the application.
While a client is connected to the container's `stdio` using `docker attach`,
Docker uses a ~1MB memory buffer to maximize the throughput of the application.
Once this buffer is full, the speed of the API connection is affected, and so
this impacts the output process' writing speed. This is similar to other
applications like SSH. Because of this, it is not recommended to run
performance critical applications that generate a lot of output in the
foreground over a slow client connection. Instead, users should use the
`docker logs` command to get access to the logs.
applications like SSH. Because of this, it isn't recommended to run
performance-critical applications that generate a lot of output in the
foreground over a slow client connection. Instead, use the `docker logs`
command to get access to the logs.

## Examples

### Attach to and detach from a running container

The following example starts an ubuntu container running `top` in detached mode,
The following example starts an Alpine container running `top` in detached mode,
then attaches to the container:

```console
$ docker run -d --name topdemo ubuntu:22.04 /usr/bin/top -b
$ docker run -d --name topdemo alpine top -b

$ docker attach topdemo

top - 12:27:44 up 3 days, 21:54, 0 users, load average: 0.00, 0.00, 0.00
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3934.3 total, 770.1 free, 674.2 used, 2490.1 buff/cache
MiB Swap: 1024.0 total, 839.3 free, 184.7 used. 2814.0 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 7180 2896 2568 R 0.0 0.1 0:00.02 top
Mem: 2395856K used, 5638884K free, 2328K shrd, 61904K buff, 1524264K cached
CPU: 0% usr 0% sys 0% nic 99% idle 0% io 0% irq 0% sirq
Load average: 0.15 0.06 0.01 1/567 6
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
1 0 root R 1700 0% 3 0% top -b
```

As the container was started without the `-i` and `-t` options, signals are
@@ -85,14 +83,15 @@ container:

```console
<...>
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 7180 2896 2568 R 0.0 0.1 0:00.02 top^P^Q
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
1 0 root R 1700 0% 7 0% top -b
^P^Q
^C

$ docker ps -a --filter name=topdemo

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4cf0d0ebb079 ubuntu:22.04 "/usr/bin/top -b" About a minute ago Exited (0) About a minute ago topdemo
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
96254a235bd6 alpine "top -b" 44 seconds ago Exited (130) 8 seconds ago topdemo
```

Repeating the example above, but this time with the `-i` and `-t` options set:
@@ -109,19 +108,17 @@ with `docker ps` shows that the container is still running in the background:
```console
$ docker attach topdemo2

top - 12:44:32 up 3 days, 22:11, 0 users, load average: 0.00, 0.00, 0.00
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s): 50.0 us, 0.0 sy, 0.0 ni, 50.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 3934.3 total, 770.6 free, 672.4 used, 2491.4 buff/cache
MiB Swap: 1024.0 total, 839.3 free, 184.7 used. 2815.8 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 7180 2776 2452 R 0.0 0.1 0:00.02 topread escape sequence
Mem: 2405344K used, 5629396K free, 2512K shrd, 65100K buff, 1524952K cached
CPU: 0% usr 0% sys 0% nic 99% idle 0% io 0% irq 0% sirq
Load average: 0.12 0.12 0.05 1/594 6
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
1 0 root R 1700 0% 3 0% top -b
read escape sequence

$ docker ps -a --filter name=topdemo2

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b1661dce0fc2 ubuntu:22.04 "/usr/bin/top -b" 2 minutes ago Up 2 minutes topdemo2
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fde88b83c2c2 alpine "top -b" 22 seconds ago Up 21 seconds topdemo2
```

### Get the exit code of the container's command
65 changes: 30 additions & 35 deletions docs/reference/commandline/build.md
@@ -60,11 +60,11 @@ pre-packaged tarball contexts and plain text files.

When the `URL` parameter points to the location of a Git repository, the
repository acts as the build context. The system recursively fetches the
repository and its submodules. The commit history is not preserved. A
repository and its submodules. The commit history isn't preserved. A
repository is first pulled into a temporary directory on your local host. After
that succeeds, the directory is sent to the Docker daemon as the context.
The local copy gives you the ability to access private repositories using local
user credentials, VPN's, and so forth.
user credentials, VPNs, and so forth.

> **Note**
>
@@ -100,18 +100,18 @@ contexts:

### Tarball contexts

If you pass an URL to a remote tarball, the URL itself is sent to the daemon:
If you pass a URL to a remote tarball, the URL itself is sent to the daemon:

```console
$ docker build http://server/context.tar.gz
```

The download operation will be performed on the host the Docker daemon is
running on, which is not necessarily the same host from which the build command
running on, which isn't necessarily the same host from which the build command
is being issued. The Docker daemon will fetch `context.tar.gz` and use it as the
build context. Tarball contexts must be tar archives conforming to the standard
`tar` UNIX format and can be compressed with any one of the 'xz', 'bzip2',
'gzip' or 'identity' (no compression) formats.
`tar` Unix format and can be compressed with any one of the `xz`, `bzip2`,
`gzip` or `identity` (no compression) formats.
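
As a sketch (the directory and file names here are illustrative, not part of the docs), such a context tarball can be produced with standard `tar`:

```shell
# Create a minimal build context and package it as a gzip-compressed tarball.
# The daemon accepts tarballs in standard tar format, optionally compressed.
mkdir -p ctx
printf 'FROM busybox\n' > ctx/Dockerfile

# -C ctx: archive the contents of ctx/ so Dockerfile sits at the tarball root
tar -czf context.tar.gz -C ctx .

# Verify the archive contents; the Dockerfile must be at the root
tar -tzf context.tar.gz
```

Hosting the resulting `context.tar.gz` on a web server would then let the daemon fetch it with `docker build http://server/context.tar.gz`.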
Member review comment: I think zstd may now also work, but we'd have to check if it does (so no need to change for this PR)

### Text files

@@ -122,7 +122,7 @@ Instead of specifying a context, you can pass a single `Dockerfile` in the
$ docker build - < Dockerfile
```

With Powershell on Windows, you can run:
With PowerShell on Windows, you can run:

```powershell
Get-Content Dockerfile | docker build -
@@ -136,8 +136,7 @@ By default the `docker build` command will look for a `Dockerfile` at the root
of the build context. The `-f`, `--file` option lets you specify the path to
an alternative file to use instead. This is useful in cases where the same set
of files is used for multiple builds. The path must be to a file within the
build context. If a relative path is specified then it is interpreted as
relative to the root of the context.
build context. Relative paths are interpreted as relative to the root of the context.

In most cases, it's best to put each Dockerfile in an empty directory. Then,
add to that directory only the files needed for building the Dockerfile. To
@@ -152,8 +151,7 @@ running at the time the build is cancelled, the pull is cancelled as well.

## Return code

On a successful build, a return code of success `0` will be returned. When the
build fails, a non-zero failure code will be returned.
Successful builds return exit code `0`. Failed builds return a non-zero exit code.

Informational output describing the reason for the failure is printed to
`STDERR`:
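
As an illustrative sketch of acting on that exit code in a script, using `false` as a stand-in for a failing `docker build` (the function name and image name are hypothetical):

```shell
# Stand-in for a failing build; replace with: docker build -t myimage .
run_build() { false; }

# $? immediately after the failed condition holds the build's exit code
if run_build; then
  echo "build succeeded"
else
  echo "build failed with exit code $?"
fi
```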
@@ -214,15 +212,15 @@ local directory get `tar`d and sent to the Docker daemon. The `PATH` specifies
where to find the files for the "context" of the build on the Docker daemon.
Remember that the daemon could be running on a remote machine and that no
parsing of the Dockerfile happens at the client side (where you're running
`docker build`). That means that *all* the files at `PATH` get sent, not just
the ones listed to [*ADD*](https://docs.docker.com/engine/reference/builder/#add)
`docker build`). That means that all the files at `PATH` get sent, not just
the ones listed to [`ADD`](https://docs.docker.com/engine/reference/builder/#add)
in the Dockerfile.

The transfer of context from the local machine to the Docker daemon is what the
`docker` client means when you see the "Sending build context" message.

If you wish to keep the intermediate containers after the build is complete,
you must use `--rm=false`. This does not affect the build cache.
you must use `--rm=false`. This doesn't affect the build cache.

### Build with URL

@@ -252,7 +250,7 @@ Successfully built 377c409b35e4

This sends the URL `http://server/ctx.tar.gz` to the Docker daemon, which
downloads and extracts the referenced tarball. The `-f ctx/Dockerfile`
parameter specifies a path inside `ctx.tar.gz` to the `Dockerfile` that is used
parameter specifies a path inside `ctx.tar.gz` to the `Dockerfile` used
to build the image. Any `ADD` commands in that `Dockerfile` that refer to local
paths must be relative to the root of the contents inside `ctx.tar.gz`. In the
example above, the tarball contains a directory `ctx/`, so the `ADD
@@ -274,7 +272,7 @@ $ docker build - < context.tar.gz
```

This will build an image for a compressed context read from `STDIN`. Supported
formats are: bzip2, gzip and xz.
formats are: `bzip2`, `gzip` and `xz`.
Member review comment: Same here (also for follow-ups)

### Use a .dockerignore file

@@ -314,7 +312,6 @@ found, the `.dockerignore` file is used if present. Using a Dockerfile based
`.dockerignore` is useful if a project contains multiple Dockerfiles that expect
to ignore different sets of files.


### <a name="tag"></a> Tag an image (-t, --tag)

```console
@@ -375,12 +372,12 @@ the command line.
> **Note**
>
> `docker build` returns a `no such file or directory` error if the
> file or directory does not exist in the uploaded context. This may
> happen if there is no context, or if you specify a file that is
> file or directory doesn't exist in the uploaded context. This may
> happen if there is no context, or if you specify a file that's
> elsewhere on the host system. The context is limited to the current
> directory (and its children) for security reasons, and to ensure
> repeatable builds on remote Docker hosts. This is also the reason why
> `ADD ../file` does not work.
> `ADD ../file` doesn't work.
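
A quick shell sketch of why that is (paths are illustrative): the client archives only the context directory itself, so a parent-level file never reaches the daemon.

```shell
# A file outside the context directory, and a context next to it
mkdir -p project/ctx
echo 'outside the context' > project/outside.txt
printf 'FROM busybox\n' > project/ctx/Dockerfile

# The client effectively ships an archive of the context directory only
tar -cf context.tar -C project/ctx .

# Listing the archive shows the Dockerfile, but outside.txt is absent,
# so ADD ../outside.txt would have nothing to copy on the daemon side
tar -tf context.tar
```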

### <a name="cgroup-parent"></a> Use a custom parent cgroup (--cgroup-parent)

@@ -396,7 +393,7 @@ container to be started using those [`--ulimit` flag values](run.md#ulimit).

You can use `ENV` instructions in a Dockerfile to define variable
values. These values persist in the built image. However, often
persistence is not what you want. Users want to specify variables differently
persistence isn't what you want. Users want to specify variables differently
depending on which host they build an image on.

A good example is `http_proxy` or source versions for pulling intermediate
@@ -410,7 +407,7 @@ $ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 --build-arg FTP_PRO
This flag allows you to pass the build-time variables that are
accessed like regular environment variables in the `RUN` instruction of the
Dockerfile. Also, these values don't persist in the intermediate or final images
like `ENV` values do. You must add `--build-arg` for each build argument.
like `ENV` values do. You must add `--build-arg` for each build argument.

Using this flag will not alter the output you see when the `ARG` lines from the
Dockerfile are echoed during the build process.
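
A minimal sketch of a Dockerfile consuming such a build-time variable (the file name is hypothetical):

```shell
# A Dockerfile that declares a build argument; the value is supplied at
# build time with: docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
cat > Dockerfile.example <<'EOF'
FROM busybox
ARG HTTP_PROXY
RUN echo "building with proxy: $HTTP_PROXY"
EOF
```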
@@ -638,15 +635,14 @@ $ docker build --cache-from myname/myapp .
#### Overview

Once the image is built, squash the new layers into a new image with a single
new layer. Squashing does not destroy any existing image, rather it creates a new
new layer. Squashing doesn't destroy any existing image, rather it creates a new
image with the content of the squashed layers. This effectively makes it look
like all `Dockerfile` commands were created with a single layer. The build
cache is preserved with this method.

The `--squash` option is an experimental feature, and should not be considered
The `--squash` option is an experimental feature, and shouldn't be considered
stable.


Squashing layers can be beneficial if your Dockerfile produces multiple layers
modifying the same files, for example, files that are created in one step, and
removed in another step. For other use-cases, squashing images may actually have
@@ -656,24 +652,23 @@ images (saving space).

For most use cases, multi-stage builds are a better alternative, as they give more
fine-grained control over your build, and can take advantage of future
optimizations in the builder. Refer to the [use multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/)
section in the userguide for more information.

optimizations in the builder. Refer to the [Multi-stage builds](https://docs.docker.com/build/building/multi-stage/)
section for more information.

#### Known limitations

The `--squash` option has a number of known limitations:

- When squashing layers, the resulting image cannot take advantage of layer
- When squashing layers, the resulting image can't take advantage of layer
sharing with other images, and may use significantly more space. Sharing the
base image is still supported.
- When using this option you may see significantly more space used due to
storing two copies of the image, one for the build cache with all the cache
layers intact, and one for the squashed version.
- While squashing layers may produce smaller images, it may have a negative
impact on performance, as a single layer takes longer to extract, and
downloading a single layer cannot be parallelized.
- When attempting to squash an image that does not make changes to the
downloading a single layer can't be parallelized.
- When attempting to squash an image that doesn't make changes to the
filesystem (for example, the Dockerfile only contains `ENV` instructions),
the squash step will fail (see [issue #33823](https://github.com/moby/moby/issues/33823)).

@@ -686,7 +681,7 @@ the Docker daemon or setting `experimental: true` in the `daemon.json` configura
file.

By default, experimental mode is disabled. To see the current configuration of
the docker daemon, use the `docker version` command and check the `Experimental`
the Docker daemon, use the `docker version` command and check the `Experimental`
line in the `Engine` section:

```console
@@ -711,10 +706,10 @@ Server: Docker Engine - Community
[...]
```

To enable experimental mode, users need to restart the docker daemon with the
To enable experimental mode, users need to restart the Docker daemon with the
experimental flag enabled.

#### Enable Docker experimental
#### Enable experimental features

To enable experimental features, you need to start the Docker daemon with
the `--experimental` flag. You can also enable the daemon flag via
@@ -735,7 +730,7 @@ true

#### Build an image with `--squash` argument

The following is an example of docker build with `--squash` argument
The following is an example of a build with the `--squash` argument:

```dockerfile
FROM busybox
14 changes: 7 additions & 7 deletions docs/reference/commandline/checkpoint.md
@@ -18,7 +18,7 @@ Manage checkpoints
## Description

Checkpoint and Restore is an experimental feature that allows you to freeze a running
container by checkpointing it, which turns its state into a collection of files
container by specifying a checkpoint, which turns the container state into a collection of files
on disk. Later, the container can be restored from the point it was frozen.

This is accomplished using a tool called [CRIU](https://criu.org), which is an
Expand All @@ -29,7 +29,7 @@ checkpoint and restore in Docker is available in this
### Installing CRIU

If you use a Debian system, you can add the CRIU PPA and install with `apt-get`
[from the criu launchpad](https://launchpad.net/~criu/+archive/ubuntu/ppa).
[from the CRIU launchpad](https://launchpad.net/~criu/+archive/ubuntu/ppa).

Alternatively, you can [build CRIU from source](https://criu.org/Installation).

@@ -91,17 +91,17 @@ abc0123
```

This process just logs an incrementing counter to stdout. If you run `docker logs`
in between running/checkpoint/restoring you should see that the counter
increases while the process is running, stops while it's checkpointed, and
in-between running/checkpoint/restoring, you should see that the counter
increases while the process is running, stops while it's frozen, and
resumes from the point it left off once you restore.
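
As a sketch, the counter process is just a loop like the following (the real example would run it inside the container, sleeping between iterations):

```shell
# An incrementing counter written to stdout; 3 iterations for illustration
i=0
while [ "$i" -lt 3 ]; do
  echo "$i"
  i=$((i + 1))
  # sleep 1   # the containerized version would pause between counts
done
```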

### Known limitations

seccomp is only supported by CRIU in very up to date kernels.
`seccomp` is only supported by CRIU in very up-to-date kernels.

External terminal (i.e. `docker run -t ..`) is not supported at the moment.
External terminals (i.e. `docker run -t ..`) aren't supported.
If you try to create a checkpoint for a container with an external terminal,
it would fail:
it fails:

```console
$ docker checkpoint create cr checkpoint1