Refactor developer documentation
This commit splits developer documentation into separate files,
with a table of contents to help developers find docs more easily.
It also reorganizes the results documentation to more clearly separate
results documentation from results implementation (except in the case of
a known issue with our implementation).
lbernick authored and tekton-robot committed Oct 19, 2022
1 parent 2ae8b6a commit b63c1e2
Showing 8 changed files with 678 additions and 784 deletions.
700 changes: 13 additions & 687 deletions docs/developers/README.md


40 changes: 0 additions & 40 deletions docs/developers/adding-a-new-apiversion.md

This file was deleted.

166 changes: 166 additions & 0 deletions docs/developers/api-versioning.md
# API Versioning

## Adding feature gated API fields

We've introduced a feature flag called `enable-api-fields` to the
[config-feature-flags.yaml file](../../config/config-feature-flags.yaml)
deployed as part of our releases.

This field can be set to `alpha`, `beta`, or `stable`, and is documented as part
of our [install docs](../install.md#customizing-the-pipelines-controller-behavior).
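
For reference, here is a minimal sketch of the deployed ConfigMap (the name and
namespace below match our release manifests; treat the exact values as
illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  enable-api-fields: "stable" # or "alpha" / "beta"
```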

For developers adding new features to Pipelines' CRDs we've got a couple of
helpful tools to make gating those features simpler and to provide a consistent
testing experience.

### Guarding Features with Feature Gates

Writing new features is trickier when you need to support both the existing
stable behaviour and your new alpha behaviour.

In reconciler code you can guard your new features with an `if` statement such
as the following:

```go
alphaAPIEnabled := config.FromContextOrDefaults(ctx).FeatureFlags.EnableAPIFields == "alpha"
if alphaAPIEnabled {
    // new feature code goes here
} else {
    // existing stable code goes here
}
```

Notice that you'll need a context object to be passed into your function for
this to work. When writing new features keep in mind that you might need to
include this in your new function signatures.
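
For example, a hypothetical reconciler helper might thread `ctx` through like
this (the function name and parameters are illustrative, not part of our
codebase):

```go
// applyNewBehavior is a hypothetical helper: it takes ctx so that the
// feature-gate check can read the cluster's feature-flag configuration.
func applyNewBehavior(ctx context.Context, tr *v1beta1.TaskRun) error {
    if config.FromContextOrDefaults(ctx).FeatureFlags.EnableAPIFields == "alpha" {
        // new alpha behavior goes here
    }
    return nil
}
```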

### Guarding Validations with Feature Gates

Just because your application code might be correctly observing the feature gate
flag doesn't mean you're done yet! When a user submits a Tekton resource it's
validated by Pipelines' webhook. Here too you'll need to ensure your new
features aren't accidentally accepted when the feature gate suggests they
shouldn't be. We've got a helper function,
[`ValidateEnabledAPIFields`](../../pkg/apis/version/version_validation.go),
to make validating the current feature gate easier. Use it like this:

```go
requiredVersion := config.AlphaAPIFields
// errs is an instance of *apis.FieldError, a common type in our validation code
errs = errs.Also(ValidateEnabledAPIFields(ctx, "your feature name", requiredVersion))
```

If the user's cluster isn't configured with the required feature gate it'll
return an error like this:

```
<your feature> requires "enable-api-fields" feature gate to be "alpha" but it is "stable"
```

### Unit Testing with Feature Gates

Any new code you write that uses the `ctx` context variable is trivially unit
tested with different feature gate settings. You should make sure to unit test
your code both with and without a feature gate enabled to make sure it's
properly guarded. See the following for an example of a unit test that sets the
feature gate to test behaviour:

```go
featureFlags, err := config.NewFeatureFlagsFromMap(map[string]string{
    "enable-api-fields": "alpha",
})
if err != nil {
    t.Fatalf("unexpected error initializing feature flags: %v", err)
}
cfg := &config.Config{
    FeatureFlags: featureFlags,
}
ctx := config.ToContext(context.Background(), cfg)
if err := ts.TestThing(ctx); err != nil {
    t.Errorf("unexpected error with alpha feature gate enabled: %v", err)
}
```

### Example YAMLs

Writing new YAML examples that require a feature gate to be set is easy. New
YAML example files typically go in a directory like `examples/v1beta1/taskruns`
in the root of the repo. To create a YAML that should only be exercised when the
`enable-api-fields` flag is `alpha`, just put it in an `alpha` subdirectory so
the structure looks like:

```
examples/v1beta1/taskruns/alpha/your-example.yaml
```

This should work for both taskruns and pipelineruns.
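
For instance, a minimal alpha example might look like the following sketch (the
step is just a placeholder; the commented-out field stands in for whichever
alpha-gated field you are actually exercising):

```yaml
# examples/v1beta1/taskruns/alpha/your-example.yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  generateName: your-alpha-example-
spec:
  taskSpec:
    steps:
      - name: echo
        image: alpine
        script: |
          echo "exercising an alpha-gated field"
    # yourAlphaField: ... # hypothetical alpha-gated field under test
```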

**Note**: To execute alpha examples with the integration test runner you must
manually set the `enable-api-fields` feature flag to `alpha` in your testing
cluster before kicking off the tests.

When you set this flag to `stable` in your cluster, it will prevent `alpha`
examples from being created by the test runner. When you set the flag to
`alpha`, all examples are run, since we want to exercise backwards compatibility
of the examples under alpha conditions.

### Integration Tests

For integration tests we provide the
[`requireAnyGate` function](../../test/gate.go) which should be passed to the
`setup` function used by tests:

```go
c, namespace := setup(ctx, t, requireAnyGate(map[string]string{"enable-api-fields": "alpha"}))
```

This will skip your integration test if the feature gate is not set to `alpha`,
with a clear message explaining why it was skipped.

**Note**: As with the example YAMLs, you have to manually set the
`enable-api-fields` flag to `alpha` in your test cluster to see your alpha
integration tests run. When the flag in your cluster is `alpha`, _all_
integration tests are executed, both `stable` and `alpha`. Setting the feature
flag to `stable` will exclude `alpha` tests.

## Adding a new API version to a Pipelines CRD

1. Read the [Kubernetes documentation](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/)
on versioning CRDs, especially the section on
[specifying multiple versions](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#specify-multiple-versions).

1. If needed, create a new folder for the new API version under pkg/apis/pipeline.
Update the codegen scripts in the `hack` folder to generate client code for the Go structs in the new folder.
Example: [#5055](https://github.com/tektoncd/pipeline/pull/5055)
- Codegen scripts will not work correctly if there are no CRDs in the new folder, but you do not need to add the
full Go definitions of the new CRDs.
- Knative uses annotations on the Go structs to determine what code to generate. For example, you must annotate a
struct with `// +k8s:openapi-gen=true` for OpenAPI schema generation.

1. Add Go struct types for the new API version. Example: [#5125](https://github.com/tektoncd/pipeline/pull/5125)
- Consider moving any logic unrelated to the API out of pkg/apis/pipeline so it's not duplicated in
the new folder.
- Once this code is merged, the code in pkg/apis/pipeline will need to be kept in sync between
the two API versions until we are ready to serve the new API version to users.

1. Implement [apis.Convertible](https://github.com/tektoncd/pipeline/blob/2f93ab2fcabcf6dcc61fe16d6ef54fcdf3424a0e/vendor/knative.dev/pkg/apis/interfaces.go#L37-L45)
for the old API version; see the sketch after this list. Example: [#5202](https://github.com/tektoncd/pipeline/pull/5202)
- Knative uses this interface to generate conversion code between API versions.
- Prefer duplicating Go structs in the new type over using type aliases. Once we move to supporting
a new API version, we don't want to make changes to the old one.
- Before changing the stored version of the CRD to the newer version, you must implement conversion for deprecated fields.
This is because resources that were created with earlier stored versions will use the current stored version when they're updated.
Deprecated fields can be serialized to a CRD's annotations. Example: [#5253](https://github.com/tektoncd/pipeline/pull/5253)

1. Add the new versions to the webhook and the CRD. Example: [#5234](https://github.com/tektoncd/pipeline/pull/5234)

1. Switch the "storage" version of the CRD to the new API version, and update the reconciler code
to use this API version. Example: [#2577](https://github.com/tektoncd/pipeline/pull/2577)

1. Update examples and documentation to use the new API version.

1. Existing objects are persisted using the storage version at the time they were created.
One way to upgrade them to the new stored version is to write a
[StorageVersionMigrator](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#upgrade-existing-objects-to-a-new-stored-version),
although we have not previously done this.
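
For orientation, here is a hedged sketch of the conversion step above,
implementing `apis.Convertible` on a hypothetical `Widget` CRD. The `Widget`
type and its fields are illustrative only (the real conversions live under
pkg/apis/pipeline), but the method signatures match the Knative interface:

```go
package v1beta1

import (
    "context"
    "fmt"

    "knative.dev/pkg/apis"

    // Hypothetical newer API version of the same CRD.
    v1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
)

// Widget is the (hypothetical) old-version type, defined elsewhere in this package.

// ConvertTo converts the receiver (the older version) into the newer version.
func (w *Widget) ConvertTo(ctx context.Context, to apis.Convertible) error {
    switch sink := to.(type) {
    case *v1.Widget:
        sink.ObjectMeta = w.ObjectMeta
        // Copy fields individually rather than aliasing types, so the old
        // version can evolve independently of the new one.
        sink.Spec.Description = w.Spec.Description
        return nil
    default:
        return fmt.Errorf("unsupported conversion to type %T", sink)
    }
}

// ConvertFrom converts the newer version into the receiver.
func (w *Widget) ConvertFrom(ctx context.Context, from apis.Convertible) error {
    switch source := from.(type) {
    case *v1.Widget:
        w.ObjectMeta = source.ObjectMeta
        w.Spec.Description = source.Spec.Description
        return nil
    default:
        return fmt.Errorf("unsupported conversion from type %T", source)
    }
}
```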
13 changes: 13 additions & 0 deletions docs/developers/multi-tenant-support.md
# Support for running in multi-tenant configuration

In order to support potential multi-tenant configurations, the roles of the
controller are split into two:

- `tekton-pipelines-controller-cluster-access`: the permissions needed cluster-wide by the controller.
- `tekton-pipelines-controller-tenant-access`: the permissions needed on a namespace-by-namespace basis.

By default the roles are cluster-scoped, for backwards compatibility and ease of
use. If you want to run a multi-tenant service, you can bind
`tekton-pipelines-controller-tenant-access` using a `RoleBinding` instead of a
`ClusterRoleBinding`, thereby limiting the controller's access to specific
tenant namespaces.
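
For example, a hedged sketch of granting tenant access in a single namespace
(the tenant namespace name is a placeholder):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-pipelines-controller-tenant-access
  namespace: tenant-a # placeholder tenant namespace
subjects:
  - kind: ServiceAccount
    name: tekton-pipelines-controller
    namespace: tekton-pipelines
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-pipelines-controller-tenant-access
```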
118 changes: 118 additions & 0 deletions docs/developers/pipelineresources.md
# PipelineResources Logic

## How are resources shared between tasks

> :warning: **`PipelineResources` are [deprecated](deprecations.md#deprecation-table).**
>
> Consider using replacement features instead. Read more in the [documentation](migrating-v1alpha1-to-v1beta1.md#replacing-pipelineresources-with-tasks)
> and [TEP-0074](https://github.com/tektoncd/community/blob/main/teps/0074-deprecate-pipelineresources.md).

A `PipelineRun` uses a PVC to share `PipelineResources` between tasks. The PVC
is mounted at `/pvc` by the `PipelineRun`.

- If a resource in a task is declared as an output, the `TaskRun` controller
  adds a step to copy each output resource to the directory path
  `/pvc/task_name/resource_name`.

- If an input resource includes a `from` clause, the `TaskRun` controller adds
  a step to copy it from the PVC directory path
  `/pvc/previous_task/resource_name`.

If neither of these conditions is met, the PVC will not be created, nor will
GCS storage / S3 buckets be used.
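
For illustration, here is a hedged sketch of a `Pipeline` that triggers both
copy steps described above (all task and resource names are hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: demo-pipeline # hypothetical
spec:
  resources:
    - name: source-repo
      type: git
  tasks:
    - name: build
      taskRef:
        name: build-task # hypothetical Task declaring an output resource
      resources:
        outputs:
          - name: workspace
            resource: source-repo
    - name: test
      taskRef:
        name: test-task # hypothetical
      resources:
        inputs:
          - name: workspace
            resource: source-repo
            from: # triggers the copy step from /pvc/build/workspace
              - build
```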

Another alternative is to use a GCS or S3 bucket to share the artifacts. This
can be configured using a ConfigMap named `config-artifact-bucket`.

See the
[installation docs](../install.md#how-are-resources-shared-between-tasks) for
configuration details.
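
A hedged sketch of the bucket configuration (the bucket and Secret names are
placeholders; the install docs are authoritative for the exact keys):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-artifact-bucket
  namespace: tekton-pipelines
data:
  location: gs://my-artifact-bucket # placeholder bucket
  bucket.service.account.secret.name: gcs-creds # placeholder Secret with credentials
  bucket.service.account.secret.key: service_account.json
```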

Both options provide the same functionality to the pipeline. The choice depends
on your infrastructure: on some Kubernetes platforms, creating a persistent
volume can be slower than uploading or downloading files to a bucket, and if
the cluster runs in multiple zones, access to the persistent volume can fail.

## How inputs are handled

Input resources, like source code (git) or artifacts, are placed at the path
`/workspace/task_resource_name`.

- If an input resource is declared as below, the resource is copied to the
  `/workspace/task_resource_name` directory from the depended-on task's PVC
  directory `/pvc/previous_task/resource_name`.

```yaml
kind: Task
metadata:
  name: get-gcs-task
  namespace: default
spec:
  resources:
    inputs:
      - name: gcs-workspace
        type: storage
```

- A resource definition in a task can have a custom target directory. If
  `targetPath` is specified in the task's input resource, as below, the
  resource is copied to the `/workspace/outputstuff` directory from the
  depended-on task's PVC directory `/pvc/previous_task/resource_name`.

```yaml
kind: Task
metadata:
  name: get-gcs-task
  namespace: default
spec:
  resources:
    inputs:
      - name: gcs-workspace
        type: storage
        targetPath: /workspace/outputstuff
```

## How outputs are handled

Output resources, like source code (git) or artifacts (storage resource), are
expected in the directory `/workspace/output/resource_name`.

- If a resource has an output "action", such as uploading to blob storage, a
  container step is added for this action.
- If a PVC volume is present (i.e., the TaskRun holds an owner reference to a
  PipelineRun), a copy step is added as well.

- If the output resource is declared, the copy step copies the resource to the
  PVC path `/pvc/task_name/resource_name` from `/workspace/output/resource_name`,
  as in the following example.

```yaml
kind: Task
metadata:
  name: get-gcs-task
  namespace: default
spec:
  resources:
    outputs:
      - name: gcs-workspace
        type: storage
```

- As with inputs, if the output resource is declared with a `targetPath`, the
  copy step copies the resource to the PVC path `/pvc/task_name/resource_name`
  from `/workspace/outputstuff`, as in the following example.

```yaml
kind: Task
metadata:
  name: get-gcs-task
  namespace: default
spec:
  resources:
    outputs:
      - name: gcs-workspace
        type: storage
        targetPath: /workspace/outputstuff
```
