
Commit

Merge branch 'main' into HZX-230-performance
pollett authored Oct 9, 2024
2 parents 5213a6f + 278c87d commit 3a17ad3
Showing 30 changed files with 379 additions and 151 deletions.
4 changes: 2 additions & 2 deletions .github/CONTRIBUTING.adoc
@@ -27,7 +27,7 @@ If no version number is displayed in the output, you need to {url-node}[install

=== Step 2. Clone a documentation project

-Documentation is hosted in xref:{url-readme}#documentation-content[separate GitHub repositorites]. To work on a particular documentation project, you must fork it, clone it, and configure the `antora-playbook-local.yml` file to process your local version.
+Documentation is hosted in xref:{url-readme}#documentation-content[separate GitHub repositories]. To work on a particular documentation project, you must fork it, clone it, and configure the `antora-playbook-local.yml` file to process your local version.

NOTE: You can find all content repositories in the `antora-playbook.yml` file under `content.sources`.
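
As a rough sketch, entries under `content.sources` typically take this shape (the repository URL and branch below are illustrative placeholders, not the actual list):

```yaml
content:
  sources:
    # Illustrative entry only; see the real antora-playbook.yml for the actual repositories
    - url: https://github.com/example-org/example-docs
      branches: [main]
      start_path: docs
```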

@@ -140,7 +140,7 @@ modules/ <2>
----
<1> This file tells Antora that the contents of the `modules/` folder should be processed and added to the documentation site. This file is called the {url-antora-yml}[component version descriptor file].
<2> This folder contains the content that Antora will process.
-<3> This folder contains any content that can't be categorized under a specfic module name. Unlike other modules, the name of this module is never displayed in the URL of the site.
+<3> This folder contains any content that can't be categorized under a specific module name. Unlike other modules, the name of this module is never displayed in the URL of the site.
<4> In any module, this folder contains downloadable content such as ZIP files that a user can download through a link.
<5> In any module, this folder contains examples such as source code that you can include in Asciidoc pages.
<6> In any module, this folder contains images that you can include in Asciidoc pages.
4 changes: 2 additions & 2 deletions README.adoc
@@ -1,4 +1,4 @@
-= {replace with your project} Documentation
+= Flow Documentation
// Settings:
ifdef::env-github[]
:warning-caption: :warning:
@@ -15,7 +15,7 @@ endif::[]

image:https://img.shields.io/badge/Build-Staging-yellow[link="{url-staging}"]

-This repository contains the Antora components for the {replace with your project} documentation.
+This repository contains the Antora components for the Flow documentation.

The documentation source files are marked up with AsciiDoc.

5 changes: 3 additions & 2 deletions antora-playbook-local.yml
@@ -1,20 +1,21 @@
site:
-  title: Documentation
+  title: Documentation (with Kapa)
  url: http://localhost:5000
  start_page: hz-flow::index.adoc
  robots: disallow
  keys:
    docsearch_id: 'QK2EAH8GB0'
    docsearch_api: 'ef7bd9485eafbd75d6e8425949eda1f5'
    docsearch_index: 'prod_hazelcast_docs'
+    ai_search_id: 'ad664bf0-07e2-42e7-9150-2e1b04b15cca'
content:
  sources:
    - url: .
      branches: HEAD
      start_path: docs
ui:
  bundle:
-    url: https://github.com/hazelcast/hazelcast-docs-ui/releases/latest/download/ui-bundle.zip #../hazelcast-docs-ui/build/ui-bundle.zip
+    url: ../hazelcast-docs-ui/build/ui-bundle.zip
    snapshot: true
asciidoc:
  attributes:
@@ -100,7 +100,7 @@ jdbc { # The root element for database connections
For the full specification, including supported database drivers and their connection details, see xref:describing-data-sources:configuring-connections.adoc[Configure connections].

=== Passing sensitive data
-It may not always be desirable to specify sensistive connection information directly in the config file, especially
+It may not always be desirable to specify sensitive connection information directly in the config file, especially
if these are being checked into source control.

Environment variables can be used anywhere in the config file, following the https://github.com/lightbend/config#uses-of-substitutions[HOCON standards].
@@ -113,7 +113,7 @@ jdbc {
connectionName = another-connection
jdbcDriver = POSTGRES # Defines the driver to use. See below for the possible options
connectionParameters {
-# .. other params omitted for bevity ..
+# .. other params omitted for brevity ..
password = ${postgres_password} # Reads the environment variable "postgres_password"
}
}
@@ -170,7 +170,7 @@ Cons:
* OpenAPI endpoints
* HTTP servers

-This is a strong option for scenarios where sytems can't publish their own schemas (eg., databases),
+This is a strong option for scenarios where systems can't publish their own schemas (eg., databases),
or for data sources that are otherwise structureless (eg., CSV files).

Additionally, using a git-backed repository for a shared glossary / taxonomy is a great way to
2 changes: 1 addition & 1 deletion docs/modules/data-formats/pages/avro.adoc
@@ -203,7 +203,7 @@ NOTE: 'Local' means local to the server.

This workflow adds a reference to an Avro file that's *on the disk of the server*. +
+
-It's intended for developers who are running {short-product-name} in a Docker image on their local machine. +
+It's intended for developers who are running {short-product-name} in a container on their local machine. +
+
This workflow isn't intended for uploading an Avro schema to a remote server. Instead, use a Git repository

320 changes: 320 additions & 0 deletions docs/modules/deploying/images/flow-components2.svg
2 changes: 1 addition & 1 deletion docs/modules/deploying/pages/authentication.adoc
@@ -84,7 +84,7 @@ The presented JWT is expected to have the following attributes:

Below you can find example Docker Compose files: one where MC is the identity provider, and a more complex scenario where MC is configured to use an external identity provider.

-NOTE: MC is preconfigured as the identity provider for {short-product-name} with the configuration prefixed `flow.security` and `FLOW_SECURITY` in the Docker Compose files below. Please update <<modificaton-of-sec-preconfig,specified properties>> only when necessary; changing others may lead to unexpected results.
+NOTE: MC is preconfigured as the identity provider for {short-product-name} with the configuration prefixed `flow.security` and `FLOW_SECURITY` in the Docker Compose files below. Please update specified properties only when necessary; changing others may lead to unexpected results.
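
To give a feel for the shape of that pre-configuration, here is a minimal, hypothetical sketch; the property names below are invented for illustration, and the real keys live in the shipped Compose files:

```yaml
services:
  flow:
    environment:
      # Hypothetical FLOW_SECURITY-prefixed settings, shown for shape only;
      # consult the actual Docker Compose files for the real property names
      FLOW_SECURITY_ISSUER_URI: "http://management-center:8080"
      FLOW_SECURITY_CLIENT_ID: "flow"
```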

[#modification-of-sec-preconfig]
=== Potential modification of security pre-configuration
4 changes: 3 additions & 1 deletion docs/modules/deploying/pages/components.adoc
@@ -3,7 +3,9 @@
:description: A {short-product-name} deployment consists of several components:


-image:flow-components.png[]
+// image:flow-components.png[]
+image:flow-components2.svg[]


== {short-product-name} server
{short-product-name} server is a combination of several components:
5 changes: 2 additions & 3 deletions docs/modules/deploying/pages/configuring.adoc
@@ -65,11 +65,10 @@ There are several configuration settings referenced throughout these docs, which

All config settings can be passed in a variety of ways:

-==== Docker
+==== Container

-In a Docker / Docker Compose file, pass variables using the `OPTIONS` environment variable:
+In a container or Docker Compose file, pass variables using the `OPTIONS` environment variable:

-//rebranded vyne to flow below and substituted the short name variable for 'flow' as this doesn't render in code snippets - check if correct
----
services:
  flow:
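    # Hypothetical continuation of this snippet: the image name and the setting
    # passed via OPTIONS below are illustrative placeholders, not from the original file
    image: hazelcast/flow:latest
    environment:
      OPTIONS: "--some.config.setting=some-value"
----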
2 changes: 1 addition & 1 deletion docs/modules/deploying/pages/data-policies.adoc
@@ -57,7 +57,7 @@ Similar to how you can use claims from your IDP, to pass through to a data sourc

Policies can be defined against any data that is discoverable using a {short-product-name} query -
not just the values present on the inbound claim.
-A seperate subquery is executed to discover data that is needed to evaluate the policy.
+A separate subquery is executed to discover data that is needed to evaluate the policy.

For example:

5 changes: 3 additions & 2 deletions docs/modules/deploying/pages/development-deployments.adoc
@@ -6,15 +6,16 @@ To get up and running quickly, you can bring up a fully working {short-product-n
== Development setup

This guide will walk you through setting up {short-product-name} in a development environment.
-This setup pre-configures Management Center in Dev Mode, without needing any security credentials for logging in or using the REST API. Dev Mode should not be used in production. For more information about Dev Mode, see https://docs.hazelcast.com/management-center/latest/deploy-manage/dev-mode
+This setup pre-configures Management Center in Dev Mode, without needing any security credentials for logging in or using the REST API. Dev Mode should not be used in production. For more information about Dev Mode, see https://docs.hazelcast.com/management-center/latest/deploy-manage/dev-mode


=== Prerequisites

Before you get started, you'll need the following:

* https://docs.docker.com/engine/install/[Docker] and Docker Compose (installed by default with Docker)
* https://docs.hazelcast.com/clc/latest/install-clc[Hazelcast CLC]. CLC is the official Hazelcast command-line tool for interacting with Hazelcast clusters and creating Hazelcast projects.
-* Hazelcast License Key
+* https://hazelcast.com/get-started/[Hazelcast License Key]
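
If you're unsure where the license key ends up, here is a minimal hedged sketch of a Compose service using it (the service layout is illustrative; `HZ_LICENSEKEY` is the standard Hazelcast environment variable):

```yaml
services:
  hazelcast:
    image: hazelcast/hazelcast-enterprise:latest
    environment:
      # Standard env var for a Hazelcast Enterprise license key; the
      # surrounding service definition is an illustrative sketch only
      HZ_LICENSEKEY: ${HZ_LICENSE_KEY}
```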

=== Creating your project

@@ -40,8 +40,6 @@ Caches are not shared between worker nodes, so remote services may receive a hig
As the cluster size grows, the work can be parallelized across a greater number of nodes. While this provides improved
throughput, the work coordinator incurs a heavier workload in serialization and deserialization of work tasks and responses.

-//rebranded vyne to flow below - check!

To account for this, the `flow.projection.distributionRemoteBias` setting allows tuning the point at which work is preferentially distributed to remote nodes, versus the
query coordinator. Once this value is exceeded, the query coordinator node will perform a lower proportion of projection work in a query.
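
As a hedged sketch, assuming the container `OPTIONS` mechanism described in the configuration docs (the flag format and value below are illustrative):

```yaml
services:
  flow:
    environment:
      # Illustrative only: raises the threshold at which projection work is
      # preferentially distributed to remote nodes rather than the coordinator
      OPTIONS: "--flow.projection.distributionRemoteBias=100"
```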

@@ -67,7 +65,7 @@ For projection work to be distributed across all {short-product-name} nodes in a

== Advanced cluster configuration

-This section may only be required if you're configuring the Docker orchestration yourself and not using one of our preconfigured templates.
+This section may only be required if you're configuring the container orchestration yourself and not using one of our preconfigured templates.

=== Cluster discovery types

4 changes: 2 additions & 2 deletions docs/modules/deploying/pages/production-deployments.adoc
@@ -1,7 +1,7 @@
= Deploying {short-product-name}
:description: How to deploy {short-product-name} to production

-Our recommended deployment method is with `docker compose` or deploying the Docker container directly using the orchestrator of your choice.
+Our recommended deployment method is with `docker compose` or deploying the container directly using the orchestrator of your choice.

== Deploying using Docker Compose

@@ -23,7 +23,7 @@ When deploying to production, there are common tasks you may wish to perform:
* Enable xref:deploying:authentication.adoc[authentication] and xref:deploying:authorization.adoc[authorization] in {short-product-name}
* Use a xref:workspace:overview.adoc#reading-workspace-conf-from-git[Git-backed workspace.conf file] to align with IaC
* Understand how xref:deploying:managing-secrets.adoc[secrets are handled] in {short-product-name}
-* Understand how to configure {short-product-name} via xref:deploying:configuring.adoc#docker[application properties] or xref:deploying:configuring.adoc#passing-{short-product-name}-application-configuration[environment variables]
+* Understand how to configure {short-product-name} via xref:deploying:configuring.adoc#container[application properties] or xref:deploying:configuring.adoc#passing-{short-product-name}-application-configuration[environment variables]

== Further deployment templates

6 changes: 3 additions & 3 deletions docs/modules/describing-data-sources/pages/aws-services.adoc
@@ -314,7 +314,7 @@ service AwsBucketService {
```

#### Filename patterns when writing to S3
-When writing to S3 filenames, filename patterns are not supported (unlike when xref:aws-services.adoc#filename-patterns-when-writing-to-s3.adoc[reading]).
+When writing to S3 filenames, filename patterns are not supported (unlike when reading).

If you declare a filename with a pattern, an error will be thrown.

@@ -478,7 +478,7 @@ import flow.aws.sqs.SqsOperation
@SqsService( connectionName = "moviesConnection" )
service MovieService {
@SqsOperation( queue = "movies" )
-write operation publishMoveEvent(Movie):Movie
+write operation publishMovieEvent(Movie):Movie
}
----

@@ -498,7 +498,7 @@ service MovieService {
operation newReleases():Stream<Movie>
@SqsOperation( queue = "moviesToReview" )
-write operation publishMoveEvent(Movie):Movie
+write operation publishMovieEvent(Movie):Movie
}
// Query: consume from the new releases queue, and publish to
@@ -194,8 +194,8 @@ Connection parameters are as follows:
[,HOCON]
----
jdbc {
-mysql-docker {
-connectionName=mysql-docker
+mysql-db {
+connectionName=mysql-db
connectionParameters {
database=test
host=localhost
2 changes: 1 addition & 1 deletion docs/modules/describing-data-sources/pages/databases.adoc
@@ -369,7 +369,7 @@ No schema migrations are performed.
=== Example queries

When writing data from one data source into a database, it's not
-neccessary for the data to align with the format of the
+necessary for the data to align with the format of the
persisted value.

{short-product-name} will automatically adapt the incoming data to the
@@ -25,9 +25,6 @@ launched the Schema Server from.

You may have overridden this when you launched the Schema Server, by specifying `+--vyne.repositories.config-file=...+`.

-If you're running one of our demo tutorials, the config file is at `vyne/schema-server/schema-server.conf`, relative
-to the docker-compose file you used.

Find and open the schema-server config file in your editor of choice.

=== Specify a new file-based repository
2 changes: 1 addition & 1 deletion docs/modules/describing-data-sources/pages/hazelcast.adoc
@@ -17,7 +17,7 @@ hazelcast {
}
```

-NOTE: Assuming your docker-compose.yml contains a Hazelcast container and the container is up and running, here is an example of how the Hazelcast container configuration might look:
+NOTE: Assuming you are using Docker Compose to run {short-product-name}, here is an example of how to configure an external Hazelcast container:
```yaml
hazelcast:
  image: "docker.io/hazelcast/hazelcast:latest"
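  # Hypothetical continuation; the port mapping and cluster name below are
  # illustrative defaults (5701 is Hazelcast's standard member port)
  ports:
    - "5701:5701"
  environment:
    HZ_CLUSTERNAME: dev
```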
2 changes: 1 addition & 1 deletion docs/modules/describing-data-sources/pages/kafka.adoc
@@ -281,7 +281,7 @@ stream { StockPrice.filterEach( ( StockTicker ) -> StockTicker == 'AAPL' ) }
==== Streaming from Kafka to a database

Streams from Kafka can be inserted into a database (or any other writable source, such as
-xref:describing-data-sources:hazelcast.adoc#writing-data-to-hazelcast[Hazelcast] or xref:describing-data-sources:kafka.adoc[Dynamo]) using a mutating query.
+xref:describing-data-sources:hazelcast.adoc#writing-data-to-hazelcast[Hazelcast] or xref:describing-data-sources:aws-services.adoc#DynamoDb[Dynamo]) using a mutating query.

As with all mutating queries, it's not
necessary for the data from Kafka to align with the format of the
14 changes: 7 additions & 7 deletions docs/modules/guides/pages/apis-db-kafka.adoc
@@ -55,7 +55,7 @@ which we'll use in our next steps.
If you run `docker ps`, you should see a collection of Docker containers now running.

|===
-| Docker Image | Part of {short-product-name} stack or Demo? | Description
+| Container Name | Part of {short-product-name} stack or Demo? | Description

| {code-product-name}
| {short-product-name}
Expand All @@ -69,7 +69,7 @@ If you run `docker ps`, you should see a collection of Docker containers now run
| Demo
| A Postgres DB, which contains the Postgres https://github.com/devrimgunduz/pagila[Pagila] demo database for a fake DVD rental store

-| apache/kafka
+| kafka
| Demo (Kafka)
| The Apache Kafka image
|===
@@ -85,7 +85,7 @@ The project is a https://taxilang.org[Taxi] project which gets edited locally, t

- From the sidebar, click http://localhost:9021/projects[Projects]
- Click *Add Project* and add a Local Disk project
-- For the **Project Path**, enter `petflix`
+- For the **Project Path**, enter `demo`
- The server will tell you it can't find an existing project, and let you create one - click **Create new project**
- Enter the package co-ordinates as follows (this is similar to npm, maven, etc.)

@@ -94,7 +94,7 @@ The project is a https://taxilang.org[Taxi] project which gets edited locally, t
| Field | Value

| *Organisation*
-| `com.petflix`
+| `com.hazelflix`

| *Name*
| `demo`
@@ -189,7 +189,7 @@ To import the schema:
| `film`

| Default namespace
-| `com.petflix.filmsdatabase`
+| `com.hazelflix.demo.filmsdatabase`
|===

Namespaces are used to help us group related content together, like packages in Java or namespaces in C# and TypeScript.
@@ -262,7 +262,7 @@ Fill in the form with the following values:
| `+http://films-api/v3/api-docs+`

| Default namespace
-| `com.petflix.listings`
+| `com.hazelflix.listings`

| Base url
| Leave this blank
@@ -574,7 +574,7 @@ Fill out the rest of the form with the following details:
| `LATEST`

| Namespace
-| `com.petflix.announcements`
+| `com.hazelflix.announcements`

| Message Type
| `NewFilmReleaseAnnouncement`
6 changes: 3 additions & 3 deletions docs/modules/guides/pages/streaming-data.adoc
@@ -16,7 +16,7 @@ Our demo has a few services running, which we'll join together to create a bespo
Our services share a taxonomy used to describe the elements we can fetch:

```taxi taxonomy.taxi
-namespace petflix
+namespace hazelflix

type FilmId inherits String
type FilmTitle inherits String
@@ -52,8 +52,8 @@ syntax = "proto3";
import "taxi/dataType.proto";
message NewReviewPostedMessage {
-int32 filmId = 1 [(taxi.dataType)="petflix.FilmId"];
-string reviewText = 2 [(taxi.dataType)="petflix.ReviewText"];
+int32 filmId = 1 [(taxi.dataType)="hazelflix.FilmId"];
+string reviewText = 2 [(taxi.dataType)="hazelflix.ReviewText"];
}
----
