diff --git a/docs/modules/ROOT/pages/index.adoc b/docs/modules/ROOT/pages/index.adoc
index 261598a52..cadae10c9 100644
--- a/docs/modules/ROOT/pages/index.adoc
+++ b/docs/modules/ROOT/pages/index.adoc
@@ -2,11 +2,9 @@
 :!page-pagination:
 :description: Hazelcast Platform uniquely combines a distributed compute engine and a fast data store in one runtime. It offers unmatched performance, resilience and scale for real-time and AI-driven applications.
 
-{description}
-
 Hazelcast is a distributed computation and storage platform for consistently
 low-latency querying, aggregation and stateful computation against event
-streams and traditional data sources. It allows you to quickly build
+streams and traditional data sources. {description} It allows you to quickly build
 resource-efficient, real-time applications. You can deploy it at any scale
 from small edge devices to a large cluster of cloud instances.
 
@@ -45,8 +43,6 @@ traditional databases are difficult to scale out and manage. They require additi
 processes for coordination and high availability. With Hazelcast, when you start
 another process to add more capacity, data and backups are automatically and evenly
 balanced.
-NOTE: “Node” and “Member” are interchangeable, and both mean a Java Virtual Machine (“JVM”) on which one or more instances of Hazelcast software are in operation.
-
 == What Can You Do with Hazelcast?
 
 You can request data, listen to events, submit data processing tasks using
@@ -131,6 +127,8 @@ among the available cluster members. They can contain hundreds or thousands of d
 depending on the memory capacity of your system. Hazelcast also automatically creates
 backups of these partitions which are also distributed in the cluster. This makes
 Hazelcast resilient to data loss.
+NOTE: _Node_ and _Member_ are interchangeable, and both mean a Java Virtual Machine (JVM) on which one or more instances of Hazelcast are in operation.
+
 Hazelcast's *streaming engine* focuses on data transformation while it does
 all the heavy lifting of getting the data flowing and computation running
 across a cluster of members. It supports working with both bounded (batch) and unbounded (streaming) data.