Kakarot RPC fits in the three-part architecture of the Kakarot zkEVM rollup (Kakarot EVM Cairo Programs, Kakarot RPC, Kakarot Indexer). It is the implementation of the Ethereum JSON-RPC specification made to interact with Kakarot zkEVM in a fully Ethereum-compatible way.
The Kakarot RPC layer's goal is to receive and output EVM-compatible payloads & calls while interacting with an underlying StarknetOS client. This enables Kakarot zkEVM to interact with the usual Ethereum tooling: Metamask, Hardhat, Foundry, etc.
Note that this is necessary because Kakarot zkEVM is implemented as a set of Cairo Programs that run on an underlying CairoVM (so-called StarknetOS) chain.
This adapter layer is based on:
Here is a high level overview of the architecture of Kakarot RPC.
Below is a lower level detailed overview of the internal architecture.
TL;DR:

- Run `make setup` to build dependencies.
- Run `cargo build` to build Kakarot RPC.
- Test with `make test`.
- Run Kakarot RPC in dev mode:
  - Run dev RPC: `make run-dev` (you'll need a StarknetOS instance running in another process and Kakarot contracts deployed).
- Run with Docker Compose:
  - `make katana-rpc-up`
  - To kill these processes, `make docker-down`
- Build the docker image for the RPC: `make docker-build`
To set up the repository (pulling git submodules and building Cairo dependencies), run:

```bash
make setup
```

Caveat: the `make setup` command uses Linux (macOS-compatible) commands to run `./scripts/extract_abi.sh`. This script generates strongly typed Rust bindings for the Cairo programs. If you encounter problems when building the project, try running `./scripts/extract_abi.sh`.
To build the project from source (in release mode):

```bash
cargo build --release
```

Note that there are sometimes issues with some dependencies (notably `scarb` or Cairo-related packages); you may occasionally need to run `cargo clean` and then `cargo build` again.
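If a build keeps failing after dependency or submodule changes, a typical recovery sequence based on the notes above (a suggestion, not an official workflow) is:

```bash
# Regenerate the strongly typed Rust bindings for the Cairo programs,
# then rebuild from a clean state.
./scripts/extract_abi.sh
cargo clean
cargo build --release
```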
Copy the `.env.example` file to a `.env` file and populate each variable:

```bash
cp .env.example .env
```
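For illustration, a minimal `.env` might set the variables referenced elsewhere in this README; the values below are placeholders, and `.env.example` remains the authoritative list:

```bash
# Placeholder values for illustration only; see .env.example for the full list.
STARKNET_NETWORK=katana
EVM_PRIVATE_KEY=0x<your-pre-funded-EOA-private-key>
```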
In the meantime, you can just use unit tests for development:

```bash
make test
```

The binaries will be located in `target/release/`.
Dev mode with Katana
To run a local StarknetOS client (Katana) and deploy Kakarot zkEVM on it, i.e. the set of Cairo smart contracts implementing the EVM:

```bash
make run-katana
```

To deploy Kakarot Core EVM (the set of Cairo programs):

```bash
make deploy-kakarot
```

To run the Kakarot RPC pointing to this local devnet:

```bash
STARKNET_NETWORK=katana make run-dev
```
Some notes on this local devnet:

- This runs a devnet with Katana, with the contracts automatically deployed, so you don't have to deploy them manually (see `./lib/kakarot/kakarot_scripts/deploy_kakarot.py` for the list of contracts).
- The deployments and declarations for the devnet will be written to the `deployments/katana` folder inside your project root after a successful run of the `make deploy-kakarot` command.
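Once the RPC is running, you can sanity-check it with any standard Ethereum JSON-RPC call. The endpoint below (`http://127.0.0.1:3030`) is an assumption; use the host and port configured in your `.env`:

```bash
# Query the latest block number over plain JSON-RPC (illustrative endpoint).
curl -s http://127.0.0.1:3030 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'
```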
Running with Docker Compose
To orchestrate running a Katana/Madara devnet instance, deploying the Kakarot contracts and initializing the RPC, you may use the following commands:

For Katana:

```bash
make katana-rpc-up
```

For Madara:

```bash
make madara-rpc-up
```
Building a Docker Image
In order to build a Docker image for the RPC, you can run the command below, which will set up the local environment and compile the binary:

```bash
make docker-build
```
Sending transactions to RPC using forge script
An example script to run, which uses a pre-funded EOA account with the private key `EVM_PRIVATE_KEY`:

```bash
forge script scripts/PlainOpcodes.s.sol --broadcast --legacy --slow
```
Kakarot RPC is configurable through environment variables. Check out the `.env.example` file to see the environment variables.
You can take a look at the `rpc-call-examples` directory. Please note the following:

- `sendRawTransaction.hurl`: the raw transaction provided calls the `inc()` function of the Counter contract. However, given that this transaction is signed for the EOA's nonce at the current devnet state (0x2), the call will only work once. If you want to keep incrementing (or decrementing) the counter, you need to regenerate the payload for the call with an updated nonce using the provided Python script.
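To regenerate the payload, you first need the EOA's current nonce. Assuming a locally running RPC (the endpoint and address below are placeholders), you can fetch it with a standard `eth_getTransactionCount` call:

```bash
# Fetch the EOA's latest nonce before re-signing the transaction (illustrative values).
curl -s http://127.0.0.1:3030 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getTransactionCount","params":["0x<EOA_ADDRESS>","latest"],"id":1}'
```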
The Hive end-to-end test suite is set up in the GitHub Continuous Integration (CI) flow of the repository. This provides a safeguard when modifying the current RPC implementation and/or the execution layer.

Due to the current differences between the Kakarot EVM implementation, which aims to be a type 2 zkEVM (see the blog post from Vitalik for more details), and the Ethereum specification, some of the Hive tests need to be skipped or slightly modified in order to pass.

For the hive rpc tests, all the websocket-related tests are skipped, as websockets aren't currently supported by the Kakarot RPC.
For the hive rpc compatibility tests, the following tests are skipped:
- `debug_getRawBlock/get-block-n`: the Kakarot implementation currently doesn't compute the block hash following EVM standards.
- `debug_getRawBlock/get-genesis`: see `debug_getRawBlock/get-block-n`.
- `debug_getRawHeader/get-block-n`: the debug API is currently not supported by the Kakarot RPC.
- `debug_getRawHeader/get-genesis`: the debug API is currently not supported by the Kakarot RPC.
- `debug_getRawHeader/get-invalid-number`: the debug API is currently not supported by the Kakarot RPC.
- `debug_getRawTransaction/get-invalid-hash`: the Kakarot implementation of the debug_getRawTransaction endpoint uses the `reth_primitives::B256` type when deserializing the hash. This test is expected to fail as the hash provided in the query doesn't start with `0x`. As this test doesn't bring much, we decided to skip it.
- `eth_createAccessList/create-al-multiple-reads`: the createAccessList endpoint is currently not supported by the Kakarot RPC.
- `eth_createAccessList/create-al-simple-contract`: the createAccessList endpoint is currently not supported by the Kakarot RPC.
- `eth_createAccessList/create-al-simple-transfer`: the createAccessList endpoint is currently not supported by the Kakarot RPC.
- `eth_feeHistory/fee-history`: the Kakarot implementation doesn't currently set the block gas limit dynamically, which causes some disparity in the returned data. Additionally, the rewards of the blocks aren't available.
- `eth_getBalance/get-balance-blockhash`: see `debug_getRawBlock/get-block-n`.
- `eth_getBlockByHash/get-block-by-hash`: see `debug_getRawBlock/get-block-n`.
- `eth_getBlockReceipts/get-block-receipts-by-hash`: see `debug_getRawBlock/get-block-n`.
- `eth_getBlockTransactionCountByHash/get-block-n`: see `debug_getRawBlock/get-block-n`.
- `eth_getBlockTransactionCountByHash/get-genesis`: see `debug_getRawBlock/get-block-n`.
- `eth_getProof/get-account-proof-blockhash`: the getProof endpoint is currently not supported by the Kakarot RPC.
- `eth_getProof/get-account-proof-with-storage`: the getProof endpoint is currently not supported by the Kakarot RPC.
- `eth_getProof/get-account-proof`: the getProof endpoint is currently not supported by the Kakarot RPC.
- `eth_getStorage/get-storage-invalid-key-too-large`: the Kakarot implementation of the eth_getStorage endpoint uses the `reth_primitives::U256` type when deserializing the number. This test is expected to fail as the block number provided in the query exceeds 32 bytes. As this test doesn't bring much, we decided to skip it.
- `eth_getStorage/get-storage-invalid-key`: the Kakarot implementation uses the `jsonrpsee` crate's `rpc` macro to generate the server implementation of the ETH API. This test passes an invalid block hash `0xasdf` and expects the server to return error code `-32000`, which corresponds to an invalid input error. The code derived from the `rpc` macro returns error code `-32602`, which corresponds to an invalid parameters error, whenever it encounters issues when deserializing the input. We decided to ignore this test as the only issue is the error code returned.
- `eth_getTransactionByBlockHashAndIndex/get-block-n`: see `debug_getRawBlock/get-block-n`.
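If you want to observe the error-code behaviour described above, you can send a malformed block parameter to a locally running RPC (the endpoint below is a placeholder):

```bash
# Pass an invalid block identifier ("0xasdf"); per the note above, the jsonrpsee-generated
# server is expected to answer with an invalid-parameters error (code -32602).
curl -s http://127.0.0.1:3030 \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"eth_getStorageAt","params":["0x0000000000000000000000000000000000000000","0x0","0xasdf"],"id":1}'
```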
In addition to the tests we skip, some of the objects' fields need to be ignored in the passing tests:
- For blocks: the hash, parent hash, timestamp, base fee per gas, difficulty, gas limit, miner, size, state root, total difficulty and withdrawals are all skipped. Due to the difference between a type 1 and a type 2 ZK-EVM, these fields are currently not computed according to the EVM specifications and need to be skipped.
- For receipts, transactions and logs: the block hash is skipped.
If you wish to run our hive test suite locally, the following steps should be taken:

- Set up the repo: `make setup`.
- Build a local docker image of the RPC:

  ```bash
  docker build --build-arg APIBARA_STARKNET_BIN_DIR=f7va4mjqww1kkpp4il6y295dgcwq147v --build-arg APIBARA_SINK_BIN_DIR=3iqnrcirqpg4s7zdy1wdh0dq17jwzmlc -t hive . -f docker/hive/Dockerfile
  ```

- Check out the Kakarot fork of hive: `git clone https://github.com/kkrt-labs/hive`.
- Build the hive binary: `go build hive.go`.
- Run the full rpc test suite against Kakarot: `./hive --sim "ethereum/rpc" --client kakarot`.
- Additional filtering can be provided using `--sim.limit` if you wish to run a certain limited set of tests (see the example below).
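For instance, a filtered run restricted to a subset of tests might look like the following (the filter expression is illustrative; adjust it to the tests you care about):

```bash
./hive --sim "ethereum/rpc" --sim.limit "eth_getBalance" --client kakarot
```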
If you want to say thank you and/or support the active development of Kakarot RPC:
- Add a GitHub Star to the project.
- Tweet about the Kakarot RPC: https://twitter.com/KakarotZkEvm.
First off, thanks for taking the time to contribute! Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make will benefit everybody else and are greatly appreciated.
Please read our contribution guidelines, and thank you for being involved!
- StarknetOS chain: also called a CairoVM chain, or Starknet appchain, it is a full node (or sequencer) powered by the Cairo VM (Cairo smart contracts can be deployed to it). It is a chain that behaves in most ways similarly to the Starknet L2.
- Kakarot Core EVM: The set of Cairo Programs that implement the Ethereum Virtual Machine instruction set.
- Katana: A StarknetOS sequencer developed by the Dojo team. Serves as the underlying StarknetOS client for Kakarot zkEVM locally. It is built with speed and minimalism in mind.
- Madara: A StarknetOS sequencer and full-node developed by the Madara (e.g. Pragma Oracle, Deoxys, etc.) and Starkware exploration teams. Based on the Substrate framework, it is built with decentralization and robustness in mind.
- Kakarot zkEVM: the entire system that forms the Kakarot zkRollup: the core EVM Cairo Programs and the StarknetOS chain they are deployed to, the RPC layer (this repository), and the Kakarot Indexer (the backend service that ingests Starknet data types and formats them in EVM format for RPC read requests).
For a full list of all authors and contributors, see the contributors page.
Kakarot RPC follows good practices of security, but 100% security cannot be assured. Kakarot RPC is provided "as is" without any warranty. Use at your own risk.
For more information and to report security issues, please refer to our security documentation.
This project is licensed under the MIT license.
See LICENSE for more information.
We warmly thank all the people who made this project possible.
- Reth (Rust Ethereum): thank you for providing open-source libraries for us to reuse.
- jsonrpsee
- Starkware and its exploration team, thank you for helping and providing a great test environment with Madara.
- Lambdaclass
- Dojo, thank you for providing great test utils.
- starknet-rs, thank you for a great SDK.
- All our contributors. This journey wouldn't be possible without you.
For now, Kakarot RPC provides a minimal benchmarking methodology. You'll need Bun installed locally.
- Run a Starknet node locally (Katana or Madara), e.g. `katana --block-time 6000 --disable-fee` if you have the dojo binary locally, or `make madara-rpc-up` for Madara.
- Deploy the Kakarot smart contract (`make deploy-kakarot`).
- Run the Kakarot RPC binary (`make run-dev`).
- Run `make benchmark-katana` or `make benchmark-madara`.
Thanks goes to these wonderful people (emoji key):
This project follows the all-contributors specification. Contributions of any kind welcome!