feat: bring ZkTrie in #1076
Conversation
3525f71 to d42eb5e (compare)
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
🛑 Comments failed to post (46)
rollup/zktrie/byte32.go (1)
35-40:
⚠️ Potential issue: Clarify comment or adjust implementation for NewByte32FromBytesPaddingZero.
The implementation of `NewByte32FromBytesPaddingZero` correctly zero-pads shorter inputs. However, the comment suggests that the function also truncates longer inputs, which is not implemented in the function. Consider either:
- Updating the comment to accurately reflect the current implementation:
// create bytes32 with zero-padding for shorter bytes
- Or, implement the truncation as mentioned in the comment:
```go
func NewByte32FromBytesPaddingZero(b []byte) *Byte32 {
	byte32 := new(Byte32)
	if len(b) > 32 {
		copy(byte32[:], b[:32])
	} else {
		copy(byte32[:], b)
	}
	return byte32
}
```
trie/zk_trie_impl_test.go (3)
85-85:
⚠️ Potential issue: Fix type mismatches in TryGet and TryDelete calls
The `TryGet` and `TryDelete` methods have been updated to use `ZkTrie`, which is good. However, there are type mismatches in both method calls. Please update the following lines to match the expected types:
The offending calls are:
```go
return mt.ZkTrie.TryGet(zkt.NewHashFromBytes(key))
return mt.ZkTrie.TryDelete(zkt.NewHashFromBytes(key))
```
Ensure that the `ZkTrie.TryGet` and `ZkTrie.TryDelete` method signatures are checked and the arguments adjusted accordingly. As the compiler error below shows, these methods currently take `[]byte`, so either pass the key bytes rather than the `*zktrie.Hash`, or change the method signatures.
Also applies to: 89-89
🧰 Tools
🪛 GitHub Check: test
[failure] 85-85:
cannot use zkt.NewHashFromBytes(key) (value of type *zktrie.Hash) as []byte value in argument to mt.ZkTrie.TryGet
95-95:
⚠️ Potential issue: Fix type mismatch in TryUpdate call
The `TryUpdateAccount` method has been updated to use `ZkTrie`, which is good. However, there's a type mismatch in the `TryUpdate` call. Please update line 95 to match the expected type:
The offending call is:
```go
return mt.ZkTrie.TryUpdate(zkt.NewHashFromBytes(key), flag, value)
```
Ensure that the `ZkTrie.TryUpdate` method signature is checked and the arguments adjusted accordingly. As the compiler error below shows, the first argument is currently expected to be `[]byte` rather than `*zktrie.Hash`.
Committable suggestion was skipped due to low confidence.
🧰 Tools
🪛 GitHub Check: test
[failure] 95-95:
cannot use zkt.NewHashFromBytes(key) (value of type *zktrie.Hash) as []byte value in argument to mt.ZkTrie.TryUpdate
72-72:
⚠️ Potential issue: Fix type mismatch in TryUpdate call
The `UpdateWord` method has been updated to use `ZkTrie`, which is good. However, there's a type mismatch in the `TryUpdate` call, similar to the issue in the `AddWord` method. Please update line 72 to match the expected type:
The offending call is:
```go
return mt.ZkTrie.TryUpdate(zkt.NewHashFromBigInt(k), 1, []zkt.Byte32{*vPreimage})
```
Ensure that the `ZkTrie.TryUpdate` method signature is checked and the arguments adjusted accordingly; as the compiler error below shows, the key argument is currently expected to be `[]byte`.
Committable suggestion was skipped due to low confidence.
🧰 Tools
🪛 GitHub Check: test
[failure] 72-72:
cannot use zkt.NewHashFromBigInt(k) (value of type *zktrie.Hash) as []byte value in argument to mt.ZkTrie.TryUpdate
rollup/zktrie/util.go (7)
7-9:
⚠️ Potential issue: Correct typos in the function documentation.
There are some typos in the comments:
- Line 8: "fieds" should be "fields".
- Line 9: "recursiving" should be "recursive".
Please apply the following diff to fix the typos:
```diff
 // HashElemsWithDomain performs a recursive Poseidon hash over the array of ElemBytes, each hash
-// reduce 2 fieds into one, with a specified domain field which would be used in
-// every recursiving call
+// reduces 2 fields into one, with a specified domain field which is used in
+// every recursive call
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// HashElemsWithDomain performs a recursive Poseidon hash over the array of ElemBytes, each hash
// reduces 2 fields into one, with a specified domain field which is used in
// every recursive call
```
64-66:
⚠️ Potential issue: Handle empty `elems` slice to prevent an index out of range panic.
If `len(ret) == 0`, accessing `ret[0]` will cause an index out of range panic. Please add a check to handle the case when `elems` is empty. Apply this diff to fix the issue:
```diff
 if len(ret) < 2 {
+	if len(ret) == 0 {
+		return nil, errors.New("elems cannot be empty")
+	}
 	return NewHashFromBigInt(ret[0]), nil
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
if len(ret) < 2 {
	if len(ret) == 0 {
		return nil, errors.New("elems cannot be empty")
	}
	return NewHashFromBigInt(ret[0]), nil
}
```
91-97: 🛠️ Refactor suggestion
Optimize the loop in `BigEndianBitsToBigInt` to avoid unnecessary allocations.
The loop repeatedly calls `big.NewInt(2)` on each iteration, which is inefficient. Consider initializing the constant outside the loop. Apply this diff to optimize the loop:
```diff
 func BigEndianBitsToBigInt(bits []bool) *big.Int {
 	result := big.NewInt(0)
+	two := big.NewInt(2)
 	for _, bit := range bits {
-		result.Mul(result, big.NewInt(2))
+		result.Mul(result, two)
 		if bit {
 			result.Add(result, BigOne)
 		}
 	}
 	return result
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
result := big.NewInt(0)
two := big.NewInt(2)
for _, bit := range bits {
	result.Mul(result, two)
	if bit {
		result.Add(result, BigOne)
	}
}
```
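As a further, optional micro-optimization, a standalone sketch (not the PR's code; the lower-case helper name is hypothetical): `big.Int.Lsh` shifts in place, avoiding the multiplication entirely.
```go
package main

import (
	"fmt"
	"math/big"
)

// bigEndianBitsToBigInt is a local copy of the conversion for illustration,
// using Lsh (shift left by one) instead of multiplying by two each iteration.
func bigEndianBitsToBigInt(bits []bool) *big.Int {
	result := new(big.Int)
	one := big.NewInt(1)
	for _, bit := range bits {
		result.Lsh(result, 1) // result *= 2 without allocating a constant
		if bit {
			result.Or(result, one)
		}
	}
	return result
}

func main() {
	fmt.Println(bigEndianBitsToBigInt([]bool{true, false, true, false})) // 1010b = 10
}
```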
83-85:
⚠️ Potential issue: Add bounds checking in `TestBitBigEndian` to prevent out-of-range errors.
Similar to `SetBitBigEndian`, ensure that `n` does not exceed the valid bit range to avoid panics. Here's the suggested fix:
```diff
 func TestBitBigEndian(bitmap []byte, n uint) bool {
+	if n >= uint(len(bitmap))*8 {
+		// Handle error or return false
+		return false
+	}
 	return bitmap[uint(len(bitmap))-n/8-1]&(1<<(n%8)) != 0
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func TestBitBigEndian(bitmap []byte, n uint) bool {
	if n >= uint(len(bitmap))*8 {
		// Handle error or return false
		return false
	}
	return bitmap[uint(len(bitmap))-n/8-1]&(1<<(n%8)) != 0
}
```
73-75:
⚠️ Potential issue: Add bounds checking in `SetBitBigEndian` to prevent out-of-range errors.
Currently, there is no check to ensure that `n` is within the valid range. If `n` exceeds the bit length of `bitmap`, this will cause an index out of range panic. Please add bounds checking:
```diff
 func SetBitBigEndian(bitmap []byte, n uint) {
+	if n >= uint(len(bitmap))*8 {
+		// Handle error or log a warning
+		return
+	}
 	bitmap[uint(len(bitmap))-n/8-1] |= 1 << (n % 8)
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func SetBitBigEndian(bitmap []byte, n uint) {
	if n >= uint(len(bitmap))*8 {
		// Handle error or log a warning
		return
	}
	bitmap[uint(len(bitmap))-n/8-1] |= 1 << (n % 8)
}
```
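For reference, a self-contained sketch of how the two guarded helpers behave with an out-of-range index (these are local copies of the suggested code above, with hypothetical lower-case names, not the package's exported functions):
```go
package main

import "fmt"

// setBitBigEndian is a local copy of the guarded setter from the suggestion above.
func setBitBigEndian(bitmap []byte, n uint) {
	if n >= uint(len(bitmap))*8 {
		return // out of range: silently ignored in this sketch
	}
	bitmap[uint(len(bitmap))-n/8-1] |= 1 << (n % 8)
}

// testBitBigEndian reports whether bit n is set, returning false for out-of-range n.
func testBitBigEndian(bitmap []byte, n uint) bool {
	if n >= uint(len(bitmap))*8 {
		return false
	}
	return bitmap[uint(len(bitmap))-n/8-1]&(1<<(n%8)) != 0
}

func main() {
	bm := make([]byte, 2) // 16 bits
	setBitBigEndian(bm, 0)
	setBitBigEndian(bm, 15)
	setBitBigEndian(bm, 99) // out of range: no panic, no effect
	fmt.Println(testBitBigEndian(bm, 0), testBitBigEndian(bm, 15), testBitBigEndian(bm, 99))
	// Output: true true false
}
```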
53-62:
⚠️ Potential issue: Prevent overflow when shifting bits in `HandlingElemsAndByte32`.
In the loop, shifting `1 << i` can overflow when `i >= 32` since `flagArray` is a `uint32`. This may lead to incorrect behavior for larger indices. Please ensure that `len(elems) <= 32` or consider using a larger type for `flagArray`.
Possible solutions:
- Limit the length of `elems` to 32 elements.
- Change `flagArray` to `uint64` if larger arrays are needed.
- Use bit manipulation techniques that handle larger indices.
Example modification:
```diff
-func HandlingElemsAndByte32(flagArray uint32, elems []Byte32) (*Hash, error) {
+func HandlingElemsAndByte32(flagArray uint64, elems []Byte32) (*Hash, error) {
```
Committable suggestion was skipped due to low confidence.
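To illustrate the failure mode in a standalone snippet (independent of the package): in Go, shifting a `uint32` mask by 32 or more yields 0, so flag bits for element indices of 32 and above silently disappear rather than panicking, which is the incorrect behavior the comment describes.
```go
package main

import "fmt"

func main() {
	// With a uint32 flag word, bit indices >= 32 produce a zero mask,
	// so any flag test for those elements is always false.
	for _, i := range []uint{31, 32, 33} {
		mask := uint32(1) << i
		fmt.Printf("index %2d -> mask %#010x (usable: %v)\n", i, mask, mask != 0)
	}
}
```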
107-115:
⚠️ Potential issue: Handle potential errors in `ToSecureKeyBytes`.
In the conversion process, ensure that all errors are properly handled and that the returned `Byte32` is correctly formed even when `k.Bytes()` returns fewer than 32 bytes. Consider padding the bytes to ensure a consistent length:
```diff
 func ToSecureKeyBytes(key []byte) (*Byte32, error) {
 	k, err := ToSecureKey(key)
 	if err != nil {
 		return nil, err
 	}
-	return NewByte32FromBytes(k.Bytes()), nil
+	return NewByte32FromBytesPaddingZero(k.Bytes()), nil
 }
```
Committable suggestion was skipped due to low confidence.
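The underlying reason for the padding concern, shown standalone: `big.Int.Bytes()` omits leading zero bytes, so small field elements come back shorter than 32 bytes and must be padded to a fixed width. The left-padding below is only one common convention for illustration; the direction used in the package should follow whatever its `Byte32` constructors define.
```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	k := big.NewInt(0xabcd)
	raw := k.Bytes() // 2 bytes: big.Int drops leading zeros

	// One common convention: left-pad to a fixed 32-byte width.
	padded := make([]byte, 32)
	copy(padded[32-len(raw):], raw)

	fmt.Println(len(raw), len(padded)) // 2 32
}
```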
rollup/zktrie/hash.go (1)
88-94:
⚠️ Potential issue: Validate input length in `SetBytes` to prevent a potential panic
In the `SetBytes` method, if the input slice `b` is longer than `HashByteLen`, it can cause an out-of-bounds panic. Apply this diff to fix the issue:
```diff
 func (h *Hash) SetBytes(b []byte) {
 	*h = HashZero
+	if len(b) > HashByteLen {
+		panic(fmt.Sprintf("input byte slice length %d exceeds HashByteLen %d", len(b), HashByteLen))
+	}
 	_ = h[len(b)-1] // eliminate range checks
 	for i := 0; i < len(b); i++ {
 		h[len(b)-i-1] = b[i]
 	}
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (h *Hash) SetBytes(b []byte) {
	*h = HashZero
	if len(b) > HashByteLen {
		panic(fmt.Sprintf("input byte slice length %d exceeds HashByteLen %d", len(b), HashByteLen))
	}
	_ = h[len(b)-1] // eliminate range checks
	for i := 0; i < len(b); i++ {
		h[len(b)-i-1] = b[i]
	}
}
```
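A standalone variant of the same guard, using a local stand-in type and returning an error instead of panicking; the names and the error-returning signature are assumptions for illustration, not the package's API:
```go
package main

import "fmt"

const hashByteLen = 32

type hashT [hashByteLen]byte // local stand-in for the package's Hash type

// setBytes stores b in reversed byte order, rejecting over-long input
// up front rather than indexing out of bounds.
func (h *hashT) setBytes(b []byte) error {
	if len(b) > hashByteLen {
		return fmt.Errorf("input length %d exceeds %d", len(b), hashByteLen)
	}
	*h = hashT{}
	for i := 0; i < len(b); i++ {
		h[len(b)-i-1] = b[i]
	}
	return nil
}

func main() {
	var h hashT
	fmt.Println(h.setBytes(make([]byte, 33))) // error, no panic
	fmt.Println(h.setBytes([]byte{1, 2, 3}))  // <nil>
	fmt.Println(h[:3])                        // [3 2 1]
}
```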
rollup/zktrie/zk_trie_deletionproof.go (11)
17-17:
⚠️ Potential issue: Adjust the function comment to follow GoDoc conventions
The comment for `NewProofTracer` should start with the function name and be a complete sentence in the third person singular. Apply this diff to improve the comment:
```diff
-// NewProofTracer create a proof tracer object
+// NewProofTracer creates a proof tracer object.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// NewProofTracer creates a proof tracer object.
28-28:
⚠️ Potential issue: Adjust the function comment to follow GoDoc conventions
The comment for `Merge` should start with the function name and be a complete sentence. Apply this diff to improve the comment:
```diff
-// Merge merge the input tracer into current and return current tracer
+// Merge merges the input tracer into the current tracer and returns it.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// Merge merges the input tracer into the current tracer and returns it.
115-115:
⚠️ Potential issue: Handle the error returned by `NodeHash()`
The error returned by `leafNode.NodeHash()` is being ignored. Handle the error to prevent potential issues. Apply this diff to handle the error:
```diff
-	nodeHash, _ := leafNode.NodeHash()
+	nodeHash, err := leafNode.NodeHash()
+	if err != nil {
+		return err
+	}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
nodeHash, err := leafNode.NodeHash()
if err != nil {
	return err
}
```
70-70:
⚠️ Potential issue: Handle the error returned by `NodeHash()`
The error returned by `n.NodeHash()` is being ignored. To ensure robustness, handle the error appropriately. Apply this diff to handle the error:
```diff
-	nodeHash, _ := n.NodeHash()
+	nodeHash, err := n.NodeHash()
+	if err != nil {
+		return nil, err
+	}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
nodeHash, err := n.NodeHash()
if err != nil {
	return nil, err
}
```
121-122:
⚠️ Potential issue: Clarify the function comment for `Prove`
The comment should be clearer and follow GoDoc conventions for better understanding.
Apply this diff to improve the comment:
```diff
-// Prove act the same as zktrie.Prove, while also collect the raw path
-// for collecting deletion proofs in a post-work
+// Prove acts the same as `zktrie.Prove` but also collects the raw path
+// for use in collecting deletion proofs in subsequent processing.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// Prove acts the same as `zktrie.Prove` but also collects the raw path // for use in collecting deletion proofs in subsequent processing.
51-56:
⚠️ Potential issue: Improve the function comment for `GetDeletionProofs`
The comment is hard to understand and contains grammatical errors. Rewriting it can enhance readability and maintainability.
Apply this diff to improve the comment:
```diff
-// GetDeletionProofs generate current deletionTracer and collect deletion proofs
-// which is possible to be used from all rawPaths, which enabling witness generator
-// to predict the final state root after executing any deletion
-// along any of the rawpath, no matter of the deletion occurs in any position of the mpt ops
-// Note the collected sibling node has no key along with it since witness generator would
-// always decode the node for its purpose
+// GetDeletionProofs generates deletion proofs based on the current deletion tracer.
+// It collects proofs from all raw paths, enabling the witness generator to predict the final state root
+// after executing deletions along any of the raw paths, regardless of the deletion point in the MPT operations.
+// Note: The collected sibling nodes do not include keys, as the witness generator will decode the nodes as needed.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// GetDeletionProofs generates deletion proofs based on the current deletion tracer.
// It collects proofs from all raw paths, enabling the witness generator to predict the final state root
// after executing deletions along any of the raw paths, regardless of the deletion point in the MPT operations.
// Note: The collected sibling nodes do not include keys, as the witness generator will decode the nodes as needed.
```
71-71:
⚠️ Potential issue: Avoid using pointers as map keys
Using pointers as map keys compares entries by pointer identity rather than by hash content, which can lead to unexpected behavior. Note that the existing line already dereferences the node hash, so `deletionTracer` is keyed by value:
```go
t.deletionTracer[*nodeHash] = struct{}{}
```
Keep the dereference here rather than switching to the pointer itself.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
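For background, a standalone illustration of the pitfall this comment warns about: pointer keys compare by identity, value keys by content. The `hashT` type below is a local stand-in, not the package's Hash.
```go
package main

import "fmt"

type hashT [32]byte

func main() {
	a, b := &hashT{1}, &hashT{1} // equal contents, distinct pointers

	byPtr := map[*hashT]struct{}{a: {}}
	_, hitPtr := byPtr[b] // miss: keyed by pointer identity

	byVal := map[hashT]struct{}{*a: {}}
	_, hitVal := byVal[*b] // hit: keyed by value equality

	fmt.Println(hitPtr, hitVal) // false true
}
```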
102-102:
⚠️ Potential issue: Adjust the function comment to follow GoDoc conventions
The comment for `MarkDeletion` should start with the function name and be a complete sentence. Apply this diff:
```diff
-// MarkDeletion mark a key has been involved into deletion
+// MarkDeletion marks a key as involved in a deletion.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// MarkDeletion marks a key as involved in a deletion.
33-33:
⚠️ Potential issue: Correct typo and improve clarity in panic message
The panic message contains typos and should be more precise. It should use "cannot" instead of "can not", and "based on" instead of "base on".
Apply this diff to correct the message:
```diff
-	panic("can not merge two proof tracer base on different trie")
+	panic("cannot merge two ProofTracers based on different tries")
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.panic("cannot merge two ProofTracers based on different tries")
149-149: 🛠️ Refactor suggestion
Replace `panic` with error handling
Using `panic` for unexpected node types can be disruptive. Consider returning an error instead. Apply this diff:
```diff
-	panic(fmt.Errorf("unexpected node type %d", n.Type))
+	return fmt.Errorf("unexpected node type %d", n.Type)
```
Ensure that the calling function handles the returned error appropriately.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.return fmt.Errorf("unexpected node type %d", n.Type)
112-112: 🛠️ Refactor suggestion
Avoid using `panic`; consider returning an error instead
Using `panic` here may not be appropriate. It's better to return an error to allow the caller to handle it gracefully. Apply this diff:
```diff
-	panic("all path recorded in proofTrace should be ended with leafNode")
+	return fmt.Errorf("all paths recorded in ProofTracer should end with a leaf node")
```
And update the function signature to return an error if necessary.
Committable suggestion was skipped due to low confidence.
rollup/zktrie/zk_trie_proof_test.go (5)
65-65: 🛠️ Refactor suggestion
Use `assert.NoError` instead of `assert.Nil` when checking for errors.
When verifying that an error has not occurred, it's more appropriate to use `assert.NoError(t, err)`, as it provides clearer failure messages and is semantically correct for error handling. Apply this change to all relevant lines:
```diff
-	assert.Nil(t, err)
+	assert.NoError(t, err)
```
This enhances the readability and correctness of the test assertions.
Also applies to: 152-152, 194-194, 196-196, 203-203
91-92:
⚠️ Potential issue: Check for errors before using returned values to prevent unexpected behavior.
After calling `root, err := mt.Root()`, ensure that you check for errors before proceeding to use `root`. This prevents potential nil pointer dereferences or unexpected behavior if an error occurs. Update the code to handle errors appropriately:
```diff
 root, err := mt.Root()
-assert.NoError(t, err)
+if err != nil {
+	t.Fatalf("Failed to retrieve trie root: %v", err)
+}
```
This ensures that the test fails gracefully with a clear error message if `mt.Root()` encounters an issue.
Also applies to: 114-115, 164-165
157-157: 🛠️ Refactor suggestion
Enhance test coverage by using more diverse key inputs.
The test uses repeated single-character keys, which might not adequately simulate real-world scenarios. Consider using a wider range of keys to ensure the trie handles various inputs correctly.
Modify the test keys to include more diversity:
```diff
-	for i, key := range []string{"a", "j", "l", "z"} {
+	testKeys := []string{"a", "aa", "abc", "xyz", "longerkeyvalue", ""}
+	for i, key := range testKeys {
```
This provides a broader spectrum of test cases, enhancing the robustness of your tests.
Committable suggestion was skipped due to low confidence.
44-47:
⚠️ Potential issue: Avoid using `panic` in test helper functions; handle errors appropriately.
Using `panic(err)` in tests can abruptly terminate the test suite and obscure the root cause of failures. Instead, consider returning the error or using `t.Fatalf` to report the error explicitly. Apply this diff to handle the error properly:
```diff
 func makeSMTProvers(mt *ZkTrie) []func(key []byte) *memorydb.Database {
 	var provers []func(key []byte) *memorydb.Database
 	// Create a direct trie based Merkle prover
 	provers = append(provers, func(key []byte) *memorydb.Database {
 		proofDB := memorydb.New()
 		err := mt.Prove(key, proofDB)
 		if err != nil {
-			panic(err)
+			// Use t.Fatalf or handle the error as appropriate
+			// For closure, consider returning nil
+			return nil
 		}
 		return proofDB
 	})
 	return provers
 }
```
Alternatively, modify the function to return an error so that it can be handled by the calling test functions.
Committable suggestion was skipped due to low confidence.
287-290:
⚠️ Potential issue: Add bounds checking to prevent potential slice out-of-range errors.
Accessing `siblings[0][l-33:l-1]` without ensuring that `l >= 33` can lead to a runtime panic if the length is insufficient. Include a check before slicing:
```diff
 l := len(siblings[0])
+if l < 33 {
+	t.Fatalf("Sibling length (%d) is too short for expected slice operation", l)
+}
 match1 := bytes.Equal(siblings[0][l-33:l-1], nd)
```
This safeguard prevents crashes and provides informative error messages.
Committable suggestion was skipped due to low confidence.
rollup/zktrie/zk_trie_node_test.go (2)
80-85:
⚠️ Potential issue: Handle error for invalid node types in `NodeHash()`
In the "Test Invalid Node" subtest, when creating a node with an invalid type (`Type: 99`), calling `node.NodeHash()` returns no error (`assert.NoError(t, err)`). It would be more appropriate for `NodeHash()` to return an error when invoked on an invalid node type to prevent unintended behavior. Consider updating the `NodeHash()` method to return an error for invalid node types and adjust the test assertions accordingly.
Apply this diff to modify the test and handle the expected error:
```diff
 func TestNewNode(t *testing.T) {
 	t.Run("Test Invalid Node", func(t *testing.T) {
 		node := &Node{Type: 99}
-		invalidNodeHash, err := node.NodeHash()
-		assert.NoError(t, err)
-		assert.Equal(t, &HashZero, invalidNodeHash)
+		_, err := node.NodeHash()
+		assert.Error(t, err)
+		assert.Equal(t, ErrInvalidNodeType, err)
 	})
 }
```
Ensure that `NodeHash()` returns an appropriate error, such as `ErrInvalidNodeType`, when encountering an invalid node type.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
		node := &Node{Type: 99}
		_, err := node.NodeHash()
		assert.Error(t, err)
		assert.Equal(t, ErrInvalidNodeType, err)
	})
```
112-136: 🛠️ Refactor suggestion
Add test cases for leaf nodes with multiple `ValuePreimage` elements
Currently, the "LeafNode" test in `TestNewNodeFromBytes` uses a `ValuePreimage` slice with a single element. To ensure comprehensive testing, consider adding test cases where `ValuePreimage` contains multiple elements. This will help validate that node serialization and deserialization correctly handle leaf nodes with multiple values in `ValuePreimage`.
Apply this diff to add a new test case:
```diff
 func TestNewNodeFromBytes(t *testing.T) {
 	t.Run("LeafNode", func(t *testing.T) {
 		k := NewHashFromBytes(bytes.Repeat([]byte("a"), 32))
-		vp := make([]Byte32, 1)
+		vp := []Byte32{
+			*NewByte32FromBytes(bytes.Repeat([]byte("b"), 32)),
+			*NewByte32FromBytes(bytes.Repeat([]byte("c"), 32)),
+		}
 		node := NewLeafNode(k, 1, vp)
 		node.KeyPreimage = NewByte32FromBytes(bytes.Repeat([]byte("d"), 32))
 		nodeBytes := node.Value()
 		newNode, err := NewNodeFromBytes(nodeBytes)
 		assert.NoError(t, err)
 		assert.Equal(t, node.Type, newNode.Type)
 		assert.Equal(t, node.NodeKey, newNode.NodeKey)
 		assert.Equal(t, node.ValuePreimage, newNode.ValuePreimage)
 		assert.Equal(t, node.KeyPreimage, newNode.KeyPreimage)
 		hash, err := node.NodeHash()
 		assert.NoError(t, err)
-		assert.Equal(t, "0409b2168569a5af12877689ac08263274fe6cb522898f6964647fed88b861b2", hash.Hex())
+		assert.Equal(t, "expected_hash_value_here", hash.Hex())
 		hash, err = node.ValueHash()
 		assert.NoError(t, err)
 		hashFromVp, err := vp[0].Hash()
 		assert.Equal(t, NewHashFromBigInt(hashFromVp), hash)
 	})
```
Ensure to replace "expected_hash_value_here" with the actual expected hash value after updating the test.
Committable suggestion was skipped due to low confidence.
rollup/zktrie/zk_trie_node.go (8)
146-227: 🛠️ Refactor suggestion
⚠️ Potential issue: Enhance error handling and input validation in the `SetBytes` method.
The `SetBytes` method processes untrusted input and should robustly handle malformed data to prevent potential panics or security issues. Consider adding additional input validation and bounds checking to ensure all calculations are safe against malicious inputs.
For example, ensure that `preimageLen` and `preImageSize` are within expected limits before proceeding with memory allocations and slicing operations.
42-42:
⚠️ Potential issue: Correct the function comment typo and grammatical errors.
The comment should match the function name and correct the grammar for clarity.
Apply this diff to fix the typo and grammar:
```diff
-// DeduceUploadType deduce a new branch type from current branch when one of its child become non-terminal
+// DeduceUpgradeType deduces a new branch type from the current branch when one of its children becomes non-terminal.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// DeduceUpgradeType deduces a new branch type from the current branch when one of its children becomes non-terminal.
273-277: 🛠️ Refactor suggestion
Handle deprecated node types without using panic in the `NodeHash` method.
Similarly, in the `NodeHash` method, panicking upon encountering deprecated node types can be avoided to prevent unintended crashes.
Consider returning an error instead of panicking:
```diff
-	panic("encounter deprecated node types")
+	return nil, fmt.Errorf("encountered deprecated node type: %d", n.Type)
```
Committable suggestion was skipped due to low confidence.
237-240: 🛠️ Refactor suggestion
Handle deprecated node types without using panic.
In the `IsTerminal` method, encountering deprecated node types results in a panic. To enhance the robustness of the code, consider handling these cases without panicking, possibly by returning an error or a default value.
Modify the code to handle deprecated node types gracefully:
```diff
-	panic("encounter deprecated node types")
+	return false // or handle appropriately
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	return false // or handle appropriately
default:
	panic(fmt.Errorf("encounter unknown node types %d", n.Type))
}
```
67-67:
⚠️ Potential issue: Correct the grammatical error in the function comment.
The comment should read "when one of its children becomes terminal" to be grammatically correct.
Apply this diff to correct the grammar:
```diff
-// DeduceDowngradeType deduce a new branch type from current branch when one of its child become terminal
+// DeduceDowngradeType deduces a new branch type from the current branch when one of its children becomes terminal.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// DeduceDowngradeType deduces a new branch type from the current branch when one of its children becomes terminal.
300-306: 🛠️ Refactor suggestion
Avoid using the `unsafe` package for slice manipulation.
The use of `unsafe` to manipulate slice headers can lead to undefined behavior and security vulnerabilities. Consider alternative approaches that do not rely on `unsafe`.
Refactor the code to convert `[]Byte32` to `[]byte` without using `unsafe`:
```diff
-	var data []byte
-	hdata := (*reflect.SliceHeader)(unsafe.Pointer(&data))
-	hdata.Data = uintptr(unsafe.Pointer(&n.ValuePreimage[0]))
-	hdata.Len = 32 * len(n.ValuePreimage)
-	hdata.Cap = hdata.Len
-	return data
+	data := make([]byte, 32*len(n.ValuePreimage))
+	for i, b32 := range n.ValuePreimage {
+		copy(data[i*32:(i+1)*32], b32[:])
+	}
+	return data
```
This approach avoids `unsafe` and ensures memory safety.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
	data := make([]byte, 32*len(n.ValuePreimage))
	for i, b32 := range n.ValuePreimage {
		copy(data[i*32:(i+1)*32], b32[:])
	}
	return data
default:
```
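A self-contained sketch of the copy-based flattening described above, with a local 32-byte array type standing in for `Byte32` (names here are illustrative, not the package's):
```go
package main

import (
	"bytes"
	"fmt"
)

type byte32T [32]byte // stand-in for the package's Byte32 in this sketch

// flatten copies each 32-byte element into one contiguous slice,
// the memory-safe alternative to rebuilding a slice header with unsafe.
func flatten(elems []byte32T) []byte {
	data := make([]byte, 32*len(elems))
	for i, b := range elems {
		copy(data[i*32:(i+1)*32], b[:])
	}
	return data
}

func main() {
	a, b := byte32T{1, 2, 3}, byte32T{4, 5, 6}
	out := flatten([]byte32T{a, b})
	fmt.Println(len(out), bytes.Equal(out[:32], a[:]), bytes.Equal(out[32:], b[:])) // 64 true true
}
```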
390-398: 🛠️ Refactor suggestion
Avoid using panic for error handling in the `ForEach` method.
Using panic in `ForEach` when an error occurs can cause the application to crash unexpectedly. It's preferable to handle errors gracefully.
Modify the method to return an error:
```diff
-func (r ChildResolver) ForEach(node []byte, onChild func(common.Hash)) {
+func (r ChildResolver) ForEach(node []byte, onChild func(common.Hash)) error {
 	n, err := NewNodeFromBytes(node)
 	if err != nil {
-		panic(fmt.Sprintf("node %x: %v", node, err))
+		return fmt.Errorf("failed to parse node %x: %v", node, err)
 	}
 	// ... rest of the code ...
+	return nil
 }
```
Update callers of this method to handle the error accordingly.
Committable suggestion was skipped due to low confidence.
191-197:
⚠️ Potential issue: Prevent potential out-of-bounds slice access in the `SetBytes` method.
When parsing `valuePreimage`, there is a risk of out-of-bounds access if `preimageLen` is large. Ensure that the code safely handles large values of `preimageLen`.
Add a check to limit `preimageLen` to a reasonable maximum value:
```diff
 preimageLen := int(mark & 255)
+if preimageLen > MaxPreimageLen {
+	return ErrNodeBytesBadSize
+}
```
Define `MaxPreimageLen` appropriately based on application constraints.
Committable suggestion was skipped due to low confidence.
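For example (the constant name mirrors the suggestion above; the value is an assumption sized to the widest count the single mark byte can encode, not a value taken from the PR):
```go
// MaxPreimageLen caps how many value preimages are decoded from untrusted
// node bytes; 255 is the largest count the single mark byte can express.
const MaxPreimageLen = 255
```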
rollup/zktrie/zk_trie_impl_test.go (2)
649-693: 🛠️ Refactor suggestion
Eliminate duplicate test function to reduce redundancy
The test function `TestMerkleTree_AddUpdateGetWord_2` appears to duplicate the functionality of `TestMerkleTree_AddUpdateGetWord`. Maintaining duplicate tests can lead to increased maintenance effort and potential confusion. Consider consolidating the duplicated tests into a single function to streamline the test suite.
774-774:
⚠️ Potential issue: Initialize undefined variable `magicSMTBytes`
The variable `magicSMTBytes` used in `DecodeSMTProof(magicSMTBytes)` is not defined in the code, which will result in a compilation error.
Define `magicSMTBytes` before using it in the test to ensure the code compiles correctly. For example:
```go
var magicSMTBytes = []byte{ /* appropriate byte values */ }
```
rollup/zktrie/zk_trie_impl.go (6)
1155-1181:
⚠️ Potential issue: Possible infinite loop in the `prove` method
In the `prove` method, there is a loop at lines 1155-1181 that could result in an infinite loop if not properly handled. Ensure that there are conditions that eventually terminate the loop.
1210-1238:
⚠️ Potential issueError handling in
VerifyProof
functionIn the
VerifyProof
function, lines 1210-1238, error messages could be more descriptive, and error handling could be improved to cover all possible failure points.Consider reviewing the error handling in this function to ensure that all errors are properly caught and communicated.
I can help refactor this function to enhance error handling. Would you like me to assist with this?
37-38:
⚠️ Potential issueIncorrect comment for
ErrInvalidField
The comment on line 37 incorrectly describes
ErrInvalidField
asErrNodeKeyAlreadyExists
. Please update the comment to accurately reflect the purpose ofErrInvalidField
.Apply this diff to fix the comment:
-// ErrNodeKeyAlreadyExists is used when a node key already exists. +// ErrInvalidField is used when a key is not inside the finite field. ErrInvalidField = errors.New("Key not inside the Finite Field")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// ErrInvalidField is used when a key is not inside the finite field.
ErrInvalidField = errors.New("Key not inside the Finite Field")
```
1203-1205:
⚠️ Potential issue: Unimplemented `NodeIterator` method
The `NodeIterator` method is not implemented and returns an error. If this functionality is planned, consider implementing it or providing more detailed error handling.
Apply this diff to add a `TODO` comment:
```diff
 // NodeIterator returns an iterator that returns nodes of the trie. Iteration
 // starts at the key after the given start key. And error will be returned
 // if fails to create node iterator.
 func (mt *ZkTrie) NodeIterator(start []byte) (trie.NodeIterator, error) {
+	// TODO: Implement NodeIterator functionality
 	return nil, errors.New("not implemented")
 }
```
Would you like assistance in implementing this method or opening a GitHub issue to track this task?
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (mt *ZkTrie) NodeIterator(start []byte) (trie.NodeIterator, error) {
	// TODO: Implement NodeIterator functionality
	return nil, errors.New("not implemented")
}
```
39-40:
⚠️ Potential issue: Redundant comment for `ErrNodeKeyAlreadyExists`
The comment on line 39 is a duplicate of the previous comment and does not add new information. Ensure that each error variable has a unique and accurate comment.
Apply this diff to improve the comment:
```diff
-// ErrNodeKeyAlreadyExists is used when a node key already exists.
+// ErrNodeKeyAlreadyExists indicates that the node key already exists in the ZkTrie.
 ErrNodeKeyAlreadyExists = errors.New("key already exists")
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
// ErrNodeKeyAlreadyExists indicates that the node key already exists in the ZkTrie.
ErrNodeKeyAlreadyExists = errors.New("key already exists")
```
211-213:
⚠️ Potential issue: Unimplemented method `UpdateContractCode`
The method `UpdateContractCode` is currently unimplemented and returns `nil`. If this method is intended to be implemented later, consider adding a `TODO` comment or returning an appropriate error to indicate that it's not yet supported.
Apply this diff to add a `TODO` comment:
```diff
 func (mt *ZkTrie) UpdateContractCode(_ common.Address, _ common.Hash, _ []byte) error {
+	// TODO: Implement contract code update functionality
 	return nil
 }
```
Alternatively, return an error to indicate unimplemented functionality:
```diff
 func (mt *ZkTrie) UpdateContractCode(_ common.Address, _ common.Hash, _ []byte) error {
-	return nil
+	return errors.New("UpdateContractCode is not yet implemented")
 }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Option 1 (TODO comment):
```go
func (mt *ZkTrie) UpdateContractCode(_ common.Address, _ common.Hash, _ []byte) error {
	// TODO: Implement contract code update functionality
	return nil
}
```
Option 2 (Returning an error):
```go
func (mt *ZkTrie) UpdateContractCode(_ common.Address, _ common.Hash, _ []byte) error {
	return errors.New("UpdateContractCode is not yet implemented")
}
```
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 38
🧹 Outside diff range and nitpick comments (64)
trie/byte32.go (1)
37-42
: LGTM: NewByte32FromBytesPaddingZero function is correctly implemented. Minor comment improvement suggested.
The function correctly implements the described behavior.
Consider updating the comment to more explicitly state the difference in padding behavior compared to `NewByte32FromBytes`:
```diff
-// create bytes32 with zeropadding to shorter bytes, or truncate it
+// create bytes32 with zero-padding at the end for shorter bytes, or truncate longer bytes
```
trie/byte32_test.go (3)
10-30
: LGTM: Well-structured table-driven tests.The test function is well-structured using a table-driven approach, which is efficient for testing multiple scenarios. The test cases cover different input sizes, which is good for edge case testing.
Consider adding a test case for an empty input ([]byte{}) to ensure the methods handle this edge case correctly.
32-43
: LGTM: Comprehensive test execution and assertions.The test execution loop is well-structured, covering all necessary assertions to validate the functionality of both methods and their
Hash
operations.Consider adding descriptive messages to the assertions to improve test failure diagnostics. For example:
assert.Equal(t, tt.expected, byte32Result.Bytes(), "NewByte32FromBytes result mismatch")
1-44
: Overall: Well-structured and comprehensive test file.This test file provides good coverage for the
Byte32
functionality, including both byte array results and their hash representations. The use of table-driven tests with hard-coded expected values ensures robustness against implementation changes.To improve maintainability, consider extracting the test cases into a separate variable or even a separate function. This would make it easier to add or modify test cases in the future without cluttering the main test function. For example:
```go
var byte32TestCases = []struct {
	input               []byte
	expected            []byte
	expectedPaddingZero []byte
	expectedHash        string
	expectedHashPadding string
}{
	// ... (existing test cases) ...
}

func TestNewByte32(t *testing.T) {
	for _, tt := range byte32TestCases {
		// ... (existing test logic) ...
	}
}
```
trie/hash_test.go (5)
14-23
: LGTM: TestCheckBigIntInField covers key edge cases.The test function effectively checks the
CheckBigIntInField
function for important edge cases: zero, the largest valid value (Q-1), and the first invalid value (Q). The use ofassert
from the testify package is appropriate for these unit tests.Consider adding a few more test cases:
- A small positive number (e.g., 1 or 10) to test a typical valid input.
- A negative number to ensure it's handled correctly (likely should return false).
- A very large number well beyond Q to test upper bound handling.
25-62
: LGTM: Comprehensive testing of hash and big integer conversions.This test function covers a wide range of scenarios for hash and big integer conversions, including normal cases and error handling. The use of both
assert
andrequire
is appropriate for different test cases.To improve readability and maintainability, consider breaking this large test function into smaller, more focused test functions. For example:
TestNewHashFromBytes
TestNewHashFromCheckedBytes
TestNewBigIntFromHashBytes
TestHashMarshalUnmarshal
TestNewHashFromBytesErrors
TestNewBigIntFromHashBytesErrors
This separation would make it easier to understand and maintain each specific test case.
64-75
: LGTM: Good coverage of hash conversions from big integers and strings.The test function effectively checks the conversion of both small integers and large string representations to hash objects. The assertions for both hexadecimal and string representations are appropriate.
Consider adding the following test cases to improve coverage:
- Test with a negative big integer to ensure proper handling.
- Test with a string that's not a valid number to check error handling in
NewHashFromString
.- Test with a big integer that's out of the valid field range to ensure proper error handling in
NewHashFromBigInt
.
77-83
: LGTM: Good test for NewHashFromBytes with random input.The test function effectively checks the
NewHashFromBytes
function using random input, ensuring the correct number of bytes are read and that the resulting hash matches the input.Consider adding an error check for the
NewHashFromBytes
function call:h2 := NewHashFromBytes(h.Bytes()) require.NotNil(t, h2, "NewHashFromBytes should not return nil") require.Equal(t, h, *h2)This addition would ensure that
NewHashFromBytes
doesn't return an unexpected nil value.
1-83
: Overall: Excellent test coverage with minor improvement suggestions.This new test file provides comprehensive coverage for hash-related functions in the trie package. The tests are well-structured, follow Go conventions, and effectively use the testify package for assertions. They cover both normal cases and error scenarios, which is crucial for robust testing.
To further improve this already solid test file:
- Consider breaking down larger test functions (e.g.,
TestNewHashAndBigIntFromBytes
) into smaller, more focused tests for better maintainability.- Add a few more edge cases to some test functions as suggested in previous comments.
- Ensure consistent error handling across all test functions.
These minor enhancements will make the test suite even more robust and easier to maintain in the future.
trie/util_test.go (7)
23-47
: LGTM: TestBitManipulations is comprehensive.The test provides excellent coverage for both
TestBit
andTestBitBigEndian
functions, correctly handling the difference between standard and big-endian bit ordering.Consider using a table-driven test to reduce code duplication and make it easier to add more test cases in the future. For example:
func TestBitManipulations(t *testing.T) { bitmap := []byte{0b10101010, 0b01010101} testCases := []struct { name string testFunc func([]byte, uint) bool expected []bool }{ { name: "TestBit", testFunc: TestBit, expected: []bool{false, true, false, true, false, true, false, true, true, false, true, false, true, false, true, false}, }, { name: "TestBitBigEndian", testFunc: TestBitBigEndian, expected: []bool{true, false, true, false, true, false, true, false, false, true, false, true, false, true, false, true}, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { results := make([]bool, 16) for i := uint(0); i < 16; i++ { results[i] = tc.testFunc(bitmap, i) } assert.Equal(t, tc.expected, results) }) } }This approach makes the test more maintainable and easier to extend.
49-54
: LGTM: TestBigEndianBitsToBigInt is correct.The test effectively verifies the conversion from big-endian bit representation to integer.
Consider adding an additional test case with a different bit pattern to increase coverage. For example:
func TestBigEndianBitsToBigInt(t *testing.T) { testCases := []struct { name string bits []bool expected *big.Int }{ { name: "Alternating bits", bits: []bool{true, false, true, false, true, false, true, false}, expected: big.NewInt(170), }, { name: "Random pattern", bits: []bool{true, true, false, true, false, false, true, true}, expected: big.NewInt(211), }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { result := BigEndianBitsToBigInt(tc.bits) assert.Equal(t, tc.expected, result) }) } }This approach provides better coverage and makes it easier to add more test cases in the future.
56-60
: LGTM: TestToSecureKey verifies basic functionality.The test correctly checks for the absence of errors and the expected output for a given input.
Consider adding the following improvements:
- Test with multiple input values to ensure consistent behavior.
- Include a test case with an empty input to verify edge case handling.
- Add a negative test case to check error handling for invalid inputs.
Example:
func TestToSecureKey(t *testing.T) { testCases := []struct { name string input []byte expected string expectError bool }{ { name: "Valid input", input: []byte("testKey"), expected: "10380846131134096261855654117842104248915214759620570252072028416245925344412", }, { name: "Empty input", input: []byte{}, expected: "0", // Adjust this based on the expected behavior for empty input }, { name: "Invalid input", input: []byte{0xFF, 0xFF, 0xFF}, // Adjust based on what constitutes an invalid input expectError: true, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { secureKey, err := ToSecureKey(tc.input) if tc.expectError { assert.Error(t, err) } else { assert.NoError(t, err) assert.Equal(t, tc.expected, secureKey.String()) } }) } }These additional test cases will provide more comprehensive coverage of the
ToSecureKey
function.
62-66
: LGTM: TestToSecureKeyBytes verifies basic functionality.The test correctly checks for the absence of errors and the expected output for a given input.
Consider applying similar improvements as suggested for
TestToSecureKey
:
- Test with multiple input values.
- Include a test case with an empty input.
- Add a negative test case for invalid inputs.
Example:
func TestToSecureKeyBytes(t *testing.T) { testCases := []struct { name string input []byte expected []byte expectError bool }{ { name: "Valid input", input: []byte("testKey"), expected: []byte{0x16, 0xf3, 0x59, 0xc7, 0x30, 0x7e, 0x8, 0x97, 0xdc, 0x7c, 0x6b, 0x99, 0x53, 0xd1, 0xe1, 0xd8, 0x3, 0x6d, 0xc3, 0x83, 0xd4, 0xa, 0x0, 0x19, 0x9e, 0xda, 0xf0, 0x65, 0x27, 0xda, 0xf4, 0x9c}, }, { name: "Empty input", input: []byte{}, expected: make([]byte, 32), // Adjust based on expected behavior for empty input }, { name: "Invalid input", input: []byte{0xFF, 0xFF, 0xFF}, // Adjust based on what constitutes an invalid input expectError: true, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { secureKeyBytes, err := ToSecureKeyBytes(tc.input) if tc.expectError { assert.Error(t, err) } else { assert.NoError(t, err) assert.Equal(t, tc.expected, secureKeyBytes.Bytes()) } }) } }These additional test cases will provide more comprehensive coverage of the
ToSecureKeyBytes
function.
68-79
: LGTM: TestHashElems provides a good basic check.The test effectively verifies the
HashElems
function with a mix of fixed and generated input values.Consider the following improvements to enhance test coverage:
- Add test cases with different numbers of elements (e.g., 0, 1, 31, 33) to verify edge cases.
- Include a test case with very large
big.Int
values to ensure proper handling.- Add a negative test case to check error handling for invalid inputs.
Example:
func TestHashElems(t *testing.T) { testCases := []struct { name string fst *big.Int snd *big.Int elems []*big.Int expected string expectError bool }{ { name: "32 elements", fst: big.NewInt(5), snd: big.NewInt(3), elems: generateBigIntSlice(32, 1), expected: "0746c424799ef6ad7916511016a5b8e30688fa6d62664eeb97d9f2ba07685ed8", }, { name: "No elements", fst: big.NewInt(5), snd: big.NewInt(3), elems: []*big.Int{}, expected: "expectedHashForNoElements", // Replace with actual expected hash }, { name: "Large values", fst: new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil), snd: new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil), elems: []*big.Int{new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)}, expected: "expectedHashForLargeValues", // Replace with actual expected hash }, { name: "Invalid input", fst: nil, snd: big.NewInt(3), elems: []*big.Int{big.NewInt(1)}, expectError: true, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { result, err := HashElems(tc.fst, tc.snd, tc.elems...) if tc.expectError { assert.Error(t, err) } else { assert.NoError(t, err) assert.Equal(t, tc.expected, result.Hex()) } }) } } func generateBigIntSlice(length int, start int64) []*big.Int { slice := make([]*big.Int, length) for i := range slice { slice[i] = big.NewInt(start + int64(i)) } return slice }These additional test cases will provide more comprehensive coverage of the
HashElems
function, including edge cases and error handling.
81-96
: LGTM: TestPreHandlingElems provides basic coverage.The test effectively checks the
HandlingElemsAndByte32
function with both full and partial input arrays.Consider the following improvements to enhance test coverage and clarity:
- Use a table-driven test to make it easier to add more test cases.
- Add test cases with different flag arrays to verify the function's behavior under various conditions.
- Include a test case with an empty input array to check edge case handling.
- Add a negative test case to verify error handling for invalid inputs.
- Use more descriptive names for test cases to improve readability.
Example:
func TestHandlingElemsAndByte32(t *testing.T) { testCases := []struct { name string flagArray uint32 elems []Byte32 expected string expectError bool }{ { name: "Full array with alternating flags", flagArray: 0b10101010101010101010101010101010, elems: generateByte32Array(32), expected: "0c36a6406e35e1fec2a5602fcd80ab04ec24d727676a953673c0850d205f9378", }, { name: "Single element with all flags set", flagArray: 0xFFFFFFFF, elems: generateByte32Array(1), expected: "0000000000000000000000000000000000000000000000000000007465737431", }, { name: "Empty array with flags set", flagArray: 0xFFFFFFFF, elems: []Byte32{}, expected: "expectedHashForEmptyArray", // Replace with actual expected hash }, { name: "Invalid input: mismatched flag and elem count", flagArray: 0b1111, elems: generateByte32Array(32), expectError: true, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { result, err := HandlingElemsAndByte32(tc.flagArray, tc.elems) if tc.expectError { assert.Error(t, err) } else { assert.NoError(t, err) assert.Equal(t, tc.expected, result.Hex()) } }) } } func generateByte32Array(length int) []Byte32 { elems := make([]Byte32, length) for i := range elems { elems[i] = *NewByte32FromBytes([]byte("test" + strconv.Itoa(i+1))) } return elems }These improvements will provide more comprehensive coverage of the
HandlingElemsAndByte32
function, including edge cases and error handling, while also improving the test's readability and maintainability.
1-96
: Overall, trie/util_test.go provides good test coverage with room for improvement.The test file contains well-structured tests for various utility functions in the trie package. Each test function provides good basic coverage and uses appropriate assertions. The use of predetermined expected values ensures deterministic behavior of the utility functions.
To further enhance the quality and coverage of the tests, consider implementing the following improvements across all test functions:
- Use table-driven tests to improve readability and make it easier to add more test cases.
- Add edge cases (e.g., empty inputs, maximum values) to ensure robust function behavior.
- Include negative test cases to verify error handling for invalid inputs.
- Use more descriptive names for test cases to improve readability.
- Consider testing with a wider range of input values to ensure consistent behavior across different scenarios.
Implementing these suggestions will result in more comprehensive test coverage, improved maintainability, and increased confidence in the correctness of the trie package's utility functions.
core/vm/gas_table_test.go (2)
140-172
: Test logic adapted to new structure.The core test logic has been successfully adapted to the new sub-test structure. The conditional setting of
config
based ontt.eip3860
is a good approach for testing both scenarios.For improved readability, consider extracting the
config
initialization into a separate function:func getConfig(eip3860 bool) Config { config := Config{} if eip3860 { config.ExtraEips = []int{3860} } return config }Then use it as:
config := getConfig(tt.eip3860)
178-182
: Improved gas usage check for successful deployments.The addition of a condition to check gas usage only for successful deployments is a good improvement. It prevents unnecessary checks and aligns the test behavior more closely with real-world scenarios.
For added clarity, consider adding a comment explaining the significance of the 100_000 value:
// Only check gas usage if deployment succeeded (minGas < 100_000) // 100_000 is the maximum gas allowed for these test cases if minGas < 100_000 { if gasUsed != tt.gasUsed { t.Errorf("test %d: gas used mismatch: have %v, want %v", i, gasUsed, tt.gasUsed) } }core/chain_makers_test.go (1)
Line range hint
201-279
: Consider updating ExampleGenerateChain to reflect current consensus mechanismWhile
ExampleGenerateChain
remains unchanged, it's worth noting that it still uses the legacy Proof of Work (PoW) consensus mechanism (ethash.NewFaker()
). If the project is moving away from PoW, consider updating this example to reflect the current consensus mechanism used in the project.If applicable, update the example to use the current consensus mechanism or add a comment explaining why it still uses PoW for demonstration purposes.
🧰 Tools
🪛 Gitleaks
38-38: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
core/state_processor.go (1)
Line range hint
147-151
: Improved error handling for EVM tracers.The changes enhance error reporting by including errors from all tracers, not just those in debug mode. This is a good improvement that aligns with best practices for comprehensive error handling.
Consider adding a brief comment explaining the purpose of this error handling block, e.g.:
// Incorporate any errors from the EVM tracer into the result if evm.Config.Tracer != nil { // ... (existing code) }This would help future maintainers understand the intent behind this error handling logic.
core/state/state_test.go (1)
Line range hint
1-324
: Summary: Address test coverage gap introduced by ZkTrie integrationThe changes in this file involve skipping two tests (
TestDump
andTestIterativeDump
) due to ZkTrie not supporting iterators. While this addresses the immediate compatibility issue, it introduces a gap in test coverage for ZkTrie's Dump and IterativeDump functionalities.To maintain the integrity of the test suite and ensure proper functionality of ZkTrie, consider the following actions:
- Create technical debt tickets to track these skipped tests.
- Develop alternative testing strategies for ZkTrie that work within its current limitations.
- Update the existing tests or create new ones specifically for ZkTrie to ensure its Dump and IterativeDump functionalities are adequately tested.
- Consolidate the skip logic to improve maintainability.
By addressing these points, you can ensure that the integration of ZkTrie doesn't compromise the overall test coverage and quality of the codebase.
core/blockchain_snapshot_test.go (1)
Line range hint
3-1010
: Comprehensive snapshot test suite implementedThe test suite covers a wide range of scenarios for blockchain snapshot functionality, including:
- Normal restarts with valid snapshots
- Crashes with various commit points
- Gapped snapshots
- SetHead operations
- Wiping crashes
Each test case is well-documented with clear expectations and covers both the HashScheme and PathScheme.
The test suite appears to be thorough and well-implemented. It should provide good coverage for the snapshot functionality.
Some suggestions for potential improvements:
- Consider adding more edge cases, such as extremely long chains or chains with complex state.
- It might be beneficial to add some performance benchmarks for snapshot operations.
- Consider adding tests for concurrent operations on snapshots, if applicable.
core/vm/runtime/runtime_test.go (1)
Line range hint
273-702
: Evaluate the overall impact of Scroll-specific modifications.The changes in this file consistently relate to Scroll's lack of support for SELFDESTRUCT and its different implementation of blockhash. While these modifications appear intentional, they represent significant deviations from standard Ethereum behavior.
Consider documenting these Scroll-specific differences prominently in the project documentation. It would be beneficial to provide a comprehensive list of all deviations from standard Ethereum behavior, their rationales, and any potential implications for developers and users migrating from Ethereum to Scroll.
trie/util.go (7)
9-11
: Correct typos in function documentation comments

The documentation for `HashElemsWithDomain` contains typographical errors. The word "fieds" should be "fields," and "recursiving" should be "recursive."

Apply this diff to fix the typos:

```diff
-// reduce 2 fieds into one, with a specified domain field which would be used in
-// every recursiving call
+// reduces 2 fields into one, with a specified domain field which is used in
+// every recursive call
```
41-46
: Add documentation comment forHashElems
functionThe
HashElems
function is exported but lacks a documentation comment. Including a comment enhances code readability and adheres to Go documentation standards.Apply this diff to add the documentation comment:
+// HashElems calls HashElemsWithDomain with a calculated domain based on the number of elements. func HashElems(fst, snd *big.Int, elems ...*big.Int) (*Hash, error) {
74-77
: Clarify byte order calculation in `SetBitBigEndian`

Consider adding an explanatory comment or adjusting the variable names to make the calculation `uint(len(bitmap))-n/8-1` clearer. This will help future maintainers understand the big-endian bit-setting logic.
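For illustration, a commented sketch of that index arithmetic; the variable names are suggestions, and the function mirrors the reviewed logic rather than being copied from it:

```go
// SetBitBigEndian sets bit n to 1, treating the slice as a big-endian bit
// string: bit 0 is the least-significant bit of the *last* byte.
func SetBitBigEndian(bitmap []byte, n uint) {
	byteIndex := uint(len(bitmap)) - n/8 - 1 // walk backwards from the end of the slice
	bitMask := byte(1) << (n % 8)            // position of the bit inside that byte
	bitmap[byteIndex] |= bitMask
}
```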
79-82
: Add documentation comment forTestBit
functionThe
TestBit
function is exported but lacks a documentation comment. Adding a comment improves readability and follows Go best practices.Apply this diff:
+// TestBit checks whether the bit at position n in the bitmap is set to 1. func TestBit(bitmap []byte, n uint) bool {
84-87
: Add documentation comment forTestBitBigEndian
functionSimilarly,
TestBitBigEndian
lacks a documentation comment. Including one enhances readability and maintains consistency.Apply this diff:
+// TestBitBigEndian checks whether the bit at position n in the bitmap is set to 1, using big-endian order. func TestBitBigEndian(bitmap []byte, n uint) bool {
103-107
: Add documentation comment forToSecureKey
functionThe
ToSecureKey
function is exported but lacks a documentation comment. Documenting it enhances clarity and adheres to Go standards.Apply this diff:
+// ToSecureKey converts a byte slice key into its "secured" big.Int representation. func ToSecureKey(key []byte) (*big.Int, error) {
109-117
: Add documentation comment forToSecureKeyBytes
functionIncluding a documentation comment for
ToSecureKeyBytes
improves readability and follows best practices.Apply this diff:
+// ToSecureKeyBytes converts a byte slice key into a 32-byte "secured" key represented as a big-endian integer. func ToSecureKeyBytes(key []byte) (*Byte32, error) {
rollup/tracing/proof_test.go (2)
70-71
: Improve comment clarity and professionalism

The comment uses informal language ("a hacking"), which may not be appropriate. Consider rephrasing it for clarity and professionalism.

Suggested rephrased comment:

```diff
-// a hacking to grep the value part directly from the encoded leaf node,
-// notice the sibling of key `k*32`` is just the leaf of key `m*32`
+// Extract the value part directly from the encoded leaf node.
+// Note that the sibling of key `k*32` is just the leaf of key `m*32`.
```
72-72
: Avoid magic numbers in slice indexing

The slice indices `l-33:l-1` use magic numbers, which can reduce code readability and maintainability. Define constants or add comments to clarify the meaning of these values.

For example:

```go
const valueSize = 32 // Size of the value in bytes
startIndex := l - valueSize - 1
endIndex := l - 1
assert.Equal(t, siblings[0][startIndex:endIndex], nd)
```
trie/hash.go (2)
104-104
: Correct typos in commentsThere are typographical errors in the comments:
- Line 104: "has ben generated" should be "has been generated".
- Line 133: "has ben generated" should be "has been generated".
Also applies to: 133-133
91-91
: Fix spacing in commentsIn the comments for the
Set
andClone
methods, "in to" should be "into":
- Line 91: "Set copies the given hash into this".
- Line 95: "Clone copies the given hash into this".
Also applies to: 95-95
rollup/tracing/proof.go (3)
29-30
: Enhance the function comment for clarityThe function comment contains grammatical errors and can be improved for better readability.
Apply this diff to improve the function comment:
```diff
-// Merge merge the input tracer into current and return current tracer
+// Merge merges the input tracer into the current one and returns the current tracer
```
Line range hint
146-148
: Avoid using `panic` in library code; return an error instead

Panicking on unexpected node types can be replaced with returning an error to handle it gracefully.

Modify the code to return an error:

```diff
 default:
-	panic(fmt.Errorf("unexpected node type %d", n.Type))
+	return fmt.Errorf("unexpected node type %d", n.Type)
```

Ensure that the anonymous function signature allows for error return.
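To illustrate the shape such a callback takes once it propagates errors, here is a generic, self-contained sketch; the node type, constants, and the `walk` caller are placeholders, not the actual ProveWithDeletion types:

```go
package main

import "fmt"

type node struct{ Type int }

// visit mirrors the reviewed control flow: known node kinds are handled,
// anything else becomes an error instead of a panic.
func visit(n *node) error {
	switch n.Type {
	case 0, 1: // placeholder constants for the known node kinds
		return nil
	default:
		return fmt.Errorf("unexpected node type %d", n.Type)
	}
}

// walk stands in for the caller (e.g. a Prove-style traversal): once the
// callback can fail, the caller must propagate that error upward.
func walk(nodes []*node, cb func(*node) error) error {
	for _, n := range nodes {
		if err := cb(n); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	err := walk([]*node{{Type: 0}, {Type: 7}}, visit)
	fmt.Println(err) // unexpected node type 7
}
```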
Line range hint
126-155
: UpdateProve
method to handle errors from callbacksSince the callback functions may now return errors, the
Prove
method should handle these errors appropriately.Modify the code to capture and return the error:
-func (t *ProofTracer) Prove(key []byte, proofDb ethdb.KeyValueWriter) error { +func (t *ProofTracer) Prove(key []byte, proofDb ethdb.KeyValueWriter) error { fromLevel := uint(0) var mptPath []*trie.Node -return t.trie.ProveWithDeletion(key, fromLevel, +err := t.trie.ProveWithDeletion(key, fromLevel, func(n *trie.Node) error { // Callback logic... }, func(n *trie.Node, _ *trie.Node) { // Callback logic... }, ) +if err != nil { + return err +} +return nil }trie/secure_trie.go (3)
64-72
: Consider exportingnewStateTrie
if intended for public useThe function
newStateTrie
is unexported due to the lowercase initial letter. If this constructor is meant to replaceNewSecure
for external packages, consider renaming it toNewStateTrie
to make it accessible outside the package.
Line range hint
166-173
: Avoid shadowing the `err` variable in `UpdateAccount`

In the `if` statement, `err` is redeclared, which shadows the outer `err`. This can lead to confusion and potential bugs.

Consider renaming the inner `err` variable:

```diff
 func (t *stateTrie) UpdateAccount(address common.Address, acc *types.StateAccount) error {
 	hk := t.hashKey(address.Bytes())
 	data, err := rlp.EncodeToBytes(acc)
 	if err != nil {
 		return err
 	}
-	if err := t.trie.Update(hk, data); err != nil {
-		return err
+	if updateErr := t.trie.Update(hk, data); updateErr != nil {
+		return updateErr
 	}
 	t.getSecKeyCache()[string(hk)] = address.Bytes()
 	return nil
 }
```
Line range hint
226-236
: Ensure safe conversion between `string` and `[]byte` in `Commit`

Converting `hk` from `string` back to `[]byte` using `[]byte(hk)` may lead to issues if the original `[]byte` contains non-UTF8 data. Since keys are hashed and may contain arbitrary bytes, this conversion can corrupt data.

Consider storing `hk` as `[]byte` in the `secKeyCache` to avoid conversions:

```diff
-t.secKeyCache map[string][]byte
+secKeyCache map[[common.HashLength]byte][]byte

 func (t *stateTrie) getSecKeyCache() map[[common.HashLength]byte][]byte {
 	...
-	t.secKeyCache = make(map[string][]byte)
+	t.secKeyCache = make(map[[common.HashLength]byte][]byte)
 }

 func (t *stateTrie) MustUpdate(key, value []byte) {
 	hk := t.hashKey(key)
 	t.trie.MustUpdate(hk, value)
-	t.getSecKeyCache()[string(hk)] = common.CopyBytes(key)
+	t.getSecKeyCache()[hk] = common.CopyBytes(key)
 }

 // Similar changes for other methods using `secKeyCache`
```

This change avoids the need to cast between `string` and `[]byte`, ensuring data integrity.
trie/zk_trie_node.go (1)
42-42
: Fix typo in function comment: "DeduceUploadType" should be "DeduceUpgradeType"The function comment for
DeduceUpgradeType
incorrectly spells "DeduceUploadType" instead of "DeduceUpgradeType". Correcting this enhances code readability and maintains consistency.rollup/tracing/tracing.go (1)
488-491
: Consider consolidating deletion marking to avoid redundancyThe code marks deletions on both
zktrieTracer
andenv.ZkTrieTracer[addrStr]
, then mergeszktrieTracer
intoenv.ZkTrieTracer[addrStr]
. To avoid redundancy, consider marking deletions directly onenv.ZkTrieTracer[addrStr]
, or ensure that both steps are necessary.trie/stacktrie_test.go (2)
429-431
: Ensure the 'cmp' method handles nil slices appropriatelyWhile
bytes.Compare
handlesnil
slices correctly, consider adding a comment or check to clarify thatk.k
andother.k
are expected to be non-nil
to prevent unexpected behavior.Optionally, add a comment:
func (k *kv) cmp(other *kv) int { + // Assume k.k and other.k are non-nil return bytes.Compare(k.k, other.k) }
Line range hint
433-473
: Set a fixed seed for the random number generator to ensure test reproducibilityIn
TestPartialStackTrie
, usingrand.Intn
without setting a seed can lead to non-deterministic test results. Setting a fixed seed ensures that tests are reproducible and failures can be consistently investigated.Apply the following diff to set a fixed seed:
```diff
 func TestPartialStackTrie(t *testing.T) {
+	rand.Seed(1) // Set a fixed seed for reproducibility
 	for round := 0; round < 100; round++ {
 		var (
```
Alternatively, if you prefer true randomness but want to reproduce failures, you can capture and log the seed used in each test run.
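As a sketch of that alternative, the seed can be generated per run, logged, and fed to a local `rand.Rand`, so a failure can be replayed by hard-coding the printed value. The body below is reduced to the seeding logic only and is illustrative, not the actual test:

```go
import (
	"math/rand"
	"testing"
	"time"
)

func TestPartialStackTrie(t *testing.T) {
	seed := time.Now().UnixNano()
	t.Logf("random seed: %d", seed) // re-run with this value to reproduce a failure
	rng := rand.New(rand.NewSource(seed))

	for round := 0; round < 100; round++ {
		_ = rng.Intn(100) // use rng instead of the package-level rand functions
	}
}
```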
trie/sync_test.go (1)
145-145
: Remove unnecessary commented-out codeThe line
// emptyD, _ := New(TrieID(types.EmptyRootHash), dbD)
is commented out. If this code is no longer needed, consider removing it to enhance code readability and maintainability.core/txpool/blobpool/blobpool_test.go (6)
Line range hint
262-262
: Typo in comment: "veirfy" should be "verify"There's a typo in the comment on line 262. The word "veirfy" should be corrected to "verify".
Apply this diff to fix the typo:
-// Insert a sequence of transactions with already passed nonces to veirfy that the entire set will get dropped. +// Insert a sequence of transactions with already passed nonces to verify that the entire set will get dropped.
Line range hint
307-307
: Typo in comment: "veirfy" should be "verify"There's a typo in the comment on line 307. The word "veirfy" should be corrected to "verify".
Apply this diff to fix the typo:
-// Insert a sequence of transactions fully overdrafted to veirfy that the entire set will get invalidated. +// Insert a sequence of transactions fully overdrafted to verify that the entire set will get invalidated.
Line range hint
391-391
: Missing error handling forstatedb.Commit
The call to
statedb.Commit(0, true)
does not handle the returned error. It's important to check and handle errors to ensure that state commits are successful and to prevent potential issues during testing.Apply this diff to handle the error:
-statedb.Commit(0, true) +if _, err := statedb.Commit(0, true); err != nil { + t.Fatalf("failed to commit state: %v", err) +}
Line range hint
483-483
: Missing error handling forstatedb.Commit
The call to
statedb.Commit(0, true)
does not handle the returned error. It's important to check and handle errors to ensure that state commits are successful and to prevent potential issues during testing.Apply this diff to handle the error:
-statedb.Commit(0, true) +if _, err := statedb.Commit(0, true); err != nil { + t.Fatalf("failed to commit state: %v", err) +}
Line range hint
945-945
: Typo in comment: "tumpe" should be "tuple"There's a typo in the comment on line 945. The word "tumpe" should be corrected to "tuple".
Apply this diff to fix the typo:
-// seed is a helper tumpe to seed an initial state db and pool +// seed is a helper tuple to seed an initial state db and pool
Line range hint
220-452
: Consider refactoringTestOpenDrops
into smaller functionsThe
TestOpenDrops
function spans over 230 lines, making it difficult to read and maintain. Breaking it down into smaller, focused helper functions can improve readability and maintainability.trie/zk_trie.go (2)
508-530
: Simplify error handling in `TryGet`

In the `TryGet` method, when the key is not found, it returns `nil`:

```go
if err == ErrKeyNotFound {
	// according to https://github.com/ethereum/go-ethereum/blob/...
	return nil, nil
} else if err != nil {
	return nil, err
}
```

Consider simplifying the error handling by returning early when the key is not found:

```go
if err == ErrKeyNotFound {
	return nil, nil
}
if err != nil {
	return nil, err
}
```

This improves readability by reducing the need for `else` after a return.
1318-1319
: Implement or properly handle `GetAccountByHash`

The `GetAccountByHash` method is unimplemented:

```go
func (mt *ZkTrie) GetAccountByHash(addrHash common.Hash) (*types.StateAccount, error) {
	return nil, errors.New("not implemented")
}
```

If this method is intended for future implementation, consider returning `ErrNotImplemented` or documenting its unavailability to inform users appropriately.
core/state/snapshot/generate_test.go (8)
Line range hint
65-67
: Handle errors returned bymakeStorageTrie
.In the
testGeneration
function, the error returned bymakeStorageTrie
is ignored. Ignoring errors can lead to unexpected behavior or panics if the function fails.Modify the code to handle the error:
- stRoot := helper.makeStorageTrie(common.Hash{}, []string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-2", "val-3"}, false) + stRoot, err := helper.makeStorageTrie(common.Hash{}, []string{"key-1", "key-2", "key-3"}, []string{"val-1", "val-2", "val-3"}, false) + if err != nil { + t.Fatalf("Failed to create storage trie: %v", err) + }
Line range hint
101-103
: Check for errors when creating a new state trie.In the
newHelper
function, the error returned bytrie.NewStateTrie
is ignored. Proper error handling ensures that any issues during trie initialization are caught early.Update the code to handle the error:
- accTrie, _ := trie.NewStateTrie(trie.StateTrieID(types.EmptyRootHash), triedb) + accTrie, err := trie.NewStateTrie(trie.StateTrieID(types.EmptyRootHash), triedb) + if err != nil { + log.Fatalf("Failed to create new state trie: %v", err) + }
Line range hint
153-155
: Handle errors fromCommit
operations.In the
Commit
method, the error returned byt.accTrie.Commit(true)
is ignored. This could suppress critical errors that need attention.Modify the code to handle the error:
- root, nodes, _ := t.accTrie.Commit(true) + root, nodes, err := t.accTrie.Commit(true) + if err != nil { + log.Fatalf("Failed to commit account trie: %v", err) + }
Line range hint
72-74
: Avoid hardcoding expected hash values in tests.Comparing the computed root hash to a hardcoded value can cause tests to fail if the underlying data or hashing algorithm changes.
Consider computing the expected root dynamically or removing the hardcoded hash comparison to improve test resilience.
Line range hint
220-225
: Uset.Error
instead oft.Fatal
within helper functions.In the
checkSnapRoot
function, usingt.Fatal
may not behave as expected because it's called from within a helper function, not the main test function.Replace
t.Fatal
witht.Error
to log the error without stopping the test abruptly. Ensure the error is properly handled in the test flow.- t.Fatal(err) + t.Error(err)
Line range hint
421-423
: Increase test timeout durations for reliability.Hardcoded timeouts like
time.After(3 * time.Second)
may cause tests to fail on slower systems or under heavy load.Consider increasing the timeout durations or making them configurable to enhance test reliability across different environments.
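One hedged way to make such a timeout configurable is an environment-variable override; the variable name below is a placeholder, not an existing knob in the test suite:

```go
import (
	"os"
	"time"
)

// testTimeout returns the default wait duration, optionally overridden via an
// environment variable so slower machines can extend it.
func testTimeout(def time.Duration) time.Duration {
	if v := os.Getenv("SNAPSHOT_TEST_TIMEOUT"); v != "" {
		if d, err := time.ParseDuration(v); err == nil {
			return d
		}
	}
	return def
}
```

A `select` on `time.After(testTimeout(3 * time.Second))` would then replace the hard-coded `time.After(3 * time.Second)`.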
Line range hint
375-378
: Avoid modifying global variables concurrently in tests.Modifying
accountCheckRange
in concurrent tests can lead to race conditions and unpredictable behavior.Use local variables within the test functions or synchronize access to shared variables to prevent data races.
Line range hint
1070-1075
: Handle errors when creating new state tries in helper functions.In several helper functions, errors returned by
trie.NewStateTrie
are ignored. Proper error handling is crucial for debugging and maintaining code quality.Update the helper functions to handle errors appropriately.
core/state/statedb.go (3)
Line range hint
750-755
: Inconsistent method naming: 'Selfdestruct6780' should be 'SelfDestruct6780'The method
Selfdestruct6780
does not follow Go's naming conventions for exported methods. For consistency and readability, please rename it toSelfDestruct6780
.Apply this diff to correct the method name:
-func (s *StateDB) Selfdestruct6780(addr common.Address) { +func (s *StateDB) SelfDestruct6780(addr common.Address) {
Line range hint
1967-2139
: Consider refactoring the 'Commit' method to improve maintainabilityThe
Commit
method spans over 170 lines, making it lengthy and potentially difficult to maintain. Refactoring it into smaller, modular functions would enhance readability and maintainability. Breaking down complex methods helps isolate functionality and makes the codebase easier to understand and test.
Line range hint
1914-1917
: Typographical error in commentThe comment contains a grammatical error:
-// The fast approach can be failed if the snapshot is not fully +// The fast approach can fail if the snapshot is not fully
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (1)
go.sum
is excluded by!**/*.sum
📒 Files selected for processing (82)
- cmd/evm/runner.go (0 hunks)
- cmd/utils/flags.go (1 hunks)
- core/blockchain.go (3 hunks)
- core/blockchain_repair_test.go (2 hunks)
- core/blockchain_sethead_test.go (1 hunks)
- core/blockchain_snapshot_test.go (1 hunks)
- core/blockchain_test.go (44 hunks)
- core/chain_makers.go (2 hunks)
- core/chain_makers_test.go (1 hunks)
- core/forkid/forkid_test.go (1 hunks)
- core/genesis.go (1 hunks)
- core/genesis_test.go (2 hunks)
- core/state/database.go (0 hunks)
- core/state/iterator_test.go (1 hunks)
- core/state/snapshot/difflayer_test.go (1 hunks)
- core/state/snapshot/disklayer_test.go (1 hunks)
- core/state/snapshot/generate_test.go (1 hunks)
- core/state/snapshot/holdable_iterator_test.go (1 hunks)
- core/state/snapshot/iterator_test.go (1 hunks)
- core/state/snapshot/snapshot_test.go (1 hunks)
- core/state/state_object.go (4 hunks)
- core/state/state_prove.go (0 hunks)
- core/state/state_test.go (2 hunks)
- core/state/statedb.go (1 hunks)
- core/state/statedb_fuzz_test.go (1 hunks)
- core/state/statedb_test.go (5 hunks)
- core/state/sync_test.go (1 hunks)
- core/state_processor.go (1 hunks)
- core/state_processor_test.go (2 hunks)
- core/txpool/blobpool/blobpool_test.go (2 hunks)
- core/txpool/legacypool/legacypool_test.go (1 hunks)
- core/types/hashes.go (2 hunks)
- core/types/hashing_test.go (1 hunks)
- core/types/state_account_marshalling.go (1 hunks)
- core/types/state_account_marshalling_test.go (4 hunks)
- core/vm/gas_table_test.go (1 hunks)
- core/vm/logger.go (0 hunks)
- core/vm/runtime/runtime_test.go (3 hunks)
- eth/filters/filter_system_test.go (1 hunks)
- eth/filters/filter_test.go (1 hunks)
- eth/tracers/js/goja.go (0 hunks)
- eth/tracers/logger/access_list_tracer.go (0 hunks)
- eth/tracers/logger/logger.go (0 hunks)
- eth/tracers/logger/logger_json.go (0 hunks)
- eth/tracers/native/4byte.go (0 hunks)
- eth/tracers/native/call.go (0 hunks)
- eth/tracers/native/call_flat.go (0 hunks)
- eth/tracers/native/mux.go (0 hunks)
- eth/tracers/native/noop.go (0 hunks)
- go.mod (0 hunks)
- rollup/ccc/async_checker_test.go (1 hunks)
- rollup/ccc/logger.go (0 hunks)
- rollup/tracing/proof.go (5 hunks)
- rollup/tracing/proof_test.go (1 hunks)
- rollup/tracing/tracing.go (6 hunks)
- trie/byte32.go (1 hunks)
- trie/byte32_test.go (1 hunks)
- trie/database.go (1 hunks)
- trie/database_supplement.go (0 hunks)
- trie/hash.go (1 hunks)
- trie/hash_test.go (1 hunks)
- trie/iterator_test.go (9 hunks)
- trie/node_test.go (1 hunks)
- trie/proof.go (1 hunks)
- trie/proof_test.go (1 hunks)
- trie/secure_trie.go (13 hunks)
- trie/secure_trie_test.go (3 hunks)
- trie/stacktrie.go (1 hunks)
- trie/stacktrie_test.go (1 hunks)
- trie/sync_test.go (6 hunks)
- trie/tracer_test.go (5 hunks)
- trie/trie_test.go (14 hunks)
- trie/triedb/hashdb/database.go (2 hunks)
- trie/triedb/hashdb/database_supplement.go (0 hunks)
- trie/util.go (1 hunks)
- trie/util_test.go (1 hunks)
- trie/zk_trie.go (3 hunks)
- trie/zk_trie_database.go (0 hunks)
- trie/zk_trie_database_test.go (0 hunks)
- trie/zk_trie_impl_test.go (0 hunks)
- trie/zk_trie_node.go (1 hunks)
- trie/zk_trie_node_test.go (1 hunks)
⛔ Files not processed due to max files limit (2)
- trie/zk_trie_proof_test.go
- trie/zk_trie_test.go
💤 Files with no reviewable changes (20)
- cmd/evm/runner.go
- core/state/database.go
- core/state/state_prove.go
- core/vm/logger.go
- eth/tracers/js/goja.go
- eth/tracers/logger/access_list_tracer.go
- eth/tracers/logger/logger.go
- eth/tracers/logger/logger_json.go
- eth/tracers/native/4byte.go
- eth/tracers/native/call.go
- eth/tracers/native/call_flat.go
- eth/tracers/native/mux.go
- eth/tracers/native/noop.go
- go.mod
- rollup/ccc/logger.go
- trie/database_supplement.go
- trie/triedb/hashdb/database_supplement.go
- trie/zk_trie_database.go
- trie/zk_trie_database_test.go
- trie/zk_trie_impl_test.go
✅ Files skipped from review due to trivial changes (7)
- core/blockchain_test.go
- core/chain_makers.go
- core/state/iterator_test.go
- core/state/snapshot/iterator_test.go
- core/state/statedb_fuzz_test.go
- trie/node_test.go
- trie/proof_test.go
🚧 Files skipped from review as they are similar to previous changes (7)
- core/types/hashes.go
- trie/iterator_test.go
- trie/secure_trie_test.go
- trie/stacktrie.go
- trie/tracer_test.go
- trie/trie_test.go
- trie/triedb/hashdb/database.go
🧰 Additional context used
🔇 Additional comments (103)
trie/byte32.go (5)
1-7
: LGTM: Package declaration and imports are appropriate.The package declaration and imports are correct and align with the functionality provided in this file.
9-9
: LGTM: Byte32 type definition is clear and efficient.The
Byte32
type is well-defined as a fixed-size 32-byte array, which is an efficient representation for the intended purpose.
21-35
: LGTM: Bytes method and NewByte32FromBytes function are well-implemented.The
Bytes
method efficiently returns the underlying byte array. TheNewByte32FromBytes
function correctly implements the described behavior, handling both longer and shorter input slices appropriately.
1-42
: Overall, the implementation in byte32.go is well-structured and efficient.This new file introduces a solid foundation for handling 32-byte arrays in the trie package. The
Byte32
type and its associated methods provide necessary functionality for hashing and byte manipulation. The implementations are correct, with appropriate error handling and clear comments. Minor suggestions have been made for improvement, and one verification task has been requested regarding a constant definition.
11-19
: LGTM: Hash method implementation looks correct. Verify constant definition.The
Hash
method correctly implements a custom hashing scheme using the Poseidon algorithm, with appropriate error handling.Please verify the definition and value of the
HASH_DOMAIN_BYTE32
constant, as it's not defined in this file.✅ Verification successful
Further Verification Needed: Check definition of
HASH_DOMAIN_ELEMS_BASE
.Please verify the definition and value of the
HASH_DOMAIN_ELEMS_BASE
constant to ensure it's correctly implemented.
Verified:
HASH_DOMAIN_BYTE32
andHASH_DOMAIN_ELEMS_BASE
are correctly defined.The
Hash
method correctly utilizes the defined constants, ensuring proper implementation of the hashing scheme.
HASH_DOMAIN_BYTE32
is set to512
(2 * 256
).HASH_DOMAIN_ELEMS_BASE
is correctly defined as256
intrie/hash.go
.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for the definition of HASH_DOMAIN_BYTE32 rg -n "HASH_DOMAIN_BYTE32\s*=\s*" --type goLength of output: 114
Script:
# #!/bin/bash # Search for the definition of HASH_DOMAIN_ELEMS_BASE rg -n "HASH_DOMAIN_ELEMS_BASE\s*=\s*" --type goLength of output: 95
trie/byte32_test.go (1)
1-8
: LGTM: Appropriate package and imports.The package declaration and imports are correct and suitable for this test file. The use of
testify/assert
is a good practice for writing clear and concise tests.rollup/ccc/async_checker_test.go (1)
33-33
: LGTM! Verify consistency across the codebase.The change from
trie.HashDefaultsWithZktrie
totrie.HashDefaults
aligns with the broader refactoring effort to standardize trie handling across the project. This modification ensures that the test suite accurately reflects the updates made in the main codebase.To ensure consistency, let's verify that similar changes have been made across the codebase:
This will help confirm that the refactoring has been applied consistently throughout the project.
✅ Verification successful
Refactoring Verified Successfully.
All instances of
HashDefaultsWithZktrie
have been removed and replaced withHashDefaults
consistently across the codebase.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for any remaining instances of HashDefaultsWithZktrie and compare with HashDefaults usage # Search for any remaining instances of HashDefaultsWithZktrie echo "Searching for remaining instances of HashDefaultsWithZktrie:" rg "HashDefaultsWithZktrie" echo "---" # Search for instances of HashDefaults echo "Searching for instances of HashDefaults:" rg "HashDefaults"Length of output: 4466
trie/hash_test.go (1)
3-12
: LGTM: Import statements are well-organized and appropriate.The import statements are correctly structured, with standard library imports followed by third-party packages. All imported packages are utilized in the test functions.
core/types/state_account_marshalling.go (1)
45-46
: LGTM. Consider verifying the impact of type change.The change from
[]zkt.Byte32
to[][32]byte
is a good move towards using standard Go types. This should improve compatibility and reduce dependencies.To ensure this change doesn't introduce any issues, please run the following script to check for any remaining uses of
zkt.Byte32
:trie/util_test.go (1)
11-21
: LGTM: TestSetBitBigEndian is well-implemented.The test covers various bit positions across the entire byte array, providing good coverage for the
SetBitBigEndian
function. The expected result correctly represents the set bits in big-endian format.core/state/snapshot/holdable_iterator_test.go (2)
1-2
: LGTM: Build constraint additionThe addition of the build constraint
//go:build all_tests
along with its legacy format is a good practice. It allows for better control over which tests are built and run in different scenarios.
1-2
: Summary: Build constraint addition enhances test organizationThe only change in this file is the addition of the
//go:build all_tests
constraint. This modification improves the organization of tests by ensuring that these specific tests are only built and run when theall_tests
tag is specified. The existing test functions remain unchanged and continue to provide good coverage for the holdable iterator functionality.core/types/state_account_marshalling_test.go (6)
41-45
: Improved byte slice access in assertions.The changes from
Bytes()
to direct slice access[:]
are consistent and correct. This modification potentially improves efficiency by eliminating unnecessary method calls while maintaining the same functionality.
52-52
: Consistent byte slice access in flattening loop.The change from
Bytes()
to[:]
is consistent with earlier modifications and maintains the same functionality while potentially improving efficiency.
78-82
: Consistent byte slice access in zero account test.The changes from
Bytes()
to[:]
are consistent with earlier modifications and across all assertions in this test function. They maintain the same functionality while potentially improving efficiency.
102-106
: Consistent byte slice access in non-empty account test.The changes from
Bytes()
to[:]
are consistent with earlier modifications and across all assertions in this test function. They maintain the same functionality while potentially improving efficiency.
113-113
: Consistent byte slice access in max fields account test.The changes from
Bytes()
to[:]
are consistent with earlier modifications, both in assertions and the flattening loop. They maintain the same functionality while potentially improving efficiency.Also applies to: 141-145, 152-152
Line range hint
1-190
: Summary of changes in state_account_marshalling_test.goThe modifications in this file consistently replace
Bytes()
method calls with direct slice access[:]
across all test functions. These changes:
- Maintain existing test logic and functionality.
- Potentially improve efficiency by eliminating unnecessary method calls.
- Align with a broader refactoring effort to streamline byte slice handling.
The consistency of these changes across the file suggests a systematic approach to the refactoring process.
core/vm/gas_table_test.go (3)
136-137
: Improved test structure with sub-tests.The introduction of
t.Run
for each test case is a positive change. This approach enhances test organization, provides clearer output for failures, and allows for better isolation of test cases.Also applies to: 183-183
137-139
: Clarification needed on skipped tests.The addition of the skip condition for non-EIP-3860 tests is noted. This change aligns with Scroll's default behavior. However, could you please clarify:
- Are there any implications of not running these skipped tests?
- How is backwards compatibility ensured if EIP-3860 is always enabled?
136-183
: Overall improvements to TestCreateGas function.The changes to the
TestCreateGas
function represent a significant improvement in test structure, clarity, and alignment with Scroll's implementation. The introduction of sub-tests, conditional execution based on EIP-3860, and refined gas usage checks all contribute to a more robust and maintainable test suite.While there are a few minor suggestions for further enhancements, the overall direction of these changes is positive and aligns well with best practices in Go testing.
core/genesis_test.go (5)
47-47
: Verify the intention behind commenting out the PathScheme test.The PathScheme test case has been commented out. This could potentially reduce test coverage for different database schemes. Can you confirm if this is intentional and if there's an alternative way to ensure PathScheme is still being tested?
52-52
: Verify the correctness of the new customghash value.The
customghash
value has been updated. Please confirm that this new hash (0xc96ed5df64e683d5af1b14ec67126e31b914ca828021c330efa00572a61ede8f
) correctly corresponds to the current custom genesis block configuration. It's important to ensure this value is accurate to maintain the integrity of the tests.
Line range hint
77-95
: Verify the intention behind commenting out multiple test cases.Several test cases have been commented out, including scenarios for nil genesis blocks and mainnet comparisons. This significantly reduces the test coverage for genesis block setups. Can you explain the rationale behind this change? If these scenarios are no longer relevant, consider adding new test cases to maintain comprehensive coverage of genesis block configurations.
192-192
: Simplification of trie configuration looks good.The change to consistently use
trie.HashDefaults
for the trie configuration simplifies the code and aligns with the broader refactoring of trie configurations. This is a good improvement.However, please verify that this change doesn't negatively impact any scenarios that previously relied on the
ZktrieEnabled
condition. Ensure that all relevant test cases still pass with this new configuration.
Line range hint
1-268
: Overall changes simplify testing, but warrant a review of test coverage.The changes in this file align with the broader refactoring of trie configurations in the codebase. While they simplify the testing process, they also reduce test coverage in some areas, particularly for different genesis block scenarios.
To ensure the continued robustness of the testing suite:
- Review the overall test coverage for genesis block configurations and setups.
- Consider adding new test cases to cover any scenarios that might have been lost due to the commented-out tests.
- Verify that the simplification of trie configuration doesn't overlook any edge cases previously covered by the more complex setup.
These steps will help maintain the integrity and comprehensiveness of the test suite while adapting to the new codebase structure.
trie/database.go (1)
116-116
: LGTM. Verify impact on trie database behavior.The change from
mptResolver{}
toZkChildResolver{}
aligns with the PR objective of integrating the ZkTrie implementation. This modification is part of the broader refactoring effort to remove legacy trie references.To ensure this change doesn't introduce unexpected behavior:
- Verify that
ZkChildResolver
implements all necessary methods of the previousmptResolver
.- Check if this change affects the performance or functionality of the trie database.
- Update any relevant documentation or comments that might reference the old
mptResolver
.Run the following script to verify the implementation of
ZkChildResolver
:✅ Verification successful
Verified:
ZkChildResolver
correctly implements all required methods.The
ZkChildResolver
struct intrie/zk_trie_node.go
implements theForEach
method, matching themptResolver
's implementation. This ensures that the replacement aligns with the expected interface, maintaining consistent behavior for the trie database.
- Confirmed that
ZkChildResolver
provides the necessaryForEach
method asmptResolver
did.- No additional methods are required for the resolver interface based on the current implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Verify the implementation of ZkChildResolver # Test: Check for the definition of ZkChildResolver rg -A 10 'type ZkChildResolver struct' # Test: Compare methods of ZkChildResolver with mptResolver rg -A 5 'func \((\w+) (ZkChildResolver|mptResolver)\)'Length of output: 1544
core/state/snapshot/difflayer_test.go (2)
1-2
: LGTM: Build constraints added correctly.The addition of build constraints for
all_tests
is appropriate. This allows for selective test execution, potentially improving build times for regular development cycles while ensuring comprehensive testing when needed.
Line range hint
1-479
: Summary: Minor change to test file configuration.The only change to this file is the addition of build constraints for
all_tests
. The existing test functions (TestMergeBasics
,TestMergeDelete
,TestInsertAndMerge
, and various benchmarks) remain unchanged and appear to provide good coverage of thediffLayer
functionality, including merging, deletion, and insertion operations.core/state/state_object.go (4)
146-146
: Improved empty root hash checkThe update to use
types.EmptyRootHash
instead ofs.db.db.TrieDB().EmptyRoot()
is a good change. It standardizes the empty root hash check across the codebase, potentially improves performance by avoiding a database call, and enhances code readability.
256-256
: Consistent use of EmptyRootHash in finalise methodThe update to use
types.EmptyRootHash
in the prefetching condition of thefinalise
method is a good change. It maintains consistency with the earlier updates in the file and aligns with the broader changes in the codebase for standardizing empty root hash checks.
Line range hint
1-577
: Overall assessment of changes in state_object.goThe modifications in this file consistently update the handling of empty root hashes and simplify some methods. These changes align with the broader refactoring effort mentioned in the PR summary, aiming to streamline the trie implementation. The updates improve code consistency and potentially enhance performance by reducing unnecessary database calls.
Key improvements:
- Standardized use of
types.EmptyRootHash
- Simplified
GetCommittedState
method- Consistent empty root hash checks in
getTrie
andfinalise
methodsThese changes contribute positively to the codebase's maintainability and readability.
202-202
: Simplified GetCommittedState methodThe simplification of this method by directly assigning the snapshot value to
common.BytesToHash(enc)
reduces complexity and improves readability. This change aligns with the removal of ZK trie specific logic.Please verify that the snapshot always returns data in the correct format for this direct assignment. Run the following script to check the usage of snapshots in the codebase:
✅ Verification successful
Verified: GetCommittedState Method Simplification
The direct assignment using
common.BytesToHash(enc)
has been verified across the codebase, ensuring that snapshots return data in the correct format.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check snapshot usage and data format handling # Search for snapshot data retrieval and usage rg -n "snap\.Storage\(" -A 5 rg -n "common\.BytesToHash\(" -A 2 -B 2Length of output: 70104
core/state_processor_test.go (4)
256-256
: Base fee update approvedThe base fee in the error message has been updated from 38100000 to 39370000, consistent with the change in the ErrFeeCapTooLow test case.
To ensure the consistency of this change across the codebase, please run the following verification:
#!/bin/bash # Search for other occurrences of the old and new base fee values echo "Searching for old base fee value (38100000):" rg "38100000" --type go echo "Searching for new base fee value (39370000):" rg "39370000" --type go
203-203
: Verify base fee consistencyThe base fee in the error message has been updated from 38100000 to 39370000. Please ensure this change is consistent with any modifications in the fee calculation logic elsewhere in the codebase.
#!/bin/bash # Search for other occurrences of the old and new base fee values echo "Searching for old base fee value (38100000):" rg "38100000" --type go echo "Searching for new base fee value (39370000):" rg "39370000" --type go
250-250
: Transaction hash update approvedThe transaction hash in the error message has been updated. This change is consistent with the previous update and is likely due to modifications in the transaction creation or hashing process.
To ensure the consistency of this change, please verify the transaction creation process in the test setup:
✅ Verification successful
Transaction hash update verified
The change in the transaction hash corresponds directly to the updated parameters in the transaction creation process.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for the mkDynamicCreationTx function usage rg "mkDynamicCreationTx\(" --type goLength of output: 839
Script:
#!/bin/bash # Search for the mkDynamicCreationTx function usage in Go files rg "mkDynamicCreationTx\(" --type goLength of output: 303
244-244
: Transaction hash update approvedThe transaction hash in the error message has been updated. This change is likely due to modifications in the transaction creation or hashing process.
To ensure the consistency of this change, please verify the transaction creation process in the test setup:
✅ Verification successful
Transaction hash update verified
The updated base fee values confirm that the transaction creation process has been modified, resulting in the updated transaction hash in the error message.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for the mkDynamicCreationTx function definition rg "func mkDynamicCreationTx" --type goLength of output: 839
core/blockchain_snapshot_test.go (2)
1-2
: New build constraints addedThe new build constraints
//go:build all_tests
and// +build all_tests
have been added to the file. This ensures that these tests are only built and run when theall_tests
build tag is specified.These constraints are appropriate for snapshot tests, which may be more resource-intensive and not necessary for every build.
Line range hint
1-1010
: Overall assessment: Well-implemented and comprehensive test suiteThe changes to this file, including the new build constraints and the comprehensive test suite for blockchain snapshot functionality, are well-implemented and thorough. The test cases cover a wide range of scenarios and should provide good coverage for the snapshot feature.
Key points:
- Appropriate build constraints added
- Comprehensive test suite covering various snapshot scenarios
- Clear documentation and expectations for each test case
- Support for both HashScheme and PathScheme
No significant issues were found, and the implementation appears to be of high quality.
core/state/snapshot/disklayer_test.go (1)
1-2
: LGTM: Build constraint added for test isolation.The addition of the build constraint
all_tests
is a good practice. It allows for better control over which tests are run in different scenarios, potentially speeding up the regular test suite by excluding these tests unless explicitly requested.core/state/sync_test.go (8)
1-2
: LGTM: Build tags added for test isolation.The addition of build tags is a good practice. It ensures that these potentially resource-intensive tests are only compiled and run when specifically requested, improving build times for regular development.
Line range hint
32-70
: LGTM: Comprehensive test state creation.The
makeTestState
function creates a diverse set of test accounts, covering various scenarios including different balances, nonces, code, and storage. This provides a robust foundation for testing state synchronization.
Line range hint
74-97
: LGTM: Thorough state verification.The
checkStateAccounts
function provides a comprehensive check of the reconstructed state, verifying balances, nonces, and code for each account. It also correctly uses the provided scheme for configuration, ensuring flexibility in testing different trie implementations.
Line range hint
121-132
: LGTM: Comprehensive empty state sync test.The
TestEmptyStateSync
function effectively tests the edge case of empty state synchronization. It's particularly good that it checks both HashScheme and PathScheme, ensuring consistency across different trie implementations.
Line range hint
137-258
: LGTM: Comprehensive iterative state sync tests.The
testIterativeStateSync
function and its variants provide thorough coverage of different synchronization scenarios. The tests cover individual and batched syncing, syncing from memory and disk, and syncing by path. This comprehensive approach ensures robustness of the state synchronization process across various conditions.
Line range hint
401-484
: LGTM: Thorough random order state sync tests.The
TestIterativeRandomStateSync
function and its variants effectively test the robustness of the synchronization process when nodes are synced in a random order. This is an important test case as it simulates real-world scenarios where data might not always arrive sequentially. The tests cover both individual and batched syncing, providing good coverage of different sync strategies.
Line range hint
570-664
: LGTM: Crucial incomplete state sync test.The
TestIncompleteStateSync
function is a critical test that ensures data integrity during the synchronization process. By verifying that only complete sub-tries are present in the database at any point during the sync, it guards against potential inconsistencies that could arise from partial or interrupted syncs. The separate checks for code and trie nodes provide comprehensive coverage of all sync scenarios.
Line range hint
1-670
: Overall: Excellent improvements to state sync testing.The changes to this file significantly enhance the test coverage for state synchronization. The new tests cover a wide range of scenarios including empty states, iterative syncing, random order syncing, and incomplete syncs. These improvements will help ensure the robustness and reliability of the state synchronization process across different conditions and edge cases.
core/forkid/forkid_test.go (1)
1-2
: Build tags added for test file inclusion.The addition of build tags
//go:build all_tests
and// +build all_tests
ensures that this test file is only included when running all tests. This is a good practice for organizing and optimizing test execution.core/vm/runtime/runtime_test.go (3)
273-273
: Verify the implications of skipping the TestBlockhash test.Skipping this test due to Scroll's different implementation of blockhash may have implications for compatibility with standard Ethereum clients.
Could you provide more details on how Scroll's blockhash implementation differs and how it maintains compatibility with the Ethereum ecosystem?
702-702
: Consider the implications of not supporting SELFDESTRUCT.Skipping this test due to lack of SELFDESTRUCT support in Scroll is a significant deviation from standard Ethereum behavior. This could have far-reaching implications for contract compatibility and gas refunds.
Please clarify the rationale behind not supporting SELFDESTRUCT and how Scroll plans to handle existing contracts that use this opcode.
676-683
: Address the removal of SELFDESTRUCT test case.The removal of the SELFDESTRUCT test case is consistent with Scroll's lack of support for this opcode. However, this might affect the coverage of cold account access cost testing.
How does Scroll plan to ensure comprehensive testing of cold account access costs without the SELFDESTRUCT case? Are there alternative test cases that could be added to maintain coverage?
eth/filters/filter_system_test.go (1)
471-471
: LGTM. Verify test results after ZkTrie removal.The change aligns with the broader goal of phasing out ZkTrie configurations. This simplification is good for maintainability.
Please ensure that this change doesn't negatively impact the test results. Run the following command to verify:
core/state/statedb_test.go (4)
33-33
: LGTM: Import statement reorganized.The import for "github.com/holiman/uint256" has been moved to the top of the import block, improving code organization.
799-799
: Clarify the reason for commenting out the PathScheme test.The test case using
rawdb.PathScheme
has been commented out. This aligns with similar changes in other files, but it's important to understand the reasoning behind this modification.Could you please explain why this test case has been disabled? Is this a temporary change or part of a larger refactoring effort?
1054-1054
: Verify the impact of reduced cache size.The cache size in the trie database configuration has been reduced from 1024 to 128. This change might affect test performance or memory usage.
Could you please explain the reasoning behind this cache size reduction? Have you observed any impact on test performance or memory usage?
1109-1109
: Address skipped tests due to ZkTrie incompatibility.The
TestResetObject
andTestDeleteStorage
tests have been skipped with the message "Snapshot doesn't support ZkTrie". This change is consistent with the removal of ZkTrie-related functionality in other files.Could you please provide information on the plans to address this incompatibility? Are there ongoing efforts to make these tests compatible with ZkTrie, or will they be replaced with alternative tests?
Also applies to: 1144-1144
core/blockchain_sethead_test.go (1)
1957-1957
: Verify the impact of removing PathScheme from testsThe
rawdb.PathScheme
has been commented out, reducing the number of schemes tested. While this simplifies the testing process, it might lead to missing issues specific to thePathScheme
.To ensure this change doesn't introduce unintended consequences, please run the following verification:
If
PathScheme
is still used elsewhere in the codebase, consider keeping the tests for both schemes to maintain comprehensive coverage.✅ Verification successful
PathScheme usage remains widespread across the codebase. Removing it from
core/blockchain_sethead_test.go
does not impact overall coverage.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check for any remaining references to PathScheme in test files # and verify if PathScheme is still used elsewhere in the codebase. echo "Checking for PathScheme references in test files:" rg "PathScheme" --type go --glob "*_test.go" echo "\nChecking for PathScheme usage in non-test files:" rg "PathScheme" --type go --glob "!*_test.go"Length of output: 14708
core/blockchain.go (4)
Line range hint
156-171
: Simplified triedbConfig methodThe
triedbConfig
method has been refactored to remove theisUsingZktrie
parameter, simplifying its implementation. This change aligns with the removal of explicit zkTrie handling throughout the codebase.
179-180
: Updated default cache configuration for zkTrie compatibilityThe
defaultCacheConfig
has been modified to disable snapshots (SnapshotLimit
set to 0) and prevent waiting for snapshot construction on startup (SnapshotWait
set to false). These changes are due to the current incompatibility between snapshots and zkTrie.
275-275
: Simplified blockchain initialization processThe
NewBlockChain
function has been updated to reflect the changes in thetriedbConfig
method. The removal of the zkTrie-specific conditional check simplifies the trie database initialization process, moving towards a more unified approach.
Line range hint
1-1890
: Overall simplification and removal of zkTrie-specific configurationsThe changes in this file are part of a broader effort to streamline the codebase by removing explicit zkTrie handling. The modifications to the
triedbConfig
method,defaultCacheConfig
, andNewBlockChain
function all contribute to this goal. These changes simplify the code and make it more consistent, while also addressing the current incompatibility between snapshots and zkTrie.trie/util.go (1)
12-39
: Ensure proper handling of base cases inHashElemsWithDomain
In the
HashElemsWithDomain
function, whenlen(elems) == 0
, the function returnsNewHashFromBigInt(baseH)
(line 20). Please verify that this behavior correctly handles scenarios where onlyfst
andsnd
are provided without additional elements.rollup/tracing/proof_test.go (1)
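A minimal check of that zero-elements path could look like the following; the signature is inferred from `HashElems` above, the domain value is illustrative, the usual `math/big` and `testing` imports are assumed, and the exact digest is deliberately not asserted, so treat this as a sketch rather than the package's test code:

```go
func TestHashElemsWithDomainNoExtraElems(t *testing.T) {
	domain := big.NewInt(2 * 256) // illustrative domain value
	h, err := HashElemsWithDomain(domain, big.NewInt(1), big.NewInt(2))
	if err != nil {
		t.Fatalf("unexpected error hashing fst/snd only: %v", err)
	}
	if h == nil {
		t.Fatal("expected a non-nil hash for the len(elems) == 0 case")
	}
}
```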
81-86
:⚠️ Potential issue
key3
andkey4
have the same value, potential duplicationBoth
key3
andkey4
are assigned the same value (bytes.Repeat([]byte("x"), 32)
), which may not be intentional and could lead to duplicate keys in your tests. Verify ifkey3
should have a different value.If
key3
should be unique, consider updating its value:-key3 := bytes.Repeat([]byte("x"), 32) +key3 := bytes.Repeat([]byte("y"), 32)Likely invalid or redundant comment.
trie/zk_trie_node_test.go (7)
12-23
: LGTM:TestNewEmptyNode
is correctly implemented.The test for
NewEmptyNode
correctly asserts the node type and verifies both the node and value hashes.
25-42
: LGTM:TestNewLeafNode
is correctly implemented.The test appropriately verifies the node type, compressed flags, value preimage, and checks the correctness of node and value hashes.
44-58
: LGTM:TestNewParentNode
is correctly implemented.The test checks the node type, child nodes, and validates node and value hashes accurately.
60-77
: LGTM:TestNewParentNodeWithEmptyChild
is correctly implemented.The test ensures that parent nodes with empty children are handled properly and verifies hashes accordingly.
88-197
: LGTM:TestNewNodeFromBytes
is comprehensive.The test covers deserialization of parent, leaf, and empty nodes from bytes, including handling of invalid byte arrays and node types.
199-223
: LGTM:TestNodeValueAndData
is well-implemented.The test correctly verifies the canonical value and data methods for leaf, parent, empty, and invalid nodes.
225-240
: LGTM:TestNodeString
correctly tests string representations.The test appropriately checks the string outputs for leaf, parent, empty, and invalid nodes.
trie/secure_trie.go (19)
51-56
: StructstateTrie
is well-definedThe
stateTrie
struct fields are correctly defined, and the addition ofsecKeyCacheOwner
for managing the key cache consistency is appropriate.
80-81
: MethodMustGet
correctly retrieves valuesThe
MustGet
method properly wraps the underlying trie'sMustGet
, handling key hashing internally.
Line range hint
88-92
: Verify error handling inGetStorage
Ensure that the error returned by
rlp.Split(enc)
is appropriately handled. Since the function returnscontent, err
, any error fromrlp.Split
will be propagated, but it's crucial to confirm that this aligns with the intended error handling strategy.
Line range hint
100-106
:GetAccount
method functions as expectedThe method successfully retrieves and decodes the account data associated with the given address.
Line range hint
113-119
:GetAccountByHash
correctly retrieves accounts by hashThe method handles address hashes appropriately, providing flexibility in account retrieval.
127-129
:GetNode
method delegates correctlyThe implementation correctly calls
t.trie.GetNode
, ensuring node retrieval functionality is maintained.
Line range hint
140-145
:MustUpdate
updates trie and caches preimagesThe method correctly updates the trie with the hashed key and caches the preimage for future retrieval.
Line range hint
154-161
: Ensure RLP encoding errors are handled inUpdateStorage
While encoding
value
using RLP, any potential errors fromrlp.EncodeToBytes
should be checked and handled to prevent unintended behavior.Apply this diff to handle possible encoding errors:
func (t *stateTrie) UpdateStorage(_ common.Address, key, value []byte) error { hk := t.hashKey(key) - v, _ := rlp.EncodeToBytes(value) + v, err := rlp.EncodeToBytes(value) + if err != nil { + return err + } err := t.trie.Update(hk, v) if err != nil { return err }
179-181
: Confirm intentional no-op inUpdateContractCode
The
UpdateContractCode
method returnsnil
without performing any action. If this is intended (e.g., not applicable forstateTrie
), consider adding a comment explaining why it's a no-op to improve code clarity.
Line range hint
185-189
:MustDelete
appropriately removes keysThe method correctly deletes the hashed key from both the cache and the trie, ensuring data consistency.
194-198
:DeleteStorage
method functions correctlyThe method properly handles the deletion of storage entries, maintaining the integrity of the trie.
201-205
:DeleteAccount
correctly removes account entriesAccount deletion is handled correctly by hashing the address and updating the trie and key cache.
Line range hint
209-215
:GetKey
retrieves preimages effectivelyThe method first checks the local cache before querying the preimage store, optimizing retrieval.
244-246
:Hash
method correctly retrieves trie root hashThe method delegates to the underlying trie, ensuring consistency in hash computation.
249-253
:Copy
method maintains trie state accuratelyThe
Copy
method creates a newstateTrie
instance with shared references where appropriate, which is acceptable.
259-261
:NodeIterator
returns iterator as expectedThe method provides access to trie nodes starting from a given key, facilitating traversal.
265-267
:MustNodeIterator
aligns with conventionSuppressing errors in
MustNodeIterator
is consistent with otherMust
methods.
Line range hint
272-278
: Concurrency considerations inhashKey
The
hashKey
method uses a buffer that's not thread-safe. SincestateTrie
is documented as not safe for concurrent use, this is acceptable. Ensure that any future modifications maintain this constraint or add synchronization.
Line range hint
284-289
:getSecKeyCache
ensures key cache consistencyThe method correctly resets the cache when ownership changes, preventing stale data usage.
core/state/snapshot/snapshot_test.go (1)
`1-2`: Build constraint added to include test in specific builds

The `all_tests` build constraint has been correctly added at the top of the file to include this test in builds that specify the `all_tests` tag.

rollup/tracing/tracing.go (8)
`30-31`: Appropriate imports added for ZkTrie functionality

The added imports for `trie` and `zkproof` are necessary for the new ZkTrie features implemented in the code.
`83-83`: Addition of `ZkTrieTracer` field to `TraceEnv` structure

The `ZkTrieTracer` field is correctly added to the `TraceEnv` structure to track ZkTrie proofs per address.
`122-122`: Initialize `ZkTrieTracer` in `CreateTraceEnvHelper`

The `ZkTrieTracer` map is properly initialized in the `CreateTraceEnvHelper` function.
`440-442`: Ensure safe handling when opening storage trie and type assertion

The code safely opens the storage trie and checks whether it is a ZkTrie, handling errors appropriately.
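A sketch of the open-then-assert pattern this comment refers to. The `storageTrieOpener` interface and function name are placeholders for however the tracer obtains the storage trie; only `trie.ZkTrie` comes from this PR:

```go
package tracingsketch

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/common"
	"github.com/scroll-tech/go-ethereum/trie"
)

// storageTrieOpener abstracts however the caller obtains a storage trie
// (hypothetical interface for illustration).
type storageTrieOpener interface {
	OpenStorageTrie(addr common.Address) (any, error)
}

// asZkStorageTrie opens the storage trie for addr and checks, via a comma-ok
// type assertion, that it really is a ZkTrie before proof generation proceeds.
func asZkStorageTrie(addr common.Address, opener storageTrieOpener) (*trie.ZkTrie, error) {
	storageTrie, err := opener.OpenStorageTrie(addr)
	if err != nil {
		return nil, fmt.Errorf("open storage trie for %s: %w", addr.Hex(), err)
	}
	zkStorageTrie, ok := storageTrie.(*trie.ZkTrie)
	if !ok {
		return nil, fmt.Errorf("storage trie for %s is not a ZkTrie", addr.Hex())
	}
	return zkStorageTrie, nil
}
```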
`448-448`: Instantiate `zktrieTracer` with the ZkStorageTrie

The `zktrieTracer` is correctly instantiated using the `zkStorageTrie`.
`464-465`: Cache `zktrieTracer` in `env.ZkTrieTracer`

The code checks for existing tracers and caches the `zktrieTracer` in `env.ZkTrieTracer` appropriately.
`471-472`: Mark deletions in `env.ZkTrieTracer` when necessary

The code correctly marks deletions in the ZkTrie tracer when a key is being deleted.
`479-480`: Generate storage proofs using `zkStorageTrie.Prove` method

The code generates storage proofs with `zkStorageTrie.Prove` and handles errors properly.

trie/sync_test.go (2)
`817-817`: Consistent use of `types.EmptyRootHash`

The initialization of `srcTrie` with `types.EmptyRootHash` is correct and consistent with the rest of the codebase.
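As a reminder of what the constant is, `types.EmptyRootHash` is the root of an empty Merkle-Patricia trie, i.e. the Keccak-256 hash of the RLP encoding of the empty string. A quick check of that identity; the import paths assume the fork's module layout:

```go
package main

import (
	"fmt"

	"github.com/scroll-tech/go-ethereum/core/types"
	"github.com/scroll-tech/go-ethereum/crypto"
	"github.com/scroll-tech/go-ethereum/rlp"
)

func main() {
	enc, err := rlp.EncodeToBytes("") // RLP of the empty string is the single byte 0x80
	if err != nil {
		panic(err)
	}
	fmt.Println(crypto.Keccak256Hash(enc) == types.EmptyRootHash) // expected: true
}
```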
`851-851`: Correct usage of `types.EmptyRootHash` in database update

The `Update` function appropriately uses `types.EmptyRootHash`, ensuring consistency across updates.

core/state/statedb.go (1)
`360-360`: Simplification of `GetProof` method improves clarity

The removal of the `IsUsingZktrie` check in the `GetProof` method simplifies the code and enhances readability. By directly calling `GetProofByHash`, the method becomes more straightforward and aligns with the updated trie implementation.

cmd/utils/flags.go (1)
`286-286`: ⚠️ Potential issue

Consider the impact of changing the default GC mode to 'full'

Changing the default value of `GCModeFlag` from `GCModeArchive` to `GCModeFull` will cause nodes to prune old state data by default. This could affect users who require access to historical state data that is only available in archive nodes. Please ensure that this change is intentional and that it is communicated to users, possibly by updating documentation or release notes.

Run the following script to verify that there are no hard-coded dependencies on `GCModeArchive` elsewhere in the codebase:

✅ Verification successful
No hard-coded dependencies on `GCModeArchive` found outside `flags.go`.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Search for hard-coded references to `GCModeArchive` in the codebase.
# Expected result: No occurrences of `GCModeArchive` being used as a default value elsewhere.

rg 'GCModeArchive'
```

Length of output: 559
a2092c2

da-codec breaks it. fixed.
1. Purpose or design rationale of this PR

This PR moves in the ZkTrie implementation from its standalone repo and implements the `Commit()` function to behave as Geth expects it to behave.

2. PR title
Your PR title must follow conventional commits (as we are doing squash merge for each PR), so it must start with one of the following types:
3. Deployment tag versioning

Has the version in `params/version.go` been updated?

4. Breaking change label

Does this PR have the `breaking-change` label?

Summary by CodeRabbit
Release Notes

New Features

- `Byte32` type for managing 32-byte data structures, including hashing and conversion methods.
- `ZkTrie` functionality with improved error handling and state management.

Bug Fixes

Tests

- `Byte32`, trie nodes, and `ZkTrie` operations.

Chores

- `ZkTrie` implementation.
- `go.mod` to reflect current library usage.