[Mining] refactor: difficulty in terms of target hash #690
Conversation
Walkthrough
The changes primarily focus on updating the relay difficulty calculation mechanism across various components.
The CI will now also run the e2e tests on devnet, which increases the time it takes to complete all CI checks. You may need to run GCP workloads (requires changing the namespace to 690).
The image is going to be pushed after the next commit. If you also want to run E2E tests, please add the devnet-test-e2e label.
Actionable comments posted: 2
Outside diff range, codebase verification and nitpick comments (2)
pkg/relayer/miner/options.go (1)
5-8
: Approve the renaming and parameter change. The function WithRelayDifficultyTargetHash correctly updates the miner's configuration to use a hash-based difficulty target. This aligns with the PR's objectives.
Suggest adding explanatory comments.
Consider adding comments to explain the significance of using a hash-based approach for the difficulty target, which could help future maintainers understand the rationale behind this design choice.
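For illustration only, a commented version of such an option could look like the sketch below. The MinerOption type and the miner struct are simplified stand-ins inferred from this review, not the repository's exact definitions.

```go
package miner

// miner is a trimmed-down stand-in for the real struct; only the field
// relevant to this option (named in this review) is shown.
type miner struct {
	relayDifficultyTargetHash []byte
}

// MinerOption configures a miner (the real option type may differ).
type MinerOption func(*miner)

// WithRelayDifficultyTargetHash sets the target hash that relay hashes are
// compared against. Using a hash-based target (instead of a leading-zero-bit
// count) allows fine-grained difficulty adjustments and a direct byte-wise
// comparison against each relay hash.
func WithRelayDifficultyTargetHash(targetHash []byte) MinerOption {
	return func(m *miner) {
		m.relayDifficultyTargetHash = targetHash
	}
}
```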
pkg/crypto/rand/integer.go (1)
Line range hint
1-29
: Approve the current implementation but recommend addressing the concerns. The function SeededInt63 is implemented correctly according to current standards. However, the comments raise important issues about language-agnosticism and entropy reduction, which are critical for maintaining robustness and security in a blockchain environment.
Recommend addressing the TODO comments.
These concerns should be addressed in future development cycles to ensure that the pseudo-random number generator is both robust and compatible across different programming environments.
Would you like me to open a GitHub issue to track the resolution of these concerns?
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files ignored due to path filters (1)
x/proof/types/params.pb.go
is excluded by !**/*.pb.go
Files selected for processing (27)
- api/poktroll/proof/params.pulsar.go (15 hunks)
- e2e/tests/parse_params_test.go (1 hunks)
- e2e/tests/update_params_test.go (1 hunks)
- pkg/client/interface.go (1 hunks)
- pkg/crypto/protocol/difficulty.go (1 hunks)
- pkg/crypto/protocol/hash.go (1 hunks)
- pkg/crypto/rand/integer.go (1 hunks)
- pkg/relayer/miner/miner.go (5 hunks)
- pkg/relayer/miner/miner_test.go (3 hunks)
- pkg/relayer/miner/options.go (1 hunks)
- proto/poktroll/proof/params.proto (1 hunks)
- telemetry/event_counters.go (1 hunks)
- tests/integration/tokenomics/relay_mining_difficulty_test.go (1 hunks)
- testutil/testrelayer/relays.go (3 hunks)
- x/proof/keeper/msg_server_submit_proof.go (4 hunks)
- x/proof/keeper/msg_server_submit_proof_test.go (8 hunks)
- x/proof/keeper/msg_server_update_param.go (1 hunks)
- x/proof/keeper/msg_server_update_param_test.go (2 hunks)
- x/proof/keeper/msg_update_params_test.go (1 hunks)
- x/proof/keeper/params_test.go (1 hunks)
- x/proof/types/message_update_param.go (2 hunks)
- x/proof/types/message_update_param_test.go (1 hunks)
- x/proof/types/params.go (7 hunks)
- x/service/types/relay.go (4 hunks)
- x/tokenomics/keeper/update_relay_mining_difficulty.go (4 hunks)
- x/tokenomics/keeper/update_relay_mining_difficulty_test.go (3 hunks)
- x/tokenomics/module/abci.go (2 hunks)
Additional comments not posted (53)
pkg/crypto/protocol/hash.go (1)
1-10
: Approve the new utility function for hash computation. The function GetHashFromBytes is well-implemented and documented. It provides a centralized way to compute SHA256 hashes, which is crucial for consistency across the codebase.
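For reference, a minimal sketch of such a centralized helper is shown below; the exact signature in the PR is not quoted in this review, so treat the parameter name and return type as assumptions.

```go
package protocol

import "crypto/sha256"

// GetHashFromBytes returns the SHA-256 hash of the given relay bytes.
// Centralizing this keeps every module hashing relays the same way.
func GetHashFromBytes(relayBz []byte) [sha256.Size]byte {
	return sha256.Sum256(relayBz)
}
```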
pkg/crypto/protocol/difficulty.go (1)
9-24
: Approve the changes and suggest verification of the new difficulty calculation method. The transition from CountHashDifficultyBits to GetDifficultyFromHash aligns with the PR's objectives to enhance precision and security in difficulty assessments. The use of constants for representing difficulty levels is appropriate and well-implemented.
Suggest additional verification.
Given the critical nature of difficulty calculation in blockchain contexts, consider adding more comprehensive tests or peer reviews to ensure the accuracy and security of the new method.
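For intuition, the hash-based difficulty discussed here can be sketched as the ratio between the easiest ("difficulty 1") hash and the observed hash, both read as big-endian integers. The names below are assumptions based on this review (later comments rename the calibration value to BaseRelayDifficultyHash), not the exact implementation.

```go
package protocol

import (
	"bytes"
	"math/big"
)

// getDifficultyFromHashSketch returns the ratio of the all-0xff "difficulty 1"
// hash to the given hash, so smaller hashes map to larger difficulties.
// NB: for extremely small hashes this ratio overflows int64 (see the overflow
// note later in this review).
func getDifficultyFromHashSketch(hashBz [32]byte) int64 {
	difficulty1HashInt := new(big.Int).SetBytes(bytes.Repeat([]byte{0xff}, 32))
	hashInt := new(big.Int).SetBytes(hashBz[:])
	return new(big.Int).Div(difficulty1HashInt, hashInt).Int64()
}
```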
Verification successful
Test coverage for GetDifficultyFromHash confirmed.
The function GetDifficultyFromHash is utilized in multiple locations, including test files such as x/proof/keeper/msg_server_submit_proof_test.go. This indicates that the function's accuracy and behavior are being verified through tests.
- x/proof/keeper/msg_server_submit_proof_test.go: Contains test cases for GetDifficultyFromHash.
.Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the accuracy of hash difficulty calculation.
# Test: Search for test cases related to `GetDifficultyFromHash`. Expect: Comprehensive test coverage.
rg --type python -A 5 $'GetDifficultyFromHash'

Length of output: 76
Script:
# !/bin/bash
# Description: Verify the accuracy of hash difficulty calculation.
# Test: Search for test cases related to `GetDifficultyFromHash`. Expect: Comprehensive test coverage.
rg --type go -A 5 'GetDifficultyFromHash'

Length of output: 2580
x/proof/types/message_update_param_test.go (1)
40-40
: Updated test cases to reflect new parameter names and types.
The test cases correctly reflect the new parameter ParamRelayDifficultyTargetHash and its associated types (AsString and AsBytes). This is consistent with the PR's objective to refactor difficulty in terms of the target hash.
- Line 40: The test case checks for an invalid string type, which is appropriate as it ensures robustness in type handling.
- Lines 48-49: The valid case with the correct byte array representation of the hash is tested, ensuring the system handles valid inputs correctly.
Also applies to: 48-49
x/proof/keeper/msg_update_params_test.go (1)
45-46
: Integration of RelayDifficultyTargetHash into the Params struct.
The addition of RelayDifficultyTargetHash to the Params struct is correctly implemented. This change aligns with the overall objective of the PR to refactor the difficulty measurement system.
- Line 45: The field is initialized with its default value, which is a good practice to ensure that all parameters have sensible defaults.
- Line 46: The test case "valid: send minimal params" includes this new parameter, which helps verify that the system behaves as expected when provided with valid input.
proto/poktroll/proof/params.proto (1)
15-16
: Updated field definition in the Params message.
The new field relay_difficulty_target_hash is correctly defined as a byte array, which is suitable for representing hash values. The use of annotations like jsontag ensures that the field is serialized correctly in JSON, which is crucial for interoperability and debugging.
x/proof/keeper/msg_server_update_param.go (1)
26-37
: Refactored UpdateParam function to handle the new RelayDifficultyTargetHash.
The changes in the UpdateParam function are correctly implemented to handle the new RelayDifficultyTargetHash parameter. The function now properly checks the type of the parameter, validates it, and updates the system's parameters accordingly.
- Lines 26-27: The type check ensures that the parameter is provided as a byte array, which is appropriate for a hash value.
- Line 33: The validation logic is crucial to ensure that the hash meets any necessary criteria (e.g., length, format).
- Line 37: The parameter is correctly assigned to the system's parameters, ensuring that the new difficulty measurement is integrated into the system.
x/tokenomics/module/abci.go (2)
4-4
: Import of crypto/sha256 is appropriate for hash operations.
The import is necessary for the new functionality to calculate difficulty based on a hash. This aligns with the PR's objectives.
86-90
: Refactored difficulty calculation logic in EndBlocker.
The new logic uses a hash-based method to calculate difficulty, which is a significant change from the previous bit-based approach. This should enhance precision and security as mentioned in the PR summary. However, ensure that the GetDifficultyFromHash function is thoroughly tested, especially since it involves critical telemetry data.
Would you like me to assist in writing additional unit tests for this function?
x/proof/types/message_update_param.go (2)
63-73
: New method paramTypeIsBytes added for byte slice validation.
This method enhances the validation logic by checking if the parameter type is a byte slice, which is critical for parameters like ParamRelayDifficultyTargetHash. This addition helps ensure data integrity and type safety.
50-51
: Updated validation logic in ValidateBasic.
The updated validation ensures that the ParamRelayDifficultyTargetHash is correctly typed as a byte slice. This is a crucial check for the integrity of relay difficulty parameters.
pkg/relayer/miner/miner.go (2)
121-121
: Refactored relay eligibility assessment logic.
The new byte comparison logic for assessing relay eligibility is a critical change. Ensure that this logic correctly compares the relay hash against the target hash to determine eligibility.
Verification successful
Refactored relay eligibility assessment logic verified.
The byte comparison logic for assessing relay eligibility correctly compares the relay hash against the target hash to determine eligibility. The implementation aligns with the intended purpose of the function.
- pkg/relayer/miner/miner.go: The logic correctly returns different results based on the comparison outcome.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify that the new byte comparison logic is correctly assessing relay eligibility.
# Test: Search for usage of `bytes.Compare` in relay eligibility checks. Expect: Correct implementation and usage.
rg --type go -A 5 $'bytes.Compare'

Length of output: 2244
Script:
# !/bin/bash
# Description: Gather more context around the usage of `bytes.Compare` in `pkg/relayer/miner/miner.go` and its related functions.
# We will search for the function definition and its surrounding code.

# Search for the function definition containing the `bytes.Compare` usage
rg --type go -A 20 -B 10 'bytes.Compare' pkg/relayer/miner/miner.go

Length of output: 758
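The eligibility check verified above amounts to a single byte-wise comparison. A self-contained sketch is shown below; the function name is assumed, and whether the boundary is strict ("less than") or inclusive is an implementation detail of the actual PR.

```go
package miner

import "bytes"

// isRelayVolumeApplicable reports whether a relay qualifies for rewards:
// its hash, compared as big-endian bytes, must not exceed the target hash.
func isRelayVolumeApplicable(relayHash, targetHash []byte) bool {
	return bytes.Compare(relayHash, targetHash) <= 0
}
```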
29-33
: Updated miner struct to use relayDifficultyTargetHash.
The change from a bit-based to a hash-based difficulty target is a significant refactor that aligns with the PR's objectives. Ensure that this new field is correctly initialized and used throughout the system.
pkg/relayer/miner/miner_test.go (3)
18-18
: Added import for protocol.
This import is necessary for the new hash-based difficulty calculation tests. It ensures that the tests can utilize the necessary functions from the protocol package.
26-26
: Changed test setup to use testTargetHash.
The update from a constant difficulty value to a variable target hash reflects the changes in the main codebase. This ensures that the tests remain relevant and accurate for the new difficulty calculation method.
138-138
: Updated hash calculation in tests.
The change to use protocol.GetHashFromBytes in tests aligns with the changes in the main codebase. This ensures consistency and accuracy in testing the new difficulty calculation logic.
x/proof/keeper/params_test.go (2)
23-34
: Updated test cases to use relayDifficultyTargetHash.
The test cases have been correctly updated to use the new parameter relayDifficultyTargetHash instead of minRelayDifficultyBits. This change aligns with the PR's objective to refactor the difficulty in terms of the target hash.
40-40
: Ensure proper error handling in validation logic.
The validation logic correctly checks for errors and matches error messages as expected. This is crucial for ensuring that the parameter validation behaves as intended under different scenarios.
x/service/types/relay.go (2)
20-20
: Centralized hashing logic implemented correctly.
The changes correctly utilize the protocol.GetHashFromBytes function for hashing operations in the Relay, RelayRequest, and RelayResponse structs. This centralization should improve maintainability and reusability of the hashing logic.
Also applies to: 37-37, 79-79
20-20
: Verify the new hashing logic integration.
While the changes are approved, it's important to verify that the new hashing logic integrates smoothly with the rest of the system, especially considering the critical role of hashing in security and data integrity.
Verification successful
New hashing logic integration verified.
The new hashing logic using GetHashFromBytes is integrated correctly across the system. The function is utilized in multiple files, ensuring consistent hashing of relay bytes using the SHA-256 algorithm.
x/service/types/relay.go
x/proof/keeper/msg_server_submit_proof.go
pkg/relayer/miner/miner_test.go
pkg/relayer/miner/miner.go
testutil/testrelayer/relays.go
pkg/crypto/protocol/hash.go
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify that the new hashing logic is integrated correctly across the system.
# Test: Search for the usage of `GetHashFromBytes` in the codebase. Expect: Correct integration without issues.
rg --type python -A 5 $'GetHashFromBytes'

Length of output: 71
Script:
# !/bin/bash
# Description: Verify that the new hashing logic is integrated correctly across the system.
# Test: Search for the usage of `GetHashFromBytes` in the Go codebase. Expect: Correct integration without issues.
rg --type go -A 5 $'GetHashFromBytes'

Length of output: 3522
x/proof/keeper/msg_server_update_param_test.go (1)
19-19
: Updated test cases and assertions for RelayDifficultyTargetHash.
The test cases and assertions have been correctly updated to reflect the new RelayDifficultyTargetHash parameter. This ensures that the tests are aligned with the updated system parameters.
Also applies to: 27-27, 32-33, 38-39, 42-42
telemetry/event_counters.go (1)
153-161
: Updated telemetry function to use the new difficulty parameter.
The RelayMiningDifficultyGauge function has been correctly updated to use the new difficulty parameter of type int64. This change should provide more accurate tracking of relay mining difficulty.
x/proof/types/params.go (5)
56-56
: Default parameters setup.
The DefaultParams function now uses the new DefaultRelayDifficultyTargetHash. This is consistent with the refactoring goal and should work assuming the default hash is correctly set.
68-69
: Validation function setup for the new parameter.
The ParamSetPairs method now includes ValidateRelayDifficultyTargetHash for the new RelayDifficultyTargetHash parameter. This is a crucial setup for ensuring the validity of the new parameter across the system.
92-92
: Basic validation logic for new parameters.
The ValidateBasic function now includes a call to ValidateRelayDifficultyTargetHash, ensuring that the new relay difficulty target hash is validated at a basic level before being accepted into the system.
113-128
: Detailed validation logic for RelayDifficultyTargetHash.
The ValidateRelayDifficultyTargetHash function provides a detailed check for the length of the target hash, ensuring it is exactly 32 bytes. This is a critical validation step for the security and correctness of the relay difficulty mechanism.
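A sketch of the kind of 32-byte length check described here is shown below; the real validator presumably returns module-specific error types rather than plain fmt errors.

```go
package types

import (
	"crypto/sha256"
	"fmt"
)

// validateRelayDifficultyTargetHashSketch checks that the parameter is a
// byte slice of exactly sha256.Size (32) bytes.
func validateRelayDifficultyTargetHashSketch(v interface{}) error {
	targetHash, ok := v.([]byte)
	if !ok {
		return fmt.Errorf("invalid parameter type: %T", v)
	}
	if len(targetHash) != sha256.Size {
		return fmt.Errorf("invalid RelayDifficultyTargetHash length: %d, expected: %d", len(targetHash), sha256.Size)
	}
	return nil
}
```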
40-46
: Review the function signature change in NewParams.
The change from minRelayDifficultyBits to relayDifficultyTargetHash in the NewParams function is a critical alteration. This change should be verified across the codebase to ensure that all calls to NewParams have been updated accordingly.
Verification successful
All calls to NewParams in x/proof/types/params.go are updated.
The function signature change from minRelayDifficultyBits to relayDifficultyTargetHash in NewParams has been verified, and all calls to this function within x/proof/types/params.go match the new signature.
- x/proof/types/params.go: All calls to NewParams are updated to use the new parameters.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify all function calls to `NewParams` match the new signature.
# Test: Search for the function usage. Expect: Only occurrences of the new signature.
rg --type go -A 5 $'NewParams'

Length of output: 5596
Script:
# !/bin/bash
# Description: Verify all calls to `NewParams` in the x/proof/types/params.go file match the new signature.

# Search for the function calls to `NewParams` within the x/proof/types/params.go file.
rg --type go -A 5 'NewParams' x/proof/types/params.go

Length of output: 429
testutil/testrelayer/relays.go (2)
15-15
: Updated import statement for hash function.
The import statement for protocol has been added, which is necessary for the new GetHashFromBytes function usage. This change is consistent with the refactor and should be checked to ensure no conflicts with existing imports.
60-60
: Updated hash function calls in NewUnsignedMinedRelay and NewSignedMinedRelay.
The replacement of servicetypes.GetHashFromBytes with protocol.GetHashFromBytes in these functions is aligned with the refactor goals. It's important to ensure that the new function provides the same or improved functionality and that all related tests are updated.
Also applies to: 114-114
Verification successful
Updated hash function calls in NewUnsignedMinedRelay and NewSignedMinedRelay.
The replacement of servicetypes.GetHashFromBytes with protocol.GetHashFromBytes in these functions is aligned with the refactor goals. The new hash function protocol.GetHashFromBytes is used correctly in both the NewUnsignedMinedRelay and NewSignedMinedRelay functions. Additionally, the old hash function servicetypes.GetHashFromBytes is no longer present in the codebase.
- NewUnsignedMinedRelay function implementation: correctly uses protocol.GetHashFromBytes to generate relayHashArr.
- NewSignedMinedRelay function implementation: correctly uses protocol.GetHashFromBytes to generate relayHashArr.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify that the new hash function is used consistently and correctly across the test suite.
# Test: Search for the old hash function usage. Expect: No occurrences.
rg --type go $'servicetypes.GetHashFromBytes'

Length of output: 44
Script:
# !/bin/bash
# Description: Verify the usage of the new hash function and check the relevant function implementations.

# Search for the usage of the new hash function `protocol.GetHashFromBytes`
rg --type go 'protocol.GetHashFromBytes'

# Check the implementation of `NewUnsignedMinedRelay` function
rg --type go -A 10 'func NewUnsignedMinedRelay'

# Check the implementation of `NewSignedMinedRelay` function
rg --type go -A 10 'func NewSignedMinedRelay'

Length of output: 2010
Script:
# !/bin/bash
# Description: Extract the full implementation of the functions to verify the usage of the new hash function.

# Extract the full implementation of `NewUnsignedMinedRelay` function
rg --type go -A 50 'func NewUnsignedMinedRelay'

# Extract the full implementation of `NewSignedMinedRelay` function
rg --type go -A 50 'func NewSignedMinedRelay'

Length of output: 6133
x/tokenomics/keeper/update_relay_mining_difficulty.go (2)
8-8
: Updated import to use math/big instead of math/bits.
The switch to math/big is crucial for handling large numbers in the new difficulty calculation method. This change should be verified to ensure it doesn't introduce any precision issues or performance degradation.
57-57
: New logic for computing and scaling relay mining difficulty.
The new function scaleDifficultyTargetHash and the updated logic in ComputeNewDifficultyTargetHash reflect the shift to a more dynamic difficulty adjustment mechanism. These changes are significant and should be thoroughly tested to ensure they behave as expected under various network conditions.
Also applies to: 124-124, 127-154
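Conceptually, the scaling step can be sketched as below: read the target hash as a big-endian integer, multiply it by ratio = targetNumRelays / newRelaysEma, and re-encode it at the original width. Rounding, clamping, and naming here are assumptions rather than the PR's exact code.

```go
package keeper

import (
	"bytes"
	"math/big"
)

// scaleDifficultyTargetHashSketch scales a target hash by the given ratio.
func scaleDifficultyTargetHashSketch(targetHash []byte, ratio float64) []byte {
	targetInt := new(big.Float).SetInt(new(big.Int).SetBytes(targetHash))
	scaledInt, _ := new(big.Float).Mul(targetInt, big.NewFloat(ratio)).Int(nil)

	scaledBz := scaledInt.Bytes()
	if len(scaledBz) > len(targetHash) {
		// A ratio > 1 pushed the value past the widest representable target;
		// clamp to the easiest possible target hash.
		return bytes.Repeat([]byte{0xff}, len(targetHash))
	}

	// Left-pad with zeros to preserve the original hash width.
	out := make([]byte, len(targetHash))
	copy(out[len(out)-len(scaledBz):], scaledBz)
	return out
}
```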
Verification successful
New logic for computing and scaling relay mining difficulty is implemented and tested.
The new functions scaleDifficultyTargetHash and ComputeNewDifficultyTargetHash are used in the codebase, and there are no occurrences of the old function RelayMiningTargetHashToDifficulty.
- scaleDifficultyTargetHash is found in x/tokenomics/keeper/update_relay_mining_difficulty.go.
- ComputeNewDifficultyTargetHash is found in x/tokenomics/keeper/update_relay_mining_difficulty.go and x/tokenomics/keeper/update_relay_mining_difficulty_test.go.
These changes should be thoroughly tested to ensure they behave as expected under various network conditions.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify that the new difficulty calculation is used consistently and correctly across the system.
# Test: Search for the old difficulty function usage. Expect: No occurrences.
rg --type go $'RelayMiningTargetHashToDifficulty'

Length of output: 48
Script:
# !/bin/bash
# Description: Verify that the new difficulty calculation functions are used consistently and correctly across the system.

# Test: Search for the new difficulty function `scaleDifficultyTargetHash` usage.
rg --type go 'scaleDifficultyTargetHash'

# Test: Search for the new difficulty function `ComputeNewDifficultyTargetHash` usage.
rg --type go 'ComputeNewDifficultyTargetHash'

Length of output: 1054
tests/integration/tokenomics/relay_mining_difficulty_test.go (1)
123-124
: Updated assertions in test case for new relay mining difficulty.
The test case TestUpdateRelayMiningDifficulty_NewServiceSeenForTheFirstTime now correctly asserts against prooftypes.DefaultRelayDifficultyTargetHashHex for both previous and new target hash hex-encoded values. This change ensures that the tests are aligned with the new difficulty calculation mechanism.
x/tokenomics/keeper/update_relay_mining_difficulty_test.go (3)
130-131
: Update to TargetHash initialization is consistent with the new difficulty approach.
The modification to initialize TargetHash with at least 8 leading zero bits aligns with the updated difficulty mechanism described in the PR summary.
219-223
: Validation of computed difficulty hash in TestComputeNewDifficultyHash.
The use of bytes.Compare in the assertion checks is consistent with the new difficulty mechanism. Ensure that the function ComputeNewDifficultyTargetHash is correctly implementing the expected logic.
Verification successful
Validation of computed difficulty hash in TestComputeNewDifficultyHash.
The implementation of the ComputeNewDifficultyTargetHash function correctly follows the new difficulty standards. The use of bytes.Compare in the assertion checks is consistent with the new difficulty mechanism.
- x/tokenomics/keeper/update_relay_mining_difficulty.go: ComputeNewDifficultyTargetHash function implementation.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the implementation of `ComputeNewDifficultyTargetHash`.
# Test: Search for the function implementation. Expect: Correct logic according to new difficulty standards.
ast-grep --lang go --pattern $'func ComputeNewDifficultyTargetHash($_, $_) { $$$ }'

Length of output: 86
Script:
# !/bin/bash
# Description: Search for any function named `ComputeNewDifficultyTargetHash` in Go files.

# Use ripgrep to search for the function definition.
rg 'func ComputeNewDifficultyTargetHash' --glob '*.go' -A 10

Length of output: 1123
Script:
# !/bin/bash
# Description: Extract the complete implementation of the `ComputeNewDifficultyTargetHash` function.

# Use ripgrep to search for the function definition and print the complete function.
rg 'func ComputeNewDifficultyTargetHash' --glob '*.go' -A 50

Length of output: 5029
148-154
: Ensure the correctness of the TargetHash comparison.
The use of bytes.Compare to ensure that difficulty.TargetHash is less than or equal to expectedRelayMiningDifficulty.TargetHash is appropriate for the new hash-based difficulty verification. However, ensure that the comparison logic is thoroughly tested to handle all edge cases.
Verification successful
The TargetHash comparison logic is adequately tested.
The test cases in TestUpdateRelayMiningDifficulty_FirstDifficulty cover various scenarios, including edge cases, ensuring the correctness of the bytes.Compare logic for TargetHash.
- update_relay_mining_difficulty_test.go: TestUpdateRelayMiningDifficulty_FirstDifficulty
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the test coverage for the comparison logic in `TestUpdateRelayMiningDifficulty_FirstDifficulty`.
# Test: Search for test cases related to the comparison logic. Expect: Adequate coverage of edge cases.
rg --type go -A 5 $'bytes.Compare'

Length of output: 2244
Script:
# !/bin/bash
# Description: List all test function definitions in `update_relay_mining_difficulty_test.go` to understand the coverage of `bytes.Compare`.

# Find all test function definitions in the file
rg --type go --line-number --no-filename '^func Test' x/tokenomics/keeper/update_relay_mining_difficulty_test.go

Length of output: 368
Script:
# !/bin/bash
# Description: Extract the content of `TestUpdateRelayMiningDifficulty_FirstDifficulty` from `update_relay_mining_difficulty_test.go`.

# Extract lines from the start of `TestUpdateRelayMiningDifficulty_FirstDifficulty` to the start of the next function
sed -n '/^func TestUpdateRelayMiningDifficulty_FirstDifficulty/,/^func Test/p' x/tokenomics/keeper/update_relay_mining_difficulty_test.go

Length of output: 2357
e2e/tests/parse_params_test.go (1)
135-136
: Updated handling of ParamRelayDifficultyTargetHash in newProofMsgUpdateParams.
The change to handle ParamRelayDifficultyTargetHash as a byte slice instead of an integer reflects the new approach to difficulty verification. Ensure that the type conversion and assignment are correctly implemented throughout the system.
Verification successful
Verified handling of ParamRelayDifficultyTargetHash as []byte.
The type conversion and assignment for ParamRelayDifficultyTargetHash are consistently implemented across the codebase. The following files confirm the correct handling:
x/proof/types/message_update_param.go
x/proof/types/message_update_param_test.go
x/proof/types/params.go
x/proof/keeper/msg_server_update_param.go
x/proof/keeper/msg_server_update_param_test.go
e2e/tests/parse_params_test.go
e2e/tests/update_params_test.go
No issues were found with the type handling and assignment.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the correct implementation and usage of `ParamRelayDifficultyTargetHash` across the system.
# Test: Search for usages of `ParamRelayDifficultyTargetHash`. Expect: Correct type handling and assignment.
rg --type go -A 5 $'ParamRelayDifficultyTargetHash'

Length of output: 4284
pkg/client/interface.go (1)
338-338
: Updated method signature in the ProofParams interface.
The change from GetMinRelayDifficultyBits() to GetRelayDifficultyTargetHash() with a return type of []byte is consistent with the new difficulty mechanism. Ensure that all implementations of this interface and its callers handle the new method correctly.
Verification successful
Updated method signature in the ProofParams interface.
The change from GetMinRelayDifficultyBits() to GetRelayDifficultyTargetHash() with a return type of []byte is consistent with the new difficulty mechanism. All implementations and usages of this method have been correctly updated.
- Verified implementations in: x/proof/types/params.pb.go, api/poktroll/proof/params.pulsar.go
- Verified usage in: pkg/relayer/miner/miner.go
- Verified interface definition in: pkg/client/interface.go
No lingering references to GetMinRelayDifficultyBits were found.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify correct implementation and usage of `GetRelayDifficultyTargetHash` in the codebase.
# Test: Search for implementations and usages of `GetRelayDifficultyTargetHash`. Expect: Correct handling and integration.
rg --type go -A 5 $'GetRelayDifficultyTargetHash'

Length of output: 1297
Script:
# !/bin/bash
# Description: Verify that there are no lingering references to the old method `GetMinRelayDifficultyBits`.
# Test: Search for any remaining references to `GetMinRelayDifficultyBits`. Expect: No results.
rg --type go $'GetMinRelayDifficultyBits'

Length of output: 40
e2e/tests/update_params_test.go (1)
373-375
: Update to parameter assignment in assertExpectedModuleParamsUpdated.
The change from MinRelayDifficultyBits to RelayDifficultyTargetHash with a type conversion from uint64 to []byte reflects the updated difficulty mechanism. Ensure that this change is consistently applied across related functions and modules.
Verification successful
Update to parameter assignment in assertExpectedModuleParamsUpdated is consistent.
The change from MinRelayDifficultyBits to RelayDifficultyTargetHash with the type conversion from uint64 to []byte has been applied consistently across related functions and modules.
x/tokenomics/keeper/update_relay_mining_difficulty.go
x/proof/keeper/msg_server_submit_proof_test.go
x/proof/types/params.pb.go
x/proof/types/message_update_param_test.go
x/proof/keeper/msg_server_update_param.go
x/proof/keeper/msg_server_update_param_test.go
e2e/tests/update_params_test.go
api/poktroll/proof/params.pulsar.go
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the consistent application of the `RelayDifficultyTargetHash` parameter across the system.
# Test: Search for the usage of `RelayDifficultyTargetHash`. Expect: Consistent application and correct type handling.
rg --type go -A 5 $'RelayDifficultyTargetHash'

Length of output: 26817
x/proof/keeper/msg_server_submit_proof.go (2)
10-10
: Added import for crypto/sha256.
This is necessary for the new hash-based difficulty verification logic.
211-211
: Updated difficulty validation logic.
The function validateRelayDifficulty is now used to ensure the relay's difficulty meets the new hash-based criteria.
api/poktroll/proof/params.pulsar.go (10)
21-25
: Field descriptors added for new and updated fields.
The addition of fd_Params_relay_difficulty_target_hash and updates to other descriptors are necessary to reflect the changes in the Params struct.
102-104
: Updated Range method to handle the new hash field.
The method now correctly handles the new RelayDifficultyTargetHash field, ensuring it is processed when populated.
141-142
: Updated Has method for the new field.
The method now correctly checks for the presence of the RelayDifficultyTargetHash field.
165-166
: Updated Clear method for the new field.
The method now correctly clears the RelayDifficultyTargetHash field when required.
189-191
: Updated Get method for the new field.
The method now correctly retrieves the value of the RelayDifficultyTargetHash field.
221-222
: Updated Set method for the new field.
The method now correctly sets the value of the RelayDifficultyTargetHash field.
254-255
: Correct handling of immutability for RelayDifficultyTargetHash.
The Mutable method correctly identifies that the RelayDifficultyTargetHash field is not mutable, which is appropriate given its usage.
273-274
: Correct initialization for the new field in the NewField method.
The method now correctly initializes the RelayDifficultyTargetHash field to an empty byte slice when needed.
351-353
: Correct handling of size and marshaling for the new field.
The ProtoMethods implementation now correctly handles the size calculation and marshaling for the RelayDifficultyTargetHash field.
Also applies to: 419-424
Line range hint
476-507
: Correct unmarshaling logic for the new field.
The unmarshal function now correctly handles the RelayDifficultyTargetHash field, ensuring it is properly read from the input buffer.
x/proof/keeper/msg_server_submit_proof_test.go (4)
5-6
: New imports added: crypto/sha256 and encoding/hex.
These imports are necessary for handling cryptographic functions and encoding related to the new hash-based difficulty calculation.
49-53
: Refactor of the testProofParams struct to use hash-based difficulty.
The change from MinRelayDifficultyBits to RelayDifficultyTargetHash aligns with the PR's objective to shift from bit-based to hash-based difficulty calculations. This is a critical change and should be carefully tested to ensure it integrates well with the rest of the system.
Verification successful
Verification of RelayDifficultyTargetHash integration
The field RelayDifficultyTargetHash is well-integrated across the codebase, including parameter validation, protobuf definitions, and test cases. The changes align with the shift from bit-based to hash-based difficulty calculations and appear to be consistently implemented.
- Files with notable changes:
x/tokenomics/keeper/update_relay_mining_difficulty.go
x/proof/types/params.pb.go
x/proof/types/params.go
x/proof/keeper/msg_server_update_param.go
x/proof/keeper/msg_server_submit_proof_test.go
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the integration of the new field `RelayDifficultyTargetHash` in the system.
# Test: Search for the usage of `RelayDifficultyTargetHash`. Expect: Proper usage in relevant contexts.
rg --type go 'RelayDifficultyTargetHash'

Length of output: 8451
570-574
: Introduction of lowTargetHashArr and highExpectedTargetDifficulty calculation.
This new variable and the logic to calculate highExpectedTargetDifficulty are crucial for handling the new hash-based difficulty. It's important to ensure that these calculations are accurate and optimized for performance.
Verification successful
Verification successful: the new difficulty calculation using lowTargetHashArr and highExpectedTargetDifficulty is accurate and optimized for performance.
- The GetDifficultyFromHash function uses big integer arithmetic to ensure accuracy.
- The Difficulty1HashInt is precomputed, ensuring efficient performance.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the accuracy and performance of the new difficulty calculation.
# Test: Search for the usage of `lowTargetHashArr` and `highExpectedTargetDifficulty`. Expect: Optimal and correct usage.
rg --type go -A 5 'lowTargetHashArr|highExpectedTargetDifficulty'

Length of output: 1172
Script:
# !/bin/bash
# Locate the implementation of the GetDifficultyFromHash function to verify its accuracy and performance.
ast-grep --lang go --pattern 'func GetDifficultyFromHash($_) $_'

Length of output: 545
Script:
# !/bin/bash
# Locate the declaration and initialization of Difficulty1HashInt to verify it is precomputed.
rg --type go 'Difficulty1HashInt'

Length of output: 228
Line range hint
1410-1432
: Updated function getClosestRelayDifficulty to handle the new hash-based difficulty.
The function now returns an int64 instead of a uint64, reflecting the change in how difficulty is calculated. This update is crucial for the system's integrity and should be thoroughly tested.
Verification successful
Verification successful: the function getClosestRelayDifficulty has been updated to return int64 and is correctly integrated and tested in the codebase. No other instances of the function were found outside the test file.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the correct functionality of the updated `getClosestRelayDifficulty` function.
# Test: Search for the usage of `getClosestRelayDifficulty`. Expect: Correct implementation and integration.
rg --type go -A 5 'getClosestRelayDifficulty'

Length of output: 1443
Script:
# !/bin/bash
# Description: Verify all instances where `getClosestRelayDifficulty` is called and ensure correct handling of `int64` return type.

# Find all instances of `getClosestRelayDifficulty` and show surrounding lines for context.
rg --type go -A 10 'getClosestRelayDifficulty'

Length of output: 2410
Script:
# !/bin/bash
# Description: Verify all instances where `getClosestRelayDifficulty` is called and ensure correct handling of `int64` return type.

# Find all instances of `getClosestRelayDifficulty`.
rg --type go 'getClosestRelayDifficulty'

Length of output: 405
x/proof/types/params.go
Outdated
KeyMinRelayDifficultyBits = []byte("MinRelayDifficultyBits")
ParamRelayDifficultyTargetHash = "relay_difficulty_target_hash"
DefaultRelayDifficultyTargetHashHex = "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
DefaultRelayDifficultyTargetHash, _ = hex.DecodeString(DefaultRelayDifficultyTargetHashHex) // TODO_MAINNET(#142, #401): Determine the default value.
Introduced new constants and parameters for relay difficulty target hash.
The introduction of ParamRelayDifficultyTargetHash and DefaultRelayDifficultyTargetHashHex aligns with the PR's objective to refactor the difficulty mechanism. However, the use of a hardcoded default hash (ffffffff...) in DefaultRelayDifficultyTargetHashHex should be reviewed to ensure it's appropriate for all network conditions.
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (4)
- e2e/tests/parse_params_test.go (2 hunks)
- e2e/tests/update_params.feature (2 hunks)
- e2e/tests/update_params_test.go (2 hunks)
- x/proof/types/message_update_param_test.go (2 hunks)
Files skipped from review as they are similar to previous changes (3)
- e2e/tests/parse_params_test.go
- e2e/tests/update_params_test.go
- x/proof/types/message_update_param_test.go
Additional comments not posted (2)
e2e/tests/update_params.feature (2)
93-93
: Ensure correct handling of updated parameter 'proof_request_probability'.
The parameter 'proof_request_probability' is used in an authorization context, which is sensitive. Ensure that the handling of this parameter, especially its type as a float, is consistent throughout the codebase.
Verification successful
The handling of the proof_request_probability parameter is consistent across the codebase.
The parameter proof_request_probability is consistently defined and used as a float in various contexts, including configuration files, test files, and documentation. No inconsistencies were found.
- config.yml: Defined with a float value.
- Makefile: Referenced in update commands.
- x/tokenomics/keeper/settle_pending_claims.go: Used in probabilistic checks.
- x/proof/types/params.go: Defined as a float parameter.
- proto/poktroll/proof/params.proto: Defined as a float in protobuf.
- tools/scripts/params/proof_proof_request_probability.json: Defined with a float value.
- docs/static/openapi.yml: Documented as a float parameter.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the consistent handling of the 'proof_request_probability' parameter.
# Test: Search for occurrences and handling of 'proof_request_probability' across the codebase.
rg --type python -A 5 $'proof_request_probability'

Length of output: 80
Script:
# !/bin/bash
# Description: Verify the consistent handling of the 'proof_request_probability' parameter.
# Test: Search for occurrences and handling of 'proof_request_probability' across the codebase.
rg --glob '*.py' -A 5 'proof_request_probability'

Length of output: 198
Script:
# !/bin/bash
# Description: Verify the consistent handling of the 'proof_request_probability' parameter.
# Test: Search for occurrences and handling of 'proof_request_probability' across the codebase.

# Find all Python files and search for the 'proof_request_probability' parameter within those files
fd --type f --extension py --exec rg 'proof_request_probability' -A 5

Length of output: 69
Script:
# !/bin/bash
# Description: Verify the consistent handling of the 'proof_request_probability' parameter.
# Test: Search for occurrences and handling of 'proof_request_probability' across the codebase.

# Search for the 'proof_request_probability' parameter in all file types
rg 'proof_request_probability' -A 5

Length of output: 19701
40-44
: Update to "proof" module parameters appears correct but requires verification.The changes to parameter names and types in the "proof" module seem consistent with the PR objectives. It's crucial to ensure that the changes are reflected across all relevant parts of the codebase.
However, I recommend verifying that the new parameter types and values are handled correctly in all related code paths.
Verification successful
Update to "proof" module parameters is verified and correct.
The changes to parameter names and types in the "proof" module are correctly reflected across the codebase. The parameters relay_difficulty_target_hash and proof_missing_penalty are appropriately defined, used in tests, and documented.
- Files Verified:
Makefile
x/proof/types/message_update_param_test.go
x/proof/types/params.go
x/proof/types/params.pb.go
x/proof/keeper/params_test.go
tools/scripts/params/proof_proof_missing_penalty.json
tools/scripts/params/proof_all.json
proto/poktroll/proof/params.proto
e2e/tests/update_params.feature
config.yml
api/poktroll/proof/params.pulsar.go
docs/static/openapi.yml
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify that the new parameter types and values are handled correctly.
# Test: Search for handling of 'relay_difficulty_target_hash' and 'proof_missing_penalty' across the codebase.
rg --type python -A 5 $'relay_difficulty_target_hash|proof_missing_penalty'

Length of output: 107
Script:
# !/bin/bash
# Description: Verify that the new parameter types and values are handled correctly.
# Test: Search for handling of 'relay_difficulty_target_hash' and 'proof_missing_penalty' across the codebase.
rg --type py -A 5 'relay_difficulty_target_hash|proof_missing_penalty'

Length of output: 221
Script:
# !/bin/bash
# Description: Diagnose why rg is skipping files.
# Test: Search for handling of 'relay_difficulty_target_hash' and 'proof_missing_penalty' across the codebase with debug information.
rg --type py -A 5 'relay_difficulty_target_hash|proof_missing_penalty' --debug

Length of output: 191650
Script:
# !/bin/bash
# Description: Verify that the new parameter types and values are handled correctly.
# Test: Search for handling of 'relay_difficulty_target_hash' and 'proof_missing_penalty' across the codebase without file type filters.
rg -A 5 'relay_difficulty_target_hash|proof_missing_penalty'

Length of output: 24929
pkg/crypto/protocol/difficulty.go
Outdated
// TODO_MAINNET: Consider generalizing difficulty to a target hash. See:
// Difficulty1Hash represents the "easiest" difficulty.
var (
	Difficulty1HashHex = "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff"
Thoughts on MinRelayDifficultyHashHex?
Adding godoc comments here. "Difficulty 1" is the "highest" possible hash, which corresponds to the lowest possible difficulty. I'm reluctant to use terms like min or max to avoid confusion as a result of this bizarre juxtaposition of highest and lowest.
This is a value that we choose which is independent of the DefaultRelayTargetHash. It effectively calibrates the difficulty number (the return of GetDifficultyFromHash) by defining the hash which corresponds to the lowest difficulty: 1.
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files ignored due to path filters (1)
x/proof/types/params.pb.go
is excluded by !**/*.pb.go
Files selected for processing (9)
- api/poktroll/proof/params.pulsar.go (15 hunks)
- pkg/crypto/protocol/difficulty.go (1 hunks)
- pkg/crypto/protocol/difficulty_test.go (1 hunks)
- proto/poktroll/proof/params.proto (1 hunks)
- telemetry/event_counters.go (1 hunks)
- x/proof/keeper/msg_server_submit_proof.go (4 hunks)
- x/proof/types/params.go (7 hunks)
- x/tokenomics/keeper/scale_difficulty_test.go (1 hunks)
- x/tokenomics/keeper/update_relay_mining_difficulty.go (4 hunks)
Files skipped from review as they are similar to previous changes (7)
- api/poktroll/proof/params.pulsar.go
- pkg/crypto/protocol/difficulty.go
- proto/poktroll/proof/params.proto
- telemetry/event_counters.go
- x/proof/keeper/msg_server_submit_proof.go
- x/proof/types/params.go
- x/tokenomics/keeper/update_relay_mining_difficulty.go
Additional comments not posted (2)
pkg/crypto/protocol/difficulty_test.go (1)
12-52
: Review of the TestGetDifficultyFromHash function:
The test function has been renamed and refactored to test the new GetDifficultyFromHash function. The tests are well-structured and use descriptive names for test cases. The use of hexadecimal strings for hash values is appropriate and improves readability.
However, there are a few points to consider:
- Line 36: The use of new(big.Int).SetBytes(Difficulty1HashBz).Int64() seems to rely on an undefined variable Difficulty1HashBz. If this is defined elsewhere in the codebase, ensure it is imported or declared in this test file.
- General: Ensure that the hash values used in the tests are representative of realistic scenarios that might be encountered in production. This is crucial for the accuracy of the tests.
Overall, the refactoring appears to align with the objectives of the PR, focusing on code health and cleanup.
x/tokenomics/keeper/scale_difficulty_test.go (1)
12-97
: Review of the TestScaleDifficultyTargetHash function:
This new test function is well-structured and covers a variety of scaling scenarios. The descriptions are clear, and the hexadecimal representations of hashes make the tests readable and easy to understand.
Key observations:
- Lines 86, 89: Proper error handling with require.NoError ensures that the test fails early if there are issues with decoding the hex strings, which is good practice.
- Line 94: The use of bytes.Compare for comparing byte slices is appropriate, and the detailed error message in require.Equalf is helpful for debugging.
However, ensure that the function scaleDifficultyTargetHash being tested is implemented correctly and handles edge cases, especially with extreme ratios as seen in lines 65-74. Overall, the tests seem robust and should effectively validate the functionality of scaling difficulty targets.
Actionable comments posted: 0
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (8)
- pkg/crypto/protocol/difficulty.go (1 hunks)
- pkg/crypto/protocol/difficulty_test.go (1 hunks)
- pkg/crypto/protocol/hash.go (1 hunks)
- pkg/crypto/protocol/hasher.go (1 hunks)
- x/proof/keeper/msg_server_submit_proof.go (3 hunks)
- x/proof/keeper/msg_server_submit_proof_test.go (8 hunks)
- x/proof/types/params.go (6 hunks)
- x/tokenomics/module/abci.go (2 hunks)
Files skipped from review as they are similar to previous changes (7)
- pkg/crypto/protocol/difficulty.go
- pkg/crypto/protocol/difficulty_test.go
- pkg/crypto/protocol/hash.go
- x/proof/keeper/msg_server_submit_proof.go
- x/proof/keeper/msg_server_submit_proof_test.go
- x/proof/types/params.go
- x/tokenomics/module/abci.go
Additional comments not posted (3)
pkg/crypto/protocol/hasher.go (3)
1-3
: LGTM!
The package declaration and import statement are correct.
5-7
: LGTM!
The constant RelayHasherSize is correctly defined to represent the size of the SHA-256 hash.
9-11
: LGTM!
The variable NewRelayHasher is correctly defined to create a new SHA-256 hasher using sha256.New.
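Taken together, the three approved snippets suggest the file is roughly the following; this is a reconstruction from the comments above, not a verbatim copy of the PR.

```go
package protocol

import "crypto/sha256"

const (
	// RelayHasherSize is the size, in bytes, of a relay hash (SHA-256).
	RelayHasherSize = sha256.Size
)

var (
	// NewRelayHasher returns a new SHA-256 hasher for hashing relays.
	NewRelayHasher = sha256.New
)
```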
pkg/crypto/protocol/difficulty.go
Outdated
// Difficulty1HashBz is the chosen "highest" (easiest) target hash, which
// corresponds to the lowest possible difficulty. It effectively calibrates
// the difficulty number (which is returned by GetDifficultyFromHash) by defining
// the hash which corresponds to difficulty 1.
#PUC I still don't fully understand what "difficulty 1" is.
Is it the first difficulty?
Is something here equal to / greater than / less than the number 1?
Is the difficulty itself 1?
We have to define a target hash which corresponds to the easiest difficulty so that we have a way to talk about the relative difficulty between it and any other target hash.
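A small numeric example may make the calibration concrete (illustrative values only, using the all-0xff hash quoted above as the calibration point):

```go
package main

import (
	"bytes"
	"fmt"
	"math/big"
)

func main() {
	// The calibration hash: all 0xff bytes, i.e. the easiest possible target.
	difficulty1 := new(big.Int).SetBytes(bytes.Repeat([]byte{0xff}, 32))

	// A harder target: one leading zero byte followed by 31 0xff bytes.
	harder := new(big.Int).SetBytes(bytes.Repeat([]byte{0xff}, 31))

	// Relative difficulty = easiest target / given target.
	fmt.Println(new(big.Int).Div(difficulty1, difficulty1)) // 1   -> "difficulty 1"
	fmt.Println(new(big.Int).Div(difficulty1, harder))      // 256 -> roughly 2^8 times harder
}
```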
Why don't we call it BaseRelayDifficultyHash or MinRelayDifficultyHash?
ratio := float64(targetNumRelays) / float64(newRelaysEma)

// Compute the new target hash by scaling the default target hash based on the ratio
newTargetHash := scaleDifficultyTargetHash(prooftypes.DefaultRelayDifficultyTargetHash, ratio)
I'm still pretty confused by what's happening here.
- What kind of distribution does this result in?
- If the number of relays is VERY high (e.g. 10B per session), but is constant (has been that way for a long time), would we make the difficulty low because you're always scaling the default?
Apologies, I think this should actually be scaling the current target hash rather than the default. 😅
My understanding is that newRelaysEma represents on-chain/mined/payable relays, which means that given a constant high number of relays, as target hash adjustments are made, T/R should eventually approach 1. This means that we need to scale the current target hash proportionally and it should stabilize as T/R approaches 1.
Co-authored-by: Daniel Olshansky <[email protected]>
Co-authored-by: Redouane Lakrache <[email protected]>
…get-hash * pokt/main: [TODO] chore: update `smt.MerkleRoot#Sum()` error handling (#672)
Having concerns about the global difficulty hash, which seems to be per-service. Otherwise, great refactor.
Actionable comments posted: 7
Outside diff range, codebase verification and nitpick comments (3)
proto/poktroll/proof/params.proto (1)
16-17
: Update the comment to reflect the new field's purpose.
The comment should clearly describe the purpose of the relay_difficulty_target_hash field.
- // relay_difficulty_target_hash is the maximum value a relay hash must be less than to be volume/reward applicable.
+ // relay_difficulty_target_hash is the target hash value that a relay's hash must be less than to be considered for volume/reward applicability.
pkg/crypto/protocol/difficulty.go (1)
24-33
: Potential integer overflow in the GetDifficultyFromHash function.
The function GetDifficultyFromHash can cause an integer overflow when dividing BaseRelayDifficultyHashBz by a very small hashBz value. This is evident from the test case where hashHex is "0000000000000000000000000000000000000000000000000000000000000001".
- Location: pkg/crypto/protocol/difficulty.go, lines 24-33
To prevent this, consider using a larger integer type or a different approach to handle large numbers safely.
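One possible shape for such a guard, keeping the ratio as a big.Int and clamping before the int64 conversion, is sketched below; this is a suggestion, not the fix adopted in the PR.

```go
package protocol

import (
	"math"
	"math/big"
)

// difficultyFromHashCapped clamps the difficulty ratio to MaxInt64 so that
// very small hashes cannot overflow the int64 conversion. A zero hash would
// still need separate handling before the division.
func difficultyFromHashCapped(baseHashBz, hashBz []byte) int64 {
	base := new(big.Int).SetBytes(baseHashBz)
	hash := new(big.Int).SetBytes(hashBz)
	ratio := new(big.Int).Div(base, hash)
	if !ratio.IsInt64() {
		return math.MaxInt64
	}
	return ratio.Int64()
}
```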
Analysis chain
Verify the integer overflow concern.
The TODO comment suggests a potential integer overflow issue. Verify if this can occur and address it if necessary.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify if integer overflow can occur in the GetDifficultyFromHash function.
# Test: Search for the function usage and analyze the input values.
rg --type go -A 5 $'GetDifficultyFromHash'

Length of output: 3971
Script:
# !/bin/bash
# Description: Gather information about BaseRelayDifficultyHashBz and the values passed to GetDifficultyFromHash.

# Find the definition of BaseRelayDifficultyHashBz
rg --type go 'BaseRelayDifficultyHashBz'

# Find the values of hashBz used in tests
rg --type go -A 3 'GetDifficultyFromHash' pkg/crypto/protocol/difficulty_test.go

Length of output: 1130
Script:
# !/bin/bash
# Description: Analyze the values of hashBz used in the tests to check for potential integer overflow.

# Find the values of hashBz used in tests
rg --type go -A 3 'hashHex' pkg/crypto/protocol/difficulty_test.go

Length of output: 825
api/poktroll/proof/params.pulsar.go (1)
21-25
: Incomplete field renaming detected.
The field renaming from MinRelayDifficultyBits to relay_difficulty_target_hash is incomplete. The old field name is still present in several files:
x/tokenomics/keeper/msg_server_update_param_test.go
x/proof/types/params.go
x/proof/keeper/msg_server_update_param_test.go
Please ensure that all occurrences of the old field name are updated to the new field name.
Analysis chain
Ensure the correctness of the field renaming.
The field renaming should be verified to ensure it aligns with the intended changes.
Scripts executed
The following scripts were executed for the analysis:
Script:
# !/bin/bash
# Description: Verify the field renaming.
# Test: Search for the old field name usage. Expect: No occurrences.
rg --type go -A 5 $'MinRelayDifficultyBits'

Length of output: 1718
Script:
# !/bin/bash
# Description: Verify the new field name usage.
# Test: Search for the new field name usage in the same files where the old name was found.
rg --type go -A 5 'relay_difficulty_target_hash' x/tokenomics/keeper/msg_server_update_param_test.go x/proof/types/params.go x/proof/keeper/msg_server_update_param_test.go

Length of output: 757
TargetHash: prooftypes.DefaultRelayDifficultyTargetHash,
}
}
Consider computing the smoothing factor dynamically.
The TODO comment suggests computing the smoothing factor dynamically based on the block height. This should be considered for better accuracy.
Do you want me to open a GitHub issue to track this task?
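For context, a generic exponential moving average update looks like the sketch below; the block-height-derived smoothing factor suggested above would replace a hard-coded alpha. All names here are assumptions, not the PR's actual code.

```go
package keeper

// computeRelaysEmaSketch applies a standard EMA update to the relay count.
// alpha is the smoothing factor in (0, 1]; larger values weight recent
// observations more heavily.
func computeRelaysEmaSketch(alpha float64, prevEma, numRelays uint64) uint64 {
	return uint64(alpha*float64(numRelays) + (1-alpha)*float64(prevEma))
}
```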
Going to merge this in with one flaky test that happened here.
<!-- README: DELETE THIS COMMENT BLOCK after: 5. Specify _Epic_ and _Iteration_ under _Project_ --> ## Summary - Relayer is updated to query the tokenmoics module to get the relay difficulty target hash specific to the service corresponding to the relay. This is used to determine if a relay is volume applicable. - Relayer is updated to query the service module to get the compute units per relay specific to the service corresponding to the relay. This is used as the weight of the relay when updating the Relayer's SMT for the session corresponding to the relay. - Claims are validated at the time of submission to ensure the total claimed compute units is equal to the number of relays multiplied by the compute units per relay for the corresponding service. - All #705 references in the code have been addressed. - Relay test fixture generator has been updated to use the difficulty threshold hash to be consistent with how the relayer now operates. ## Issue - Relayer should use a service-specific difficulty target hash. - Relayer should use the service-specific ComputeUnitsPerRelay when adding a relay to the SMT for claiming. - #705 ## UPNEXT - Remove the relay_difficulty_target_hash parameter - Addressing any remaining instances of `TODO_FOLLOWUP(@Olshansk, #690)` These have been skipped to: 1) align on the PR before making further changes and 2) restrict the PR size and scope, as these were not strictly required for the above enhancements to function. Depending on the review results, I can add commits to address the above in this PR as well. ## Type of change Select one or more: - [x] New feature, functionality or library - [ ] Bug fix - [x] Code health or cleanup - [ ] Documentation - [ ] Other (specify) ## Testing **Documentation changes** (only if making doc changes) - [ ] `make docusaurus_start`; only needed if you make doc changes **Local Testing** (only if making code changes) - [x] **Unit Tests**: `make go_develop_and_test` - [ ] **LocalNet E2E Tests**: `make test_e2e` - See [quickstart guide](https://dev.poktroll.com/developer_guide/quickstart) for instructions **PR Testing** (only if making code changes) - [ ] **DevNet E2E Tests**: Add the `devnet-test-e2e` label to the PR. - **THIS IS VERY EXPENSIVE**, so only do it after all the reviews are complete. - Optionally run `make trigger_ci` if you want to re-trigger tests without any code changes - If tests fail, try re-running failed tests only using the GitHub UI as shown [here](https://github.com/pokt-network/poktroll/assets/1892194/607984e9-0615-4569-9452-4c730190c1d2) ## Sanity Checklist - [x] I have tested my changes using the available tooling - [x] I have commented my code - [x] I have performed a self-review of my own code; both comments & source code - [x] I create and reference any new tickets, if applicable - [x] I have left TODOs throughout the codebase, if applicable <!-- This is an auto-generated comment: release notes by coderabbit.ai --> ## Summary by CodeRabbit - **New Features** - Added new interfaces for querying tokenomics data, enhancing the app's ability to handle tokenomics-related queries. - Introduced a structured approach to query service information and tokenomics data, improving the overall querying functionality. - Implemented functionality to retrieve compute units per relay for services, optimizing session management. - **Bug Fixes** - Improved error handling for service retrieval and compute units mismatch scenarios. 
- **Tests** - Enhanced test coverage for tokenomics and service querying functionalities, ensuring robust testing of new features. - **Chores** - Updated test utilities and dependencies to support new service management capabilities. <!-- end of auto-generated comment: release notes by coderabbit.ai --> --------- Co-authored-by: Bryan White <[email protected]> Co-authored-by: Daniel Olshansky <[email protected]>
Summary
Issue
Type of change
Select one or more:
Testing
Documentation changes (only if making doc changes)
- make docusaurus_start; only needed if you make doc changes
Local Testing (only if making code changes)
- Unit Tests: make go_develop_and_test
- LocalNet E2E Tests: make test_e2e
PR Testing (only if making code changes)
- DevNet E2E Tests: Add the devnet-test-e2e label to the PR.
- Optionally run make trigger_ci if you want to re-trigger tests without any code changes
Sanity Checklist
Summary by CodeRabbit
New Features
Refactor
Tests
Chores