BM-1466: Ec2/claim digest #1004
contracts/src/BoundlessMarket.sol
AssessorCallback calldata callback = assessorReceipt.callbacks[callbackIndexPlusOne - 1];
_executeCallback(fill.id, callback.addr, callback.gasLimit, fill.imageId, fill.journal, fill.seal);
_executeCallback(fill.id, callback.addr, callback.gasLimit, imageId, journal, fill.seal);
If I'm reading this correctly, it seems that imageId and journal here are not authenticated.
It is true that we use the correct values in the assessor guest to compute the claimDigest; however, on the contract side an adversary could replace these values. If my line of thinking makes sense, we could either commit to them as part of the assessor journal, or alternatively recompute the claimDigest from those values whenever the predicate type is not PredicateType.ClaimDigestMatch.
Also, now that I think about it, are we only guaranteeing the callbackData when a callback is required? Because in that case the only way to get back a journal now is by requesting a callback, and I'm not sure that's what we want.
tracing::error!("Predicate evaluation failed for request");
bail!("Predicate evaluation failed");
}
// TODO(ec2): how do we check this?
Can't we move this check inside the previous match, so that we can then use the FulfillmentData enum? Or we could just build the fulfillment data there, and then do the check once here.
request.requirements.predicate.predicateType,
hex::encode(&request.requirements.predicate.data)
);
// TODO(ec2): how do we check when we have custom claim digests?
I think we can still evaluate the predicate here based on the predicate type and potentially the proof type (e.g., the specified selector), no?
Is this a new thing?
crates/broker/src/aggregator.rs
let fulfillment_data = match order.request.requirements.predicate.predicateType {
    PredicateType::ClaimDigestMatch => FulfillmentData::from_claim_digest(
        order.request.requirements.predicate.claim_digest().unwrap(),
    ),
    _ => FulfillmentData::from_image_id_and_journal(
        Digest::from_hex(order.image_id.unwrap()).unwrap(),
        journal,
    ),
};
nit: this could be extracted into a method/function, as we use it several times within the codebase.
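To make the nit concrete, a helper along these lines could centralize the match. This is only a sketch with simplified stand-in types; the real FulfillmentData, PredicateType, and digest types in the broker crates differ in detail:

```rust
// Simplified stand-in types; the real broker types differ.
#[derive(Debug, Clone, PartialEq)]
enum PredicateType {
    ClaimDigestMatch,
    DigestMatch,
    PrefixMatch,
}

#[derive(Debug, PartialEq)]
enum FulfillmentData {
    ClaimDigest([u8; 32]),
    ImageIdAndJournal { image_id: [u8; 32], journal: Vec<u8> },
}

/// Build the fulfillment data for an order based on its predicate type,
/// so the same match does not have to be repeated at every call site.
fn fulfillment_data_for(
    predicate_type: &PredicateType,
    claim_digest: Option<[u8; 32]>,
    image_id: Option<[u8; 32]>,
    journal: Vec<u8>,
) -> Result<FulfillmentData, String> {
    match predicate_type {
        PredicateType::ClaimDigestMatch => claim_digest
            .map(FulfillmentData::ClaimDigest)
            .ok_or_else(|| "ClaimDigestMatch predicate without a claim digest".to_string()),
        // All journal-based predicates need the image ID and the journal.
        _ => image_id
            .map(|id| FulfillmentData::ImageIdAndJournal { image_id: id, journal })
            .ok_or_else(|| "missing image id for journal-based predicate".to_string()),
    }
}
```

Returning a Result (rather than unwrapping inline, as the quoted snippet does) also centralizes the error handling for missing fields.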
// #[sqlx::test]
// async fn get_submission_order(pool: SqlitePool) {
//     let db: DbObj = Arc::new(SqliteDb::from(pool).await.unwrap());
//     let mut order = create_order();
//     order.proof_id = Some("test".to_string());
//     order.lock_price = Some(U256::from(10));
//     db.add_order(&order).await.unwrap();
//
//     let submit_order: (ProofRequest, Bytes, String, B256, U256, FulfillmentType) =
//         db.get_submission_order(&order.id()).await.unwrap();
//     assert_eq!(submit_order.0, order.request);
//     assert_eq!(submit_order.1, order.client_sig);
//     assert_eq!(submit_order.2, order.proof_id.unwrap());
//     assert_eq!(submit_order.3, order.request.requirements.imageId);
//     assert_eq!(submit_order.4, order.lock_price.unwrap());
//     assert_eq!(submit_order.5, order.fulfillment_type);
// }
what happened here?
);
return Ok(image_id_str);
let image_id_str = request.requirements.image_id().map(|image_id| image_id.to_string());
// When predicate is ClaimDigestMatch, we do not have the image id, so we must always download and upload the image.
This is true, and it's worth thinking about whether this might create a (rate limit) issue when used often. One solution could be to make sure the requestor knows the image URL must guarantee high-frequency access; or we could cache based on the URL, though that is risky as the content might change. Anyway, just food for thought for now.
@@ -4,7 +4,6 @@ CREATE TABLE IF NOT EXISTS proof_requests (
client_address TEXT NOT NULL, -- Ethereum address

-- Requirements fields
image_id TEXT NOT NULL, -- 32-byte image ID (hex encoded)
I'm in favor of this approach of keeping things clean. However, this will require nuking the existing DB, as the migration will not be compatible. Alternatively, we could keep the column and use a dummy value if we prefer to maintain compatibility.
What is the consequence of nuking the indexer DB? I don't have a strong opinion on whether or not to keep this field for compatibility. I do feel like we want to start indexing claim digests and callback data, though.
@@ -64,8 +64,7 @@ fn main() {
    fill.verify_signature(&eip_domain_separator).expect("signature does not verify")
};
fill.evaluate_requirements().expect("requirements not met");
env::verify_integrity(&fill.receipt_claim()).expect("claim integrity check failed");
Doesn't removing this mean we can no longer guarantee that the proof is generated with the actual input and not something else?
Or maybe you are right, and that is already guaranteed by recomputing the claim digest here.
My assessment is that this does allow the assessor to be run on claims that are not in fact true, but it should not be an issue, since we check the seal onchain anyway. We need to check the seal onchain for data availability reasons, and because the onchain verifier is actually the canonical one.
bytes journal;
/// @notice The `PredicateType` of the request that is being fulfilled.
/// @dev When the `PredicateType` is `ClaimDigestMatch`, callbacks are not supported.
PredicateType predicateType;
Related to my other comments, I think we can remove this.
/// @notice The type of callback
CallbackType callbackType;
Should we maybe not support multiple callback types? I am leaning towards saying that we should not, or at least not right now. Maybe this will be requested in the future, but it's unclear what the receiver experience would look like, and I'm inclined to think we probably won't support it.
Another way to keep the option open for multiple callback types would be to add a version number to the start of the encoded CallbackData. If we increment the version in the future, we could provide a new function for decoding it.
But we don't necessarily even need to do that: because the first 32 bytes are a hash, we can use a special value of bytes32(uint256(1)) to indicate the new version in the future. This is uglier, but it can be the plan of record in the eventuality that we do want to support multiple types of callbacks.
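To illustrate the sentinel idea, here is a hedged Rust sketch (the contract-side code would be Solidity; all names here are illustrative, not the actual crate or contract API). In the current encoding the first 32 bytes are the image ID, a hash, so the value bytes32(uint256(1)) will not collide with a real image ID in practice and can later mark a new encoding version:

```rust
/// Sentinel prefix marking a hypothetical future encoding version:
/// 31 zero bytes followed by 0x01, i.e. bytes32(uint256(1)).
const V1_SENTINEL: [u8; 32] = {
    let mut s = [0u8; 32];
    s[31] = 1;
    s
};

/// Illustrative decoded forms of the callback data.
enum CallbackData {
    /// Current layout: 32-byte image ID followed by the journal bytes.
    V0 { image_id: [u8; 32], journal: Vec<u8> },
    /// Placeholder for a future encoding, introduced by the sentinel prefix.
    V1(Vec<u8>),
}

fn decode_callback_data(data: &[u8]) -> Result<CallbackData, String> {
    if data.len() < 32 {
        return Err("callback data too short".to_string());
    }
    let mut prefix = [0u8; 32];
    prefix.copy_from_slice(&data[..32]);
    if prefix == V1_SENTINEL {
        // New-version payload; its internal layout would be defined later.
        Ok(CallbackData::V1(data[32..].to_vec()))
    } else {
        // Default: the prefix is the image ID, the rest is the journal.
        Ok(CallbackData::V0 { image_id: prefix, journal: data[32..].to_vec() })
    }
}
```

The same branching would translate directly into a Solidity decode function that reverts on malformed data.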
Another option for forward compatibility is to increment the entire request version and accordingly create any new code paths we need, switching on that. We have reserved bits in the request ID that can be used for this.
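As a rough illustration of switching on a request version, here is a hedged Rust sketch. The bit layout (top four bits of a u64 stand-in) is an assumption for illustration only; the real request ID is a 256-bit value whose reserved-bit layout is defined in the contracts:

```rust
/// Hypothetical: extract a version from the top 4 bits of a u64 stand-in
/// for the request ID. The actual reserved-bit positions are an assumption.
fn request_version(request_id: u64) -> u8 {
    (request_id >> 60) as u8
}

/// Fulfillment code can then branch per request version, keeping the
/// legacy path intact while adding new behavior for newer versions.
fn handle_request(request_id: u64) -> &'static str {
    match request_version(request_id) {
        0 => "legacy code path",
        1 => "claim-digest-aware code path",
        v => panic!("unsupported request version {v}"),
    }
}
```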
using PredicateLibrary for Predicate global;
using ReceiptClaimLib for ReceiptClaim;

/// @title Predicate Struct and Library
/// @notice Represents a predicate and provides functions to create and evaluate predicates.
I think the way to clarify this is that "a predicate is a function over the claim that determines whether it meets the client's requirements". Prior to this PR, predicates acted over the journal only.
using PredicateLibrary for Predicate global;
using ReceiptClaimLib for ReceiptClaim;

/// @title Predicate Struct and Library
/// @notice Represents a predicate and provides functions to create and evaluate predicates.
The names DigestMatch and PrefixMatch are a bit confusing with this framing, in that they are with reference to the journal. We could rename them to JournalDigestMatch and JournalPrefixMatch, although I'm personally OK with keeping them named as is and just adding a comment to each describing the fact that they apply to the journal. This might be the better option, since we want the common-case patterns to have the shorter, friendlier names.
@@ -26,36 +34,66 @@ library PredicateLibrary {
/// @notice Creates a digest match predicate.
/// @param hash The hash to match.
/// @return A Predicate struct with type DigestMatch and the provided hash.
function createDigestMatchPredicate(bytes32 hash) internal pure returns (Predicate memory) {
    return Predicate({predicateType: PredicateType.DigestMatch, data: abi.encode(hash)});
This newline is separating the function from its comment.
if (callbackIndexPlusOne > 0) {
    AssessorCallback calldata callback = assessorReceipt.callbacks[callbackIndexPlusOne - 1];
    _executeCallback(fill.id, callback.addr, callback.gasLimit, fill.imageId, fill.journal, fill.seal);

// We do not support callbacks for claim digest matches because we cant authenticate the journal.
if (fill.predicateType != PredicateType.ClaimDigestMatch) {
    // In the case this is a not a claim digest match, we need to authenticate the journal, since we actually have it
    (bytes32 imageId, bytes calldata journal) = _decodeCallbackData(fill.callbackData);
    bytes32 calculatedClaimDigest = ReceiptClaimLib.ok(imageId, sha256(journal)).digest();
    if (fill.claimDigest != calculatedClaimDigest) {
        revert ClaimDigestMismatch(fill.claimDigest, calculatedClaimDigest);
    }

    if (callbackIndexPlusOne > 0) {
        AssessorCallback calldata callback = assessorReceipt.callbacks[callbackIndexPlusOne - 1];
        _executeCallback(fill.id, callback.addr, callback.gasLimit, imageId, journal, fill.seal);
    }
By placing the callback data into the AssessorCallback, we can mostly eliminate the new validation here, and simply decode and execute the callback (reverting if the decode fails). If we directly embed the CallbackData into AssessorCallback, we can avoid the extra decode step as well.
// checked by the Assessor guest.
/// @notice Journal committed by the guest program execution.
/// @dev The journal is checked to satisfy the predicate specified on the request's requirements.
bytes journal;
Ultimately, I think removing the journal from the Fulfillment and only providing it in the callback is the right call. It will have the effect, though, that by default journals will not have onchain data availability guarantees (in the ProofDelivered event). Most applications we have seen so far will have computed the journal locally (as they must to use the DigestMatch predicate), so this should not be needed; however, it may add some friction to the devex if they were planning on fetching the receipt from the chain with the journal included.
if requirements.predicate.predicateType == PredicateType::ClaimDigestMatch
    && requirements.callback.callbackType != CallbackType::None
{
    return Err(Error::RequirementsEvaluationError);
}
I would drop this constraint (partly because my suggestion is to drop CallbackType) and instead fail when trying to construct the AssessorCallback if the FulfillmentClaimData does not contain the image ID and journal. It may seem somewhat odd that this would enable a ClaimDigestMatch request to actually have a callback, but this is actually fine so long as we check that ReceiptClaim::ok(image_id, journal).digest() does in fact meet the requirements, which we do.
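A minimal sketch of that alternative, with illustrative stand-in types (the real FulfillmentClaimData and AssessorCallback definitions differ): construction fails only when a callback is actually requested and the claim data cannot supply an image ID and journal.

```rust
// Illustrative stand-ins; not the actual guest-side types.
#[allow(dead_code)]
struct FulfillmentClaimData {
    claim_digest: [u8; 32],
    image_id: Option<[u8; 32]>,
    journal: Option<Vec<u8>>,
}

struct AssessorCallback {
    image_id: [u8; 32],
    journal: Vec<u8>,
}

/// Rather than forbidding callbacks for ClaimDigestMatch up front,
/// fail only at the point where the callback needs data we don't have.
fn build_callback(data: &FulfillmentClaimData) -> Result<AssessorCallback, String> {
    match (&data.image_id, &data.journal) {
        (Some(image_id), Some(journal)) => Ok(AssessorCallback {
            image_id: *image_id,
            journal: journal.clone(),
        }),
        _ => Err("callback requested but claim data has no image ID/journal".to_string()),
    }
}
```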
Add support for a new Predicate variant which matches on the claim digest.
The main motivation for this is to support a new Groth16 proof based on blake3 for BitVM. The current Groth16 compression commits to 5 public values; this new Groth16 proof commits to 1 public value, which is the blake3 hash of the mentioned values, such as the image ID.
Essentially, this adds support for custom claim digests AND for the case where the requestor does not care about the journal.
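As a rough sketch of the resulting predicate semantics (simplified stand-in types; the real Predicate operates on ABI-encoded data and lives in the contracts and SDK), assuming the caller has already computed the journal digest and the receipt claim digest:

```rust
// Simplified model of the predicate variants after this PR.
enum Predicate {
    /// Matches the digest of the journal (journal-based, as before).
    DigestMatch([u8; 32]),
    /// Matches a prefix of the journal bytes (journal-based, as before).
    PrefixMatch(Vec<u8>),
    /// New: matches the full receipt claim digest, ignoring the journal.
    ClaimDigestMatch([u8; 32]),
}

/// `journal_digest` and `claim_digest` are assumed precomputed by the
/// caller (e.g. SHA-256 of the journal, and the receipt claim digest).
fn evaluate(
    predicate: &Predicate,
    journal: &[u8],
    journal_digest: &[u8; 32],
    claim_digest: &[u8; 32],
) -> bool {
    match predicate {
        Predicate::DigestMatch(d) => d == journal_digest,
        Predicate::PrefixMatch(p) => journal.starts_with(p),
        Predicate::ClaimDigestMatch(d) => d == claim_digest,
    }
}
```

Note how only the first two variants inspect the journal at all, which is what makes journal-dependent features like callbacks unavailable (or needing extra authentication) under ClaimDigestMatch.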
Notable smart contract changes:
- CallbackData struct which replaces imageId and journal in Fulfillment, containing the journal and image ID associated with the proof. Callbacks are only supported for prefix match and journal digest match.
- imageId is removed from Requirements. Instead, it is encoded into the data field of Predicate if the request is NOT claim digest match. This is because otherwise the imageId would be unverifiable and untrustworthy data.

Notable assessor guest changes:
TODO: