
BM-1466: Ec2/claim digest #1004


Open: ec2 wants to merge 57 commits into main from ec2/claim-digest

Conversation

Contributor

@ec2 ec2 commented Aug 12, 2025

Add support for a new Predicate variant which matches on the claim digest.

The main motivation is to support a new Groth16 proof, based on blake3, for BitVM. The current Groth16 compression commits to 5 public values; the new Groth16 proof commits to a single public value, the blake3 hash of those values (such as the image ID).

Essentially, this adds support for custom claim digests and for requests where the requestor does not care about the journal.
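
For illustration, a minimal Rust sketch of the two evaluation paths (assuming the risc0-zkvm ReceiptClaim API; function names are illustrative and not part of this PR): for journal-based predicates the claim digest is derived from the image ID and journal, while ClaimDigestMatch compares a requestor-supplied digest directly, so the journal never needs to be known.

use risc0_zkvm::{sha::{Digest, Digestible}, ReceiptClaim};

// Derive the claim digest the standard way, from an image ID and a journal.
fn derived_claim_digest(image_id: Digest, journal: &[u8]) -> Digest {
    ReceiptClaim::ok(image_id, journal.to_vec()).digest()
}

// ClaimDigestMatch: the requestor pins an arbitrary (possibly custom) claim digest,
// so the check is a direct comparison against the digest carried in the predicate data.
fn claim_digest_matches(expected: Digest, actual: Digest) -> bool {
    expected == actual
}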

Notable smart contract changes:

  • New CallbackData struct, containing the journal and image ID associated with the proof, replaces imageId and journal in Fulfillment. Callbacks are only supported for prefix match and journal digest match.
  • imageId is removed from Requirements. Instead, it is encoded into the data field of Predicate when the request is not a claim digest match; otherwise the imageId would be unverifiable, untrustworthy data.

Notable assessor guest changes:

  • Remove the verification of the integrity of the Fulfillment, since this is already done in the set builder and on chain, and it would be very difficult to do for custom claim digests.

TODO:

  • fix docs
  • some cleanup

@github-actions github-actions bot changed the title Ec2/claim digest BM-1466: Ec2/claim digest Aug 12, 2025
@ec2 ec2 force-pushed the ec2/claim-digest branch from eb5c1d7 to 50167b3 on August 12, 2025 15:42
@ec2 ec2 marked this pull request as ready for review August 13, 2025 19:23
AssessorCallback calldata callback = assessorReceipt.callbacks[callbackIndexPlusOne - 1];
_executeCallback(fill.id, callback.addr, callback.gasLimit, fill.imageId, fill.journal, fill.seal);
_executeCallback(fill.id, callback.addr, callback.gasLimit, imageId, journal, fill.seal);
Contributor

If I'm reading this correctly, it seems that imageId and journal here are not authenticated.
It is true that we are using the correct values in the assessor guest to compute the claimDigest; however, on the contract side an adversary could replace those values. If my line of thinking makes sense, we could either commit to them as part of the assessor journal or, alternatively, recompute the claimDigest from those values when the predicate type is not PredicateType.ClaimDigestMatch.

Contributor

Also, now that I think about it, are we only guaranteeing the callbackData when a callback is required? Because in that case the only way to get back a journal now is by requesting a callback, and I'm not sure that's what we want.

tracing::error!("Predicate evaluation failed for request");
bail!("Predicate evaluation failed");
}
// TODO(ec2): how do we check this?
Contributor

Can't we move this check into the previous match, so that we can then use the FulfillmentData enum? Or we could just fill the fulfillment data there and then do the check once here.

request.requirements.predicate.predicateType,
hex::encode(&request.requirements.predicate.data)
);
// TODO(ec2): how do we check when we have custom claim digests?
Contributor

I think we can still evaluate the predicate here based on the predicate type and potentially the proof type (e.g., the specified selector), no?

Contributor

Is this a new thing?

Comment on lines 213 to 221
let fulfillment_data = match order.request.requirements.predicate.predicateType {
    PredicateType::ClaimDigestMatch => FulfillmentData::from_claim_digest(
        order.request.requirements.predicate.claim_digest().unwrap(),
    ),
    _ => FulfillmentData::from_image_id_and_journal(
        Digest::from_hex(order.image_id.unwrap()).unwrap(),
        journal,
    ),
};
Contributor

nit: this could be made a method/function as we use it several times within the codebase
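
A hedged sketch of the suggested helper, mirroring the construction above; Order, FulfillmentData, PredicateType, and Predicate::claim_digest are the PR's own types, while the function name and signature are illustrative:

// Hypothetical helper so the match is written once instead of being repeated.
fn fulfillment_data_for(order: &Order, journal: Vec<u8>) -> FulfillmentData {
    match order.request.requirements.predicate.predicateType {
        PredicateType::ClaimDigestMatch => FulfillmentData::from_claim_digest(
            order.request.requirements.predicate.claim_digest().unwrap(),
        ),
        _ => FulfillmentData::from_image_id_and_journal(
            Digest::from_hex(order.image_id.as_deref().unwrap()).unwrap(),
            journal,
        ),
    }
}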

Comment on lines +1145 to +1161
// #[sqlx::test]
// async fn get_submission_order(pool: SqlitePool) {
// let db: DbObj = Arc::new(SqliteDb::from(pool).await.unwrap());
// let mut order = create_order();
// order.proof_id = Some("test".to_string());
// order.lock_price = Some(U256::from(10));
// db.add_order(&order).await.unwrap();

// let submit_order: (ProofRequest, Bytes, String, B256, U256, FulfillmentType) =
// db.get_submission_order(&order.id()).await.unwrap();
// assert_eq!(submit_order.0, order.request);
// assert_eq!(submit_order.1, order.client_sig);
// assert_eq!(submit_order.2, order.proof_id.unwrap());
// assert_eq!(submit_order.3, order.request.requirements.imageId);
// assert_eq!(submit_order.4, order.lock_price.unwrap());
// assert_eq!(submit_order.5, order.fulfillment_type);
// }
Contributor

what happened here?

);
return Ok(image_id_str);
let image_id_str = request.requirements.image_id().map(|image_id| image_id.to_string());
// When predicate is ClaimDigestMatch, we do not have the image id, so we must always download and upload the image.
Contributor

This is true, and it's worth thinking about whether this might create a (rate limit) issue when used often. One solution could be to make sure the requestor knows the image URL must support high-frequency access, or we could cache based on the URL; however, that is risky as the content might change. Anyway, just food for thought for now.

@@ -4,7 +4,6 @@ CREATE TABLE IF NOT EXISTS proof_requests (
client_address TEXT NOT NULL, -- Ethereum address

-- Requirements fields
image_id TEXT NOT NULL, -- 32-byte image ID (hex encoded)
Contributor

I'm in favor of this approach of keeping things clean. However, this will require nuking the existing DB, as the migration will not be compatible. Alternatively, we could keep the column and use a dummy value if we prefer to maintain compatibility.

Member

What is the consequence of nuking the indexer DB? I don't have a strong opinion on whether or not to keep this field to maintain compatibility. I do feel like we want to start indexing claim digests and callback data, though.

@@ -64,8 +64,7 @@ fn main() {
fill.verify_signature(&eip_domain_separator).expect("signature does not verify")
};
fill.evaluate_requirements().expect("requirements not met");
env::verify_integrity(&fill.receipt_claim()).expect("claim integrity check failed");
Contributor

Doesn't removing this mean we can no longer guarantee that the proof is generated with the actual input and not something else?

Contributor

Or maybe you are right, and that is guaranteed by recomputing the claim digest here already.

Member

My assessment is that this does allow the assessor to be run on claims that are not in fact true, but it should not be an issue since we check the seal onchain anyway. We need to check the seal onchain for data availability reasons, and because the onchain verifier is actually the canonical one.

bytes journal;
/// @notice The `PredicateType` of the request that is being fulfilled.
/// @dev When the `PredicateType` is `ClaimDigestMatch`, callbacks are not supported
PredicateType predicateType;
Member

Related to my other comments, I think we can remove this.

Comment on lines +18 to +19
/// @notice The type of callback
CallbackType callbackType;
Member

Should we maybe not support multiple callback types? I am leaning towards saying that we should not, or at least not support multiple callback types right now. Maybe this will be requested in the future, but it's unclear what the receiver experience would look like, and I'm inclined to think we probably won't support it.

Member

Another way to keep the option open for multiple callback types would be to add a version number to the start of the encoded CallbackData. So if we increment the version in the future, we could provide a new function for decoding it.

But we don't necessarily even need to do that: because the first 32 bytes are a hash, we can use a special value of bytes32(uint256(1)) to indicate that this is the new version in the future. This is uglier, but it can be the plan of record in the event that we do want to support multiple types of callbacks in the future.
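
A minimal sketch of that sentinel check in Rust (the function name is hypothetical), assuming the encoded CallbackData currently begins with the 32-byte image ID as described above:

// Hypothetical detection of a future CallbackData encoding: because the first 32 bytes
// of the current encoding are a hash (the image ID), bytes32(uint256(1)) can be reserved
// as a version marker without colliding with real data.
fn is_versioned_callback_data(callback_data: &[u8]) -> bool {
    let mut sentinel = [0u8; 32];
    sentinel[31] = 1; // bytes32(uint256(1)), big-endian
    callback_data.len() >= 32 && callback_data[..32] == sentinel
}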

Member

Another option for forward compatibility is that we can increment the entire request version, and accordingly create any new code paths we need, switching on that. We have reserved bits in the request ID that can be used for this.

using PredicateLibrary for Predicate global;
using ReceiptClaimLib for ReceiptClaim;

/// @title Predicate Struct and Library
/// @notice Represents a predicate and provides functions to create and evaluate predicates.
Member

I think the way to clarify this is that "a predicate is a function over the claim that determines whether it meets the client's requirements". Prior to this PR, predicates acted over the journal only.

using PredicateLibrary for Predicate global;
using ReceiptClaimLib for ReceiptClaim;

/// @title Predicate Struct and Library
/// @notice Represents a predicate and provides functions to create and evaluate predicates.
Member

The names DigestMatch and PrefixMatch are a bit confusing with this framing, in that they refer to the journal. We could rename them to JournalDigestMatch and JournalPrefixMatch, although I'm personally OK with keeping them named as is and just adding a comment to each to describe the fact that they apply to the journal. This might be the better option, since we want the common-case patterns to have the shorter, friendlier names.
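
A sketch of the keep-the-names-and-document-them option, shown on the Rust side with the variants mentioned in this PR (the doc comments are illustrative):

// Illustrative doc comments only; variant set and semantics as described in this PR.
pub enum PredicateType {
    /// Matches when the digest of the journal equals the digest in `Predicate.data`.
    DigestMatch,
    /// Matches when the journal starts with the prefix in `Predicate.data`.
    PrefixMatch,
    /// Matches when the full receipt claim digest equals the digest in `Predicate.data`.
    ClaimDigestMatch,
}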

@@ -26,36 +34,66 @@ library PredicateLibrary {
/// @notice Creates a digest match predicate.
/// @param hash The hash to match.
/// @return A Predicate struct with type DigestMatch and the provided hash.
function createDigestMatchPredicate(bytes32 hash) internal pure returns (Predicate memory) {
return Predicate({predicateType: PredicateType.DigestMatch, data: abi.encode(hash)});

Member

This newline separates the function from its comment.


Comment on lines -336 to +353
if (callbackIndexPlusOne > 0) {
AssessorCallback calldata callback = assessorReceipt.callbacks[callbackIndexPlusOne - 1];
_executeCallback(fill.id, callback.addr, callback.gasLimit, fill.imageId, fill.journal, fill.seal);

// We do not support callbacks for claim digest matches because we can't authenticate the journal.
if (fill.predicateType != PredicateType.ClaimDigestMatch) {
// In the case this is not a claim digest match, we need to authenticate the journal, since we actually have it
(bytes32 imageId, bytes calldata journal) = _decodeCallbackData(fill.callbackData);
bytes32 calculatedClaimDigest = ReceiptClaimLib.ok(imageId, sha256(journal)).digest();
if (fill.claimDigest != calculatedClaimDigest) {
revert ClaimDigestMismatch(fill.claimDigest, calculatedClaimDigest);
}

if (callbackIndexPlusOne > 0) {
AssessorCallback calldata callback = assessorReceipt.callbacks[callbackIndexPlusOne - 1];
_executeCallback(fill.id, callback.addr, callback.gasLimit, imageId, journal, fill.seal);
}
Member

By placing the callback data into the AssessorCallback, we can mostly eliminate the new validation here, and simply decode and execute the callback (reverting if decode fails). If we directly embed the CallbackData into AssessorCallback, we can avoid the extra decode step as well.

// checked by the Assessor guest.
/// @notice Journal committed by the guest program execution.
/// @dev The journal is checked to satisfy the predicate specified on the request's requirements.
bytes journal;
Member

Ultimately, I think removing the journal from the Fulfillment and only providing it in the callback is the right call. It will have the effect, though, that by default journals will not have onchain data availability guarantees (in the ProofDelivered event). Most applications we have seen so far will have computed the journal locally (as they must to use the DigestMatch predicate), so this should not be needed; however, it may add some friction to the devex if they were planning on fetching the receipt from the chain and having the journal included.

Comment on lines +88 to +92
if requirements.predicate.predicateType == PredicateType::ClaimDigestMatch
    && requirements.callback.callbackType != CallbackType::None
{
    return Err(Error::RequirementsEvaluationError);
}
Member

I would drop this constraint (partly because my suggestion is to drop CallbackType) and instead fail when trying to construct the AssessorCallback if the FullfillmentClaimData does not contain the image ID and journal. It may seem somewhat odd that this would enable a ClaimDigestMatch request to actually have a callback, but this is actually fine so long as we check that ReceiptClaim::ok(image_id, journal).digest() does in fact meet the requirements, which we do.
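
A hedged sketch of that flow; FulfillmentData and Error::RequirementsEvaluationError are the PR's types, while the function name and the image_id_and_journal accessor are hypothetical:

// Fail only here, when constructing the callback actually requires the image ID and
// journal, rather than rejecting ClaimDigestMatch requests with callbacks up front.
fn callback_inputs(data: &FulfillmentData) -> Result<(Digest, Vec<u8>), Error> {
    data.image_id_and_journal().ok_or(Error::RequirementsEvaluationError)
}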
