
Implement Libfunc Counter #1267

Open · wants to merge 26 commits into base: main

Conversation

FrancoGiachetta
Contributor

@FrancoGiachetta FrancoGiachetta commented Jul 1, 2025

Implements the Libfunc Counter, which counts every call to each libfunc during an execution. All compiled libfuncs are taken into account, even those that are never called (in which case their count is 0). Example output from executing a program with this feature:

{
  "struct_construct<Tuple<felt252>>": 2,
  "function_call<user@core::panic_with_const_felt252::<375233589013918064796019>>": 0,
  "function_call<user@core::panic_with_const_felt252::<2138404743268917596156934978104436>>": 0,
  "struct_construct<Tuple<Unit>>": 1,
  "store_temp<Tuple<core::panics::Panic, Array<felt252>>>": 0,
  "store_temp<RangeCheck>": 4000027,
  "const_as_immediate<Const<felt252, 375233589013918064796019>>": 0,
  "branch_align": 4000028,
  "withdraw_gas": 2000012,
  "felt252_add": 2000010,
  "disable_ap_tracking": 2000013,
  "const_as_immediate<Const<felt252, 1>>": 2000012,
  "enum_match<core::panics::PanicResult::<(core::felt252,)>>": 2,
  "enum_init<core::panics::PanicResult::<((),)>, 1>": 0,
  "const_as_immediate<Const<felt252, 10>>": 1,
  "store_temp<core::panics::PanicResult::<(core::felt252,)>>": 2,
  "const_as_immediate<Const<felt252, 0>>": 2,
  "function_call<user@core::panic_with_const_felt252::<123770447868417490207208308>>": 0,
  "store_temp<GasBuiltin>": 2000015,
  "const_as_immediate<Const<felt252, 2138404743268917596156934978104436>>": 0,
  "enum_init<core::panics::PanicResult::<(core::felt252,)>, 0>": 2,
  "felt252_is_zero": 2000014,
  "redeposit_gas": 2000013,
  "struct_construct<Unit>": 1,
  "struct_deconstruct<Tuple<felt252>>": 2,
  "dup<felt252>": 4000022,
  "function_call<user@fib_2M::fib_2M::fib>": 2000012,
  "enum_init<core::panics::PanicResult::<((),)>, 0>": 1,
  "store_temp<core::panics::PanicResult::<((),)>>": 1,
  "store_temp<felt252>": 6000039,
  "const_as_immediate<Const<felt252, 123770447868417490207208308>>": 0,
  "struct_construct<Tuple<core::panics::Panic, Array<felt252>>>": 0,
  "struct_construct<core::panics::Panic>": 0,
  "drop<felt252>": 4,
  "const_as_immediate<Const<felt252, 55>>": 1,
  "array_new<felt252>": 0,
  "enum_init<core::panics::PanicResult::<(core::felt252,)>, 1>": 0,
  "array_append<felt252>": 0,
  "function_call<user@core::panic_with_felt252>": 0,
  "felt252_sub": 2000012,
  "const_as_immediate<Const<felt252, 2000000>>": 1,
  "drop<NonZero<felt252>>": 2000010,
  "const_as_immediate<Const<felt252, 3428715265070152839999402611900589559274061538403385725837342803058381925627>>": 1
}

Checklist

  • Linked to GitHub issue
  • Unit tests added
  • Integration tests added
  • This change requires new documentation
    • Documentation has been added/updated


github-actions bot commented Jul 1, 2025

✅ Code is now correctly formatted.


github-actions bot commented Jul 1, 2025

Benchmark results: main vs HEAD.

Base

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| base dict_insert.cairo (JIT) | 5.310 ± 0.054 | 5.229 | 5.374 | 1.01 ± 0.02 |
| base dict_insert.cairo (AOT) | 5.262 ± 0.089 | 5.136 | 5.403 | 1.00 |

Head

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| head dict_insert.cairo (JIT) | 5.125 ± 0.030 | 5.089 | 5.187 | 1.02 ± 0.01 |
| head dict_insert.cairo (AOT) | 5.040 ± 0.044 | 4.975 | 5.135 | 1.00 |

Base

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| base dict_snapshot.cairo (JIT) | 5.226 ± 0.068 | 5.144 | 5.346 | 1.04 ± 0.02 |
| base dict_snapshot.cairo (AOT) | 5.043 ± 0.036 | 4.990 | 5.109 | 1.00 |

Head

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| head dict_snapshot.cairo (JIT) | 5.015 ± 0.034 | 4.967 | 5.072 | 1.02 ± 0.01 |
| head dict_snapshot.cairo (AOT) | 4.927 ± 0.022 | 4.897 | 4.972 | 1.00 |

Base

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| base factorial_2M.cairo (JIT) | 5.556 ± 0.069 | 5.483 | 5.689 | 1.01 ± 0.01 |
| base factorial_2M.cairo (AOT) | 5.508 ± 0.036 | 5.458 | 5.569 | 1.00 |

Head

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| head factorial_2M.cairo (JIT) | 5.384 ± 0.030 | 5.337 | 5.420 | 1.01 ± 0.01 |
| head factorial_2M.cairo (AOT) | 5.332 ± 0.040 | 5.271 | 5.378 | 1.00 |

Base

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| base fib_2M.cairo (JIT) | 5.063 ± 0.056 | 4.993 | 5.146 | 1.01 ± 0.01 |
| base fib_2M.cairo (AOT) | 4.993 ± 0.038 | 4.913 | 5.039 | 1.00 |

Head

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| head fib_2M.cairo (JIT) | 4.933 ± 0.032 | 4.869 | 4.974 | 1.01 ± 0.01 |
| head fib_2M.cairo (AOT) | 4.874 ± 0.023 | 4.843 | 4.904 | 1.00 |

Base

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| base linear_search.cairo (JIT) | 5.311 ± 0.051 | 5.240 | 5.390 | 1.03 ± 0.02 |
| base linear_search.cairo (AOT) | 5.177 ± 0.092 | 5.060 | 5.353 | 1.00 |

Head

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| head linear_search.cairo (JIT) | 5.130 ± 0.046 | 5.060 | 5.196 | 1.03 ± 0.01 |
| head linear_search.cairo (AOT) | 5.002 ± 0.028 | 4.946 | 5.050 | 1.00 |

Base

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| base logistic_map.cairo (JIT) | 5.376 ± 0.097 | 5.299 | 5.581 | 1.04 ± 0.02 |
| base logistic_map.cairo (AOT) | 5.169 ± 0.058 | 5.113 | 5.317 | 1.00 |

Head

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| head logistic_map.cairo (JIT) | 5.201 ± 0.043 | 5.134 | 5.294 | 1.04 ± 0.01 |
| head logistic_map.cairo (AOT) | 5.016 ± 0.026 | 4.973 | 5.069 | 1.00 |


github-actions bot commented Jul 1, 2025

Benchmarking results

Benchmark for program dict_insert

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| Cairo-vm (Rust, Cairo 1) | 11.329 ± 0.037 | 11.281 | 11.411 | 2.29 ± 0.02 |
| cairo-native (embedded AOT) | 4.937 ± 0.030 | 4.888 | 4.974 | 1.00 |
| cairo-native (embedded JIT using LLVM's ORC Engine) | 5.030 ± 0.049 | 4.974 | 5.146 | 1.02 ± 0.01 |

Benchmark for program dict_snapshot

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| Cairo-vm (Rust, Cairo 1) | 822.6 ± 7.7 | 813.7 | 840.3 | 1.00 |
| cairo-native (embedded AOT) | 4975.2 ± 33.1 | 4944.0 | 5036.0 | 6.05 ± 0.07 |
| cairo-native (embedded JIT using LLVM's ORC Engine) | 5079.8 ± 38.0 | 4985.5 | 5137.2 | 6.18 ± 0.07 |

Benchmark for program factorial_2M

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| Cairo-vm (Rust, Cairo 1) | 5.310 ± 0.061 | 5.259 | 5.425 | 1.00 |
| cairo-native (embedded AOT) | 5.377 ± 0.033 | 5.320 | 5.437 | 1.01 ± 0.01 |
| cairo-native (embedded JIT using LLVM's ORC Engine) | 5.434 ± 0.029 | 5.398 | 5.480 | 1.02 ± 0.01 |

Benchmark for program fib_2M

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|---|---|---|---|---|
| Cairo-vm (Rust, Cairo 1) | 5.165 ± 0.090 | 5.093 | 5.399 | 1.05 ± 0.02 |
| cairo-native (embedded AOT) | 4.923 ± 0.040 | 4.878 | 4.991 | 1.00 |
| cairo-native (embedded JIT using LLVM's ORC Engine) | 4.952 ± 0.028 | 4.902 | 5.007 | 1.01 ± 0.01 |

Benchmark for program linear_search

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| Cairo-vm (Rust, Cairo 1) | 852.3 ± 9.5 | 838.4 | 865.5 | 1.00 |
| cairo-native (embedded AOT) | 5057.0 ± 39.4 | 4999.9 | 5132.3 | 5.93 ± 0.08 |
| cairo-native (embedded JIT using LLVM's ORC Engine) | 5179.2 ± 52.4 | 5115.6 | 5280.7 | 6.08 ± 0.09 |

Benchmark for program logistic_map

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|---|---|---|---|---|
| Cairo-vm (Rust, Cairo 1) | 550.6 ± 4.8 | 544.8 | 560.8 | 1.00 |
| cairo-native (embedded AOT) | 5076.0 ± 31.1 | 5028.7 | 5119.5 | 9.22 ± 0.10 |
| cairo-native (embedded JIT using LLVM's ORC Engine) | 5242.0 ± 52.6 | 5138.1 | 5307.6 | 9.52 ± 0.13 |

@codecov-commenter

codecov-commenter commented Jul 1, 2025

Codecov Report

Attention: Patch coverage is 35.80247% with 52 lines in your changes missing coverage. Please review.

Project coverage is 80.01%. Comparing base (a8ddadc) to head (ba640cd).

Files with missing lines Patch % Lines
src/bin/cairo-native-run.rs 0.00% 52 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1267      +/-   ##
==========================================
- Coverage   80.12%   80.01%   -0.11%     
==========================================
  Files         112      112              
  Lines       30068    30145      +77     
==========================================
+ Hits        24093    24122      +29     
- Misses       5975     6023      +48     


Collaborator

Some thoughts related to this file:

  1. Why is the libfunc counter API split into two modules instead of one? For example, the compiler calls count_libfunc from the runtime module, but build_array_counter and store_array_counter are called from the root module.
  2. If two separate modules are necessary for the API, it would be great if the documentation described what each module does and, if possible or sensible, the reason behind the separation.
  3. In line 4 I would not call the three methods listed below "important". If they are the only methods used by external files to count libfuncs, then these 3 methods are the Libfunc Counter API.

Contributor Author

  1. To keep what belongs to MLIR separate from what belongs to Rust (the runtime).
  2. Yes, I'll add docs if you agree with (1).
  3. I can change it to "main" if you prefer.

@FrancoGiachetta FrancoGiachetta marked this pull request as draft July 7, 2025 13:48
@FrancoGiachetta FrancoGiachetta marked this pull request as ready for review July 7, 2025 16:03
@FrancoGiachetta FrancoGiachetta marked this pull request as draft July 7, 2025 21:40
@FrancoGiachetta FrancoGiachetta marked this pull request as ready for review July 17, 2025 21:56
@@ -57,6 +59,11 @@ struct Args {
/// The output path for the libfunc profilling results
profiler_output: Option<PathBuf>,

#[cfg(feature = "with-libfunc-counter")]
#[arg(long)]
/// The output path for the execution trace
Contributor

Is this comment valid?

Comment on lines +156 to +163
#[cfg(feature = "with-libfunc-counter")]
let libfunc_indexes = program
.libfunc_declarations
.iter()
.enumerate()
.map(|(idx, libf)| (libf.id.clone(), idx))
.collect::<HashMap<ConcreteLibfuncId, usize>>();

Contributor

Is this mapping necessary? The libfunc ID already seems to be autoincremental, so maybe we can use the libfunc ID directly.

Comment on lines +176 to +203
pub fn count_libfunc(
&mut self,
context: &Context,
module: &Module,
block: &Block<'_>,
location: Location,
libfunc_idx: usize,
) -> Result<()> {
let u32_ty = IntegerType::new(context, 32).into();
let k1 = block.const_int(context, location, 1, 32)?;

let array_counter_ptr = self.get_array_counter(context, module, block, location)?;

let value_counter_ptr = block.gep(
context,
location,
array_counter_ptr,
&[GepIndex::Const(libfunc_idx as i32)],
u32_ty,
)?;

let value_counter = block.load(context, location, value_counter_ptr, u32_ty)?;
let value_incremented = block.addi(value_counter, k1, location)?;

block.store(context, location, value_counter_ptr, value_incremented)?;

Ok(())
}
Contributor

I'm not convinced that this is the right approach.

Right now, to obtain the pointer to the array, we must call an extern function get_array_counter. This is more complex than required. I think this feature can be achieved without invoking Rust functions from the compiled code, and without relying on a global hashmap.

  • Have each contract declare an MLIR global variable called libfunc_counts (or similar). It should contain a pointer to a dynamically allocated array.
  • Before executing each statement, obtain the array pointer, find the index of the associated libfunc, and increment it by 1. We should be able to obtain the array pointer by accessing the MLIR global defined above.
  • At the start of the execution, inside invoke_dynamic or run, allocate the array and set the global's value to the array's pointer. At the end of the execution, don't forget to restore the global's previous value.
  • To expose the libfunc counts to the user, we could return them from the invoke_dynamic/run functions.

This way, the functionality would be completely hidden from the user (as opposed to the current approach, which requires changes in cairo-native-run).

Contributor Author

@FrancoGiachetta FrancoGiachetta Jul 18, 2025


I agree it would be better for the array to be built and restored inside either invoke_dynamic or run; that should be possible with the current implementation. However, I don't see much complexity in calling a Rust function. That said, it could be done if we want to avoid Rust as much as possible. I actually did it this way because I thought it was simpler.

Contributor

You are right, there are two orthogonal improvements that can be made:

  • Have invoke_dynamic/run handle the logic for setting up and reading the libfunc counters.
  • Use an MLIR global rather than calling a Rust function.

We could do the first improvement without changing how we obtain the array pointer. However, using an MLIR global should be more performant and require less code than calling a Rust function.
