
Add automatic benchmarking  #54

@nickrobinson251

Description


Related to #52 and #53

Benchmarks for Julia packages are kind of annoying, so no one seems to do them... but we could figure it out

These should compare the performance of newer (or proposed) versions of this package to earlier versions. One possibility would be to compare against a fixed baseline so we can see the trend over time; another would be to always compare to master / the previous version so we can make clearer pairwise comparisons.

In either case we'd probably want to run both the "old" (baseline) and "new" versions in the same job, to reduce the noise from hardware/OS differences. That is, always re-run "old" so it's measured on the same set-up as "new", rather than just running "new" and comparing against a stored result from a previous run of "old".
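PkgBenchmark.jl supports exactly this kind of same-machine pairwise comparison: it checks out and runs the benchmark suite for both revisions locally, then judges the difference. A minimal sketch (assuming the package has a `benchmark/benchmarks.jl` suite; the package name here is a placeholder, not this repo's actual name):

```julia
using PkgBenchmark

# Run the benchmark suite on two revisions of the same checkout, so the
# "target" (proposed change) and "baseline" (master) share identical
# hardware and OS, rather than comparing against a cached old result.
results = judge(
    "ThisPackage",        # placeholder package name
    "my-feature-branch",  # target: the proposed change
    "master",             # baseline: re-run now, not read from a cache
)

# Write a human-readable summary of regressions/improvements.
export_markdown("judgement.md", results)
```

`judge` re-runs the baseline benchmarks each time, which is slower but gives the apples-to-apples comparison described above.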

And if using GitHub Actions to run benchmarks, relative comparisons are probably the way to go.

See https://labs.quansight.org/blog/2021/08/github-actions-benchmarks
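For the GitHub Actions route, one option is BenchmarkCI.jl, which wraps PkgBenchmark and compares a PR branch against its base branch. A rough sketch of what the workflow file might look like (file name and version pins are assumptions, not tested here):

```yaml
# Hypothetical .github/workflows/benchmark.yml
name: Benchmarks
on: [pull_request]
jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: julia-actions/setup-julia@v1
        with:
          version: '1'
      - name: Run benchmarks against the PR base branch
        run: |
          julia -e 'using Pkg; Pkg.add("BenchmarkCI")'
          julia -e 'using BenchmarkCI; BenchmarkCI.judge(); BenchmarkCI.displayjudgement()'
```

Note that shared CI runners are noisy, which is another reason to prefer relative (old-vs-new, same runner) comparisons over absolute timings.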

(As with comparing to other libraries #53, we may want to test performance on the files available in https://github.com/NREL-SIIP/PowerSystemsTestData/)

Metadata

Labels

    idea (needs some investigation before we decide)
