
Speed up NeoFSStateSyncExtensions mode of node synchronisation #3925

@AnnaShaleva

Description

Is your feature request related to a problem? Please describe.

Node sync via NeoFSStateSyncExtensions with SkipIndexFilesSearch: true is not fast enough for production usage even after #3913. In particular, headers/blocks synchronisation consumes an unexpectedly large amount of time: it is many times slower than sync via P2P or via the index-files-based mode. On the one hand, some slow-down is expected and depends on how fast NeoFS storage nodes can respond to SEARCHV2 requests; at the same time, it looks like we have something to improve on the NeoGo side.

Here are the exact numbers I got when synchronising headers against a NeoFS mainnet storage node (SN):

2025-06-04T16:32:06.530+0300	INFO	persisted to disk	{"blocks": 0, "keys": 4, "headerHeight": 10883, "blockHeight": 0, "took": "9.953545ms"}
2025-06-04T16:32:07.531+0300	INFO	persisted to disk	{"blocks": 0, "keys": 7, "headerHeight": 10889, "blockHeight": 0, "took": "9.699605ms"}
2025-06-04T16:32:08.533+0300	INFO	persisted to disk	{"blocks": 0, "keys": 6, "headerHeight": 10894, "blockHeight": 0, "took": "10.260628ms"}
2025-06-04T16:32:09.534+0300	INFO	persisted to disk	{"blocks": 0, "keys": 14, "headerHeight": 10907, "blockHeight": 0, "took": "11.230486ms"}
2025-06-04T16:32:13.538+0300	INFO	persisted to disk	{"blocks": 0, "keys": 55, "headerHeight": 10961, "blockHeight": 0, "took": "13.061216ms"}
2025-06-04T16:32:16.543+0300	INFO	persisted to disk	{"blocks": 0, "keys": 646, "headerHeight": 11606, "blockHeight": 0, "took": "16.005977ms"}
2025-06-04T16:32:17.539+0300	INFO	persisted to disk	{"blocks": 0, "keys": 275, "headerHeight": 11880, "blockHeight": 0, "took": "10.775464ms"}
2025-06-04T16:32:44.556+0300	INFO	persisted to disk	{"blocks": 0, "keys": 2, "headerHeight": 11881, "blockHeight": 0, "took": "12.848583ms"}
2025-06-04T16:32:51.549+0300	INFO	persisted to disk	{"blocks": 0, "keys": 9, "headerHeight": 11889, "blockHeight": 0, "took": "1.963081ms"}
2025-06-04T16:32:53.560+0300	INFO	persisted to disk	{"blocks": 0, "keys": 408, "headerHeight": 12295, "blockHeight": 0, "took": "12.047345ms"}
2025-06-04T16:32:55.560+0300	INFO	persisted to disk	{"blocks": 0, "keys": 585, "headerHeight": 12879, "blockHeight": 0, "took": "12.031364ms"}
2025-06-04T16:33:12.574+0300	INFO	persisted to disk	{"blocks": 0, "keys": 4, "headerHeight": 12882, "blockHeight": 0, "took": "13.459712ms"}
2025-06-04T16:33:16.567+0300	INFO	persisted to disk	{"blocks": 0, "keys": 62, "headerHeight": 12943, "blockHeight": 0, "took": "4.783998ms"}

For comparison, here are persist logs for P2P-based sync:

2025-06-04T16:46:04.723+0300	INFO	persisted to disk	{"blocks": 1588, "keys": 26910, "headerHeight": 85617, "blockHeight": 85617, "took": "34.590103ms"}
2025-06-04T16:46:05.719+0300	INFO	persisted to disk	{"blocks": 1157, "keys": 19729, "headerHeight": 86775, "blockHeight": 86774, "took": "30.597265ms"}
2025-06-04T16:46:06.697+0300	INFO	persisted to disk	{"blocks": 281, "keys": 5012, "headerHeight": 87055, "blockHeight": 87055, "took": "7.701568ms"}
2025-06-04T16:46:07.734+0300	INFO	persisted to disk	{"blocks": 1669, "keys": 28830, "headerHeight": 88725, "blockHeight": 88724, "took": "44.817839ms"}
2025-06-04T16:46:08.714+0300	INFO	persisted to disk	{"blocks": 1003, "keys": 19424, "headerHeight": 89728, "blockHeight": 89727, "took": "24.784207ms"}
2025-06-04T16:46:09.706+0300	INFO	persisted to disk	{"blocks": 509, "keys": 10184, "headerHeight": 90236, "blockHeight": 90236, "took": "16.603881ms"}
2025-06-04T16:46:10.700+0300	INFO	persisted to disk	{"blocks": 202, "keys": 4060, "headerHeight": 90438, "blockHeight": 90438, "took": "10.420431ms"}
2025-06-04T16:46:11.714+0300	INFO	persisted to disk	{"blocks": 877, "keys": 17036, "headerHeight": 91316, "blockHeight": 91315, "took": "24.197905ms"}
2025-06-04T16:46:12.732+0300	INFO	persisted to disk	{"blocks": 1397, "keys": 25626, "headerHeight": 92712, "blockHeight": 92712, "took": "42.042038ms"}
2025-06-04T16:46:14.696+0300	INFO	persisted to disk	{"blocks": 116, "keys": 2400, "headerHeight": 92828, "blockHeight": 92828, "took": "4.797112ms"}
2025-06-04T16:46:15.739+0300	INFO	persisted to disk	{"blocks": 1563, "keys": 28782, "headerHeight": 94391, "blockHeight": 94391, "took": "48.049744ms"}
2025-06-04T16:46:16.718+0300	INFO	persisted to disk	{"blocks": 735, "keys": 13526, "headerHeight": 95126, "blockHeight": 95126, "took": "26.014979ms"}
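
A rough comparison based on the timestamps and heights in these logs: the NeoFS-based run advances headers from 10883 to 12943 in about 70 s (~30 headers/s), while the P2P-based run advances from height 85617 to 95126 in about 12 s (~790 full blocks/s). So the search-based sync is tens of times slower even though it only processes headers at this stage.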

Describe the solution you'd like

  • play with the number of concurrent SEARCH-ing threads (a sketch of this direction follows this list)
  • try to optimize DefaultSearchBatchSize (ref. 5068d1b)
  • review the code of NeoFSBlockFetcher and check whether anything in its searching model can be improved
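
Below is a minimal sketch of the first direction: issuing several SEARCH requests concurrently over disjoint index ranges instead of one batch at a time. Everything here is illustrative only: searchBatch stands in for a single SEARCHV2 request, and searchRange, batchSize and workers are made-up names, not the actual NeoFSBlockFetcher API.

```go
package main

import (
	"fmt"
	"sync"
)

// searchBatch is a hypothetical stand-in for a single SEARCHV2 request
// covering object indices [from, to); the real fetcher would talk to a
// NeoFS storage node here.
func searchBatch(from, to uint32) ([]string, error) {
	return nil, nil
}

// searchRange splits [from, to) into batchSize-sized chunks and runs them
// on a bounded pool of workers, so several SEARCH requests are in flight
// at once while per-batch results stay in order.
func searchRange(from, to, batchSize uint32, workers int) ([][]string, error) {
	n := (to - from + batchSize - 1) / batchSize
	results := make([][]string, n)
	errs := make([]error, n)

	sem := make(chan struct{}, workers) // caps concurrent requests
	var wg sync.WaitGroup
	for i := uint32(0); i < n; i++ {
		wg.Add(1)
		sem <- struct{}{} // acquire a worker slot
		go func(i uint32) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			lo := from + i*batchSize
			hi := lo + batchSize
			if hi > to {
				hi = to
			}
			results[i], errs[i] = searchBatch(lo, hi)
		}(i)
	}
	wg.Wait()

	for _, err := range errs {
		if err != nil {
			return nil, err
		}
	}
	return results, nil
}

func main() {
	// E.g. search the first 100000 headers in 1000-object batches,
	// keeping at most 8 SEARCH requests in flight.
	batches, err := searchRange(0, 100000, 1000, 8)
	if err != nil {
		fmt.Println("search failed:", err)
		return
	}
	fmt.Println("batches fetched:", len(batches))
}
```

Results are written by batch index, so ordering is preserved even though requests complete out of order, which matters because headers have to be applied sequentially. The right number of workers is exactly what the first item above proposes to measure: too many in-flight SEARCH requests may just move the bottleneck to the storage nodes.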

Describe alternatives you've considered

Leave it as is. However, it is too slow for production usage and will eventually become a problem.

Labels: I4 (No visible changes), S3 (Minimally significant), U2 (Seriously planned), enhancement (Improving existing functionality)
