Parallelize host-side hashing and client-side hashing. #169
Conversation
Codecov Report

Additional details and impacted files

@@ Coverage Diff @@
## main #169 +/- ##
==========================================
+ Coverage 70.65% 70.66% +0.01%
==========================================
Files 39 39
Lines 1707 1708 +1
Branches 382 384 +2
==========================================
+ Hits 1206 1207 +1
Misses 413 413
Partials 88 88
hi, this is from a pc with rp2040.

@jsiverskog without this PR (because it just complicates things), can you profile the code in your setup and see what's taking up all the time? And/or provide a way for me to replicate your setup (I have an rp2040).

@raveslave this didn't happen to speed stuff up for you, did it?

@jsiverskog you tested a bit, right?

The only test I did was to compare this with the main branch, and I didn't see any difference in time. I then made some changes to our code to avoid syncing as often, so the issue is not really that major for us anymore. However, I should be able to profile this for you @BrianPugh if you're still interested, perhaps later today. Is a regular cProfile run enough?

Sure! Anything that helps indicate what the hotspot is for syncing! |
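A plain cProfile run of the kind discussed here is enough to spot the hotspot. A minimal recipe, with `do_sync` as an illustrative stand-in for the real sync call being measured:

```python
# Profile a single call and print the top entries by cumulative time.
import cProfile
import pstats

def do_sync():
    # Stand-in for the slow device-sync call being investigated.
    return sum(i * i for i in range(10_000))

profiler = cProfile.Profile()
profiler.enable()
do_sync()
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(20)
```

Sorting by cumulative time makes the wrapper that dominates wall-clock time (here, whatever sits above the hashing) easy to spot at the top of the listing.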
hi,

so the bottleneck is truly the hashing speed on-device. It would be cool if we could add a check like "if size & mtime match the current on-computer files, then skip hashing and assume that they are the same." My fnv1a32 native module is around 3.5x faster, which could make this fast enough that it feels fine; I'll see if I can make a PR doing that. All of that said, I don't think this PR harms anything, so I think I'll merge it as it could result in speedups under certain situations (but it doesn't make a meaningful difference in the situations described so far).
@raveslave see if this speeds anything up for you.