16:00:04 <fjahr> #startmeeting
16:00:04 <corebot> fjahr: Meeting started at 2026-01-15T16:00+0000
16:00:05 <corebot> fjahr: Current chairs: fjahr
16:00:06 <corebot> fjahr: Useful commands: #action #info #idea #link #topic #motion #vote #close #endmeeting
16:00:07 <corebot> fjahr: See also: https://hcoop-meetbot.readthedocs.io/en/stable/
16:00:08 <corebot> fjahr: Participants should now identify themselves with '#here' or with an alias like '#here FirstLast'
16:00:11 <stickies-v> hi
16:00:13 <hodlinator> hi
16:00:14 <dergoegge> hi
16:00:15 <pinheadmz> yo
16:00:15 <fjahr> #bitcoin-core-dev Meeting: abubakarsadiq achow101 _aj_ ajonas b10c brunoerg cfields darosior dergoegge dzxzg eugenesiegel fanquake fjahr furszy gleb glozow hebasto hodlinator instagibbs janb84 jarolrod jonatack josibake kanzure kevkevin laanwj LarryRuane lightlike l0rinc luke-jr maflcko marcofleon maxedw Murch pinheadmz provoostenator ryanofsky sdaftuar S3RK stickies-v sipa sliv3r__ sr_gi tdb3 theStack TheCharlatan vasild
16:00:15 <fjahr> willcl-ark
16:00:16 <marcofleon> hi
16:00:16 <hebasto> hi
16:00:17 <Novo> HI
16:00:20 <cfields> hi
16:00:22 <andrewtoth> hi
16:00:24 <sedited> hi
16:00:42 <lightlike> Hi
16:00:42 <fjahr> Seems like there is one proposed meeting topic from l0rinc but it wasn't recorded so far: https://gnusha.org/bitcoin-core-dev/proposedmeetingtopics.txt Let me know if I missed any earlier ones!
16:00:45 <yuvicc> hi
16:00:51 <l0rinc> hi
16:00:59 <janb84> hi
16:01:08 <kanzure> hi
16:01:09 <enochazariah3> hi
16:01:21 <fjahr> And if there are any last minute ones to add of course...
16:01:29 <willcl-ark> hi
16:01:34 <furszy> hi
16:01:51 <fjahr> Starting with the WGs!
16:01:53 <fjahr> #topic Fuzzing WG Update (dergoegge)
16:02:10 <dergoegge> Eugene has been working on incremental snapshot support for fuzzamoto (https://github.com/dergoegge/fuzzamoto/pull/103). This will hopefully, in the long run, enable the fuzzer to explore even deeper into the state space.
16:02:11 <dergoegge> I've open sourced the continuous fuzzing infra I've been running for the past years: https://github.com/dergoegge/fuzzor (there are no docs, but let marcofleon or me know if you want to run this as well)
16:02:11 <dergoegge> We've found a bunch of bugs using antithesis (https://github.com/bitcoin/bitcoin/issues?q=is%3Aissue%20author%3Adergoegge%20antithesis) (it's only been a week, i'll have more to say on this later/next coredev but lmk if there are questions).
16:02:49 <hodlinator> cool
16:02:59 <dergoegge> (that's all)
16:03:33 <fjahr> #topic Kernel WG Update (sedited)
16:03:37 <sedited> i'm on mobile, maybe stickies-v has something to add?
16:04:12 <stickies-v> let's keep it for next week
16:04:13 <yancy> hi
16:04:19 <fjahr> #topic Benchmarking WG Update (l0rinc, andrewtoth)
16:04:23 <l0rinc> nothing from my side
16:04:40 <andrewtoth> same, no updates this week
16:04:44 <fjahr> #topic Silent Payments WG Update (Novo)
16:04:53 <Novo> I've been doing some experiments to improve worst case scan
16:04:56 <darosior> hi
16:05:15 <Novo> With randomization of the outputs and removal, we can halve scan time
16:05:34 <Novo> By removal, I mean removal of found outputs from the list
16:05:52 <Novo> Of course the scan time is still significant, 4 minutes on my machine
16:06:03 <furszy> which scanning approach are you using?
16:06:10 <Novo> BIP style
16:06:26 <Novo> I'm also working on benchmarking LabelSet scanning with Block
16:06:30 <Novo> *Blocks
16:06:41 <instagibbs> 4 minutes over what data?
16:06:43 <furszy> cool. It would be nice to know what's the diff there
16:06:48 <Novo> I'm implementing LabelSet on core to use for testing
16:06:59 <furszy> instagibbs: I assume an entire block targeting a single entity
16:07:11 <Novo> 4 minutes on the secp worst-case benchmark with 23250 outputs
16:07:58 <instagibbs> ah
16:08:15 <l0rinc> how long does it take to scan every single block to find a single transaction at the very end?
16:08:23 <furszy> let us know how it goes Novo. We should have updates on the libsecp side soon as well.
16:08:35 <Murch[m]> hi
16:08:38 <fjahr> Novo: Having a benchmark in secp that simulates a block seems easier to me than implementing LabelSet in core but I guess you considered that
16:09:30 <Novo> @l0rinc a single transaction at the end doesn't usually influence scanning time. The time it takes to scan txs without payments will be normal
16:09:56 <l0rinc> no, I mean just as a benchmark, how long would it take to scan every block
16:10:12 <Novo> fjahr I want to test with mainnet and maybe create some special blocks on regtest with script
16:11:03 <Novo> l0rinc I don't have an exact number for you. I can share that later
16:11:26 <l0rinc> I was just interested in the magnitude. 20 minutes, 20 hours...
16:11:46 <Novo> l0rinc more like milliseconds
16:12:15 <abubakarsadiq> hi
16:12:16 <l0rinc> we can read every transaction that quickly - let's discuss it offline
16:12:19 <Murch[m]> milliseconds for every block, or do you mean a single block?
16:12:21 <l0rinc> *can't
16:12:28 <Novo> Alright l0rinc
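A minimal sketch of the tweak Novo describes above: shuffle the transaction's output list once, then erase each matched output so later candidate checks scan a shrinking list. This is not Bitcoin Core or libsecp256k1 code and ignores labels; DeriveCandidate() is a hypothetical stand-in for the BIP352 output-key derivation (B_spend + hash(secret, k)*G), and outputs are plain integers rather than curve points.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <random>
#include <utility>
#include <vector>

// Placeholder for the real (expensive) EC derivation of output candidate k.
static uint64_t DeriveCandidate(uint64_t shared_secret, uint32_t k)
{
    return std::hash<uint64_t>{}(shared_secret ^ (uint64_t{k} << 32));
}

// Scan one transaction: derive candidates for k = 0, 1, ... until one is not found.
static std::vector<uint64_t> ScanTx(uint64_t shared_secret, std::vector<uint64_t> outputs)
{
    // Randomize once so an adversarially ordered output list cannot force the
    // worst-case number of comparisons on every lookup.
    std::mt19937_64 rng{std::random_device{}()};
    std::shuffle(outputs.begin(), outputs.end(), rng);

    std::vector<uint64_t> found;
    for (uint32_t k = 0;; ++k) {
        const uint64_t candidate{DeriveCandidate(shared_secret, k)};
        const auto it = std::find(outputs.begin(), outputs.end(), candidate);
        if (it == outputs.end()) break; // candidate k not paid to us: stop deriving
        found.push_back(*it);
        outputs.erase(it);              // removal: the next candidate scans a smaller list
    }
    return found;
}

int main()
{
    // Two outputs pay us (k = 0 and k = 1), two do not.
    std::vector<uint64_t> outputs{DeriveCandidate(42, 0), 111, DeriveCandidate(42, 1), 222};
    return ScanTx(42, std::move(outputs)).size() == 2 ? 0 : 1;
}
```

The sketch only shows where the two changes sit in the scanning loop; the roughly halved scan time Novo reports is his measurement on the secp worst-case benchmark, not something the sketch demonstrates.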
16:12:34 <fjahr> #topic Net Split WG Update (cfields)
16:12:48 <Novo> Murch[m] I meant per block
16:13:09 <cfields> No net split update this week. But I did manage to track down and work around the gcc bug slowing down the chacha20 impl that l0rinc mentioned here last week: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107563#c13
16:13:20 <cfields> For anyone wondering why I'm spending _so_ much time on chacha20, it's because I'm hoping to prove that writing generic vectorized code like this is a reasonable way forward for us for our low-level bottlenecks. If so, we could be more liberal with adding simd code as it's easier to review than bare asm.
16:13:45 <l0rinc> :+1:
16:14:14 <fjahr> (not seeing the cluster people, feel free to speak up if there is something to share otherwise moving on in a few sec)
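For context on cfields' point about generic vectorized code: below is the ChaCha20 quarter-round from RFC 8439 written as plain portable C++ with a self-check against the RFC's test vector. It is an illustration only, not Bitcoin Core's implementation; the argument above is that code in this style leaves auto-vectorization to the compiler and stays easier to review than hand-written assembly.

```cpp
#include <cassert>
#include <cstdint>

// Rotate a 32-bit word left by n bits (1 <= n <= 31).
inline uint32_t rotl32(uint32_t x, int n) { return (x << n) | (x >> (32 - n)); }

// ChaCha20 quarter-round (RFC 8439, section 2.1): plain integer ops with no
// target-specific intrinsics, so the compiler is free to vectorize when the
// round is applied to several blocks at once.
inline void QuarterRound(uint32_t& a, uint32_t& b, uint32_t& c, uint32_t& d)
{
    a += b; d ^= a; d = rotl32(d, 16);
    c += d; b ^= c; b = rotl32(b, 12);
    a += b; d ^= a; d = rotl32(d, 8);
    c += d; b ^= c; b = rotl32(b, 7);
}

int main()
{
    // Test vector from RFC 8439, section 2.1.1.
    uint32_t a{0x11111111}, b{0x01020304}, c{0x9b8d6f43}, d{0x01234567};
    QuarterRound(a, b, c, d);
    assert(a == 0xea2a92f4 && b == 0xcb1cf8ce && c == 0x4581472e && d == 0x5881c4bb);
    return 0;
}
```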
16:14:37 <fjahr> #topic Constant build failures on forked repos since introduction of `Valgrind fuzz` CI job (l0rinc)
16:14:45 <l0rinc> The Valgrind fuzz job was introduced in e4b04630bcf59ea03c1373777a0167af699f92a4. Since then, ci/test/00_setup_env_native_fuzz_with_valgrind.sh seems to regularly time out on forked repositories (which fall back to GitHub-hosted runners), hitting the 240 minute job timeout, see: l0rinc/bitcoin/actions/runs/21003573850/job/60381011678
16:14:52 <l0rinc> I first tried bumping the timeout to the GitHub Actions maximum, since the run took 4h 36m 50s.
16:14:57 <l0rinc> maflcko suggested that we rather remove it from the CI matrix since it didn't reveal anything (similarly to how the valgrind no-fuzz task is handled) - which was rejected by others, see: #34304
16:14:59 <corebot> https://github.com/bitcoin/bitcoin/issues/34304 | ci: remove `Valgrind fuzz` from CI matrix by l0rinc · Pull Request #34304 · bitcoin/bitcoin · GitHub
16:15:02 <l0rinc> Being able to run CI on forked repositories was working before #33461, it's useful to have a playground available to validate CI passing before we push to bitcoin/bitcoin - to avoid people pushing to bitcoin/bitcoin and experimenting there.
16:15:06 <corebot> https://github.com/bitcoin/bitcoin/issues/33461 | ci: add Valgrind fuzz by fanquake · Pull Request #33461 · bitcoin/bitcoin · GitHub
16:15:09 <l0rinc> It's fine if certain heavy tasks are skipped on GitHub-hosted runners, but we shouldn't get used to red CI on forks, and we should make sure bitcoin/bitcoin isn't the only green build.
16:16:07 <fanquake> I posted my thoughts above, before the meeting, but can copy them back here
16:16:15 <fanquake> It would be good to know more about your expectations here. Should the CI pass when run somewhere that doesn't have the hardware to run it?
16:16:19 <fanquake> If we added 10x the CPU to the main repo tomorrow, and expanded the number of tasks, and added heavier tasks, then more things would also fail in forks, without the extra hardware.
16:16:34 <fanquake> I don't really see how to avoid that, without unnecessarily constraining the main repo (based on some vague/possibly changing external requirements), or providing every contributor with the same hardware.
16:16:41 <ryanofsky> could it be fixed by having the job respect some RUN_SLOW_CHECKS variable that is only set in the main repo?
16:16:53 <fanquake> (remember that GH CI in forks only works because GH gives away CPU for free, for now)
16:17:03 <fanquake> I think we should be running as many tests, as fast as possible, and tuning timeouts/expectations around that. Not optimising for what might be happening outside the repo.
16:17:23 <l0rinc> yes, that was an alternative that willcl-ark also suggested
16:17:49 <sipa> hi
16:18:05 <darosior> fanquake: +1. It seems the fact that CI can run in forked repos is just a byproduct of our CI configuration still being quite light compared to the size/importance of the project.
16:18:10 <l0rinc> this isn't an optimization, I just want to restore the build passing outside of bitcoin/bitcoin
16:18:16 <fanquake> If someone wants to run the CI on a raspberry Pi, when shouldn't be optimising for that
16:18:21 <fanquake> *we
16:20:29 <fjahr> Investigating some RUN_SLOW_CHECKS variable seems like the best option, I can totally see that GH changes their policies in the future and this option goes away
16:21:13 <fjahr> Seems like there is nothing to add to the conversation for now. sipa: Did you want to say something about cluster mempool?
16:21:22 <sipa> cluster mempool update: #34259 got merged; i've marked #34257 as next to review. Since it's more than an implementation, I'm curious about more conceptual opinions on the matter too.
16:21:25 <corebot> https://github.com/bitcoin/bitcoin/issues/34259 | Find minimal chunks in SFL by sipa · Pull Request #34259 · bitcoin/bitcoin · GitHub
16:21:26 <corebot> https://github.com/bitcoin/bitcoin/issues/34257 | txgraph: deterministic optimal transaction order by sipa · Pull Request #34257 · bitcoin/bitcoin · GitHub
16:22:17 <sipa> (and thanks for the review, we're in the home stretch)
16:22:44 <fjahr> Thanks for the updates everyone! Anything else to discuss?
16:23:10 <furszy> just a small add to the SP convo
16:23:18 <furszy> Novo: l0rinc was asking for entire chain scanning. Just to add an answer for it: for now, the focus has been on adversarial scenarios (at least on libsecp), mostly the worst case for a single block. Outcomes of this obviously improve the overall scanning time as well, but the happy path has not been part of this first "let's not introduce a DoS vector" step.
16:23:24 <furszy> that's it.
16:24:04 <l0rinc> thanks for clarifying furszy
16:24:26 <fjahr> #endmeeting