16:00:10 <stickies-v> #startmeeting 
16:00:10 <corebot> stickies-v: Meeting started at 2025-12-11T16:00+0000
16:00:11 <corebot> stickies-v: Current chairs: stickies-v
16:00:12 <corebot> stickies-v: Useful commands: #action #info #idea #link #topic #motion #vote #close #endmeeting
16:00:13 <corebot> stickies-v: See also: https://hcoop-meetbot.readthedocs.io/en/stable/
16:00:14 <corebot> stickies-v: Participants should now identify themselves with '#here' or with an alias like '#here FirstLast'
16:00:20 <stickies-v> #bitcoin-core-dev Meeting: abubakarsadiq achow101 _aj_ ajonas b10c brunoerg cfields darosior dergoegge dzxzg eugenesiegel fanquake fjahr furszy gleb glozow hebasto hodlinator instagibbs janb84 jarolrod jonatack josibake kanzure kevkevin laanwj LarryRuane lightlike l0rinc luke-jr maflcko marcofleon maxedw Murch pinheadmz provoostenator ryanofsky sdaftuar S3RK stickies-v sipa sr_gi tdb3 theStack TheCharlatan vasild
16:00:20 <stickies-v> willcl-ark
16:00:23 <l0rinc> hi
16:00:25 <hodlinator> hi
16:00:26 <cfields> hi
16:00:31 <kevkevin> hi
16:00:36 <andrewtoth> hi
16:00:37 <pinheadmz> hi
16:00:42 <stringintech> hi
16:00:49 <darosior> hi
16:00:50 <yancy> hi
16:01:19 <stickies-v> There are no pre-proposed meeting topics this week. Any last minute ones to add?
16:01:20 <Novo> hi
16:01:26 <eugenesiegel> hi
16:01:37 <sipa> hi
16:01:50 <pinheadmz> #proposedmeetingtopic https://github.com/libevent/libevent/issues/1812
16:01:50 <corebot> pinheadmz: Unknown command: #proposedmeetingtopic
16:01:55 <pinheadmz> #lastminute 
16:01:55 <corebot> pinheadmz: Unknown command: #lastminute
16:02:03 <stickies-v> thx, will add it to the list!
16:02:14 <stickies-v> #topic Fuzzing WG Update (dergoegge)
16:02:21 <b10c> hi
16:02:33 <dergoegge> no update, but i'll commit to an update next week
16:02:49 <stickies-v> high hopes
16:02:54 <stickies-v> #topic Kernel WG Update (sedited)
16:03:40 <jonatack> hi
16:04:29 <stickies-v> not sure if sedited is here, so i'll do something impromptu
16:04:56 <stickies-v> stringintech has been doing a lot of work on extending the language bindings test suite, latest PR in https://github.com/stringintech/kernel-bindings-tests/pull/11
16:05:15 <stickies-v> i've been doing a lot of work on separating node and kernel logging, will open an RFC issue by tomorrow
16:06:19 <stickies-v> bunch of open PRs to look at too, for overview see https://github.com/orgs/bitcoin/projects/3
16:06:39 <stickies-v> #topic Benchmarking WG Update (l0rinc, andrewtoth)
16:06:45 <l0rinc> #30442 and #33602 were just merged - thanks for the reviews!
16:06:50 <corebot> l0rinc: Error: That URL raised <HTTP Error 503: Service Unavailable>
16:06:52 <corebot> l0rinc: Error: That URL raised <HTTP Error 503: Service Unavailable>
16:07:02 <l0rinc> #30442 and #33602
16:07:06 <corebot> https://github.com/bitcoin/bitcoin/issues/30442 | precalculate SipHash constant salt XORs by l0rinc · Pull Request #30442 · bitcoin/bitcoin · GitHub
16:07:15 <corebot> https://github.com/bitcoin/bitcoin/issues/33602 | [IBD] coins: reduce lookups in dbcache layer propagation by l0rinc · Pull Request #33602 · bitcoin/bitcoin · GitHub
16:07:18 <stickies-v> looks like corebot needs even more of a tuneup than ibd
16:07:23 <l0rinc> andrewtoth rebased #31132 after the above changes and added new test cases. We also had our first Umbrel measurement: a full sync finished in 6.5 hours (31% faster than master) B-)
16:07:28 <corebot> l0rinc: Error: That URL raised <HTTP Error 503: Service Unavailable>
16:07:41 <l0rinc> stickies-v lol, indeed
16:07:41 <cfields> Nice!
16:07:50 <l0rinc> We are also making good progress reviewing #33657. Roman is very responsive, and once the remaining Windows test inconsistencies are fixed, I expect it to be ready for review (and merge) soon.
16:07:50 <kevkevin> #31132 
16:07:57 <corebot> l0rinc: Error: That URL raised <Connection timed out.>
16:07:59 <dergoegge> what are the specs on the umbrel machine?
16:08:04 <corebot> https://github.com/bitcoin/bitcoin/issues/31132 | validation: fetch block inputs on parallel threads >40% faster IBD by andrewtoth · Pull Request #31132 · bitcoin/bitcoin · GitHub
16:08:29 <l0rinc> dergoegge `reindex-chainstate | 923319 blocks | dbcache 450 | umbrel | x86_64 | Intel(R) N150 | 4 cores | 15Gi RAM | ext4 | SSD`
16:08:35 <fjahr> hi
16:08:44 <l0rinc> Roman's PR is #33657
16:08:50 <corebot> l0rinc: Error: That URL raised <Connection timed out.>
16:09:02 <l0rinc> Rob pushed a pruned prototype of SwiftSync in #34004. My understanding is that it is meant as a first iteration, not a final implementation; high-level reviews and benchmarks would be interesting.
16:09:08 <corebot> https://github.com/bitcoin/bitcoin/issues/34004 | Implementation of SwiftSync by rustaceanrob · Pull Request #34004 · bitcoin/bitcoin · GitHub
16:09:18 <l0rinc> Now that we have stable access to a few benchmarking servers, we noticed some memory anomalies: a -reindex-chainstate with the default 450 MB -dbcache was running 4x slower on an rpi5 with 8 GB of memory than on an rpi5 with 16 GB of memory.
16:09:28 <l0rinc> We also observed that on the 16 GB system, runs with -dbcache values of 4 GB and higher were a lot slower than with -dbcache of 3 GB, and that an rpi5 with 16 GB of memory ran out of memory with -dbcache of 10 GB.
16:09:36 <l0rinc> We are still investigating these, but so far it seems:
16:09:44 <l0rinc> * heavy memory fragmentation overhead may play a role in the premature OOMs. We are benchmarking lowering `M_ARENA_MAX` on 64-bit systems (similarly to the existing 32-bit architectures) to see if that helps in lower-memory environments (for example, an rpi4 with 2 GB of memory, which currently IBDs in about 3 weeks)
16:09:56 <l0rinc> * a high -dbcache value may crowd out the OS page cache (blocks + UTXO set), especially when the UTXO set size approaches the memory limit, which could be the cause of the reported sudden slowdown around block 800k (more reads from disk when the files cannot be cached in memory)
16:09:57 <stickies-v> not sure if rob is here, but might be worth opening an issue for conceptual discussion for swiftsync?
16:10:11 <Novo> I'll let Rob know
16:10:14 <l0rinc> * SSDs (and HDDs with certain filesystems) can experience severe performance degradation when they are nearly full, which, paired with more frequent disk access, dramatically slows down validation for existing 1 TB drives
16:10:27 <l0rinc> The good news is that #31132 seems to result in more than a 50% speedup on these pathological low-end devices.
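[A hedged sketch of the glibc arena experiment l0rinc describes above: capping the number of malloc arenas on 64-bit systems, as Bitcoin Core already does for 32-bit glibc builds. The function name and the unconditional call are illustrative only, not the actual patch being benchmarked.]

```cpp
#if defined(__GLIBC__)
#include <malloc.h>
#endif

// Cap the number of glibc malloc arenas. By default glibc may create several
// arenas per core, each with its own free lists, which can increase
// fragmentation and peak memory usage on small machines (e.g. an rpi4/rpi5).
void MaybeLimitMallocArenas()
{
#if defined(__GLIBC__)
    mallopt(M_ARENA_MAX, 1);
#endif
}
```

The same cap can also be tried without recompiling by setting the MALLOC_ARENA_MAX environment variable before starting the node, which is convenient for the kind of long-running benchmarks mentioned here.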
16:10:35 <dergoegge> do we also have bench results for higher end machines?
16:10:36 <corebot> l0rinc: Error: That URL raised <Connection timed out.>
16:11:02 <l0rinc> yes, on my M4 max reindex-chainstate takes 1.5 hours with #31132 B-)
16:11:07 <corebot> https://github.com/bitcoin/bitcoin/issues/31132 | validation: fetch block inputs on parallel threads >40% faster IBD by andrewtoth · Pull Request #31132 · bitcoin/bitcoin · GitHub
16:11:08 <instagibbs> charts seem to have M4 max
16:11:21 <andrewtoth> dergoegge: https://github.com/bitcoin/bitcoin/pull/31132#pullrequestreview-3515011880
16:11:27 <dergoegge> 🚀
16:12:00 <darosior> l0rinc: re "4x slower on an rpi5 with 8 GB of memory than on an rpi5 with 16 GB of memory" -> probably LevelDB caching?
16:12:07 <l0rinc> and these are against master, which already contains several optimizations
16:12:57 <l0rinc> darosior yes, I think the frequently accessed files being cached by the OS end up being the LevelDB files
16:13:00 <cfields> l0rinc: if fragmentation turns out to be significant, you may want to experiment with other malloc impls (like tcmalloc) as well
16:13:23 <sipa> darosior: not LevelDB caching, which is limited - but OS-level caching possibly
16:13:26 <l0rinc> cfields: thanks, they're on my radar, but these measurements take very long
16:13:48 <cfields> 👍
16:14:04 <l0rinc> thank you for the comments, that's it from me
16:14:14 <darosior> sipa: is it? IIRC it's 4096 * 32 MiB by default
16:14:33 <stickies-v> thanks for the comprehensive overview again!
16:14:41 <stickies-v> #topic Silent Payments WG Update (Novo)
16:15:00 <darosior> sipa: (by our defaults, not LevelDB's)
16:15:33 <Novo> No changes from last week. I plan to do some tests with other silent payment wallets. I will give more updates by next week
16:15:37 <sipa> darosior: 128 GiB? we'd OOM every system if that was true
16:15:46 <darosior> sipa: see https://github.com/bitcoin/bitcoin/issues/33351
16:15:59 <stickies-v> 👍
16:16:18 <sipa> darosior: that's mmap, which is nominally in user space, but it's really the OS cache being exposed to the application
16:16:20 <vasild> hi
16:16:27 <Murch[m]> hi
16:16:32 <sipa> darosior: leveldb also has its own internal cache, which we set to some tiny value iirc
16:17:00 <darosior> sipa: oh ok, my bad. I was referring to LevelDB's mmapping when loosely saying "caching".
16:17:13 <l0rinc> darosior: #31644 indicates we're not using those caches: "readoptions.fill_cache=false"
16:17:18 <corebot> l0rinc: Error: That URL raised <HTTP Error 503: Service Unavailable>
16:17:30 <sipa> go home corebot, you're drunk
16:17:42 <darosior> :)
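[The `readoptions.fill_cache=false` quoted above is a standard LevelDB read option; a minimal illustration of that knob is below. The values are illustrative and not necessarily Bitcoin Core's exact dbwrapper settings.]

```cpp
#include <leveldb/options.h>

// With fill_cache set to false, reads do not populate LevelDB's internal
// block cache, so any read caching has to come from the OS page cache (or
// from LevelDB's mmap'ed table files) instead.
leveldb::ReadOptions MakeReadOptions()
{
    leveldb::ReadOptions readoptions;
    readoptions.verify_checksums = true;
    readoptions.fill_cache = false;
    return readoptions;
}
```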
16:18:05 <sipa> Novo: sorry, continue, it's your topic
16:18:15 <fanquake> Novo: my understanding of the latest discussion in https://github.com/bitcoin-core/secp256k1/pull/1765 is that the BIP might need more clarifications?
16:18:19 <kevkevin> I'm core bot now https://github.com/bitcoin/bitcoin/issues/31644
16:18:27 <l0rinc> :)))
16:19:49 <stickies-v> let's move on to the next topic, we're slowing down a bit
16:19:53 <stickies-v> #topic Cluster Mempool WG Update (sdaftuar, sipa)
16:20:01 <sipa> I have split off #34023 (which contains just optimizations and follow-ups) from #32545 (which switches cluster linearization to the SFL algorithm). Reviews on the PR have focused mainly on the big algorithm description comment added in the beginning, but I hope to see code review soon. There is also the much simpler #33335 still open.
16:20:07 <corebot> https://github.com/bitcoin/bitcoin/issues/34023 | Optimized SFL cluster linearization by sipa · Pull Request #34023 · bitcoin/bitcoin · GitHub
16:20:13 <corebot> https://github.com/bitcoin/bitcoin/issues/32545 | Replace cluster linearization algorithm with SFL by sipa · Pull Request #32545 · bitcoin/bitcoin · GitHub
16:20:19 <corebot> https://github.com/bitcoin/bitcoin/issues/33335 | txgraph: randomize order of same-feerate distinct-cluster transactions by sipa · Pull Request #33335 · bitcoin/bitcoin · GitHub
16:20:20 <sipa> good bot
16:20:41 <sipa> that's it from me, if no questions
16:21:54 <stickies-v> #topic Stratum v2 WG Update (sjors)
16:23:08 <stickies-v> #topic Net Split WG Update (cfields)
16:24:15 <cfields> Been cleaning up my chacha20 simd impl this week (PR incoming), so no net split update.
16:24:47 <dergoegge> any conclusions from the discussion with aj?
16:25:47 <cfields> Not yet. I want to get a little further along and get some other opinions. I think we just pretty fundamentally disagree about shared memory and we'll have to pick one direction or the other...
16:25:55 <l0rinc> cfields: were you able to generalize the simd instructions (i.e. convince the compilers via hints)?
16:26:10 <cfields> But that decision doesn't have to be made yet, it's a good bit down the road.
16:26:25 <dergoegge> 👍
16:26:41 <eugenesiegel> quick question
16:27:01 <cfields> l0rinc: yes, it's generic. Still beats the linux kernel's hand-written asm though :)
16:27:17 <l0rinc> sweet, looking forward to reviewing it
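[As a hedged illustration of the "generic code the compiler can vectorize" approach discussed above: the standard ChaCha20 quarter round (RFC 8439) in plain C++ with no intrinsics. Compilers can auto-vectorize it when several independent blocks are processed side by side. This is not cfields' actual implementation.]

```cpp
#include <cstdint>

static inline uint32_t Rotl32(uint32_t x, int n)
{
    return (x << n) | (x >> (32 - n));
}

// One ChaCha20 quarter round (RFC 8439). Because it uses only plain 32-bit
// adds, xors and rotates, the compiler can vectorize independent quarter
// rounds across multiple blocks without hand-written SIMD.
static inline void QuarterRound(uint32_t& a, uint32_t& b, uint32_t& c, uint32_t& d)
{
    a += b; d ^= a; d = Rotl32(d, 16);
    c += d; b ^= c; b = Rotl32(b, 12);
    a += b; d ^= a; d = Rotl32(d, 8);
    c += d; b ^= c; b = Rotl32(b, 7);
}
```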
16:27:56 <eugenesiegel> I think it was brought up that process isolation was a good thing in the presence of an attacker who could take down our node. What happens if a DoS is found that takes down the process handling the message processing from peers? Does that just mean we get eclipsed since we can't process any more messages?
16:28:35 <darosior> eugenesiegel: i think in this case the entire node would have to be stopped to let the user know. But that seems tangential to Cory's work
16:28:58 <eugenesiegel> I thought this was one of the motivations for using shared memory? Or am I mistaken?
16:29:01 <darosior> As far as i understand his separation of processes was merely a demonstration, not an actual suggestion
16:29:26 <sipa> i don't think process separation between net and net processing isn't practically interesting - it's a nice demo of clean separation that it's possible, but i don't think it's useful to actually adopt
16:29:29 <cfields> eugenesiegel: multi-process is a bit out of scope for the net split project. We'd still need to discuss whether or not splitting p2p into a separate process even makes sense. It's just something that becomes possible.
16:29:47 <cfields> Hah, what sipa said.
16:29:54 <eugenesiegel> Ah, ok. Makes sense
16:29:56 <sipa> (replace one of my "isn't"s by "is")
16:30:10 <stickies-v> ooh, a puzzle
16:30:30 <stickies-v> #topic libevent organization members team needs update (https://github.com/libevent/libevent/issues/1812) (pinheadmz)
16:30:41 <darosior> xz much?
16:30:45 <cfields> eugenesiegel: as for shared memory, there are 2 models for communicating between net and net_processing: shared memory (e.g. via CNode, which both sides access) as we have now, or generic handles on both sides.
16:31:20 <cfields> I prefer the latter as it's a cleaner separation, aj prefers the former as it's more performant.
16:31:53 <sipa> cfields: i've been casually following the discussion, but haven't formed an opinion myself
16:32:16 <dergoegge> i'd also lean towards cleaner separation
16:32:39 <dergoegge> don't think the performance difference matters much
16:32:59 <stickies-v> pinheadmz: are you here? otherwise we'll park it for next week
16:33:06 <vasild> what is "generic handles on both sides"?
16:33:14 <cfields> I think it's worth experimenting with both. Imo it just comes down to what the real-world performance penalty is.
16:33:59 <darosior> stickies-v: i think he was here, i've also got something to ask about, maybe we can do that until pinheadmz comes back?
16:34:06 <cfields> vasild: referencing NodeIds and performing lookups in CConnman/PeerMan
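[A hedged, self-contained sketch of the two coupling models being contrasted here; the types and member names are invented for illustration and are not the real CNode/CConnman classes.]

```cpp
#include <cstdint>
#include <functional>
#include <map>

using NodeId = int64_t;

struct Node {
    int64_t last_recv{0};
};

// Model A: shared memory -- message processing holds a reference to the
// connection object and mutates it directly.
void OnMessageShared(Node& node, int64_t now)
{
    node.last_recv = now;
}

// Model B: generic handles -- message processing only knows a NodeId and
// asks the connection manager to perform the update on its behalf.
class ConnManager
{
    std::map<NodeId, Node> m_nodes;

public:
    bool ForNode(NodeId id, const std::function<void(Node&)>& fn)
    {
        auto it = m_nodes.find(id);
        if (it == m_nodes.end()) return false;
        fn(it->second);
        return true;
    }
};

void OnMessageHandle(NodeId id, ConnManager& connman, int64_t now)
{
    connman.ForNode(id, [&](Node& node) { node.last_recv = now; });
}
```

The handle model pays a lookup per access but keeps the processing layer free of direct references into the net layer's data structures, which is the cleanliness-versus-performance trade-off discussed above.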
16:34:12 <pinheadmz> Connection issues. Park it
16:34:28 <stickies-v> darosior: yeah what's your topic?
16:34:35 <darosior> EOM / EOL https://github.com/bitcoin-core/bitcoincore.org/pull/1200
16:34:38 <vasild> cfields: ok, I see, thanks for the clarification
16:34:47 <stickies-v> cool. i'll let the net split convo finish and then move on
16:35:28 <sipa> from what i understand from the discussion, it's not really about performance, but about whether introducing the handles is actually considered cleaner or not
16:37:11 <stickies-v> alright, looks like we're done with net split?
16:37:39 <cfields> Hmm, as it's pretty fundamental (though nothing has to be decided just yet), I suppose it's worth having a discussion at some point. I'll tee that up for a future wg call.
16:37:47 <cfields> stickies-v: 👍
16:37:50 <stickies-v> #topic Update the release lifecycle page to reflect current practices (https://github.com/bitcoin-core/bitcoincore.org/pull/1200) (darosior)
16:37:54 <darosior> I'd like to know if anyone else has an opinion about dropping the EOM status from our releases. We do not observe it, and as far as i can tell we never really have. This PR also updates a bunch of details about the release lifecycle and in general what users can expect from releases, to reflect current practices.
16:37:54 <darosior> https://github.com/bitcoin-core/bitcoincore.org/pull/1200
16:38:48 <sipa> concept ack on simplifying things, but i've refrained from reviewing it because thinking about all the off-by-ones involved in these things hurts my brain
16:38:48 <stickies-v> sorry, i was going to reply on the pr but lost track of it. my first thought is that i agree the distinction is (at least currently) unnecessarily complicating things, so concept ack (but i'll do it on the pr too)
16:39:47 <darosior> sipa: :) this one does not involve a shift, merely removing one column in the table and clarifying things in the text
16:40:03 <Murch[m]> I commented, and basically my point is: is something end-of-life when it got its last update, or when another update would be due and it won’t get it?
16:40:26 <sipa> Murch[m]: that's the off-by-one i'm talking about
16:40:49 <darosior> A major branch N becomes end of life when the version N+3 is released
16:41:19 <Murch[m]> Seems fine with me
16:41:25 <darosior> But this is not changed by my PR, my PR just removed the unobserved "end of maintenance" middle state we had at N+2
16:41:41 <Murch[m]> But would you call the branch “maintained” between N+2 release and N+3 release?
16:41:58 <darosior> We already maintain all branches until N+3 is released
16:42:16 <darosior> So, yes
16:42:36 <Murch[m]> How so? I don’t think there are any backports going to N after N.x is released with the N+2 release.
16:42:50 <Murch[m]> What does it mean to be “maintained”, then?
16:43:19 <darosior> There may be if necessary
16:43:37 <Murch[m]> If that’s so, fine. I just don’t think I’ve ever seen it
16:43:46 <darosior> cc fanquake since you usually take care of backports
16:43:58 <darosior> Murch[m]: really? That seems surprising to me
16:44:14 <darosior> We did backport stuff to 27 before the 30 release for instance
16:44:40 <Murch[m]> Okay, then I stand corrected
16:45:02 <fanquake> I think it's more a coincidence that point releases have started to "sync up" around major releases
16:45:13 <darosior> For instance #33564
16:45:14 <sipa> Murch[m]: there was no 27.x release with those backports, though; it seems "maintained" is about the branch, not necessarily a guarantee of point releases for them
16:45:20 <corebot> https://github.com/bitcoin/bitcoin/issues/33564 | [27.x] Fix Qt download URLs by fanquake · Pull Request #33564 · bitcoin/bitcoin · GitHub
16:45:33 <sipa> also, backports are already a judgment call in any case between complexity of backport and severity
16:45:40 <fanquake> Yea, my next Q was going to be, if we push backports, but don't cut a release, is that still maintained? I'd say yes
16:45:57 <fanquake> And we are actively doing that, ad-hoc
16:46:09 <darosior> Yes and the release page does mention that as branches age the threshold for something to be backported gets higher
16:46:20 <sipa> ok i should just review it
16:46:35 <stickies-v> fanquake: the only reason for not releasing it being that the backports aren't important enough? if so, yes i agree
16:46:53 <fanquake> I agree that getting rid of a 3rd state of maintenance is good
16:46:55 <Murch[m]> sipa: Oh okay
16:46:57 <Murch[m]> fanquake: Yeah, I’d agree
16:47:02 <Murch[m]> If there is still work on the branch, I’d call it maintained
16:47:18 <Murch[m]> darosior: That covers my questions then!
16:47:22 <Murch[m]> Yeah, I like the simplification
16:47:24 <fanquake> stickies-v: Yea. Sometimes things are backported that aren't user-facing
16:47:37 <fanquake> i.e. backporting CI fixes in case we do need the CI to run later
16:47:39 <darosior> The release page was also claiming a bunch of stuff that was imprecise at best and borderline misleading
16:47:57 <Murch[m]> I’ll review again, then
16:48:06 <darosior> Alright, that's it from me then. Thanks!
16:48:17 <stickies-v> Anything else to discuss?
16:48:32 <Murch[m]> Maybe just briefly
16:48:33 <sipa> you added a topic yourself, stickies-v?
16:48:55 <sipa> oh, that was pinheadmz's topic, i see
16:48:55 <Murch[m]> I made a patch to BIP 3 to address the review since I motioned to activate. Would love me some review, especially by the people who chimed in.
16:49:08 <stickies-v> yeah but we'll park pinheadmz's for next week as per his request
16:49:15 <sipa> Murch[m]: cool, will do
16:49:28 <Murch[m]> Thanks, sipa
16:49:29 <darosior> Murch[m]: will take a look
16:49:39 <Murch[m]> Okay thanks!
16:50:04 <stickies-v> #endmeeting