Thanks for taking an interest. I'll add a methodology overview to the README. To your questions:
1. I extracted the algorithms from nostr outbox implementations into a TypeScript library.
2. The TS library contains outbox algorithms from real-world nostr apps (e.g. Amethyst, noStrudel, Nostur).
3. The TS library also contains the general CS algorithms not yet found in nostr apps.
4. The computer-science problem behind "outbox" maps to the weighted set cover problem and its close relative, the maximum coverage problem:
https://en.wikipedia.org/wiki/Maximum_coverage_problem
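To make the mapping concrete, here is a minimal sketch of the standard greedy approximation for maximum coverage applied to relay selection. The function name and the `Map` shape (relay URL → set of followed pubkeys announced there) are assumptions for illustration, not the library's actual API:

```typescript
// Greedy approximation for maximum coverage: repeatedly pick the relay
// that covers the most not-yet-covered follows, up to a budget of k relays.
// Illustrative sketch only -- names and data shapes are assumptions.
function greedyRelaySelection(
  relayToFollows: Map<string, Set<string>>,
  k: number,
): string[] {
  const covered = new Set<string>();
  const chosen: string[] = [];
  for (let i = 0; i < k; i++) {
    let bestRelay: string | null = null;
    let bestGain = 0;
    for (const [relay, follows] of relayToFollows) {
      if (chosen.includes(relay)) continue;
      let gain = 0;
      for (const f of follows) if (!covered.has(f)) gain++;
      if (gain > bestGain) {
        bestGain = gain;
        bestRelay = relay;
      }
    }
    if (bestRelay === null) break; // no remaining relay adds coverage
    chosen.push(bestRelay);
    for (const f of relayToFollows.get(bestRelay)!) covered.add(f);
  }
  return chosen;
}
```

The greedy picker gives a (1 − 1/e) approximation guarantee for maximum coverage, which is why it shows up so often as the baseline in outbox implementations.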
5. The "winning" algorithm for long-term event retrieval, a multi-armed bandit (MAB), is used in real-world applications such as online ads and A/B testing:
https://web.stanford.edu/class/cme241/lecture_slides/Mult…
https://web.mit.edu/6.7950/www/lectures/L15-2022fa.pdf
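For intuition on how a bandit applies here, this is a sketch of epsilon-greedy, one of the simplest MAB strategies: each relay is an "arm", and the reward could be, say, how many sought events a query to that relay returned. The class and method names are hypothetical, not the library's API, and the actual "winning" algorithm may use a different bandit variant:

```typescript
// Epsilon-greedy multi-armed bandit over relays: each relay is an "arm";
// the reward is e.g. the number of sought events a query returned.
// Hypothetical sketch -- not the library's actual implementation.
class RelayBandit {
  private counts = new Map<string, number>();
  private values = new Map<string, number>(); // running mean reward per relay

  constructor(private relays: string[], private epsilon = 0.1) {
    for (const r of relays) {
      this.counts.set(r, 0);
      this.values.set(r, 0);
    }
  }

  // Explore a random relay with probability epsilon; otherwise exploit
  // the relay with the best mean reward so far.
  pick(): string {
    if (Math.random() < this.epsilon) {
      return this.relays[Math.floor(Math.random() * this.relays.length)];
    }
    let best = this.relays[0];
    for (const r of this.relays) {
      if (this.values.get(r)! > this.values.get(best)!) best = r;
    }
    return best;
  }

  // Incrementally update the running mean after observing a reward.
  update(relay: string, reward: number): void {
    const n = this.counts.get(relay)! + 1;
    this.counts.set(relay, n);
    const old = this.values.get(relay)!;
    this.values.set(relay, old + (reward - old) / n);
  }
}
```

The appeal for outbox is that the bandit keeps adapting as relays go down, prune events, or change gating behavior, instead of committing to a one-shot static assignment.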
6. I answer two questions (plus a follow-up):
Given an npub (e.g. fiatjaf, who has ~400 follows), run a dozen or so algos through two phases:
A) "Assignment coverage", phase 1 (academic, no network): if every relay is online and no events are deleted, how many of your follows does this algorithm reach?
B) "Event recall", phase 2 (real world, pull events from relays): under realistic conditions of relays being down, events being pruned or deleted, and relays gating event retrieval (via AUTH or otherwise), what are the numbers?
C) Given phase 2 above, what happens if we first triage relays via NIP-66 liveness monitor events, relays, and services like nostr.watch by
@e771af0b…8e8ed66f
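The two metrics in A and B reduce to simple ratios; a sketch of how they might be computed (the function names and inputs are illustrative, not the benchmark's actual code):

```typescript
// Phase 1 "assignment coverage": of all follows, what fraction is reachable
// through at least one selected relay, assuming every relay is online?
// Illustrative definition only.
function assignmentCoverage(
  follows: Set<string>,
  reached: Set<string>,
): number {
  let hit = 0;
  for (const f of follows) if (reached.has(f)) hit++;
  return follows.size === 0 ? 0 : hit / follows.size;
}

// Phase 2 "event recall": of the events actually sought, what fraction did
// the selected relays return under real-world conditions?
function eventRecall(sought: number, retrieved: number): number {
  return sought === 0 ? 0 : retrieved / sought;
}
```

Phase 1 measures only the assignment an algorithm produces; phase 2 measures what you actually get back, which is where downtime, pruning, and gating show up.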
7. I have not gone into a dozen apps to measure outbox results in-app; all of this is done in Deno/TypeScript.
8. The real-world results differ vastly from the academic ones.
I'm measuring NIP-66 in more detail. The hypotheses: (1) improved coverage, and (2) a drastically reduced number of calls to relays to find the information being sought.
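The triage step in C can be sketched as a simple pre-filter: before querying, drop relays without a recent liveness signal. This assumes the NIP-66 monitor events have already been reduced to a map of relay URL → timestamp of the latest observation; the kinds and tag layout of real NIP-66 events are omitted here:

```typescript
// Triage sketch: keep only relays with a recent NIP-66 liveness observation.
// Assumes lastSeenOnline maps relay URL -> unix timestamp (seconds) of its
// latest monitor event; real NIP-66 event parsing is out of scope here.
function triageRelays(
  candidates: string[],
  lastSeenOnline: Map<string, number>,
  nowSec: number,
  maxAgeSec = 3600,
): string[] {
  return candidates.filter((url) => {
    const seen = lastSeenOnline.get(url);
    return seen !== undefined && nowSec - seen <= maxAgeSec;
  });
}
```

If the hypotheses hold, this filter is where both effects come from: dead relays stop consuming query budget, and the selection algorithms run over a set of relays that can actually answer.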