Rocksolid Light

Welcome to RetroBBS


Subject                                           Author
* Re: Meta: a usenet server just for sci.math     Ross Finlayson
`* Re: Meta: a usenet server just for sci.math    Ross Finlayson
 `* Re: Meta: a usenet server just for sci.math   Ross Finlayson
  `* Re: Meta: a usenet server just for sci.math  Ross Finlayson
   `* Re: Meta: a usenet server just for sci.math Ross Finlayson
    `- Re: Meta: a usenet server just for sci.math Ross Finlayson

Re: Meta: a usenet server just for sci.math

https://www.rocksolidbbs.com/computers/article-flat.php?id=3028&group=news.software.nntp#3028
NNTP-Posting-Date: Thu, 28 Mar 2024 04:05:43 +0000
Subject: Re: Meta: a usenet server just for sci.math
Newsgroups: sci.math,news.software.nntp
References: <8f7c0783-39dd-4f48-99bf-f1cf53b17dd9@googlegroups.com> <1b50e6d3-2e7c-41eb-9324-e91925024f90o@googlegroups.com> <31663ae2-a6a2-44b8-9aa3-9f0d16d24d79o@googlegroups.com> <6eedc16b-2c82-4aaf-a338-92aba2360ba2n@googlegroups.com> <51605ff6-f18f-48c5-8e83-0397632556aen@googlegroups.com> <b0c4589a-f222-457e-95b3-437c0721c2a2n@googlegroups.com> <5a48e832-3573-4c33-b9cb-d112f01b733bn@googlegroups.com> <8wWdnVqZk54j3Fj4nZ2dnZfqnPGdnZ2d@giganews.com> <MY-cnRuWkPoIhFr4nZ2dnZfqnPSdnZ2d@giganews.com> <NqqdnbEz-KTJTlr4nZ2dnZfqnPudnZ2d@giganews.com> <FqOcnYWdRfEI2lT4nZ2dnZfqn_SdnZ2d@giganews.com> <NVudnVAqkJ0Sk1D4nZ2dnZfqn_idnZ2d@giganews.com> <RuKdnfj4NM2rlkz4nZ2dnZfqn_qdnZ2d@giganews.com> <HfCdnROSvfir-E_4nZ2dnZfqnPWdnZ2d@giganews.com> <FLicnRkOg7SrWU_4nZ2dnZfqnPadnZ2d@giganews.com> <v7ecnUsYY7bW40j4nZ2dnZfqnPudnZ2d@giganews.com> <q7-dnR2O9OsAAH74nZ2dnZfqnPhg4p2d@giganews.com> <Hp-cnUAirtFtx2P4nZ2dnZfqnPednZ2d@giganews.com> <MDKdnRJpQ_Q87Z77nZ2dnZfqn_idnZ2d@giganews.com>
From: ross.a.finlayson@gmail.com (Ross Finlayson)
Date: Wed, 27 Mar 2024 21:05:44 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <MDKdnRJpQ_Q87Z77nZ2dnZfqn_idnZ2d@giganews.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Message-ID: <-bOdnWSSIMUKcZn7nZ2dnZfqnPednZ2d@giganews.com>
Lines: 384
 by: Ross Finlayson - Thu, 28 Mar 2024 04:05 UTC

On 03/26/2024 06:04 PM, Ross Finlayson wrote:
> arithmetic hash searches
>
> take a hashcode, split it up
>
> invert each arithmetically, find intersection in 64 bits
>
> fill in those
>
> detect misses when the bits don't intersect the search
>
> when all hits, then "refine", next double range,
>
> compose those naturally by union
>
> when definite misses excluded then go find matching partition
>
> arithmetic partition hash
>
> So, the idea is that each message ID has a uniform
> hash applied, which fills a range of so many bits.
>
> Then its hash is split into smaller chunks, the same 1/2/3/4
> of the paths, and each of those is read as a fixed-point
> fraction of its word width, plus one.
>
> Then, sort of pyramidally, is that in increasing words, or doubling,
> is that a bunch of those together, mark those words,
> uniformly in the range.
>
> For example 0b00001111, would mark 0b00001000, then
> 0b0000000010000000, and so on, for detecting whether
> the hash code's integer value, is in the range 15/16 - 16/16.
>
> The idea is that the ranges this way compose with binary OR,
> so that a given integer can be detected to be out of the
> range when its bit is zero, and otherwise it may or
> may not be in the range.
>
> 0b00001111 number N1
> 0b00001000 range R1
> 0b00000111 number N2
> 0b00000100 range R2
>
> 0b00001100 union range UR = R1 | R2 | ....
>
>
> // definite miss when N's range mark RN shares no bit with the union UR
> missing(N) {
>   return ((UR & RN) == 0);
> }
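The worked N/R pairs above can be reproduced in a small sketch, reading each chunk's mark as the single bit at its highest set position; the names and the language are mine, a sketch rather than the author's implementation:

```python
# Sketch reproducing the N/R pairs in the post: each nonzero hash
# chunk N marks the bit at its highest set position, partition stamps
# compose by binary OR, and a zero bit in the union is a definite miss.

def range_mark(n: int) -> int:
    """Mark for a nonzero chunk: one bit at the highest set position."""
    assert n > 0
    return 1 << (n.bit_length() - 1)

def union_range(chunks) -> int:
    """Compose the marks of all chunks in a partition with binary OR."""
    ur = 0
    for n in chunks:
        ur |= range_mark(n)
    return ur

def missing(n: int, ur: int) -> bool:
    """Definite miss when the chunk's mark does not intersect the union."""
    return (ur & range_mark(n)) == 0
```

With the post's values, `range_mark(0b00001111)` gives `0b00001000` and `range_mark(0b00000111)` gives `0b00000100`, so the union range is `0b00001100`; a zero intersection is a definite miss, a nonzero one only a possible hit.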
>
>
> This sort of helps where, in a usual hash map, determining
> that an item doesn't exist is the worst case, while
> finding an item that does exist is log 2 n, with its value
> usually associated alongside, besides.
>
> Then, when there are lots of partitions, and they're about
> uniform, it's expected the message ID to be found in only
> one of the partitions, is that the partitions can be organized
> according to their axes of partitions, composing the ranges
> together, then that search walks down those, until it's either
> a definite miss, or an ambiguous hit, then to search among
> those.
>
> It seems then for each partition (group x date), then those
> can be composed together (group x month, group x year,
> groups x year, all), so that looking to find the group x date
> where a message ID is, results that it's a constant-time
> operation to check each of those, and the data structure
> is not very large, with regards to computing the integers'
> offset in each larger range, either giving up when it's
> an unambiguous miss or fully searching when it's an
> ambiguous hit.
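A hypothetical sketch of that composition: leaf partitions (group x date) carry range stamps, inner levels OR their children's stamps together, and lookup walks down, dropping whole subtrees on a zero intersection. All names here are assumptions, and membership in a Python list stands in for the dig-up into a partition:

```python
# Partition pyramid sketch: inner nodes hold the OR of everything
# below them, so a zero intersection excludes the whole subtree in
# one check; only ambiguous hits descend toward the leaf partitions.

class Node:
    def __init__(self, stamp=0, children=None, leaf_ids=None):
        self.stamp = stamp            # OR of all stamps at or below this node
        self.children = children or []
        self.leaf_ids = leaf_ids      # message-id hashes, leaf partitions only

def build(children):
    """Inner node: compose the children's stamps with binary OR."""
    s = 0
    for c in children:
        s |= c.stamp
    return Node(s, children)

def lookup(node, mark, h):
    """Descend, pruning any subtree whose stamp misses the mark."""
    if node.stamp & mark == 0:
        return False                  # definite miss for this whole subtree
    if node.leaf_ids is not None:
        return h in node.leaf_ids     # stand-in for the dig-up into the leaf
    return any(lookup(c, mark, h) for c in node.children)
```

Note the last assertion below is a false-positive dig-up: the stamp admits the mark, but the leaf search comes back empty, which is exactly the ambiguous-hit case described above.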
>
> This is where, the binary-tree that searches in log 2 n,
> worst-case, where it's balanced and uniform, though
> it's not to be excluded that a usual hashmap implementation
> is linear in hash collisions, is for excluding partitions,
> in about constant time and space given that it's just a
> function of the number of partitions and the eventual
> size of the pyramidal range, that instead of having a
> binary tree with space n^2, the front of it has size L r
> for L the levels of the partition pyramid and r the size
> of the range stamp.
>
> Then, searching in the partitions, seems it essentially
> results, that there's an ordering of the message IDs,
> so there's the "message IDs" file, either fixed-length-records
> or with an index file with fixed-length-records or otherwise
> for reading out the groups' messages, then another one
> with the message ID's sorted, figuring there's a natural
> enough binary search of those with value identity, or bsearch
> after qsort, as it were.
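A minimal sketch of that leaf search, with an in-memory list standing in for the fixed-length-record files (the file layout itself is not specified in the post):

```python
# "bsearch after qsort": message IDs kept once in arrival order for
# reading out a group's messages, plus a sorted copy for binary search.
import bisect

def build_index(message_ids):
    """Stand-in for the sorted message-IDs file of a partition."""
    return sorted(message_ids)

def contains(index, mid):
    """Binary search by value identity over the sorted IDs."""
    i = bisect.bisect_left(index, mid)
    return i < len(index) and index[i] == mid
```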
>
> So, the idea is that there's a big grid of group X date archives,
> each one of those a zip file, with the zip files sort of
> contrived so that each entry is self-contained, and it sort of
> results that concatenating two of them results another. So
> anyways, the idea then is for each of those, for each of
> their message IDs, to compute its four integers, W_i,
> then allocate a range, and zero it, then saturate each
> bit, in each range for each integer. So, that's like, say,
> for fitting the range into 4K, for each partition, with
> there being 2^8 of those in a megabyte, or that many
> partitions (256), or about a megabyte in space for each
> partition, but really where these are just variables,
> because it's opportunistic, and the ranges can start
> with just 32 or 64 bits figuring that most partitions
> are sparse, also, in this case, though usually it would
> be expected they are half-full.
>
> There are as many of these ranges as the hash is split
> into numbers, is the idea.
>
> Then the idea is that these ranges are pyramidal in the
> sense, that when doing lookup for the ID, is starting
> from the top of the pyramid, projecting the hash number
> into the range bit string, with one bit for each sub-range,
> so it's branchless, and'ing the number bits and the partition
> range together, and if any of the hash splits isn't in the
> range, a branch, dropping the partition pyramid, else,
> descending into the partition pyramid.
>
> (Code without branches can go a lot faster than
> code with lots of branches, if/then.)
>
> At each level of the pyramid, it's figured that only one
> of the partitions will not be excluded, except for hash
> collisions, then if it's a base level to commence bsearch,
> else to drop the other partition pyramids, and continue
> with the reduced set of ranges in RAM, and the projected
> bits of the ID's hash integer.
>
> The ranges don't even really have to be constant if it's
> so that there's a limit so they're under a constant, then
> according to uniformity they only have so many, eg,
> just projecting out their 1's, so the partition pyramid
> digging sort of always finds one or more partitions
> with possible matches, those being hash collisions or
> messages duplicated across groups, and mostly finds
> those with exclusions, so that it results reducing, for
> example that empty groups are dropped right off
> though not being skipped, while full groups then
> get into needing more than constant space and
> constant time to search.
>
> Of course if all the partitions miss then it's
> also a fast exit that none have the ID.
>
> So, this, "partition pyramid hash filter", with basically,
> "constant and configurable space and time", basically
> has that because Message Id's will only exist in one or
> a few partitions, and for a single group and not across
> about all groups, exactly one, and the hash is uniform, so
> that hash collisions are low, and the partitions aren't
> overfilled, so that hash collisions are low, then it sort
> of results all the un-used partitions at rest, don't fill
> up in n^2 space the log 2 n hash-map search. Then,
> they could, if there was spare space, and it made sense
> that in the write-once-read-many world it was somehow
> many instead of never, a usual case, or, just using a
> list of sorted message Id's in the partition and bsearch,
> this can map the file without loading its contents in
> space, except as ephemerally, or the usual disk controller's
> mmap space, or "ready-time" and "ephemeral-space".
>
> In this sort of way there's no resident RAM for the partitions
> except each one with a fixed-size arithmetic hash stamp,
> while lookups have a fixed or constant cost, plus then
> also a much smaller usual log 2 time / n^2 space trade-off,
> while memory-mapping active files automatically caches.
>
>
> So, the idea is to combine the BFF backing file format
> and LFF library file format ideas, with that the group x date
> partitions make for the archive and active partitions,
> then to have constant-time/constant-space partition
> pyramid arithmetic hash range for lookup, then
> ready-time/ephemeral-space lookup in partitions,
> then that the maintenance of the pyramid tree,
> happens with dropping partitions, while just
> accumulating with adding partitions.
>
> Yeah, I know that a usual idea is just to make a hash map
> after an associative array with log 2 n lookup in n^2 space,
> that maintenance is in adding and removing items,
> here the idea is to have partitions above items,
> and sort of naturally to result "on startup, find
> the current partitions, compose their partition pyramid,
> then run usually constant-time/constant-space in that
> then ready-time/ephemeral-space under that,
> maintenance free", then that as active partitions
> being written roll over to archive partitions being
> finished, then they just get added to the pyramid
> and their ranges or'ed up into the pyramid.
>
> Hmm... 32K or 2^15 groups, 16K or 2^14 days, or
> about 40 years of Usenet in partitions, 2^29,
> about 2^8 per megabyte or about 2^20 or one
> gigabyte RAM, or, just a file, then memory-mapping
> the partition pyramid file, figuring again that
> most partitions are not resident in RAM,
> this seems a sort of good simple idea to
> implement lookup by Message ID over 2^30 many.
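The sizing above is loose with its units; one reading that lands on the post's "one gigabyte" figure assumes a small fixed-size stamp per partition (the 2-byte stamp here is my assumption, not the author's):

```python
# Back-of-envelope for the partition pyramid's resident footprint:
# 2^15 groups x 2^14 days of leaf partitions, each with a tiny stamp.
groups = 2 ** 15                 # ~32K groups
days = 2 ** 14                   # ~16K days, roughly 40+ years
partitions = groups * days       # 2^29 leaf partitions
stamp_bytes = 2                  # assumed fixed-size stamp per partition
total = partitions * stamp_bytes # 2^30 bytes, i.e. one gigabyte
```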
>
> I mean if "text Usenet for all time is about a billion messages",
> it seems around that size.
>
>


Re: Meta: a usenet server just for sci.math

https://www.rocksolidbbs.com/computers/article-flat.php?id=3047&group=news.software.nntp#3047
NNTP-Posting-Date: Sun, 14 Apr 2024 15:36:01 +0000
Subject: Re: Meta: a usenet server just for sci.math
Newsgroups: sci.math,news.software.nntp
References: <8f7c0783-39dd-4f48-99bf-f1cf53b17dd9@googlegroups.com>
<31663ae2-a6a2-44b8-9aa3-9f0d16d24d79o@googlegroups.com>
<6eedc16b-2c82-4aaf-a338-92aba2360ba2n@googlegroups.com>
<51605ff6-f18f-48c5-8e83-0397632556aen@googlegroups.com>
<b0c4589a-f222-457e-95b3-437c0721c2a2n@googlegroups.com>
<5a48e832-3573-4c33-b9cb-d112f01b733bn@googlegroups.com>
<8wWdnVqZk54j3Fj4nZ2dnZfqnPGdnZ2d@giganews.com>
<MY-cnRuWkPoIhFr4nZ2dnZfqnPSdnZ2d@giganews.com>
<NqqdnbEz-KTJTlr4nZ2dnZfqnPudnZ2d@giganews.com>
<FqOcnYWdRfEI2lT4nZ2dnZfqn_SdnZ2d@giganews.com>
<NVudnVAqkJ0Sk1D4nZ2dnZfqn_idnZ2d@giganews.com>
<RuKdnfj4NM2rlkz4nZ2dnZfqn_qdnZ2d@giganews.com>
<HfCdnROSvfir-E_4nZ2dnZfqnPWdnZ2d@giganews.com>
<FLicnRkOg7SrWU_4nZ2dnZfqnPadnZ2d@giganews.com>
<v7ecnUsYY7bW40j4nZ2dnZfqnPudnZ2d@giganews.com>
<q7-dnR2O9OsAAH74nZ2dnZfqnPhg4p2d@giganews.com>
<Hp-cnUAirtFtx2P4nZ2dnZfqnPednZ2d@giganews.com>
<MDKdnRJpQ_Q87Z77nZ2dnZfqn_idnZ2d@giganews.com>
<-bOdnWSSIMUKcZn7nZ2dnZfqnPednZ2d@giganews.com>
From: ross.a.finlayson@gmail.com (Ross Finlayson)
Date: Sun, 14 Apr 2024 08:36:01 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <-bOdnWSSIMUKcZn7nZ2dnZfqnPednZ2d@giganews.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Message-ID: <CoCdnYJuP9p8aob7nZ2dnZfqnPudnZ2d@giganews.com>
Lines: 491
 by: Ross Finlayson - Sun, 14 Apr 2024 15:36 UTC

On 03/27/2024 09:05 PM, Ross Finlayson wrote:
> On 03/26/2024 06:04 PM, Ross Finlayson wrote:
>> [...]
>
> So, trying to figure out if this "arithmetic hash range
> pyramidal partition" data structure is actually sort of
> reasonable, gets into that it involves finding a balance
> in what's otherwise a very well-understood trade-off,
> in terms of the cost of a lookup, over time, and then
> especially as whether an algorithm is "scale-able",
> that even a slightly lesser algorithm might be better
> if it results "scale-able", especially if it breaks down
> to a very, very minimal set of resources, in time,
> and in various organizations of space, or distance,
> which everybody knows as CPU, RAM, and DISK,
> in terms of time, those of lookups per second,
> and particularly where parallelizable as with
> regards to both linear speed-up and also immutable
> data structures, or, clustering. ("Scale.")
>
>
> Then it's probably so that the ranges are pretty small,
> because they double, and whether it's best just to
> have an overall single range, or, refinements of it,
> according to a "factor", a "factor" that represents
> how likely it is that hashes don't collide in the range,
> or that they do.
>
> This is a different way of looking at hash collisions,
> besides that two objects have the same hash,
> just that they're in the same partition of the range
> of their integer values, for fixed-length uniform hashes.
>
> I.e., a hash collision proper would always be a
> redundant or order-dependent dig-up, of a sort,
> where the idea is that the lookup first results
> searching the pyramid plan for possibles, then
> digging up each of those and checking for match.
>
> The idea that group x date sort of has that those
> are about on the same order is a thing, then about
> the idea that "category" and "year" are similarly
> about so,
>
> Big8 x year
> group x date
>
> it's very contrived to have those be on the same
> order, in terms of otherwise partitioning, or about
> what it results that "partitions are organized so that
> their partitions are tuples and the tuples are about
> on the same order, so it goes, thus that uniformity
> of hashes, results being equi-distributed in those,
> so that it results the factor is good and that arithmetic
> hash ranges filter out most of the partitions, and,
> especially that there aren't many false-positive dig-up
> partitions".
>
> It's sort of contrived, but then it does sort of make
> it so that also other search concerns like "only these
> groups or only these years anyways", naturally get
> dropped out at the partition layer, and, right in the
> front of the lookup algorithm.
>
> It's pretty much expected though that there would
> be non-zero false-positive dig-ups, where here a dig-up
> is that the arithmetic hash range matched, but it's
> actually a different Message ID's hash in the range,
> and not the lookup value(s).
>
> Right, so just re-capping here a bit, the idea is that
> there are groups, and dates, and for each is a zip file,
> which is a collection of files in a file-system entry file
> with about random access on the zip file each entry,
> and compressed, and the entries include Messages,
> by their Message ID's, then that the entries are
> maybe in sub-directories, that reflect components
> of the Message ID's hash, where a hash, is a fixed-length
> value, like 64 bytes or 128 bytes, or a power of two
> and usually an even power of two thus a multiple of four,
> thus that a 64 byte hash has 2^(64*8) many possible
> values, then that a range, of length R bits, has R many
> partitions, in terms of the hash size and the range size,
> whether the factor is low enough, that most partitions
> will naturally be absent most ranges, because hashes
> can only be computed from Message ID's, not by their
> partitions or other information like the group or date.
>
> So, if there are 2^30 or a billion messages, then a
> 32 bit hash, would have a fair expectation that
> unused values would be not dense, then for
> what gets into "birthday problem" or otherwise
> how "Dirichlet principle" makes for how often
> are hash collisions, for how often are range collisions,
> either making redundant dig-ups, in the way this
> sort of algorithm services look-ups.
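The birthday-problem point above can be made concrete with the standard expected-colliding-pairs approximation, n^2/(2m) for n values hashed into m buckets; the partition size of 2^11 below is an illustrative assumption:

```python
# Birthday-problem estimate: a single 32-bit space collides heavily
# over a billion IDs, while a small sparse partition barely collides,
# which is why collisions are judged per partition, not globally.

def expected_collisions(n: float, m: float) -> float:
    """Approximate expected number of colliding pairs: n^2 / (2m)."""
    return n * n / (2 * m)

global_pairs = expected_collisions(2 ** 30, 2 ** 32)  # ~2^27 pairs: dense
leaf_pairs = expected_collisions(2 ** 11, 2 ** 32)    # well under one pair
```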
>
> The 32 bits is quite a bit less than 64 * 8, though,
> about whether it would also result, that, splitting
> that into subdirectories, results different organizations
> here about "tuned to Usenet-scale and organization",
> vis-a-vis, "everybody's email" or something like that.
> That said, it shouldn't just fall apart if the size or
> count blows up, though it might be expected then
> that various sorts of partitioning would keep the partition
> tuple orders square, or on the same orders.
>
>
> The md5 is widely available as "md5sum"; it's 128 bits,
> and its output is 32 hexadecimal characters.
>
> https://en.wikipedia.org/wiki/MD5
> https://en.wikipedia.org/wiki/Partition_(database)
> https://en.wikipedia.org/wiki/Hash_function#Uniformity
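As a sketch of producing the four integers W_i from the earlier description using the MD5 named here: split the 128-bit digest into four 32-bit words (the big-endian chunking convention is my assumption):

```python
# Hash a Message-ID with MD5 and split the 128-bit digest into four
# 32-bit integers W_i; each W_i then marks one arithmetic hash range.
import hashlib

def message_id_chunks(mid: str):
    """Return the four 32-bit words of the Message-ID's MD5 digest."""
    digest = hashlib.md5(mid.encode("utf-8")).digest()   # 16 bytes
    return [int.from_bytes(digest[i:i + 4], "big") for i in range(0, 16, 4)]
```

Since MD5 is deterministic and roughly uniform with good avalanche behavior, near-identical Message-IDs still land in unrelated sub-ranges, which is the only property the scheme asks of the hash.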
>
> Otherwise the only goal of the hash is to be uniform,
> and also to have "avalanche criterion", so that near Message-Id's
> will still be expected to have different hashes, as it's not
> necessarily expected that they're the same group and
> date, though that would be a thing, yet Message ID's
> should be considered opaque and not seated together.
>
> Then MD5 is about the most usual hash utility laying
> around, if not SHA-1, or SHA-256. Hmm..., in the
> interests of digital preservation is "the tools for
> any algorithms should also be around forever",
> one of those things.
>
> So anyways, then each group x date has its Message ID's,
> each of those has its hash, each of those fits in a range,
> indicating one bit in the range where it is, then those are
> OR'd together to result a bit-mask of the range, then
> that a lookup can check its hash's bit against the range,
> and dig-up the partition if it's in, or, skip the partition
> if it's not, with the idea that the range is big enough
> and the resulting group x date is small enough, that
> the "pyramidal partition", is mostly sparse, at the lower
> levels, that it's mostly "look-arounds" until finally the
> "dig-ups", in the leaf nodes of the pyramidal partitions.
>
> I.e., the dig-ups will eventually include spurious or
> redundant false-positives, that the algorithm will
> access the leaf partitions at uniform random.
>
> The "pyramidal" then also get into both the empties,
> like rec.calm with zero posts ten years running,
> or alt.spew which any given day exceeds zip files
> or results a lot of "zip format, but the variously
> packaged, not-recompressed binaries", the various
> other use cases than mostly at-rest and never-read
> archival purposes. The idea of the "arithmetic hash
> range pyramidal partition" is that mostly the
> leaf partitions are quite small and sparse, and
> mostly the leveling of the pyramid into year/month/date
> and big8/middle/group, as it were, winnows those
> down in what's a constant-rate constant-space scan
> on the immutable data structure of the partition pyramid.
>
> Yeah, I know, "numbers", here though the idea is
> that about 30K groups at around 18K days = 50 years
> makes about 30K * 18K, or less than a billion,
> files the zip files, which would all fit on a volume
> that supports up to four billion-many files, or an
> object-store, then with regards to that most of
> those would be quite small or even empty,
> then with regards to "building the pyramid",
> the levels big8/middle/group X year/month/date,
> the data structure of the hashes marking the ranges,
> then those themselves resulting a file, which are
> basically the entire contents of allocated RAM,
> or for that matter a memory-mapped file, with
> the idea that everything else is ephemeral RAM.
>
>
>


Re: Meta: a usenet server just for sci.math

<bJ6dna6Zv4r7lbn7nZ2dnZfqn_WdnZ2d@giganews.com>


https://www.rocksolidbbs.com/computers/article-flat.php?id=3096&group=news.software.nntp#3096

Newsgroups: sci.math news.software.nntp comp.programming.threads
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!border-3.nntp.ord.giganews.com!nntp.giganews.com!Xl.tags.giganews.com!local-2.nntp.ord.giganews.com!news.giganews.com.POSTED!not-for-mail
NNTP-Posting-Date: Sat, 20 Apr 2024 18:24:38 +0000
Subject: Re: Meta: a usenet server just for sci.math
Newsgroups: sci.math,news.software.nntp,comp.programming.threads
References: <8f7c0783-39dd-4f48-99bf-f1cf53b17dd9@googlegroups.com>
<6eedc16b-2c82-4aaf-a338-92aba2360ba2n@googlegroups.com>
<51605ff6-f18f-48c5-8e83-0397632556aen@googlegroups.com>
<b0c4589a-f222-457e-95b3-437c0721c2a2n@googlegroups.com>
<5a48e832-3573-4c33-b9cb-d112f01b733bn@googlegroups.com>
<8wWdnVqZk54j3Fj4nZ2dnZfqnPGdnZ2d@giganews.com>
<MY-cnRuWkPoIhFr4nZ2dnZfqnPSdnZ2d@giganews.com>
<NqqdnbEz-KTJTlr4nZ2dnZfqnPudnZ2d@giganews.com>
<FqOcnYWdRfEI2lT4nZ2dnZfqn_SdnZ2d@giganews.com>
<NVudnVAqkJ0Sk1D4nZ2dnZfqn_idnZ2d@giganews.com>
<RuKdnfj4NM2rlkz4nZ2dnZfqn_qdnZ2d@giganews.com>
<HfCdnROSvfir-E_4nZ2dnZfqnPWdnZ2d@giganews.com>
<FLicnRkOg7SrWU_4nZ2dnZfqnPadnZ2d@giganews.com>
<v7ecnUsYY7bW40j4nZ2dnZfqnPudnZ2d@giganews.com>
<q7-dnR2O9OsAAH74nZ2dnZfqnPhg4p2d@giganews.com>
<Hp-cnUAirtFtx2P4nZ2dnZfqnPednZ2d@giganews.com>
<MDKdnRJpQ_Q87Z77nZ2dnZfqn_idnZ2d@giganews.com>
<-bOdnWSSIMUKcZn7nZ2dnZfqnPednZ2d@giganews.com>
<CoCdnYJuP9p8aob7nZ2dnZfqnPudnZ2d@giganews.com>
From: ross.a.finlayson@gmail.com (Ross Finlayson)
Date: Sat, 20 Apr 2024 11:24:49 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <CoCdnYJuP9p8aob7nZ2dnZfqnPudnZ2d@giganews.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Message-ID: <bJ6dna6Zv4r7lbn7nZ2dnZfqn_WdnZ2d@giganews.com>
Lines: 337
X-Usenet-Provider: http://www.giganews.com
X-Trace: sv3-qGf6/yfJ2D6L27bs3xvi2tPjp31lALhODXVdg7pGKkH5cIDvo5McwVTJt46EkH/HycAJfz01KdWce8K!EBPZULdbpYrIOCgzGLNGT6Jxz+IacKSVC0dGnQE5Hgv6ktPp9o4dqbq1cz4DkPC2ZzB57Gz1x0vy
X-Complaints-To: abuse@giganews.com
X-DMCA-Notifications: http://www.giganews.com/info/dmca.html
X-Abuse-and-DMCA-Info: Please be sure to forward a copy of ALL headers
X-Abuse-and-DMCA-Info: Otherwise we will be unable to process your complaint properly
X-Postfilter: 1.3.40
 by: Ross Finlayson - Sat, 20 Apr 2024 18:24 UTC

Well I've been thinking about the re-routine as a model of cooperative
multithreading,
then thinking about the flow-machine of protocols

NNTP
IMAP <-> NNTP
HTTP <-> IMAP <-> NNTP

Both IMAP and NNTP are session-oriented on the connection, while,
HTTP, in terms of session, has various approaches in terms of HTTP 1.1
and connections, and the session ID shared client/server.

The re-routine idea is this: each kind of method is memoizable, and it
memoizes, by object identity as the key for the method, across all its
callers, like so.

interface Reroutine1 {

Result1 rr1(String a1) {

Result2 r2 = reroutine2.rr2(a1);

Result3 r3 = reroutine3.rr3(r2);

return result(r2, r3);
}

}

The idea is that the executor, when it's submitted a reroutine,
when it runs the re-routine, in a thread, then it puts in a ThreadLocal,
the re-routine, so that when a re-routine it calls, returns null as it
starts an asynchronous computation for the input, then when
it completes, it submits to the executor the re-routine again.

Then rr1 runs through again, retrieving r2, which is memoized, and
invokes rr3, which throws after queuing to memoize and resubmit rr1;
when that calls back to resubmit rr1, then rr1 returns, signaling the
original invoker.

Then it seems each re-routine basically has an instance part
and a memoized part, and that it's to flush the memo
after it finishes, in terms of memoizing the inputs.

Result1 rr1(String a1) {
// if a1 is in the memo, return for it
// else queue for it and carry on

}
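
In rough Java, that check-or-queue step might look like the following
sketch; the memo storage and the async launch here are illustrative
stand-ins, not the actual re-routine classes.

```java
import java.util.IdentityHashMap;
import java.util.Map;

// A sketch of "if a1 is in the memo, return for it; else queue and carry on".
class Rr2Sketch {
    // memoized by object identity: same input object, same result object
    private final Map<String, Object> memo = new IdentityHashMap<>();

    Object rr2(String a1) {
        Object r2 = memo.get(a1);
        if (r2 != null) {
            return r2;          // a1 is in the memo: return for it
        }
        queueAsync(a1);         // else queue for it...
        return null;            // ...and carry on by quitting with null
    }

    private void queueAsync(String a1) {
        // stand-in: the real version computes r2 asynchronously, memoizes
        // (a1, r2), then resubmits the calling re-routine to the executor
        memo.put(a1, "r2-for-" + a1);
    }
}
```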

What is a re-routine?

It's a pattern for cooperative multithreading.

It's sort of a functional approach to functions and flow.

It has a declarative syntax in the language with usual flow-of-control.

So, it's cooperative multithreading so it yields?

No, it just quits, and expects to be called back.

So, if it quits, how does it complete?

The entry point to re-routine provides a callback.

Re-routines only return results to other re-routines; that's the
default callback. Otherwise they just call back.

So, it just quits?

If a re-routine gets called with a null argument, it throws.

If a re-routine gets a null back from a call, it just continues.

If a re-routine completes, it calls back.

So, can a re-routine call any regular code?

Yeah, there are some issues, though.

So, it's got callbacks everywhere?

Well, it's just got callbacks implicitly everywhere.

So, how does it work?

Well, you build a re-routine with an input and a callback,
you call it, then when it completes, it calls the callback.

Then, re-routines call other re-routines with the argument,
and the callback's in a ThreadLocal, and the re-routine memoizes
all of its return values according to the object identity of the inputs,
then when a re-routine completes, it calls again with another ThreadLocal
indicating to delete the memos, following the exact same flow-of-control
only deleting the memos going along, until it results all the memos in
the re-routines for the interned or ref-counted input are deleted,
then the state of the re-routine is de-allocated.
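
The delete pass described above can be sketched with a ThreadLocal flag
that the same flow-of-control consults; the names here are illustrative,
not the actual classes.

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Sketch of the delete pass: the exact same flow-of-control runs once
// more with a ThreadLocal flag set, deleting memos instead of reading them.
class DeletePassSketch {
    static final ThreadLocal<Boolean> DELETING =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    final Map<String, Object> memo = new IdentityHashMap<>();

    Object lookup(String a1) {
        if (DELETING.get()) {
            return memo.remove(a1);   // deleting the memos going along
        }
        return memo.get(a1);          // normal pass: read the memo
    }
}
```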

So, it's sort of like a monad and all in pure and idempotent functions?

Yeah, it's sort of like a monad and all in pure and idempotent functions.

So, it's a model of cooperative multithreading, though with no yield,
and callbacks implicitly everywhere?

Yeah, it's sort of figured that a called re-routine always has a
callback in the ThreadLocal, because the runtime has pre-emptive
multithreading anyways, that the thread runs through its re-routines in
their normal declarative flow-of-control with exception handling, and
whatever re-routines or other pure monadic idempotent functions it
calls, throw when they get null inputs.

Also it sort of doesn't have primitive types, Strings must always be
interned, all objects must have a distinct identity w.r.t. ==, and null
is never an argument or return value.
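
The interning requirement is just that identity (==) only agrees with
value equality once both references are canonical, e.g. via
String.intern(); a small demonstration:

```java
// Why interned Strings: == only matches value once references are canonical.
final class InternDemo {
    static String canonical(String s) {
        return s.intern();   // canonicalize so == agrees with equals()
    }
}
```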

So, what does it look like?

interface Reroutine1 {

Result1 rr1(String a1) {

Result2 r2 = reroutine2.rr2(a1);

Result3 r3 = reroutine3.rr3(r2);

return result(r2, r3);
}

}

So, I expect that to return "result(r2, r3)".

Well, that looks synchronous, and maybe blocking, but the idea is that
rr1 calls rr2 with a1, and rr2 constructs with the callback of rr1 and
its own callback, and a1, makes a memo for a1, invokes whatever is its
implementation, and returns null; then rr1 continues and invokes rr3
with r2, which is null, so that throws a NullPointerException, and rr1
quits.
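
That quit-on-null step can be sketched as follows; the catch here stands
in for what the executor's wrapper would do, and the names are
illustrative.

```java
// Sketch of quitting via NullPointerException: a pending (null) r2 makes
// the downstream call throw, and the run is dropped until resubmitted.
final class QuitSketch {
    static String rr3(String r2) {
        if (r2 == null) throw new NullPointerException("r2 pending");
        return r2 + "-r3";
    }

    static String rr1Pass(String r2FromMemo) {
        try {
            return rr3(r2FromMemo);   // throws while r2 is still pending
        } catch (NullPointerException e) {
            return null;              // rr1 quits; it will run through again
        }
    }
}
```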

So, ..., that's cooperative multithreading?

Well you see what happens is that rr2 invoked another re-routine or end
routine, and at some point it will get called back, and that will happen
over and over again until rr2 has an r2, then rr2 will memoize (a1, r2),
and then it will callback rr1.

Then rr1, which had quit, runs again; this time it gets r2 from the (a1,
r2) memo in the monad it's building, then it passes a non-null r2 to
rr3, which proceeds in much the same way, while rr1 quits again until
rr3 calls it back.

So, ..., it's non-blocking, because it just quits all the time, then
happens to run through the same paces filling in?

That's the idea, that re-routines are responsible to build the monad
and call-back.

So, can I just implement rr2 and rr3 as synchronous and blocking?

Sure, they're interfaces, their implementation is separate. If they
don't know re-routine semantics then they're just synchronous and
blocking. They'll get called every time though when the re-routine gets
called back, and actually they need to know the semantics of returning
an Object or value by identity, because, calling equals() to implement
Memo usually would be too much, where the idea is to actually function
only monadically, and that given same Object or value input, must return
same Object or value output.
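
That identity-keyed behavior is what java.util.IdentityHashMap gives
directly: equal-by-value but distinct-by-identity keys miss, which is why
inputs must be the same Object, not merely equals().

```java
import java.util.IdentityHashMap;
import java.util.Map;

// A memo keyed by reference identity rather than equals()/hashCode().
final class IdentityMemoDemo {
    static final Map<String, String> MEMO = new IdentityHashMap<>();
}
```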

So, it's sort of an approach as a monadic pure idempotency?

Well, yeah, you can call it that.

So, what's the point of all this?

Well, the idea is that there are 10,000 connections, and any time one
of them demultiplexes off the connection an input command message, then
it builds one of these with the response input to the demultiplexer on
its protocol on its connection, on the multiplexer to all the
connections, with a callback to itself. Then the re-routine is launched
and when it returns, it calls-back to the originator by its
callback-number, then the output command response writes those back out.

The point is that there are only as many Threads as cores, so the goal is
that they never block,
and that the memos make for interning Objects by value, then the goal is
mostly to receive command objects and handles to request bodies and
result objects and handles to response bodies, then to call-back with
those in whatever serial order is necessary, or not.

So, won't this run through each of these re-routines umpteen times?

Yeah, you figure that the runtime of the re-routine is on the order of
n^2 in the number of statements in the re-routine.

So, isn't that terrible?

Well, it doesn't block.

So, it sounds like a big mess.

Yeah, it could be. That's why the way to avoid blocking and callback
semantics is to make monadic idempotency semantics, so the re-routines
are just written in normal synchronous flow-of-control, and their
well-defined behavior is exactly according to flow-of-control,
including exception-handling.

There's that, and there's the fact that it basically only needs one
Thread, so less Thread x stack size for a deep enough thread call-stack.
Then the idea is about one Thread per core, figuring for each thread to
always be running and never be blocking.
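
The "about one Thread per core" sizing is the stock fixed-pool idiom; a
sketch, assuming a plain java.util.concurrent executor:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One worker per core: with re-routines never blocking, more threads
// than cores would only add stack space and context switches.
final class PoolSketch {
    static ExecutorService perCorePool() {
        return Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
    }
}
```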

So, it's just normal flow-of-control.

Well yeah, you expect to write the routine in normal flow-of-control,
and to test it with synchronous and in-memory editions that just run
through synchronously, and that if you don't much care if it blocks,
then it's the same code and has no semantics about the asynchronous or
callbacks actually in it. It just returns when it's done.

So what's the requirements of one of these again?

Well, the idea is, that, for a given instance of a re-routine, it's an
Object, that implements an interface, and it has arguments, and it has a
return value. The expectation is that the re-routine gets called with
the same arguments, and must return the same return value. This way
later calls to re-routines can match the same expectation, same/same.

Also, if it gets different arguments, by Object identity or primitive
value, the re-routine must return a different return value, those being
same/same.

The re-routine memoizes its arguments by its argument list, Object or
primitive value, and a given argument list is same if the order and
types and values of those are same, and it must return the same return
value by type and value.
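
That argument-list keying can be sketched with a key class that compares
each position by identity; this is illustrative, not the author's actual
class, and primitive arguments would need canonical boxing to key this way.

```java
// Memo key over a whole argument list: same order and same identities
// give the same key; comparison is ==, never equals().
final class ArgsKey {
    private final Object[] args;

    ArgsKey(Object... args) { this.args = args; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof ArgsKey)) return false;
        Object[] other = ((ArgsKey) o).args;
        if (args.length != other.length) return false;
        for (int i = 0; i < args.length; i++) {
            if (args[i] != other[i]) return false;   // identity, not equals()
        }
        return true;
    }

    @Override public int hashCode() {
        int h = 1;
        for (Object a : args) h = 31 * h + System.identityHashCode(a);
        return h;
    }
}
```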

So, how is this cooperative multithreading unobtrusively in
flow-of-control again?

Here for example the idea would be, rr2 quits and rr1 continues, rr3
quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
When rr2's or rr3's memo-callback completes, then it calls-back rr1. As
those come in, at some point rr4 will be fulfilled, and thus rr4 will
quit and rr1 will quit. When rr4's callback completes, then it will
call-back rr1, which will finally complete, and then call-back whatever
called rr1. Then rr1 runs itself through one more time to
delete or decrement all its memos.


Re: Meta: a usenet server just for sci.math

<TO6cnaz7jdFtBbv7nZ2dnZfqn_WdnZ2d@giganews.com>


https://www.rocksolidbbs.com/computers/article-flat.php?id=3129&group=news.software.nntp#3129

Newsgroups: sci.math news.software.nntp comp.programming.threads
Path: i2pn2.org!i2pn.org!weretis.net!feeder6.news.weretis.net!border-3.nntp.ord.giganews.com!border-2.nntp.ord.giganews.com!border-4.nntp.ord.giganews.com!nntp.giganews.com!Xl.tags.giganews.com!local-2.nntp.ord.giganews.com!news.giganews.com.POSTED!not-for-mail
NNTP-Posting-Date: Mon, 22 Apr 2024 17:05:51 +0000
Subject: Re: Meta: a usenet server just for sci.math
Newsgroups: sci.math,news.software.nntp,comp.programming.threads
References: <8f7c0783-39dd-4f48-99bf-f1cf53b17dd9@googlegroups.com>
<51605ff6-f18f-48c5-8e83-0397632556aen@googlegroups.com>
<b0c4589a-f222-457e-95b3-437c0721c2a2n@googlegroups.com>
<5a48e832-3573-4c33-b9cb-d112f01b733bn@googlegroups.com>
<8wWdnVqZk54j3Fj4nZ2dnZfqnPGdnZ2d@giganews.com>
<MY-cnRuWkPoIhFr4nZ2dnZfqnPSdnZ2d@giganews.com>
<NqqdnbEz-KTJTlr4nZ2dnZfqnPudnZ2d@giganews.com>
<FqOcnYWdRfEI2lT4nZ2dnZfqn_SdnZ2d@giganews.com>
<NVudnVAqkJ0Sk1D4nZ2dnZfqn_idnZ2d@giganews.com>
<RuKdnfj4NM2rlkz4nZ2dnZfqn_qdnZ2d@giganews.com>
<HfCdnROSvfir-E_4nZ2dnZfqnPWdnZ2d@giganews.com>
<FLicnRkOg7SrWU_4nZ2dnZfqnPadnZ2d@giganews.com>
<v7ecnUsYY7bW40j4nZ2dnZfqnPudnZ2d@giganews.com>
<q7-dnR2O9OsAAH74nZ2dnZfqnPhg4p2d@giganews.com>
<Hp-cnUAirtFtx2P4nZ2dnZfqnPednZ2d@giganews.com>
<MDKdnRJpQ_Q87Z77nZ2dnZfqn_idnZ2d@giganews.com>
<-bOdnWSSIMUKcZn7nZ2dnZfqnPednZ2d@giganews.com>
<CoCdnYJuP9p8aob7nZ2dnZfqnPudnZ2d@giganews.com>
<bJ6dna6Zv4r7lbn7nZ2dnZfqn_WdnZ2d@giganews.com>
From: ross.a.finlayson@gmail.com (Ross Finlayson)
Date: Mon, 22 Apr 2024 10:06:02 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <bJ6dna6Zv4r7lbn7nZ2dnZfqn_WdnZ2d@giganews.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Message-ID: <TO6cnaz7jdFtBbv7nZ2dnZfqn_WdnZ2d@giganews.com>
Lines: 799
X-Usenet-Provider: http://www.giganews.com
X-Trace: sv3-4OQk/4dMD0KdGEARw4Rd9DD0+DrU0YT3R0GKsvZyOjsJe4V1SVidcuBvvAm6DT7b2WFThYO98l5v114!0p3J8JqQE5WMDeJnC5UOpqt1eP2Xd888zyBqJPkYkIcB4hho0gDstpbY948+5W9acGsZA72NaRXB
X-Complaints-To: abuse@giganews.com
X-DMCA-Notifications: http://www.giganews.com/info/dmca.html
X-Abuse-and-DMCA-Info: Please be sure to forward a copy of ALL headers
X-Abuse-and-DMCA-Info: Otherwise we will be unable to process your complaint properly
X-Postfilter: 1.3.40
 by: Ross Finlayson - Mon, 22 Apr 2024 17:06 UTC

On 04/20/2024 11:24 AM, Ross Finlayson wrote:
>
> interface Reroutine1 {
>
> Result1 rr1(String a1) {
>
> Result2 r2 = reroutine2.rr2(a1);
>
> Result3 r3 = reroutine3.rr3(a1);
>
> Result4 r4 = reroutine4.rr4(a1, r2, r3);
>
> return Result1.r4(a1, r4);
> }
>
> }
>
> The idea is that it doesn't block when it launches rr2 and rr3, until
> such time as it just quits when it tries to invoke rr4 and gets a
> resulting NullPointerException, then eventually rr4 will complete and be
> memoized and call-back rr1, then rr1 will be called-back and then
> complete, then run itself through to delete or decrement the ref-count
> of all its memo-ized fragmented monad respectively.
>
> Thusly it's cooperative multithreading by never blocking and always just
> launching callbacks.
>
> There's this System.identityHashCode() method and then there's a notion
> of Object pools and interning Objects then as for about this way that
> it's about numeric identity instead of value identity, so that when
> making memo's that it's always "==" and for a HashMap with
> System.identityHashCode() instead of ever calling equals(), when calling
> equals() is more expensive than calling == and the same/same
> memo-ization is about Object numeric value or the primitive scalar
> value, those being same/same.
>
> https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
>
>
> So, you figure to return Objects to these connections by their session
> and connection and mux/demux in these callbacks and then write those out?
>
> Well, the idea is to make it so that according to the protocol, the
> back-end sort of knows what makes a handle to a datum of the sort, given
> the protocol and the protocol and the protocol, and the callback is just
> these handles, about what goes in the outer callbacks or outside the
> re-routine, those can be different/same. Then the single writer thread
> servicing the network I/O just wants to transfer those handles, or, as
> necessary through the compression and encryption codecs, then write
> those out, well making use of the java.nio for scatter/gather and vector
> I/O in the non-blocking and asynchronous I/O as much as possible.
>
>
> So, that seems a lot of effort to just passing the handles, ....
>
> Well, I don't want to write any code except normal flow-of-control.
>
> So, this same/same bit seems onerous, as long as different/same has a
> ref-count and thus the memo-ized monad-fragment is maintained when all
> sorts of requests fetch the same thing.
>
> Yeah, maybe you're right. There's much to be gained by re-using monadic
> pure idempotent functions yet only invoking them once. That gets into
> value equality besides numeric equality, though, with regards to going
> into re-routines and interning all Objects by value, so that inside and
> through it's all "==" and System.identityHashCode, the memos, then about
> the ref-counting in the memos.
>
>
> So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?
>
> Yeah, it's a thing.
>
> So, I think this needs a much cleaner and well-defined definition, to
> fully explore its meaning.
>
> Yeah, I suppose. There's something to be said for reading it again.
>
>
>
>
>
>


Re: Meta: a usenet server just for sci.math

<5hednQYuTYucCrf7nZ2dnZfqnPcAAAAA@giganews.com>


https://www.rocksolidbbs.com/computers/article-flat.php?id=3132&group=news.software.nntp#3132

Newsgroups: sci.math news.software.nntp comp.programming.threads
Path: i2pn2.org!i2pn.org!weretis.net!feeder9.news.weretis.net!border-4.nntp.ord.giganews.com!nntp.giganews.com!Xl.tags.giganews.com!local-2.nntp.ord.giganews.com!news.giganews.com.POSTED!not-for-mail
NNTP-Posting-Date: Thu, 25 Apr 2024 17:46:40 +0000
Subject: Re: Meta: a usenet server just for sci.math
Newsgroups: sci.math,news.software.nntp,comp.programming.threads
References: <8f7c0783-39dd-4f48-99bf-f1cf53b17dd9@googlegroups.com>
<b0c4589a-f222-457e-95b3-437c0721c2a2n@googlegroups.com>
<5a48e832-3573-4c33-b9cb-d112f01b733bn@googlegroups.com>
<8wWdnVqZk54j3Fj4nZ2dnZfqnPGdnZ2d@giganews.com>
<MY-cnRuWkPoIhFr4nZ2dnZfqnPSdnZ2d@giganews.com>
<NqqdnbEz-KTJTlr4nZ2dnZfqnPudnZ2d@giganews.com>
<FqOcnYWdRfEI2lT4nZ2dnZfqn_SdnZ2d@giganews.com>
<NVudnVAqkJ0Sk1D4nZ2dnZfqn_idnZ2d@giganews.com>
<RuKdnfj4NM2rlkz4nZ2dnZfqn_qdnZ2d@giganews.com>
<HfCdnROSvfir-E_4nZ2dnZfqnPWdnZ2d@giganews.com>
<FLicnRkOg7SrWU_4nZ2dnZfqnPadnZ2d@giganews.com>
<v7ecnUsYY7bW40j4nZ2dnZfqnPudnZ2d@giganews.com>
<q7-dnR2O9OsAAH74nZ2dnZfqnPhg4p2d@giganews.com>
<Hp-cnUAirtFtx2P4nZ2dnZfqnPednZ2d@giganews.com>
<MDKdnRJpQ_Q87Z77nZ2dnZfqn_idnZ2d@giganews.com>
<-bOdnWSSIMUKcZn7nZ2dnZfqnPednZ2d@giganews.com>
<CoCdnYJuP9p8aob7nZ2dnZfqnPudnZ2d@giganews.com>
<bJ6dna6Zv4r7lbn7nZ2dnZfqn_WdnZ2d@giganews.com>
<TO6cnaz7jdFtBbv7nZ2dnZfqn_WdnZ2d@giganews.com>
From: ross.a.finlayson@gmail.com (Ross Finlayson)
Date: Thu, 25 Apr 2024 10:46:48 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <TO6cnaz7jdFtBbv7nZ2dnZfqn_WdnZ2d@giganews.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Message-ID: <5hednQYuTYucCrf7nZ2dnZfqnPcAAAAA@giganews.com>
Lines: 1154
X-Usenet-Provider: http://www.giganews.com
X-Trace: sv3-Ph1782X8OgeKhsuagHa/SnmGDu/uy0m3hDR7Ig/zyRim1yKvTr684LHp5utdxzQIU28kDhZS8Wttbzq!BzNHOjtEr3Oudyy4wXytoGdDLvhYnOpfOWFr6egPDQiQir3bph96p1dZlkV53EXno/I0YM0E0aY=
X-Complaints-To: abuse@giganews.com
X-DMCA-Notifications: http://www.giganews.com/info/dmca.html
X-Abuse-and-DMCA-Info: Please be sure to forward a copy of ALL headers
X-Abuse-and-DMCA-Info: Otherwise we will be unable to process your complaint properly
X-Postfilter: 1.3.40
 by: Ross Finlayson - Thu, 25 Apr 2024 17:46 UTC

On 04/22/2024 10:06 AM, Ross Finlayson wrote:
> On 04/20/2024 11:24 AM, Ross Finlayson wrote:
>>
>>
>> Well I've been thinking about the re-routine as a model of cooperative
>> multithreading,
>> then thinking about the flow-machine of protocols
>>
>> NNTP
>> IMAP <-> NNTP
>> HTTP <-> IMAP <-> NNTP
>>
>> Both IMAP and NNTP are session-oriented on the connection, while,
>> HTTP, in terms of session, has various approaches in terms of HTTP 1.1
>> and connections, and the session ID shared client/server.
>>
>>
>> The re-routine idea is this, that each kind of method, is memoizable,
>> and, it memoizes, by object identity as the key, for the method, all
>> its callers, how this is like so.
>>
>> interface Reroutine1 {
>>
>> Result1 rr1(String a1) {
>>
>> Result2 r2 = reroutine2.rr2(a1);
>>
>> Result3 r3 = reroutine3.rr3(r2);
>>
>> return result(r2, r3);
>> }
>>
>> }
>>
>>
>> The idea is that the executor, when it's submitted a reroutine,
>> when it runs the re-routine, in a thread, then it puts in a ThreadLocal,
>> the re-routine, so that when a re-routine it calls, returns null as it
>> starts an asynchronous computation for the input, then when
>> it completes, it submits to the executor the re-routine again.
>>
>> Then rr1 runs through again, retrieving r2 which is memoized,
>> invokes rr3, which throws, after queuing to memoize and
>> resubmit rr1, when that calls back to resubmit r1, then rr1
>> routines, signaling the original invoker.
>>
>> Then it seems each re-routine basically has an instance part
>> and a memoized part, and that it's to flush the memo
>> after it finishes, in terms of memoizing the inputs.
>>
>>
>> Result 1 rr(String a1) {
>> // if a1 is in the memo, return for it
>> // else queue for it and carry on
>>
>> }
>>
>>
>> What is a re-routine?
>>
>> It's a pattern for cooperative multithreading.
>>
>> It's sort of a functional approach to functions and flow.
>>
>> It has a declarative syntax in the language with usual
>> flow-of-control.
>>
>> So, it's cooperative multithreading so it yields?
>>
>> No, it just quits, and expects to be called back.
>>
>> So, if it quits, how does it complete?
>>
>> The entry point to re-routine provides a callback.
>>
>> Re-routines only return results to other re-routines;
>> that's the default callback. Otherwise they just call back.
>>
>> So, it just quits?
>>
>> If a re-routine gets called with a null, it throws.
>>
>> If a re-routine gets a null, it just continues.
>>
>> If a re-routine completes, it callbacks.
>>
>> So, can a re-routine call any regular code?
>>
>> Yeah, there are some issues, though.
>>
>> So, it's got callbacks everywhere?
>>
>> Well, it's just got callbacks implicitly everywhere.
>>
>> So, how does it work?
>>
>> Well, you build a re-routine with an input and a callback,
>> you call it, then when it completes, it calls the callback.
>>
>> Then, re-routines call other re-routines with the argument,
>> and the callback's in a ThreadLocal, and the re-routine memoizes
>> all of its return values according to the object identity of the
>> inputs,
>> then when a re-routine completes, it calls again with another
>> ThreadLocal
>> indicating to delete the memos, following the exact same
>> flow-of-control
>> only deleting the memos going along, until it results all the
>> memos in
>> the re-routines for the interned or ref-counted input are deleted,
>> then the state of the re-routine is de-allocated.
>>
>> So, it's sort of like a monad and all in pure and idempotent functions?
>>
>> Yeah, it's sort of like a monad and all in pure and idempotent
>> functions.
>>
>> So, it's a model of cooperative multithreading, though with no yield,
>> and callbacks implicitly everywhere?
>>
>> Yeah, it's sort of figured that a called re-routine always has a
>> callback in the ThreadLocal, because the runtime has pre-emptive
>> multithreading anyways, that the thread runs through its re-routines in
>> their normal declarative flow-of-control with exception handling, and
>> whatever re-routines or other pure monadic idempotent functions it
>> calls, throw when they get null inputs.
>>
>> Also it sort of doesn't have primitive types, Strings must always
>> be interned, all objects must have a distinct identity w.r.t. ==, and
>> null is never an argument or return value.
>>
>> So, what does it look like?
>>
>> interface Reroutine1 {
>>
>> Result1 rr1(String a1) {
>>
>> Result2 r2 = reroutine2.rr2(a1);
>>
>> Result3 r3 = reroutine3.rr3(r2);
>>
>> return result(r2, r3);
>> }
>>
>> }
>>
>> So, I expect that to return "result(r2, r3)".
>>
>> Well, that's synchronous, and maybe blocking, the idea is that it
>> calls rr2 with a1, and rr2 constructs with the callback of rr1 and its
>> own callback, and a1, and makes a memo for a1, and invokes whatever is
>> its implementation, and returns null, then rr1 continues and invokes rr3
>> with r2, which is null, so that throws a NullPointerException, and rr1
>> quits.
>>
>> So, ..., that's cooperative multithreading?
>>
>> Well you see what happens is that rr2 invoked another re-routine or
>> end routine, and at some point it will get called back, and that will
>> happen over and over again until rr2 has an r2, then rr2 will memoize
>> (a1, r2), and then it will callback rr1.
>>
>> Then rr1 had quit, it runs again, this time it gets r2 from the
>> (a1, r2) memo in the monad it's building, then it passes a non-null r2
>> to rr3, which proceeds in much the same way, while rr1 quits again until
>> rr3 calls it back.
>>
>> So, ..., it's non-blocking, because it just quits all the time, then
>> happens to run through the same paces filling in?
>>
>> That's the idea, that re-routines are responsible to build the
>> monad and call-back.
>>
>> So, can I just implement rr2 and rr3 as synchronous and blocking?
>>
>> Sure, they're interfaces, their implementation is separate. If
>> they don't know re-routine semantics then they're just synchronous and
>> blocking. They'll get called every time though when the re-routine gets
>> called back, and actually they need to know the semantics of returning
>> an Object or value by identity, because, calling equals() to implement
>> Memo usually would be too much, where the idea is to actually function
>> only monadically, and that given same Object or value input, must return
>> same Object or value output.
>>
>> So, it's sort of an approach as a monadic pure idempotency?
>>
>> Well, yeah, you can call it that.
>>
>> So, what's the point of all this?
>>
>> Well, the idea is that there are 10,000 connections, and any time
>> one of them demultiplexes off the connection an input command message,
>> then it builds one of these with the response input to the demultiplexer
>> on its protocol on its connection, on the multiplexer to all the
>> connections, with a callback to itself. Then the re-routine is launched
>> and when it returns, it calls-back to the originator by its
>> callback-number, then the output command response writes those back out.
>>
>> The point is that there are only as many Threads as cores so the
>> goal is that they never block,
>> and that the memos make for interning Objects by value, then the goal is
>> mostly to receive command objects and handles to request bodies and
>> result objects and handles to response bodies, then to call-back with
>> those in whatever serial order is necessary, or not.
>>
>> So, won't this run through each of these re-routines umpteen times?
>>
>> Yeah, you figure that the runtime of the re-routine is on the order
>> of n^2 the order of statements in the re-routine.
>>
>> So, isn't that terrible?
>>
>> Well, it doesn't block.
>>
>> So, it sounds like a big mess.
>>
>> Yeah, it could be. That's why to avoid blocking and callback
>> semantics, is to make monadic idempotency semantics, so then the
>> re-routines are just written in normal synchronous flow-of-control, and
>> their well-defined behavior is exactly according to flow-of-control
>> including exception-handling.
>>
>> There's that and there's basically it only needs one Thread, so,
>> less Thread x stack size, for a deep enough thread call-stack. Then the
>> idea is about one Thread per core, figuring for the thread to always be
>> running and never be blocking.
>>
>> So, it's just normal flow-of-control.
>>
>> Well yeah, you expect to write the routine in normal
>> flow-of-control, and to test it with synchronous and in-memory editions
>> that just run through synchronously, and that if you don't much care if
>> it blocks, then it's the same code and has no semantics about the
>> asynchronous or callbacks actually in it. It just returns when it's
>> done.
>>
>>
>> So what's the requirements of one of these again?
>>
>> Well, the idea is, that, for a given instance of a re-routine, it's
>> an Object, that implements an interface, and it has arguments, and it
>> has a return value. The expectation is that the re-routine gets called
>> with the same arguments, and must return the same return value. This
>> way later calls to re-routines can match the same expectation, same/same.
>>
>> Also, if it gets different arguments, by Object identity or
>> primitive value, the re-routine must return a different return value,
>> those being same/same.
>>
>> The re-routine memoizes its arguments by its argument list, Object
>> or primitive value, and a given argument list is same if the order and
>> types and values of those are same, and it must return the same return
>> value by type and value.
>>
>> So, how is this cooperative multithreading unobtrusively in
>> flow-of-control again?
>>
>> Here for example the idea would be, rr2 quits and rr1 continues, rr3
>> quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
>> When rr2's or rr3's memo-callback completes, then it calls-back rr1. As
>> those come in, at some point rr4 will be fulfilled, and thus rr4 will
>> quit and rr1 will quit. When rr4's callback completes, then it will
>> call-back rr1, which will finally complete, and then call-back whatever
>> called r1. Then rr1 runs itself through one more time to
>> delete or decrement all its memos.
>>
>> interface Reroutine1 {
>>
>> Result1 rr1(String a1) {
>>
>> Result2 r2 = reroutine2.rr2(a1);
>>
>> Result3 r3 = reroutine3.rr3(a1);
>>
>> Result4 r4 = reroutine4.rr4(a1, r2, r3);
>>
>> return Result1.r4(a1, r4);
>> }
>>
>> }
>>
>> The idea is that it doesn't block when it launches rr2 and rr3, until
>> such time as it just quits when it tries to invoke rr4 and gets a
>> resulting NullPointerException, then eventually rr4 will complete and be
>> memoized and call-back rr1, then rr1 will be called-back and then
>> complete, then run itself through to delete or decrement the ref-count
>> of all its memo-ized fragmented monad respectively.
>>
>> Thusly it's cooperative multithreading by never blocking and always just
>> launching callbacks.
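The quit-and-relaunch cycle can be sketched as a trampoline, under the assumption (mine, for illustration) that the pending input is modeled as a null reference the callback fills in:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical trampoline for the quit-and-relaunch cycle: run the
// re-routine; a NullPointerException means an input is pending, so the
// routine is re-launched once the pending input has been fulfilled.
class Relaunch {
    static String run(Supplier<String> reroutine, Runnable fulfill) {
        while (true) {
            try {
                return reroutine.get();       // completes: result is ready
            } catch (NullPointerException quit) {
                fulfill.run();                // stand-in for the async callback
            }                                 // then re-launch the re-routine
        }
    }

    public static void main(String[] args) {
        AtomicReference<String> memo = new AtomicReference<>(); // monad fragment
        String out = run(
            () -> memo.get().toUpperCase(),   // throws NPE until memo is set
            () -> memo.set("done"));
        System.out.println(out);              // DONE
    }
}
```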
>>
>> There's this System.identityHashCode() method and then there's a notion
>> of Object pools and interning Objects then as for about this way that
>> it's about numeric identity instead of value identity, so that when
>> making memo's that it's always "==" and for a HashMap with
>> System.identityHashCode() instead of ever calling equals(), when calling
>> equals() is more expensive than calling == and the same/same
>> memo-ization is about Object numeric value or the primitive scalar
>> value, those being same/same.
>>
>> https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
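Worth noting: the JDK already ships java.util.IdentityHashMap, which keys on == and System.identityHashCode() rather than equals()/hashCode(), matching the memo semantics described here. A small illustration:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// Two equal-but-distinct Strings: an equals()-based map collapses them, an
// identity-based map keeps them separate, which is the "==" memo semantics
// described above, never calling equals().
class IdentityDemo {
    static int[] sizes() {
        String a = new String("key");
        String b = new String("key");       // equal by value, distinct by ==

        Map<String, Integer> byValue = new HashMap<>();
        byValue.put(a, 1);
        byValue.put(b, 2);                  // overwrites: a.equals(b)

        Map<String, Integer> byIdentity = new IdentityHashMap<>();
        byIdentity.put(a, 1);
        byIdentity.put(b, 2);               // distinct entries: a != b

        return new int[] { byValue.size(), byIdentity.size() };
    }

    public static void main(String[] args) {
        int[] s = sizes();
        System.out.println(s[0] + " " + s[1]);  // 1 2
    }
}
```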
>>
>>
>>
>> So, you figure to return Objects to these connections by their session
>> and connection and mux/demux in these callbacks and then write those out?
>>
>> Well, the idea is to make it so that according to the protocol, the
>> back-end sort of knows what makes a handle to a datum of the sort, given
>> the protocol, and the callback is just
>> these handles, about what goes in the outer callbacks or outside the
>> re-routine, those can be different/same. Then the single writer thread
>> servicing the network I/O just wants to transfer those handles, or, as
>> necessary through the compression and encryption codecs, then write
>> those out, well making use of the java.nio for scatter/gather and vector
>> I/O in the non-blocking and asynchronous I/O as much as possible.
>>
>>
>> So, that seems a lot of effort to just passing the handles, ....
>>
>> Well, I don't want to write any code except normal flow-of-control.
>>
>> So, this same/same bit seems onerous, as long as different/same has a
>> ref-count and thus the memo-ized monad-fragment is maintained when all
>> sorts of requests fetch the same thing.
>>
>> Yeah, maybe you're right. There's much to be gained by re-using monadic
>> pure idempotent functions yet only invoking them once. That gets into
>> value equality besides numeric equality, though, with regards to going
>> into re-routines and interning all Objects by value, so that inside and
>> through it's all "==" and System.identityHashCode, the memos, then about
>> the ref-counting in the memos.
>>
>>
>> So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?
>>
>> Yeah, it's a thing.
>>
>> So, I think this needs a much cleaner and well-defined definition, to
>> fully explore its meaning.
>>
>> Yeah, I suppose. There's something to be said for reading it again.
>>
>>
>>
>>
>>
>>
>
>
>
>
>
> ReRoutines: monadic functional non-blocking asynchrony in the language
>
>
> Implementing a sort of Internet protocol server, it sort of has three or
> four kinds of machines.
>
> flow-machine: select/epoll hardware driven I/O events
>
> protocol-establishment: setting up and changing protocol (commands,
> encryption/compression)
>
> protocol-coding: block coding in encryption/compression and wire/object
> commands/results
>
> routine: inside the objects of the commands of the protocol,
> commands/results
>
> Then, it often looks sort of like
>
> flow <-> protocol <-> routine <-> protocol <-> flow
>
>
> On either outer side of the flow is a connection, it's a socket or the
> receipt or sending of a datagram, according to the network interface and
> select/epoll.
>
> The establishment of a protocol looks like
> connection/configuration/commencement/conclusion, or setup/teardown.
> Protocols can involve renegotiation within a protocol, and, for example,
> upgrade among protocols. Then the protocol is set up and established.
>
> The idea is that a protocol's coding is in three parts for
> coding/decoding, compression/decompression, and (en)cryption/decryption,
> or as it gets set up.
>
> flow->decrypt->decomp->decod->routine->cod->comp->crypt->flow-v
> flow<-crypt<-comp<-cod<-routine<-decod<-decomp<-decrypt<-flow<-
>
>
>
> Whenever data arrives, the idea goes, is that the flow is interpreted
> according to the protocol, resulting commands, then the routine derives
> results from the commands, as by issuing others, in their protocols, to
> the backend flow. Then, the results get sent back out through the
> protocol, to the frontend, the clients of the protocol the server
> serves.
>
> The idea is that there are about 10,000 connections at a time, or more
> or less.
>
> flow <-> protocol <-> routine <-> protocol <-> flow
> flow <-> protocol <-> routine <-> protocol <-> flow
> flow <-> protocol <-> routine <-> protocol <-> flow
> ...
>
>
>
>
> Then, the routine in the middle, has that there's one processor, and on
> the processor are a number of cores, each one independent. Then, the
> operating system establishes that each of the cores, has any number of
> threads-of-control or threads, and each thread has the state of where it
> is in the callstack of routines, and the threads are preempted so that
> multithreading, that a core runs multiple threads, gives each thread
> some running from the entry to the exit of the thread, in any given
> interval of time. Each thread-of-control is thusly independent, while it
> must synchronize with any other thread-of-control, to establish common
> or mutual state, and threads establish taking turns by mutual exclusion,
> called "mutex".
>
> Into and out of the protocol, coding, is either a byte-sequence or
> block, or otherwise the flow is a byte-sequence, that being serial,
> however the protocol multiplexes and demultiplexes messages, the
> commands and their results, to and from the flow.
>
> Then the idea is that what arrives to/from the routine, is objects in
> the protocol, or handles to the transport of byte sequences, in the
> protocol, to the flow.
>
> A usual idea is that there's a thread that services the flow, where, how
> it works is that a thread blocks waiting for there to be any I/O,
> input/output, reading input from the flow, and writing output to the
> flow. So, mostly the thread that blocks has that there's one thread that
> blocks on input, and when there's any input, then it reads or transfers
> the bytes from the input, into buffers. That's its only job, and only
> one thread can block on a given select/epoll selector, which is any
> given number of ports, the connections, the idea being that it just
> blocks until select returns for its keys of interest, it services each
> of the I/O's by copying from the network interface's buffers into the
> program's buffers, then other threads do the rest.
>
> So, if a thread results waiting at all for any other action to complete
> or be ready, it's said to "block". While a thread is blocked, the CPU or
> core just skips it in scheduling the preemptive multithreading, yet it
> still takes some memory and other resources and is in the scheduler of
> the threads.
>
> The idea that the I/O thread ever blocks is that it's a feature of
> select/epoll that hardware wakes it up, with the idea that
> that's the only thread that ever blocks.
>
> So, for the other threads, in the decryption/decompression/decoding and
> coding/compression/cryption, the idea is that a thread, runs through
> those, then returns what it's doing, and joins back to a limited pool of
> threads, with a usual idea of there being 1 core : 1 thread, so that
> multithreading is sort of simplified, because as far as the system
> process is concerned, it has a given number of cores and the system
> preemptively multithreads it, and as far as the virtual machine is
> concerned, it has a given number of cores and the virtual machine
> preemptively multithreads its threads, about the thread-of-control, in
> the flow-of-control, of the thing.
>
> A usual way that the routine multiplexes and demultiplexes objects in the
> protocol from a flow's input back to a flow's output, has that the
> thread-per-connection model has that a single thread carries out the
> entire task through the backend flow, blocking along the way, until it
> results joining after writing back out to its connection. Yet, that has
> a thread per each connection, and threads use scheduling and heap
> resources. So, here thread-per-connection is being avoided.
>
> Then, a usual idea of the tasks, is that as I/O is received and flows
> into the decryption/decompression/decoding, then what's decoded, results
> the specification of a task, the command, and the connection, where to
> return its result. The specification is a data structure, so it's an
> object or Object, then. This is added to a queue of tasks, where
> "buffers" represent the ephemeral storage of content in transport the
> byte-sequences, while, the queue is as usually a first-in/first-out
> (FIFO) queue also, of tasks.
>
> Then, the idea is that each of the cores consumes task specifications
> from the task queue, performs them according to the task specification,
> then the results are written out, as coded/compressed/crypted, in the
> protocol.
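A hedged sketch of such a task queue in Java, using the standard executor machinery (names like service are illustrative, not the poster's design): task specifications go on a FIFO BlockingQueue and a fixed pool of about one thread per core consumes them.

```java
import java.util.concurrent.*;

// Minimal sketch of the TQ: decoded commands become task specifications on a
// FIFO queue, and a fixed pool of about one thread per core consumes them.
// The worker threads only block when the TQ is empty.
class TaskQueueSketch {
    static String service(String command) throws Exception {
        BlockingQueue<Runnable> tq = new LinkedBlockingQueue<>(); // FIFO tasks
        int cores = Runtime.getRuntime().availableProcessors();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                cores, cores, 0L, TimeUnit.MILLISECONDS, tq);
        try {
            Future<String> result = pool.submit(() -> "response for " + command);
            return result.get();    // then coded/compressed/crypted and written
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(service("LIST"));  // response for LIST
    }
}
```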
>
> So, to avoid the threads blocking at all, introduces the idea of
> "asynchrony" or callbacks, where the idea is that the "blocking" and
> "synchronous" has that anywhere in the threads' thread-of-control
> flow-of-control, according to the program or the routine, it is current
> and synchronous, the value that it has, then with regards to what it
> returns or writes, as the result. So, "asynchrony" is the idea that
> there's established a callback, or a place to pause and continue, then a
> specification of the task in the protocol is put to an event queue and
> executed, or from servicing the O/I's of the backend flow, that what
> results from that, has the context of the callback and returns/writes to
> the relevant connection, its result.
>
> I -> flow -> protocol -> routine -> protocol -> flow -> O -v
> O <- flow <- protocol <- routine <- protocol <- flow <- I <-
>
>
> The idea of non-blocking then, is that a routine either provides a
> result immediately available, and is non-blocking, or, queues a task
> what results a callback that provides the result eventually, and is
> non-blocking, and never invokes any other routine that blocks, so is
> non-blocking.
>
> This way a thread, executing tasks, always runs through a task, and thus
> services the task queue or TQ, so that the cores' threads are always
> running and never blocking. (Besides the I/O and O/I threads which block
> when there's no traffic, and usually would be constantly woken up and
> not waiting blocked.) This way, the TQ threads, only block when there's
> nothing in the TQ, or are just deconstructed, and reconstructed, in a
> "pool" of threads, the TQ's executor pool.
>
> Enter the ReRoutine
>
> The idea of a ReRoutine, a re-routine, is that it is a usual procedural
> implementation as if it were synchronous, and agnostic of callbacks.
>
> It is named after "routine" and "co-routine". It is a sort of co-routine
> that builds a monad and is aware of its originating caller, re-caller, and
> callback, or, its re-routine caller, re-caller, and callback.
>
> The idea is that there are callbacks implicitly at each method boundary,
> and that nulls are reserved values to indicate the result or lack
> thereof of re-routines, so that the code has neither callbacks nor any
> nulls.
>
> The originating caller has that the TQ, has a task specification, the
> session+attachment of the client in the protocol where to write the
> output, and the command, then the state of the monad of the task, that
> lives on the heap with the task specification and task object. The TQ
> consumers or executors or the executor, when a thread picks up the task,
> it picks up or builds ("originates") the monad state, which is the
> partial state of the re-routine and a memo of the partial state of the
> re-routine, and installs this in the thread local storage or
> ThreadLocal, for the duration of the invocation of the re-routine. Then
> the thread enters the re-routine, which proceeds until it would block,
> where instead it queues a command/task with callback to re-call it to
> re-launch it, and throw a NullPointerException and quits/returns.
>
> This happens recursively and iteratively in the re-routine implemented
> as re-routines, each re-routine updates the partial state of the monad,
> then that as a re-routine completes, it re-launches the calling
> re-routine, until the original re-routine completes, and it calls the
> original callback with the result.
>
> This way the re-routine's method body, is written as plain declarative
> procedural code, the flow-of-control, is exactly as if it were
> synchronous code, and flow-of-control is exactly as if written in the
> language with no callbacks and never nulls, and exception-handling as
> exactly defined by the language.
>
> As the re-routine accumulates the partial results, they live on the
> heap, in the monad, as a member of the originating task's object the
> task in the task queue. This is always added back to the queue as one of
> the pending results of a re-routine, so it stays referenced as an object
> on the heap, then that as it is completed and the original re-routine
> returns, then it's no longer referenced and the garbage-collector can
> reclaim it from the heap or the allocator can delete it.
>
>
>
>
>
>
>
> Well, for the re-routine, I sort of figure there's a Callstack and a
> Callback type
>
> class Callstack {
> Stack<Callback> callstack;
> }
>
> interface Callback {
> void callback() throws Exception;
> }
>
> and then a placeholder sort of type for Callflush
>
> class Callflush {
> Callstack callstack;
> }
>
> with the idea that the presence in ThreadLocals is to be sorted out,
> about a kind of ThreadLocal static pretty much.
>
> With not returning null and for memoizing call-graph dependencies,
> there's basically a need for an "unvoid" type.
>
> class unvoid {
>
> }
>
> Then it's sort of figure that there's an interface with some defaults,
> with the idea that some boilerplate gets involved in the Memoization.
>
> interface Caller {}
>
> interface Callee {}
>
> class Callmemo {
> memoize(Caller caller, Object[] args);
> flush(Caller caller);
> }
>
>
> Then it seems that the Callstack should instead be of a Callgraph, and
> then what's maintained from call to call is a Callpath, and then what's
> memoized is all kept with the Callgraph, then with regards to objects on
> the heap and their distinctness, only being reachable from the
> Callgraph, leaving less work for the garbage collector, to maintain the
> heap.
>
> The interning semantics would still be on the class level, or for
> constructor semantics, as with regards to either interning Objects for
> uniqueness, or that otherwise they'd be memoized, with the key being the
> Callpath, and the initial arguments into the Callgraph.
>
> Then the idea seems that the ThreaderCaller, establishes the Callgraph
> with respect to the Callgraph of an object, installing it on the thread,
> otherwise attached to the Callgraph, with regards to the ReRoutine.
>
>
>
> About the ReRoutine, it's starting to come together as an idea, what is
> the apparatus for invoking re-routines, that they build the monad of the
> IOE's (inputs, outputs, exceptions) of the re-routines in their
> call-graph, in terms of ThreadLocals of some ThreadLocals that callers
> of the re-routines, maintain, with idea of the memoized monad along the
> way, and each original re-routine.
>
> class IOE<O, E> {
> Object[] input;
> Object output;
> Exception exception;
> }
>
> So the idea is that there are some ThreadLocal's in a static ThreadGlobal
>
> public class ThreadGlobals {
> public static ThreadLocal<MonadMemo> monadMemo;
> }
>
> where callers or originators or ReRoutines, keep a map of the Runnables
> or Callables they have, to the MonadMemo's,
>
> class Originator {
> Map<? extends ReRoutineMapKey, MonadMemo> monadMemoMap;
> }
>
> then when it's about to invoke a Runnable, if it's a ReRoutine, then it
> either retrieves the MonadMemo or makes a new one, and sets it on the
> ThreadLocal, then invokes the Runnable, then clears the ThreadLocal.
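That invoke step might be sketched like so, with OriginatorSketch and a minimal MonadMemo as hypothetical stand-ins: the memo is installed in the ThreadLocal for the duration of the call and always cleared, even on a throw-quit.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for the MonadMemo: a list of partial IOE results.
class MonadMemo {
    List<Object> ioes = new ArrayList<>();
}

// Sketch of the Originator's invoke step described above: set the task's
// MonadMemo on the ThreadLocal, invoke the Runnable, then clear it.
class OriginatorSketch {
    static final ThreadLocal<MonadMemo> monadMemo = new ThreadLocal<>();

    static void invoke(MonadMemo memo, Runnable reroutine) {
        monadMemo.set(memo);        // the re-routine reads this implicitly
        try {
            reroutine.run();
        } finally {
            monadMemo.remove();     // always cleared, even on throw-quit
        }
    }

    public static void main(String[] args) {
        MonadMemo m = new MonadMemo();
        invoke(m, () -> m.ioes.add("partial result"));
        System.out.println(m.ioes.size()); // 1
    }
}
```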
>
> Then a MonadMemo, pretty simply, is a List of IOE's, that when the
> ReRoutine runs through the callgraph, the callstack is indicated by a
> tree of integers, and the stack path in the ReRoutine, so that any
> ReRoutine that calls ReRoutines A/B/C, points to an IOE that it finds in
> the thing, then its default behavior is to return its memo-ized value,
> that otherwise is making the callback that fills its memo and re-invokes
> all the way back the Original routine, or just its own entry point.
>
> This is basically that the Originator, when the ReRoutine quits out,
> sort of has that any ReRoutine it originates, also gets filled up by the
> Originator.
>
> So, then the Originator sort of has a map to a ReRoutine, then for any
> Path, the Monad, so that when it sets the ThreadLocal with the
> MonadMemo, it also sets the Path for the callee, launches it again when
> its callback returned to set its memo and relaunch it, then back up the
> path stack to the original re-routine.
>
> One of the issues here is "automatic parallelization". What I mean by
> that is that the re-routine just goes along and when it gets nulls
> meaning "pending" it just continues along, then expects
> NullPointerExceptions as "UnsatisfiedInput", to quit, figuring it gets
> relaunched when its input is satisfied.
>
> This way then when routines serially don't depend on each others'
> outputs, then they all get launched apiece, parallelizing.
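A hedged sketch of that automatic parallelization (hypothetical names throughout): two independent re-routine calls each return null on the first pass, so both computations are already queued before the NullPointerException quits the caller.

```java
import java.util.Arrays;
import java.util.ArrayList;
import java.util.List;

// On the first pass, rr2 and rr3 both miss their memos: each queues its
// computation and returns null. The caller only throws when it first *uses*
// a null, so both branches are launched in parallel by then.
class AutoParallel {
    static final List<String> launched = new ArrayList<>();

    static String pending(String name) {
        launched.add(name);   // queue the async computation
        return null;          // "pending": the caller continues past this
    }

    static List<String> firstPass() {
        launched.clear();
        try {
            String r2 = pending("rr2");   // launched, null
            String r3 = pending("rr3");   // launched, null: parallel
            String r4 = r2.concat(r3);    // first use of a null: throw-quit
        } catch (NullPointerException quit) {
            // both branches were queued before the quit
        }
        return launched;
    }

    public static void main(String[] args) {
        System.out.println(firstPass()); // [rr2, rr3]
    }
}
```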
>
> Then, I wonder about usual library code, basically about Collections and
> Streams, and the usual sorts of routines that are applied to the
> arguments, and how to basically establish that the rule of re-routine
> code is that anything that gets a null must throw a
> NullPointerException, so the re-routine will quit until the arguments
> are satisfied, the inputs to library code. Then with the Memo being
> stored in the MonadMemo, it's figured that will work out regardless the
> Objects' or primitives' value, with regards to Collections and Stream
> code and after usual flow-of-control in Iterables for the for loops, or
> whatever other application library code, that they will be run each time
> the re-routine passes their section with satisfied arguments, then as
> with regards to, that the Memo is just whatever serial order the
> re-routine passes, not needing to lookup by Object identity which is
> otherwise part of an interning pattern.
>
> Map<String, String> rr1(String s1) {
>
> List<String> l1 = rr2.get(s1);
>
> Map<String, String> m1 = new LinkedHashMap<>();
>
> l1.stream().forEach(s -> m1.put(s, rr3.get(s)));
>
> return m1;
> }
>
> See what I figure is that the order of the invocations to rr3.get() is
> serial, so it really only needs to memoize its OE, Output|Exception,
> then about that putting null values in the Map, and having to check the
> values in the Map for null values, and otherwise to make it so that the
> semantics of null and NullPointerException, result that satisfying
> inputs result calls, and unsatisfying inputs result quits, figuring
> those unsatisfying inputs are results of unsatisfied outputs, that will
> be satisfied when the callee gets populated its memo and makes the
> callback.
>
> If the order of invocations is out-of-order, gets again into whether the
> Object/primitive by value needs to be the same each time, IOE, about the
> library code in Collections, Streams, parallelStream, and Iterables, and
> basically otherwise that any kind of library code, should throw
> NullPointerException if it gets an "unexpected" null or what doesn't
> fulfill it.
>
> The idea though that rr3 will get invoked say 1000 times with the rr2's
> result, those each make their call, then re-launch 1000 times, has that
> it's figured that the Executor, or Originator, when it looks up and
> loads the "ReRoutineMapKey", is to have the count of those and whether
> the count is fulfilled, then to no-op later re-launches of the
> call-backs, after all the results are populated in the partial monad memo.
>
> Then, there's perhaps instead as that each re-routine just checks its
> input or checks its return value for nulls, those being unsatisfied.
>
> (The exception handling, or what happens when rr3 throws and this kind
> of thing, is involved thoroughly in library code.)
>
> The idea is it remains correct if the worst thing nulls do is throw
> NullPointerException, because that's just a usual quit and means another
> re-launch is coming up, and that it automatically queues for
> asynchronous parallel invocation each the derivations while resulting
> never blocking.
>
> It's figured that re-routines check their inputs for nulls, and throw
> quit, and check their inputs for library container types, and checking
> any member of a library container collection for null, to throw quit,
> and then it will result that the automatic asynchronous parallelization
> proceeds, while the re-routines are never blocking, there's only as much
> memory on the heap of the monad as would be in the lifetime of the
> original re-routine, and whatever re-calls or re-launches of the
> re-routine established local state in local variables and library code,
> would come in and out of scope according to plain stack unwinding.
>
> Then there's still the perceived deficiency that the re-routine's method
> body will be run many times, yet it's only run as many times as result
> throwing-quit, when it reaches where its argument to the re-routine or
> result value isn't yet satisfied yet is pending.
>
> It would re-run the library code any number of times, until it results
> all non-nulls, then the resulting satisfied argument to the following
> re-routines, would be memo-ized in the monad, and the return value of
> the re-routine thus returning immediately its value on the partial monad.
>
> This way each re-call of the re-routine, mostly encounters its own monad
> results in constant time, and throws-quit or gets thrown-quit only when
> it would be unsatisfying, with the expectation that whatever
> throws-quit, either NullPointerException or extending
> NullPointerException, will have a pending callback, that will queue on a
> TQ, the task specification to re-launch and re-enter the original or
> derived, re-routine.
>
> The idea is sort of that it's sort of, Java with non-blocking I/O and
> ThreadLocal (1.7+, not 17+), or you know, C/C++ with non-blocking I/O
> and thread local storage, then for the abstract or interface of the
> re-routines, how it works out that it's a usual sort of model of
> co-operative multithreading, the re-routine, the routine "in the language".
>
>
> Then it's great that the routine can be stubbed or implemented agnostic
> of asynchrony, and declared in the language with standard libraries,
> basically using the semantics of exception handling and convention of
> re-launching callbacks to implement thread-of-control flow-of-control,
> that can be implemented in the synchronous and blocking for unit tests
> and modules of the routine, making a great abstraction of flow-of-control.
>
>
> Basically anything that _does_ block then makes for having its own
> thread, whose only job is to block and when it unblocks, throw-toss the
> re-launch toward the origin of the re-routine, and consume the next
> blocking-task off the TQ. Yet, the re-routines and their servicing the
> TQ only need one thread and never block. (And scale in core count and
> automatically parallelize asynchronous requests according to satisfied
> inputs.)
>
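The loop described above, a dedicated thread whose only job is to block and then toss the re-launch onto the TQ, can be sketched minimally as follows; the names (Blocker, BlockingTask, step) are hypothetical, not from any library:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch: a thread that owns all blocking calls; when one
// unblocks, it queues the re-launch of the originating re-routine on the
// TQ and consumes the next blocking task.
public class Blocker {
    public static final class BlockingTask {
        final Runnable blockingCall; // the call that may block
        final Runnable relaunch;     // re-enters the original or derived re-routine
        public BlockingTask(Runnable blockingCall, Runnable relaunch) {
            this.blockingCall = blockingCall;
            this.relaunch = relaunch;
        }
    }

    public final BlockingQueue<BlockingTask> blockingTasks = new LinkedBlockingQueue<>();
    public final BlockingQueue<Runnable> tq; // the never-blocking executor's task queue

    public Blocker(BlockingQueue<Runnable> tq) { this.tq = tq; }

    // One iteration of the blocker thread's loop.
    public void step() {
        try {
            BlockingTask t = blockingTasks.take(); // this thread's only job: block
            t.blockingCall.run();                  // perform the blocking call
            tq.add(t.relaunch);                    // throw-toss the re-launch toward the origin
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

In the real arrangement step() runs in a loop on its own thread; the TQ consumers never see the blocking side at all.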
>
> Mostly the idea of the re-routine is "in the language, it's just plain,
> ordinary, synchronous routine".
>
>
>


Re: Meta: a usenet server just for sci.math

<oxmcnS5dcuH4vLD7nZ2dnZfqnPWdnZ2d@giganews.com>


https://www.rocksolidbbs.com/computers/article-flat.php?id=3133&group=news.software.nntp#3133

Newsgroups: sci.math news.software.nntp comp.programming.threads
NNTP-Posting-Date: Sat, 27 Apr 2024 16:01:41 +0000
Subject: Re: Meta: a usenet server just for sci.math
Newsgroups: sci.math,news.software.nntp,comp.programming.threads
References: <8f7c0783-39dd-4f48-99bf-f1cf53b17dd9@googlegroups.com>
<b0c4589a-f222-457e-95b3-437c0721c2a2n@googlegroups.com>
<5a48e832-3573-4c33-b9cb-d112f01b733bn@googlegroups.com>
<8wWdnVqZk54j3Fj4nZ2dnZfqnPGdnZ2d@giganews.com>
<MY-cnRuWkPoIhFr4nZ2dnZfqnPSdnZ2d@giganews.com>
<NqqdnbEz-KTJTlr4nZ2dnZfqnPudnZ2d@giganews.com>
<FqOcnYWdRfEI2lT4nZ2dnZfqn_SdnZ2d@giganews.com>
<NVudnVAqkJ0Sk1D4nZ2dnZfqn_idnZ2d@giganews.com>
<RuKdnfj4NM2rlkz4nZ2dnZfqn_qdnZ2d@giganews.com>
<HfCdnROSvfir-E_4nZ2dnZfqnPWdnZ2d@giganews.com>
<FLicnRkOg7SrWU_4nZ2dnZfqnPadnZ2d@giganews.com>
<v7ecnUsYY7bW40j4nZ2dnZfqnPudnZ2d@giganews.com>
<q7-dnR2O9OsAAH74nZ2dnZfqnPhg4p2d@giganews.com>
<Hp-cnUAirtFtx2P4nZ2dnZfqnPednZ2d@giganews.com>
<MDKdnRJpQ_Q87Z77nZ2dnZfqn_idnZ2d@giganews.com>
<-bOdnWSSIMUKcZn7nZ2dnZfqnPednZ2d@giganews.com>
<CoCdnYJuP9p8aob7nZ2dnZfqnPudnZ2d@giganews.com>
<bJ6dna6Zv4r7lbn7nZ2dnZfqn_WdnZ2d@giganews.com>
<TO6cnaz7jdFtBbv7nZ2dnZfqn_WdnZ2d@giganews.com>
<5hednQYuTYucCrf7nZ2dnZfqnPcAAAAA@giganews.com>
From: ross.a.finlayson@gmail.com (Ross Finlayson)
Date: Sat, 27 Apr 2024 09:01:43 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101
Thunderbird/38.6.0
MIME-Version: 1.0
In-Reply-To: <5hednQYuTYucCrf7nZ2dnZfqnPcAAAAA@giganews.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Message-ID: <oxmcnS5dcuH4vLD7nZ2dnZfqnPWdnZ2d@giganews.com>
Lines: 1755
 by: Ross Finlayson - Sat, 27 Apr 2024 16:01 UTC

On 04/25/2024 10:46 AM, Ross Finlayson wrote:
> On 04/22/2024 10:06 AM, Ross Finlayson wrote:
>> On 04/20/2024 11:24 AM, Ross Finlayson wrote:
>>>
>>>
>>> Well I've been thinking about the re-routine as a model of cooperative
>>> multithreading,
>>> then thinking about the flow-machine of protocols
>>>
>>> NNTP
>>> IMAP <-> NNTP
>>> HTTP <-> IMAP <-> NNTP
>>>
>>> Both IMAP and NNTP are session-oriented on the connection, while,
>>> HTTP, in terms of session, has various approaches in terms of HTTP 1.1
>>> and connections, and the session ID shared client/server.
>>>
>>>
>>> The re-routine idea is this, that each kind of method, is memoizable,
>>> and, it memoizes, by object identity as the key, for the method, all
>>> its callers, how this is like so.
>>>
>>> interface Reroutine1 {
>>>
>>> default Result1 rr1(String a1) {
>>>
>>> Result2 r2 = reroutine2.rr2(a1);
>>>
>>> Result3 r3 = reroutine3.rr3(r2);
>>>
>>> return result(r2, r3);
>>> }
>>>
>>> }
>>>
>>>
>>> The idea is that the executor, when it's submitted a reroutine,
>>> when it runs the re-routine, in a thread, then it puts in a ThreadLocal,
>>> the re-routine, so that when a re-routine it calls, returns null as it
>>> starts an asynchronous computation for the input, then when
>>> it completes, it submits to the executor the re-routine again.
>>>
>>> Then rr1 runs through again, retrieving r2 which is memoized,
>>> invokes rr3, which throws, after queuing to memoize and
>>> resubmit rr1; when that calls back to resubmit rr1, then rr1
>>> completes, signaling the original invoker.
>>>
>>> Then it seems each re-routine basically has an instance part
>>> and a memoized part, and that it's to flush the memo
>>> after it finishes, in terms of memoizing the inputs.
>>>
>>>
>>> Result1 rr1(String a1) {
>>> // if a1 is in the memo, return for it
>>> // else queue for it and carry on
>>>
>>> }
>>>
>>>
>>> What is a re-routine?
>>>
>>> It's a pattern for cooperative multithreading.
>>>
>>> It's sort of a functional approach to functions and flow.
>>>
>>> It has a declarative syntax in the language with usual
>>> flow-of-control.
>>>
>>> So, it's cooperative multithreading so it yields?
>>>
>>> No, it just quits, and expects to be called back.
>>>
>>> So, if it quits, how does it complete?
>>>
>>> The entry point to re-routine provides a callback.
>>>
>>> Re-routines only return results to other re-routines; that's the
>>> default callback. Otherwise they just call back.
>>>
>>> So, it just quits?
>>>
>>> If a re-routine gets called with a null argument, it throws.
>>>
>>> If a re-routine gets back a null result, it just continues.
>>>
>>> If a re-routine completes, it callbacks.
>>>
>>> So, can a re-routine call any regular code?
>>>
>>> Yeah, there are some issues, though.
>>>
>>> So, it's got callbacks everywhere?
>>>
>>> Well, it's just got callbacks implicitly everywhere.
>>>
>>> So, how does it work?
>>>
>>> Well, you build a re-routine with an input and a callback,
>>> you call it, then when it completes, it calls the callback.
>>>
>>> Then, re-routines call other re-routines with the argument,
>>> and the callback's in a ThreadLocal, and the re-routine memoizes
>>> all of its return values according to the object identity of the
>>> inputs,
>>> then when a re-routine completes, it calls again with another
>>> ThreadLocal
>>> indicating to delete the memos, following the exact same
>>> flow-of-control
>>> only deleting the memos going along, until it results all the
>>> memos in
>>> the re-routines for the interned or ref-counted input are deleted,
>>> then the state of the re-routine is de-allocated.
>>>
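A minimal sketch of that memo discipline, identity-keyed memos plus a ThreadLocal flag for the deletion pass, might look like this (the names Memo and DELETING are hypothetical):

```java
import java.util.IdentityHashMap;
import java.util.Map;

// Hypothetical sketch: memos keyed by argument identity (==), with a
// ThreadLocal flag so that re-running the same flow-of-control one more
// time deletes the memos it made going along.
public class Memo<A, R> {
    public static final ThreadLocal<Boolean> DELETING =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    private final Map<A, R> byIdentity = new IdentityHashMap<>();

    // null means "pending": never a memoized value.
    public R lookup(A arg) {
        if (DELETING.get()) {
            return byIdentity.remove(arg); // deletion pass: drop the memo going along
        }
        return byIdentity.get(arg);
    }

    public void memoize(A arg, R result) { byIdentity.put(arg, result); }

    public int size() { return byIdentity.size(); }
}
```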
>>> So, it's sort of like a monad and all in pure and idempotent functions?
>>>
>>> Yeah, it's sort of like a monad and all in pure and idempotent
>>> functions.
>>>
>>> So, it's a model of cooperative multithreading, though with no yield,
>>> and callbacks implicitly everywhere?
>>>
>>> Yeah, it's sort of figured that a called re-routine always has a
>>> callback in the ThreadLocal, because the runtime has pre-emptive
>>> multithreading anyways, that the thread runs through its re-routines in
>>> their normal declarative flow-of-control with exception handling, and
>>> whatever re-routines or other pure monadic idempotent functions it
>>> calls, throw when they get null inputs.
>>>
>>> Also it sort of doesn't have primitive types, Strings must always
>>> be interned, all objects must have a distinct identity w.r.t. ==, and
>>> null is never an argument or return value.
>>>
>>> So, what does it look like?
>>>
>>> interface Reroutine1 {
>>>
>>> default Result1 rr1(String a1) {
>>>
>>> Result2 r2 = reroutine2.rr2(a1);
>>>
>>> Result3 r3 = reroutine3.rr3(r2);
>>>
>>> return result(r2, r3);
>>> }
>>>
>>> }
>>>
>>> So, I expect that to return "result(r2, r3)".
>>>
>>> Well, that's synchronous, and maybe blocking, the idea is that it
>>> calls rr2 with a1, and rr2 constructs with the callback of rr1 and its
>>> own callback, and a1, and makes a memo for a1, and invokes whatever is
>>> its implementation, and returns null, then rr1 continues and invokes rr3
>>> with r2, which is null, so that throws a NullPointerException, and rr1
>>> quits.
>>>
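That walkthrough can be condensed into a toy harness; this is a sketch only, with a static memo slot standing in for the monad and the catch block standing in for the completion callback (all names hypothetical):

```java
import java.util.Objects;

// Hypothetical sketch of the quit-and-re-run discipline: a pending result
// is null, passing null onward throws, and the throw "quits" rr1; once
// the memo is satisfied, the re-run passes straight through.
public class RerunDemo {
    static String r2memo;   // rr2's memo slot in the monad; null = pending
    static int rr2Calls;

    static String rr2(String a1) {
        rr2Calls++;
        return r2memo;      // first call: pending (null); re-run: memoized value
    }

    static String rr3(String r2) {
        Objects.requireNonNull(r2); // a null argument throws-quit
        return r2 + "!";
    }

    static String rr1(String a1) {
        String r2 = rr2(a1);
        return rr3(r2);     // quits here until rr2's memo is satisfied
    }

    // Stands in for the executor: re-launch rr1 until it completes.
    public static String drive(String a1) {
        while (true) {
            try {
                return rr1(a1);
            } catch (NullPointerException quit) {
                r2memo = a1.toUpperCase(); // the "callback" fills the memo
            }
        }
    }
}
```

Note the method body is plain serial code; the quits and re-launches live entirely outside it.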
>>> So, ..., that's cooperative multithreading?
>>>
>>> Well you see what happens is that rr2 invoked another re-routine or
>>> end routine, and at some point it will get called back, and that will
>>> happen over and over again until rr2 has an r2, then rr2 will memoize
>>> (a1, r2), and then it will callback rr1.
>>>
>>> Then rr1 had quit, it runs again, this time it gets r2 from the
>>> (a1, r2) memo in the monad it's building, then it passes a non-null r2
>>> to rr3, which proceeds in much the same way, while rr1 quits again until
>>> rr3 calls it back.
>>>
>>> So, ..., it's non-blocking, because it just quits all the time, then
>>> happens to run through the same paces filling in?
>>>
>>> That's the idea, that re-routines are responsible to build the
>>> monad and call-back.
>>>
>>> So, can I just implement rr2 and rr3 as synchronous and blocking?
>>>
>>> Sure, they're interfaces, their implementation is separate. If
>>> they don't know re-routine semantics then they're just synchronous and
>>> blocking. They'll get called every time though when the re-routine gets
>>> called back, and actually they need to know the semantics of returning
>>> an Object or value by identity, because, calling equals() to implement
>>> Memo usually would be too much, where the idea is to actually function
>>> only monadically, and that given same Object or value input, must return
>>> same Object or value output.
>>>
>>> So, it's sort of an approach as a monadic pure idempotency?
>>>
>>> Well, yeah, you can call it that.
>>>
>>> So, what's the point of all this?
>>>
>>> Well, the idea is that there are 10,000 connections, and any time
>>> one of them demultiplexes off the connection an input command message,
>>> then it builds one of these with the response input to the demultiplexer
>>> on its protocol on its connection, on the multiplexer to all the
>>> connections, with a callback to itself. Then the re-routine is launched
>>> and when it returns, it calls-back to the originator by its
>>> callback-number, then the output command response writes those back out.
>>>
>>> The point is that there are only as many Threads as cores so the
>>> goal is that they never block,
>>> and that the memos make for interning Objects by value, then the goal is
>>> mostly to receive command objects and handles to request bodies and
>>> result objects and handles to response bodies, then to call-back with
>>> those in whatever serial order is necessary, or not.
>>>
>>> So, won't this run through each of these re-routines umpteen times?
>>>
>>> Yeah, you figure that the runtime of the re-routine is on the order
>>> of n^2 the order of statements in the re-routine.
>>>
>>> So, isn't that terrible?
>>>
>>> Well, it doesn't block.
>>>
>>> So, it sounds like a big mess.
>>>
>>> Yeah, it could be. That's why to avoid blocking and callback
>>> semantics, is to make monadic idempotency semantics, so then the
>>> re-routines are just written in normal synchronous flow-of-control, and
>>> their well-defined behavior is exactly according to flow-of-control
>>> including exception-handling.
>>>
>>> There's that and there's basically it only needs one Thread, so,
>>> less Thread x stack size, for a deep enough thread call-stack. Then the
>>> idea is about one Thread per core, figuring for the thread to always be
>>> running and never be blocking.
>>>
>>> So, it's just normal flow-of-control.
>>>
>>> Well yeah, you expect to write the routine in normal
>>> flow-of-control, and to test it with synchronous and in-memory editions
>>> that just run through synchronously, and that if you don't much care if
>>> it blocks, then it's the same code and has no semantics about the
>>> asynchronous or callbacks actually in it. It just returns when it's
>>> done.
>>>
>>>
>>> So what's the requirements of one of these again?
>>>
>>> Well, the idea is, that, for a given instance of a re-routine, it's
>>> an Object, that implements an interface, and it has arguments, and it
>>> has a return value. The expectation is that the re-routine gets called
>>> with the same arguments, and must return the same return value. This
>>> way later calls to re-routines can match the same expectation,
>>> same/same.
>>>
>>> Also, if it gets different arguments, by Object identity or
>>> primitive value, the re-routine must return a different return value,
>>> those being same/same.
>>>
>>> The re-routine memoizes its arguments by its argument list, Object
>>> or primitive value, and a given argument list is same if the order and
>>> types and values of those are same, and it must return the same return
>>> value by type and value.
>>>
>>> So, how is this cooperative multithreading unobtrusively in
>>> flow-of-control again?
>>>
>>> Here for example the idea would be, rr2 quits and rr1 continues, rr3
>>> quits and rr1 continues, then reaching rr4, rr4 throws and rr1 quits.
>>> When rr2's or rr3's memo-callback completes, then it calls-back rr1. As
>>> those come in, at some point rr4 will be fulfilled, and thus rr4 will
>>> quit and rr1 will quit. When rr4's callback completes, then it will
>>> call-back rr1, which will finally complete, and then call-back whatever
>>> called rr1. Then rr1 runs itself through one more time to
>>> delete or decrement all its memos.
>>>
>>> interface Reroutine1 {
>>>
>>> default Result1 rr1(String a1) {
>>>
>>> Result2 r2 = reroutine2.rr2(a1);
>>>
>>> Result3 r3 = reroutine3.rr3(a1);
>>>
>>> Result4 r4 = reroutine4.rr4(a1, r2, r3);
>>>
>>> return Result1.r4(a1, r4);
>>> }
>>>
>>> }
>>>
>>> The idea is that it doesn't block when it launches rr2 and rr3, until
>>> such time as it just quits when it tries to invoke rr4 and gets a
>>> resulting NullPointerException, then eventually rr4 will complete and be
>>> memoized and call-back rr1, then rr1 will be called-back and then
>>> complete, then run itself through to delete or decrement the ref-count
>>> of all its memo-ized fragmented monad respectively.
>>>
>>> Thusly it's cooperative multithreading by never blocking and always just
>>> launching callbacks.
>>>
>>> There's this System.identityHashCode() method and then there's a notion
>>> of Object pools and interning Objects then as for about this way that
>>> it's about numeric identity instead of value identity, so that when
>>> making memo's that it's always "==" and for a HashMap with
>>> System.identityHashCode() instead of ever calling equals(), when calling
>>> equals() is more expensive than calling == and the same/same
>>> memo-ization is about Object numeric value or the primitive scalar
>>> value, those being same/same.
>>>
>>> https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#identityHashCode-java.lang.Object-
>>>
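Worth noting, as a suggestion only: the JDK already ships java.util.IdentityHashMap, which keys exactly by == and System.identityHashCode() and never calls equals(), so a memo map of this sort needn't be hand-rolled:

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// Demonstrates value identity vs numeric (object) identity: two
// equal-by-value Strings with distinct object identities.
public class IdentityDemo {
    public static boolean[] containsByValueThenIdentity() {
        Map<String, Integer> byValue = new HashMap<>();            // equals()/hashCode()
        Map<String, Integer> byIdentity = new IdentityHashMap<>(); // ==/identityHashCode()
        String a = new String("key");
        String b = new String("key"); // b.equals(a), yet a != b
        byValue.put(a, 1);
        byIdentity.put(a, 1);
        return new boolean[] { byValue.containsKey(b), byIdentity.containsKey(b) };
    }
}
```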
>>>
>>>
>>>
>>> So, you figure to return Objects to these connections by their session
>>> and connection and mux/demux in these callbacks and then write those
>>> out?
>>>
>>> Well, the idea is to make it so that according to the protocol, the
>>> back-end sort of knows what makes a handle to a datum of the sort,
>>> given the protocol, and the callback is just
>>> these handles, about what goes in the outer callbacks or outside the
>>> re-routine, those can be different/same. Then the single writer thread
>>> servicing the network I/O just wants to transfer those handles, or, as
>>> necessary through the compression and encryption codecs, then write
>>> those out, well making use of the java.nio for scatter/gather and vector
>>> I/O in the non-blocking and asynchronous I/O as much as possible.
>>>
>>>
>>> So, that seems a lot of effort to just passing the handles, ....
>>>
>>> Well, I don't want to write any code except normal flow-of-control.
>>>
>>> So, this same/same bit seems onerous, as long as different/same has a
>>> ref-count and thus the memo-ized monad-fragment is maintained when all
>>> sorts of requests fetch the same thing.
>>>
>>> Yeah, maybe you're right. There's much to be gained by re-using monadic
>>> pure idempotent functions yet only invoking them once. That gets into
>>> value equality besides numeric equality, though, with regards to going
>>> into re-routines and interning all Objects by value, so that inside and
>>> through it's all "==" and System.identityHashCode, the memos, then about
>>> the ref-counting in the memos.
>>>
>>>
>>> So, I suppose you know HTTP, and about HTTP/2 and IMAP and NNTP here?
>>>
>>> Yeah, it's a thing.
>>>
>>> So, I think this needs a much cleaner and well-defined definition, to
>>> fully explore its meaning.
>>>
>>> Yeah, I suppose. There's something to be said for reading it again.
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>> ReRoutines: monadic functional non-blocking asynchrony in the language
>>
>>
>> Implementing a sort of Internet protocol server, it sort of has three or
>> four kinds of machines.
>>
>> flow-machine: select/epoll hardware driven I/O events
>>
>> protocol-establishment: setting up and changing protocol (commands,
>> encryption/compression)
>>
>> protocol-coding: block coding in encryption/compression and wire/object
>> commands/results
>>
>> routine: inside the objects of the commands of the protocol,
>> commands/results
>>
>> Then, it often looks sort of like
>>
>> flow <-> protocol <-> routine <-> protocol <-> flow
>>
>>
>> On either outer side of the flow is a connection, it's a socket or the
>> receipt or sending of a datagram, according to the network interface and
>> select/epoll.
>>
>> The establishment of a protocol looks like
>> connection/configuration/commencement/conclusion, or setup/teardown.
>> Protocols involve renegotiation within a protocol, and for example
>> upgrade among protocols. Then the protocol is set up and established.
>>
>> The idea is that a protocol's coding is in three parts for
>> coding/decoding, compression/decompression, and (en)cryption/decryption,
>> or as it gets set up.
>>
>> flow->decrypt->decomp->decod->routine->cod->comp->crypt->flow-v
>> flow<-crypt<-comp<-cod<-routine<-decod<-decomp<-decrypt<-flow<-
>>
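One way to sketch that symmetric coding pipeline is as composed stages, inbound in one order and outbound in the reverse (a hypothetical sketch, not the server's actual types):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch: each protocol coding step is a byte[] -> byte[]
// stage; inbound runs decrypt -> decompress -> decode, and outbound runs
// the inverse stages in reverse order.
public class CodingPipeline {
    public static byte[] through(List<UnaryOperator<byte[]>> stages, byte[] flow) {
        for (UnaryOperator<byte[]> stage : stages) {
            flow = stage.apply(flow);
        }
        return flow;
    }
}
```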
>>
>>
>> Whenever data arrives, the idea goes, is that the flow is interpreted
>> according to the protocol, resulting commands, then the routine derives
>> results from the commands, as by issuing others, in their protocols, to
>> the backend flow. Then, the results get sent back out through the
>> protocol, to the frontend, to the clients that the server serves over
>> the protocol.
>>
>> The idea is that there are about 10,000 connections at a time, or more
>> or less.
>>
>> flow <-> protocol <-> routine <-> protocol <-> flow
>> flow <-> protocol <-> routine <-> protocol <-> flow
>> flow <-> protocol <-> routine <-> protocol <-> flow
>> ...
>>
>>
>>
>>
>> Then, the routine in the middle, has that there's one processor, and on
>> the processor are a number of cores, each one independent. Then, the
>> operating system establishes that each of the cores, has any number of
>> threads-of-control or threads, and each thread has the state of where it
>> is in the callstack of routines, and the threads are preempted so that
>> multithreading, that a core runs multiple threads, gives each thread
>> some running from the entry to the exit of the thread, in any given
>> interval of time. Each thread-of-control is thusly independent, while it
>> must synchronize with any other thread-of-control, to establish common
>> or mutual state, and threads establish taking turns by mutual exclusion,
>> called "mutex".
>>
>> Into and out of the protocol, coding, is either a byte-sequence or
>> block, or otherwise the flow is a byte-sequence, that being serial,
>> however the protocol multiplexes and demultiplexes messages, the
>> commands and their results, to and from the flow.
>>
>> Then the idea is that what arrives to/from the routine, is objects in
>> the protocol, or handles to the transport of byte sequences, in the
>> protocol, to the flow.
>>
>> A usual idea is that there's a thread that services the flow, where, how
>> it works is that a thread blocks waiting for there to be any I/O,
>> input/output, reading input from the flow, and writing output to the
>> flow. So, mostly the thread that blocks has that there's one thread that
>> blocks on input, and when there's any input, then it reads or transfers
>> the bytes from the input, into buffers. That's its only job, and only
>> one thread can block on a given select/epoll selector, which is any
>> given number of ports, the connections, the idea being that it just
>> blocks until select returns for its keys of interest, it services each
>> of the I/O's by copying from the network interface's buffers into the
>> program's buffers, then other threads do the rest.
>>
>> So, if a thread results waiting at all for any other action to complete
>> or be ready, it's said to "block". While a thread is blocked, the CPU or
>> core just skips it in scheduling the preemptive multithreading, yet it
>> still takes some memory and other resources and is in the scheduler of
>> the threads.
>>
>> The idea that the I/O thread, ever blocks, is that it's a feature of
>> select/epoll that hardware results waking it up, with the idea that
>> that's the only thread that ever blocks.
>>
>> So, for the other threads, in the decryption/decompression/decoding and
>> coding/compression/cryption, the idea is that a thread, runs through
>> those, then returns what it's doing, and joins back to a limited pool of
>> threads, with a usual idea of there being 1 core : 1 thread, so that
>> multithreading is sort of simplified, because as far as the system
>> process is concerned, it has a given number of cores and the system
>> preemptively multithreads it, and as far as the virtual machine is
>> concerned, it has a given number of cores and the virtual machine
>> preemptively multithreads its threads, about the thread-of-control, in
>> the flow-of-control, of the thing.
>>
>> A usual way that the routine multiplexes and demultiplexes objects in the
>> protocol from a flow's input back to a flow's output, has that the
>> thread-per-connection model has that a single thread carries out the
>> entire task through the backend flow, blocking along the way, until it
>> results joining after writing back out to its connection. Yet, that has
>> a thread per each connection, and threads use scheduling and heap
>> resources. So, here thread-per-connection is being avoided.
>>
>> Then, a usual idea of the tasks, is that as I/O is received and flows
>> into the decryption/decompression/decoding, then what's decoded, results
>> the specification of a task, the command, and the connection, where to
>> return its result. The specification is a data structure, so it's an
>> object or Object, then. This is added to a queue of tasks, where
>> "buffers" represent the ephemeral storage of content in transport the
>> byte-sequences, while, the queue is as usually a first-in/first-out
>> (FIFO) queue also, of tasks.
>>
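A task specification of that shape could be sketched as below (TaskSpec and its fields are hypothetical names): the decoded command, the connection where the result returns, and the buffered body, queued FIFO:

```java
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: what the decoder emits onto the FIFO task queue.
public class Tasks {
    public static final class TaskSpec {
        public final Object command;    // the decoded command object
        public final Object connection; // where to return the result
        public final ByteBuffer body;   // ephemeral content in transport
        public TaskSpec(Object command, Object connection, ByteBuffer body) {
            this.command = command;
            this.connection = connection;
            this.body = body;
        }
    }

    // First-in/first-out: the cores consume specifications in arrival order.
    public final Queue<TaskSpec> fifo = new ConcurrentLinkedQueue<>();
}
```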
>> Then, the idea is that each of the cores consumes task specifications
>> from the task queue, performs them according to the task specification,
>> then the results are written out, as coded/compressed/crypted, in the
>> protocol.
>>
>> So, to avoid the threads blocking at all, introduces the idea of
>> "asynchrony" or callbacks, where the idea is that the "blocking" and
>> "synchronous" has that anywhere in the threads' thread-of-control
>> flow-of-control, according to the program or the routine, it is current
>> and synchronous, the value that it has, then with regards to what it
>> returns or writes, as the result. So, "asynchrony" is the idea that
>> there's established a callback, or a place to pause and continue, then a
>> specification of the task in the protocol is put to an event queue and
>> executed, or from servicing the O/I's of the backend flow, that what
>> results from that, has the context of the callback and returns/writes to
>> the relevant connection, its result.
>>
>> I -> flow -> protocol -> routine -> protocol -> flow -> O -v
>> O <- flow <- protocol <- routine <- protocol <- flow <- I <-
>>
>>
>> The idea of non-blocking then, is that a routine either provides a
>> result immediately available, and is non-blocking, or, queues a task
>> what results a callback that provides the result eventually, and is
>> non-blocking, and never invokes any other routine that blocks, so is
>> non-blocking.
>>
>> This way a thread, executing tasks, always runs through a task, and thus
>> services the task queue or TQ, so that the cores' threads are always
>> running and never blocking. (Besides the I/O and O/I threads which block
>> when there's no traffic, and usually would be constantly woken up and
>> not waiting blocked.) This way, the TQ threads, only block when there's
>> nothing in the TQ, or are just deconstructed, and reconstructed, in a
>> "pool" of threads, the TQ's executor pool.
>>
>> Enter the ReRoutine
>>
>> The idea of a ReRoutine, a re-routine, is that it is a usual procedural
>> implementation as if it were synchronous, and agnostic of callbacks.
>>
>> It is named after "routine" and "co-routine". It is a sort of co-routine
>> that builds a monad and is aware its originating caller, re-caller, and
>> callback, or, its re-routine caller, re-caller, and callback.
>>
>> The idea is that there are callbacks implicitly at each method boundary,
>> and that nulls are reserved values to indicate the result or lack
>> thereof of re-routines, so that the code has neither callbacks nor any
>> nulls.
>>
>> The originating caller has that the TQ, has a task specification, the
>> session+attachment of the client in the protocol where to write the
>> output, and the command, then the state of the monad of the task, that
>> lives on the heap with the task specification and task object. The TQ
>> consumers or executors or the executor, when a thread picks up the task,
>> it picks up or builds ("originates") the monad state, which is the
>> partial state of the re-routine and a memo of the partial state of the
>> re-routine, and installs this in the thread local storage or
>> ThreadLocal, for the duration of the invocation of the re-routine. Then
>> the thread enters the re-routine, which proceeds until it would block,
>> where instead it queues a command/task with callback to re-call it to
>> re-launch it, and throws a NullPointerException and quits/returns.
>>
>> This happens recursively and iteratively in the re-routine implemented
>> as re-routines, each re-routine updates the partial state of the monad,
>> then that as a re-routine completes, it re-launches the calling
>> re-routine, until the original re-routine completes, and it calls the
>> original callback with the result.
>>
>> This way the re-routine's method body, is written as plain declarative
>> procedural code, the flow-of-control, is exactly as if it were
>> synchronous code, and flow-of-control is exactly as if written in the
>> language with no callbacks and never nulls, and exception-handling as
>> exactly defined by the language.
>>
>> As the re-routine accumulates the partial results, they live on the
>> heap, in the monad, as a member of the originating task's object the
>> task in the task queue. This is always added back to the queue as one of
>> the pending results of a re-routine, so it stays referenced as an object
>> on the heap, then that as it is completed and the original re-routine
>> returns, then it's no longer referenced and the garbage-collector can
>> reclaim it from the heap or the allocator can delete it.
>>
>>
>>
>>
>>
>>
>>
>> Well, for the re-routine, I sort of figure there's a Callstack and a
>> Callback type
>>
>> class Callstack {
>> Stack<Callback> callstack;
>> }
>>
>> interface Callback {
>> void callback() throws Exception;
>> }
>>
>> and then a placeholder sort of type for Callflush
>>
>> class Callflush {
>> Callstack callstack;
>> }
>>
>> with the idea that the presence in ThreadLocals is to be sorted out,
>> about a kind of ThreadLocal static pretty much.
>>
>> With not returning null and for memoizing call-graph dependencies,
>> there's basically for an "unvoid" type.
>>
>> class unvoid {
>>
>> }
>>
>> Then it's sort of figured that there's an interface with some defaults,
>> with the idea that some boilerplate gets involved in the Memoization.
>>
>> interface Caller {}
>>
>> interface Callee {}
>>
>> interface Callmemo {
>> void memoize(Caller caller, Object[] args);
>> void flush(Caller caller);
>> }
>>
>>
>> Then it seems that the Callstack should instead be of a Callgraph, and
>> then what's maintained from call to call is a Callpath, and then what's
>> memoized is all kept with the Callgraph, then with regards to objects on
>> the heap and their distinctness, only being reachable from the
>> Callgraph, leaving less work for the garbage collector, to maintain the
>> heap.
>>
>> The interning semantics would still be on the class level, or for
>> constructor semantics, as with regards to either interning Objects for
>> uniqueness, or that otherwise they'd be memoized, with the key being the
>> Callpath, and the initial arguments into the Callgraph.
>>
>> Then the idea seems that the ThreaderCaller, establishes the Callgraph
>> with respect to the Callgraph of an object, installing it on the thread,
>> otherwise attached to the Callgraph, with regards to the ReRoutine.
>>
>>
>>
>> About the ReRoutine, it's starting to come together as an idea, what is
>> the apparatus for invoking re-routines, that they build the monad of the
>> IOE's (inputs, outputs, exceptions) of the re-routines in their
>> call-graph, in terms of ThreadLocals of some ThreadLocals that callers
>> of the re-routines, maintain, with idea of the memoized monad along the
>> way, and each original re-routine.
>>
>> class IOE<O, E extends Exception> {
>> Object[] input;
>> O output;
>> E exception;
>> }
>>
>> So the idea is that there are some ThreadLocal's in a static ThreadGlobal
>>
>> public class ThreadGlobals {
>> public static ThreadLocal<MonadMemo> monadMemo;
>> }
>>
>> where callers or originators or ReRoutines, keep a map of the Runnables
>> or Callables they have, to the MonadMemo's,
>>
>> class Originator {
>> Map<? extends ReRoutineMapKey, MonadMemo> monadMemoMap;
>> }
>>
>> then when it's about to invoke a Runnable, if it's a ReRoutine, then it
>> either retrieves the MonadMemo or makes a new one, and sets it on the
>> ThreadLocal, then invokes the Runnable, then clears the ThreadLocal.
>>
>> Then a MonadMemo, pretty simply, is a List of IOE's, that when the
>> ReRoutine runs through the callgraph, the callstack is indicated by a
>> tree of integers, and the stack path in the ReRoutine, so that any
>> ReRoutine that calls ReRoutines A/B/C, points to an IOE that it finds in
>> the thing, then its default behavior is to return its memo-ized value,
>> that otherwise is making the callback that fills its memo and re-invokes
>> all the way back the Original routine, or just its own entry point.
>>
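A minimal data-structure sketch of that MonadMemo, a map from the call-path (the tree-of-integers position in the callgraph) to the memo-ized IOE, with a missing entry meaning pending (names hypothetical):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: the MonadMemo keyed by call-path, so a ReRoutine
// that calls A/B/C finds its callees' IOEs by their position in the
// callgraph; a missing entry means the input is still pending.
public class MonadMemo {
    public static final class IOE {
        public Object[] input;
        public Object output;
        public Exception exception;
    }

    private final Map<List<Integer>, IOE> byPath = new HashMap<>();

    public IOE get(List<Integer> path) { return byPath.get(path); } // null = pending
    public void put(List<Integer> path, IOE ioe) { byPath.put(path, ioe); }
}
```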
>> This is basically that the Originator, when the ReRoutine quits out,
>> sort of has that any ReRoutine it originates, also gets filled up by the
>> Originator.
>>
>> So, then the Originator sort of has a map to a ReRoutine, then for any
>> Path, the Monad, so that when it sets the ThreadLocal with the
>> MonadMemo, it also sets the Path for the callee, launches it again when
>> its callback returned to set its memo and relaunch it, then back up the
>> path stack to the original re-routine.
>>
>> One of the issues here is "automatic parallelization". What I mean by
>> that is that the re-routine just goes along and when it gets nulls
>> meaning "pending" it just continues along, then expects
>> NullPointerExceptions as "UnsatisfiedInput", to quit, figuring it gets
>> relaunched when its input is satisfied.
>>
>> This way then when routines serially don't depend on each others'
>> outputs, then they all get launched apiece, parallelizing.
>>
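That null-means-pending, NullPointerException-means-quit convention can be sketched minimally; rr2 here is a hypothetical memoized getter that hasn't been satisfied yet:

```java
public class PendingQuitSketch {
    // A hypothetical memo-backed getter: null means "pending", not "absent".
    static String rr2(String input) { return null; } // not yet satisfied

    // The re-routine just runs along; dereferencing a pending null throws
    // NullPointerException, treated as "quit, a re-launch is coming".
    static String reRoutine(String s) {
        String out = rr2(s);
        return out.toUpperCase(); // throws NPE while pending
    }

    public static void main(String[] args) {
        try {
            reRoutine("x");
            System.out.println("completed");
        } catch (NullPointerException quit) {
            System.out.println("quit-pending"); // expect a callback to re-launch
        }
    }
}
```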
>> Then, I wonder about usual library code, basically about Collections and
>> Streams, and the usual sorts of routines that are applied to the
>> arguments, and how to basically establish that the rule of re-routine
>> code is that anything that gets a null must throw a
>> NullPointerException, so the re-routine will quit until the arguments
>> are satisfied, the inputs to library code. Then with the Memo being
>> stored in the MonadMemo, it's figured that will work out regardless the
>> Objects' or primitives' value, with regards to Collections and Stream
>> code and after usual flow-of-control in Iterables for the for loops, or
>> whatever other application library code, that they will be run each time
>> the re-routine passes their section with satisfied arguments, then as
>> with regards to, that the Memo is just whatever serial order the
>> re-routine passes, not needing to lookup by Object identity which is
>> otherwise part of an interning pattern.
>>
>> Map<String, String> rr1(String s1) {
>>
>> List<String> l1 = rr2.get(s1);
>>
>> Map<String, String> m1 = new LinkedHashMap<>();
>>
>> l1.stream().forEach(s -> m1.put(s, rr3.get(s)));
>>
>> return m1;
>> }
>>
>> See what I figure is that the order of the invocations to rr3.get() is
>> serial, so it really only needs to memoize its OE, Output|Exception,
>> then about that putting null values in the Map, and having to check the
>> values in the Map for null values, and otherwise to make it so that the
>> semantics of null and NullPointerException, result that satisfying
>> inputs result calls, and unsatisfying inputs result quits, figuring
>> those unsatisfying inputs are results of unsatisfied outputs, that will
>> be satisfied when the callee gets populated its memo and makes the
>> callback.
>>
>> If the order of invocations is out-of-order, gets again into whether the
>> Object/primitive by value needs to be the same each time, IOE, about the
>> library code in Collections, Streams, parallelStream, and Iterables, and
>> basically otherwise that any kind of library code, should throw
>> NullPointerException if it gets an "unexpected" null or what doesn't
>> fulfill it.
>>
>> The idea though that rr3 will get invoked say 1000 times with the rr2's
>> result, those each make their call, then re-launch 1000 times, has that
>> it's figured that the Executor, or Originator, when it looks up and
>> loads the "ReRoutineMapKey", is to have the count of those and whether
>> the count is fulfilled, then to no-op later re-launches of the
>> call-backs, after all the results are populated in the partial monad
>> memo.
>>
>> Then, there's perhaps instead as that each re-routine just checks its
>> input or checks its return value for nulls, those being unsatisfied.
>>
>> (The exception handling thoroughly or what happens when rr3 throws and
>> this kind of thing is involved thoroughly in library code.)
>>
>> The idea is it remains correct if the worst thing nulls do is throw
>> NullPointerException, because that's just a usual quit and means another
>> re-launch is coming up, and that it automatically queues for
>> asynchronous parallel invocation each the derivations while resulting
>> never blocking.
>>
>> It's figured that re-routines check their inputs for nulls, and throw
>> quit, and check their inputs for library container types, and checking
>> any member of a library container collection for null, to throw quit,
>> and then it will result that the automatic asynchronous parallelization
>> proceeds, while the re-routines are never blocking, there's only as much
>> memory on the heap of the monad as would be in the lifetime of the
>> original re-routine, and whatever re-calls or re-launches of the
>> re-routine established local state in local variables and library code,
>> would come in and out of scope according to plain stack unwinding.
>>
>> Then there's still the perceived deficiency that the re-routine's method
>> body will be run many times, yet it's only re-run as many times as result
>> in throwing-quit, when it reaches a point where an argument to the
>> re-routine or a result value isn't yet satisfied and is pending.
>>
>> It would re-run the library code any number of times, until it results
>> all non-nulls, then the resulting satisfied argument to the following
>> re-routines, would be memo-ized in the monad, and the return value of
>> the re-routine thus returning immediately its value on the partial monad.
>>
>> This way each re-call of the re-routine, mostly encounters its own monad
>> results in constant time, and throws-quit or gets thrown-quit only when
>> it would be unsatisfying, with the expectation that whatever
>> throws-quit, either NullPointerException or extending
>> NullPointerException, will have a pending callback, that will queue on a
>> TQ, the task specification to re-launch and re-enter the original or
>> derived, re-routine.
>>
>> The idea is sort of that it's sort of, Java with non-blocking I/O and
>> ThreadLocal (1.7+, not 17+), or you know, C/C++ with non-blocking I/O
>> and thread local storage, then for the abstract or interface of the
>> re-routines, how it works out that it's a usual sort of model of
>> co-operative multithreading, the re-routine, the routine "in the
>> language".
>>
>>
>> Then it's great that the routine can be stubbed or implemented agnostic
>> of asynchrony, and declared in the language with standard libraries,
>> basically using the semantics of exception handling and convention of
>> re-launching callbacks to implement thread-of-control flow-of-control,
>> that can be implemented in the synchronous and blocking for unit tests
>> and modules of the routine, making a great abstraction of
>> flow-of-control.
>>
>>
>> Basically anything that _does_ block then makes for having its own
>> thread, whose only job is to block and when it unblocks, throw-toss the
>> re-launch toward the origin of the re-routine, and consume the next
>> blocking-task off the TQ. Yet, the re-routines and their servicing the
>> TQ only need one thread and never block. (And scale in core count and
>> automatically parallelize asynchronous requests according to satisfied
>> inputs.)
>>
>>
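A minimal sketch of that dedicated-blocker idea, assuming a LinkedBlockingQueue for the TQ (the class and task names are illustrative): the re-routine side only ever offer()s and never blocks, while the blocker thread's only job is to block, run the task, and toss the re-launch.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingWorkerSketch {
    // The TQ of blocking tasks; re-routine threads only offer(), never block.
    static final BlockingQueue<Runnable> TQ = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        // The dedicated blocker thread: block on the TQ, run the blocking
        // task, then "throw-toss" the re-launch (here: just a print).
        Thread blocker = new Thread(() -> {
            try {
                while (true) {
                    Runnable task = TQ.take(); // this thread may block; others never do
                    task.run();
                }
            } catch (InterruptedException shutdown) { /* done */ }
        });
        blocker.setDaemon(true);
        blocker.start();

        TQ.offer(() -> {
            System.out.println("re-launch toward origin");
            done.countDown();
        });
        done.await(); // sketch only: wait so the output is visible
    }
}
```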
>> Mostly the idea of the re-routine is "in the language, it's just plain,
>> ordinary, synchronous routine".
>>
>>
>>
>
>
> Protocol Establishment
>
> Each of these protocols is a combined sort of protocol, then according
> to different modes, there's established a protocol, then data flows in
> the protocol (in time).
>
>
> stream-based (connections)
> sockets, TCP/IP
> sctp, SCTP
> message-based (datagrams)
> datagrams, UDP
>
> The idea is that connections can have state and session state, while,
> messages do not.
>
> Abstractly then there's just that connections make for reading from the
> connection, or writing to the connection, byte-by-byte,
> while messages make for receiving a complete message, or writing a
> complete message. SCTP is sort of both.
>
> A bit more concretely, the non-blocking or asynchronous or vector I/O,
> means that when some bytes arrive the connection is readable, and while
> the output buffer is not full a connection is writeable.
>
> For messages it's that when messages arrive messages are readable, and
> while the output buffer is not full messages are writeable.
>
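That readable-when-bytes-arrive condition is what java.nio's Selector surfaces as interest/ready ops; a small sketch using a selectable Pipe to stand in for a TCP connection (the names are illustrative, not any particular server's):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class ReadableWritableSketch {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();              // stands in for a TCP connection
        pipe.source().configureBlocking(false);
        Selector selector = Selector.open();
        pipe.source().register(selector, SelectionKey.OP_READ);

        // "some bytes arrive" on the other end
        pipe.sink().write(ByteBuffer.wrap("hi".getBytes("US-ASCII")));

        selector.select(1000);                // now the source is readable
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(16);
                ((Pipe.SourceChannel) key.channel()).read(buf);
                buf.flip();
                System.out.println("readable: " + buf.remaining() + " bytes");
            }
        }
    }
}
```

The writeable side is symmetric: register OP_WRITE interest only while there's pending output, since a mostly-empty output buffer is almost always writeable.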
> Otherwise bytes or messages that arrive while not readable/writeable
> pile up, and in cases of limited resources get lost.
>
> So, the idea is that when bytes arrive, whatever's servicing the I/O's
> has that the connection has data to read, and, data to write.
> The usual idea is that an abstract Reader thread, will give any or all
> of the connections something to read, in an arbitrary order,
> at an arbitrary rate, then the role of the protocol, is to consume the
> bytes to read, thus releasing the buffers, that the Reader, writes to.
>
> Inputting/Reading
> Writing/Outputting
>
> The most usual idea of client-server is that
> client writes to server then reads from server, while,
> server reads from client then writes to client.
>
> Yet, that is just a mode, reads and writes are peer-peer,
> reads and writes in any order, while serial according to
> that bytes in the octet stream arrive in an order.
>
> There isn't much consideration of the out-of-band,
> about sockets and the STREAMS protocol, for
> that bytes can arrive out-of-band.
>
>
> So, the layers of the protocol, result that some layers of the protocol
> don't know anything about the protocol, all they know is sequences of
> bytes, and, whatever session state is involved to implement the codec,
> of the layers of the protocol. All they need to know is that given that
> all previous bytes are read/written, that the connection's state is
> synchronized, and everything after is read/written through the layer.
> Mostly once encryption or compression is set up it's never torn down.
>
> Encryption, TLS
> Compression, LZ77 (Deflate, gzip)
>
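The compression layer's round trip can be sketched with the JDK's own Deflater/Inflater; the sample line is an arbitrary NNTP-ish response, and the single-call deflate/inflate only suffices here because the input is tiny:

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CompecSketch {
    public static void main(String[] args) throws Exception {
        byte[] plain = "215 list of newsgroups follows\r\n".getBytes("US-ASCII");

        // comp: once the layer is set up, everything written gets deflated
        Deflater deflater = new Deflater();
        deflater.setInput(plain);
        deflater.finish();
        byte[] frame = new byte[256];
        int n = deflater.deflate(frame);

        // decomp: the peer's layer inflates everything read
        Inflater inflater = new Inflater();
        inflater.setInput(frame, 0, n);
        byte[] out = new byte[256];
        int m = inflater.inflate(out);

        System.out.println(new String(out, 0, m, "US-ASCII").trim());
    }
}
```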
> The layers of the protocol, result that some layers of the protocol,
> only indicate state or conditions of the session.
>
> SASL, Login, AuthN/AuthZ
>
> So, for NNTP, a connection, usually enough starts with no layers,
> then in the various protocols and layers, get negotiated to get
> established,
> combinations of the protocols and layers. Other protocols expect to
> start with layers, or not, it varies.
>
> Layering, then, is in the protocol: synchronize the session, then
> establish the layer in the layer protocol, then maintain the layer in
> the main protocol. TLS makes a handshake to establish an encryption
> key for all the data, then the TLS layer only needs to encrypt and
> decrypt the data by that key, while for Deflate, it's usually the only
> option, then after it's set up as a layer, then
>
>
> client -> REQUEST
> RESPONSE <- server
>
> In some protocols these interleave
>
> client -> REQUEST1
> client -> REQUEST2
>
> RESPONSE1A <- server
> RESPONSE2A <- server
> RESPONSE1B <- server
> RESPONSE2B <- server
>
> This then is called multiplexing/demultiplexing, for protocols like IMAP
> and HTTP/2,
> and another name for multiplexer/demultiplexer is mux/demux.
>
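The demux side of that interleaving is essentially a map from sequence tag to pending request, so out-of-order responses still find their requests; a toy sketch (names and tags are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MuxDemuxSketch {
    public static void main(String[] args) {
        // Client-side demux: pending requests keyed by sequence number.
        Map<Integer, String> pending = new HashMap<>();
        pending.put(1, "REQUEST1");
        pending.put(2, "REQUEST2");

        // Responses arrive interleaved, each tagged with its request number.
        int[] responseTags = {1, 2, 1, 2};
        for (int tag : responseTags) {
            System.out.println("response for " + pending.get(tag));
        }
    }
}
```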
>
>
>
> So, for TLS, the idea is that usually most or all of the connections
> will be using the same algorithms with different keys, and each
> connection will have its own key, so the idea is to completely separate
> TLS establishment from TLS cryptec (crypt/decrypt), so, the layer need
> only key up the bytes by the connection's key, in their TLS frames.
>
> Then, most of the connections will use compression, then the idea is
> that the data is stored at rest compressed already and in a form that it
> can be concatenated, and that similarly as constants are a bunch of the
> textual context of the text-based protocol, they have compressed and
> concatenable constants, with the idea that the Deflate compec
> (comp/decomp) just passes those along concatenating them, or actively
> compresses/decompresses buffers of bytes or as of sequences of bytes.
>
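The concatenable-at-rest idea works directly with gzip, since a gzip stream may consist of multiple members back-to-back and the JDK's GZIPInputStream reads concatenated members as one stream; a sketch (the stored strings are placeholders):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ConcatenableSketch {
    static byte[] gzip(String s) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes("US-ASCII"));
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Two members stored compressed at rest, concatenated byte-for-byte:
        ByteArrayOutputStream cat = new ByteArrayOutputStream();
        cat.write(gzip("constant header, "));
        cat.write(gzip("message body"));

        // The compec can pass the concatenation along untouched; the
        // decoder reads the members back-to-back as one stream.
        GZIPInputStream in =
            new GZIPInputStream(new ByteArrayInputStream(cat.toByteArray()));
        StringBuilder sb = new StringBuilder();
        byte[] buf = new byte[256];
        int n;
        while ((n = in.read(buf)) > 0) sb.append(new String(buf, 0, n, "US-ASCII"));
        System.out.println(sb);
    }
}
```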
> The idea is that Readers and Writers deal with bytes at a time,
> arbitrarily many, then that what results being passed around as the
> data, is as much as possible handles to the data. So, according to the
> protocol and layers, indicates the types, that the command routines, get
> and return, so that the command routines can get specialized, when the
> data at rest, is already layerized, and otherwise to adapt to the more
> concrete abstraction, of the non-blocking, asynchronous, and vector I/O,
> of what results the flow-machine.
>
>
> When the library of the runtime of the framework of the language
> provides the cryptec or compec, then, there's issues, when, it doesn't
> make it so for something like "I will read and write you the bytes as of
> making a TLS handshake, then return the algorithm and the key and that
> will implement the cryptec", or, "compec, here's either some data or
> handles of various types, send them through", it's to be figured out.
> The idea for the TLS handshake, is basically to sit in the middle, i.e.
> to read and write bytes as of what the client and server send, then
> figuring out what is the algorithm and key and then just using that as
> the cryptec. Then after TLS algorithm and key is established the rest is
> sort of discarded, though there's some idea about state and session, for
> the session key feature in TLS. The TLS 1.2 also includes comp/decomp,
> though, it's figured that instead it's a feature of the protocol whether
> it supports compression, point being that's combining layers, and to be
> implemented about these byte-sequences/handles.
>
>
> mux/demux
> crypt/decrypt
> comp/decomp
> cod/decod
>
> codec
>
>
> So, the idea is to implement toward the concrete abstraction of
> nonblocking vector I/O, while remaining agnostic of that, so that all
> sorts of the usual test routines, yet particularly the composition of
> layers and the establishment and upgrade of protocols, can happen.
>
>
> Then, from the byte sequences or messages as byte sequences, or handles
> of byte sequences, results that in the protocol, the protocol either way
> in/out has a given expected set of alternatives that it can read, then
> as of derivative of those what it will write.
>
> So, after the layers, which are agnostic of anything but byte-sequences,
> and their buffers and framing and chunking and so on, then is the
> protocol, or protocols, of the command-set and request/response
> semantics, and ordering/session statefulness, and lack thereof.
>
> Then, a particular machine in the flow-machine is as of the "Recognizer"
> and "Parser", then what results "Annunciators" and "Legibilizers", as it
> were, of what's usually enough called "Deserialization", reading off
> from a serial byte-sequence, and "Serialization", writing off to a serial
> byte-sequence, first the text of the commands or the structures in these
> text-based protocols, the commands and their headers/bodies/payloads,
> then the Objects in the object types of the languages of the runtime,
> where then the routines of the servicing of the protocol, are defined in
> types according to the domain types of the protocol (and their
> representations as byte-sequences and handles).
>
> As packets and bytes arrive in the byte-sequence, the Recognizer/Parser
> detects when there's a fully-formed command, and its payload, after the
> Mux/Demux Demultiplexer, has that the Demultiplexer represents any given
> number of separate byte-sequences, then according to the protocol,
> whatever their statefulness/session or orderedness/unorderedness.
>
> So, the Demultiplexer is to Recognize/Parse from the combined input
> byte-stream its chunks, that now the connection, has any number of
> ordered/unordered byte-sequences, then usually that those are ephemeral
> or come and go, while the connection endures, with the most usual notion
> that there's only one stream and it's ordered in requests and ordered in
> responses, then whether commands gets pipelined and requests need not
> await their responses (they're ordered), and whether commands are
> numbers and their responses get associated with their command sequence
> numbers (they're unordered and the client has its own mux/demux to
> relate them).
>
> So, the Recognizer/Parser, theoretically only gets a byte at a time, or
> even none, and may get an entire fully-formed message (command), or not,
> and may get more bytes than a fully-formed message, or not, and the
> bytes may be a well-formed message, or not, and valid, or not.
>
> Then the job of the Recognizer/Parser, is from the beginning of the
> byte-sequence, to Recognize a fully-formed message, then to create an
> instance of the command object related to the handle back through the
> mux/demux to the multiplexer, called the attachment to the connection,
> or the return address according to the attachment representing any
> routed response and usually meaning that the attachment is the user-data
> and any session data attached to the connection and here of the
> mux/demux of the connection, the job of the Recognizer/Parser is to work
> any time input is received, then to recognize and parse any number of
> fully-formed messages from the input, create those Commands according to
> the protocol, that the attachment includes the return destination, and,
> thusly release those buffers or advance the marker on the Input
> byte-sequence, so that the resources are freed, and later
> Recognizings/Parsing starts where it left off.
>
> The idea is that bytes arrive, the Recognizer/Parser has to determine
> when there's a fully-formed message, consume that and service the
> buffers the byte-sequence, having created the derived command.
>
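The Recognizer's accumulate-until-fully-formed behavior can be sketched for a CRLF-delimited text command; partial chunks yield nothing, and recognizing a command advances the marker so later parsing starts where it left off (the NNTP command is just a sample):

```java
import java.nio.charset.StandardCharsets;

public class RecognizerSketch {
    // Accumulates arbitrary chunks; emits a command only once CRLF arrives.
    static final StringBuilder pending = new StringBuilder();

    static String feed(byte[] chunk) {
        pending.append(new String(chunk, StandardCharsets.US_ASCII));
        int eol = pending.indexOf("\r\n");
        if (eol < 0) return null;           // no fully-formed command yet
        String command = pending.substring(0, eol);
        pending.delete(0, eol + 2);         // advance the marker, free the bytes
        return command;
    }

    public static void main(String[] args) {
        System.out.println(feed("GRO".getBytes(StandardCharsets.US_ASCII)));
        System.out.println(feed("UP sci.math\r\n".getBytes(StandardCharsets.US_ASCII)));
    }
}
```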
> Now, commands are small, or so few words, then the headers/body/payload,
> basically get larger and later unboundedly large. Then, the idea is that
> the protocol, has certain modes or sub-protocols, about "switching
> protocols", or modes, when basically the service of the routine changes
> from recognizing and servicing the beginning to ending of a command, to
> recognizing and servicing an arbitrarily large payload, or, for example,
> entering a mode where streamed data arrives or whatever sort, then that
> according to the length or content of the sub-protocol format, the
> Recognizer's job includes that the sub-protocol-streaming, modes, get
> into that "sub-protocols" is a sort of "switching protocols", the only
> idea though being going into the sub-protocol then back out to the main
> protocol, while "switching protocols" is involved in basically any the
> establishment or upgrade of the protocol, with regards to the stateful
> connection (and not stateless messages, which always are according to
> their established or simply some fixed protocol).
>
> This way unboundedly large inputs, don't actually live in the buffers of
> the Recognizers that service the buffers of the Inputters/Readers and
> Multiplexers/Demultiplexers, instead define modes where they will be
> streaming through arbitrarily large payloads.
>
> Here for NNTP and so on, the payloads are not considered arbitrarily
> large, though, it's sort of a thing that sending or receiving the
> payload of each message, can be defined this way so that in very, very
> limited resources of buffers, that the flow-machine keeps flowing.
>
>
> Then, here, the idea is that these commands and their payloads, have
> their outputs that are derived as a function of the inputs. It's
> abstractly however this so occurs is the way it is. The idea here is
> that the attachment+command+payload makes a re-routine task, and is
> pushed onto a task queue (TQ). Then it's figured that the TQ represents
> abstractly the execution of all the commands. Then, however many Task
> Workers or TW, or the TQ that runs itself, get the oldest task from the
> queue (FIFO) and run it. When it's complete, then there's a response
> ready in byte-sequences or handles, and these are returned to the attachment.
>
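The attachment+command task and the FIFO TQ can be sketched in a few lines; the Task shape and the printed "response" are illustrative stand-ins for the real command objects and attachments:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class TaskQueueSketch {
    // attachment + command (+ payload) makes a task; the TQ runs them FIFO.
    static class Task {
        final String attachment, command;
        Task(String attachment, String command) {
            this.attachment = attachment;
            this.command = command;
        }
    }

    public static void main(String[] args) {
        Queue<Task> tq = new ArrayDeque<>();
        tq.add(new Task("conn-1", "GROUP sci.math"));
        tq.add(new Task("conn-2", "ARTICLE 100"));

        // A task worker takes the oldest task, runs it, and returns the
        // response to the task's attachment (here: just a print).
        while (!tq.isEmpty()) {
            Task t = tq.poll();
            System.out.println(t.attachment + " <- response to " + t.command);
        }
    }
}
```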
> (The "attachment" usually just means a user or private datum associated
> with the connection to identify its session with the connection
> according to non-blocking I/O, here it also means the mux/demux
> "remultiplexer" attachment, it's the destination of any response
> associated with a stream of commands over the connection.)
>
> So, here then the TQ basically has the idea of the re-routine, that is
> non-blocking and involves the asynchronous fulfillment of the routine in
> the domain types of the domain of object types that the protocol adapts
> as an adapter, that the domain types fulfill as adapted. Then for NNTP
> that's like groups and messages and summaries and such, the objects. For
> IMAP its mailboxes and messages to read, for SMTP its emails to send,
> with various protocols in SMTP being separate protocols like DKIM or
> what, for all these sorts protocols. For HTTP and HTTP/2 it's usual HTTP
> verbs, usually HTTP 1.1 serial and pipelined requests over a connection,
> in HTTP/2 multiplexed requests over a connection. Then "session" means
> broadly that it may be across connections, what gets into the attachment
> and the establishment and upgrade of protocol, that sessions are
> stateful thusly, yet granularly, as to connections yet as to each request.
>
>
> Then, the same sort of thing is the same sort of thing to back-end,
> whatever makes for adapters, to domain types, that have their protocols,
> and what results the O/I side to the I/O side, that the I/O side is the
> server's client-facing side, while the O/I side is the
> server-as-a-client-to-the-backend's, side.
>
> Then, the O/I side is just the same sort of idea that in the
> flow-machine, the protocols get established in their layers, so that all
> through the routine, then the domain type are to get specialized to when
> byte-sequences and handles are known well-formed in compatible
> protocols, that the domain and protocol come together in their
> definition, basically so it results that from the back-end is retrieved
> for messages by their message-ID that are stored compressed at rest, to
> result passing back handles to those, for example a memory-map range
> offset to an open handle of a zip file that has the concatenable entry
> of the message-Id from the groups' day's messages, or a list of those
> for a range of messages, then the re-routine results passing the handles
> back out to the attachment, which sends them right out.
>
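Passing back a handle rather than a copy can be sketched with a memory-mapped range via FileChannel.map; the temp file stands in for the at-rest message store, and in the real design the mapped range would point into a zip entry rather than a whole file:

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class HandleSketch {
    public static void main(String[] args) throws Exception {
        Path store = Files.createTempFile("messages", ".dat");
        Files.write(store,
            "<1@example> stored message body".getBytes(StandardCharsets.US_ASCII));

        // The re-routine passes back a handle: a mapped range, not a copy,
        // which the attachment can send right out.
        try (FileChannel ch = FileChannel.open(store)) {
            MappedByteBuffer handle = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] out = new byte[handle.remaining()];
            handle.get(out);
            System.out.println(new String(out, StandardCharsets.US_ASCII));
        }
    }
}
```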
> So, this way there's that besides the TQ and its TW's, that those are to
> never block or be long-running, that anything that's long-running is on
> the O/I side, and has its own resources, buffers, and so on, where of
> course all the resources here of this flow-machine are shared by all the
> flow-machines in the flow-machine, in the sense that they are not shared
> yet come from a common resource altogether, and are exclusive. (This
> gets into the definition of "share" as with regards to "free to share,
> or copy" and "exclusive to share, a.k.a. taking turns, not cutting in
> line, and not stealing nor hoarding".)
>
>
> Then on the O/I side or the backend side, it's figured the backend is
> any kind of adapters, like DB adapters or FS adapters or WS adapters,
> database or filesystem or webservice, where object-stores are considered
> filesystem adapters. What that gets into is "pools" like client pools,
> connection pools, resource pools, that a pool is usually enough
> according to a session and the establishment of protocol, then with
> regards to servicing the adapter and according to the protocol and the
> domain objects that thusly implement the protocol, the backend side has
> its own dedicated routines and TW's, or threads of execution, with
> regards to that the backend side basically gets a callback+request and
> the job is to invoke the adapter with the request, and invoke the
> callback with the response, then whether for example the callback is
> actually the original attachment, or it involves "bridging the unbounded
> sub-protocol", what it means for the adapter to service the command.
>
> Then the adapter is usually either provided as with intermediate or
> domain types, or, for example it's just another protocol flow machine
> and according to the connections or messaging or mux/demux or
> establishing and upgrading layers and protocols, it basically works the
> same way as above in reverse.
>
> Here "to service" is the usual infinitive that for the noun means "this
> machine provides a service" yet as a verb that service means to operate
> according to the defined behavior of the machine in the resources of the
> machine to meet the resource needs of the machine's actions in the
> capabilities and limits of the resources of the machine, where this "I/O
> flow-machine: a service" is basically one "node" or "process" in a usual
> process model, allocated its own quota of resources according to the
> process and its environment model in the runtime in the system, and
> that's it. So, there's servicing as the main routine, then also what it
> means the maintenance servicing or service of the extended routine.
> Then, for protocols it's "implement this protocol according to its
> standards according to the resources in routine".
>
>
> You know, I don't know where they have one of these anywhere, ....
>
>

