Magrathean diacritics

User avatar
Dave Keenan
Site Admin
Posts: 2180
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

Re: Magrathean diacritics

Post by Dave Keenan »

cmloegcmluin wrote: Tue Sep 08, 2020 12:43 pm
Dave Keenan wrote: Tue Sep 08, 2020 11:58 am Would it be easy for you to add a flag to make it put out tab-separated value (TSV) format, including tabs between the terms in the monzo, and between the terms and their brackets. This would make it much easier for me to get these lists into a spreadsheet for further manipulation.
After running the find-commas script w/o the --for-forum flag, you could import "dist/pitch/all.txt" to Excel or Sheets and that should work. It uses the tab character as a delimiter and should recognize it automatically.

Anything those scripts log to the console also gets saved in a file that sticks around until you run the script again. You can find them in the dist/ folder. It's not the cleanest implementation or anything yet, but can be quite handy.
I had already tried this by redirecting the output to a file of my choice by appending " > fileOfMyChoice.txt" to the command. There are two problems:
  1. Fields in the same column are padded to a common width using spaces, before being separated with tabs. Excel assumes the spaces are significant and keeps them. That's not such a big deal since I could easily strip spaces from the file with Notepad++ before opening it in Excel. However ...
  2. Monzo terms are only separated by spaces, no tabs.
I kind of like the abbreviations ATE and AAS, though, as opposed to abs3exp and absolute apotome slope which we never had a shorthand for.
Those are fine with me.

Re: Magrathean diacritics

Post by Dave Keenan »

cmloegcmluin wrote: Tue Sep 08, 2020 4:29 am
Dave Keenan wrote: Sat Sep 05, 2020 10:50 am Notice how George included a term that attempted to correct for the balance-blindness of SoPF>3. Perhaps we can replace the first two terms with some function of N2D3P9 and simplify the last two terms into one term, given that abs3exp and absApotomeSlope are the same thing here.
I believe the term you are talking about which attempts to correct for the balance-blindness of SoPF>3 is "J". So, I agree that the first two terms, "G" and "J", can be replaced with (a function of) N2D3P9. And then consolidate the "K" and "L" terms too.
Yes. J is what I was referring to. I think we have to replace G+J with some constant times the log of N2D3P9. Or maybe k*sqrt(N2D3P9) or k*N2D3P9. I'm not sure which.
volleo6144 wrote: Sat May 30, 2020 2:43 am The weighted complexity is G+J+K+L, where:
- G (">3") is the SoPF>3
- J ("d+n") is the absolute value of (H-I) times G/5, with H = the number of primes >3 (with multiplicity) in the denominator (smaller value) and I in the numerator (larger value)—7:25k (224:225) has G=17, H=1 (actually -1 in the spreadsheet), I=2, J=(2-1) times 17/5 or 3.4. This is zero for any comma that's one prime against another, so 5:7k, 5:7C, 11:23S, etc. are all zero here.
- K ("3-exp.") is 2^(abs(3exp) - 8.5) times ln(G+2), so 5C (80:81) is 2^(4-8.5)×ln(5+2) = ln(7)/sqrt(512) = 1.95/22.6 = 0.086. This ranks commas with high 3-exponents as really complex (3C is at 7.84 here, and 3s is at 17.2 trillion).
- L ("slope") is like K, but with apotome slope instead of 3-exponent. L = 2^(abs(slope) - 8.5)×ln(G+2). 3C (slope = 10.6) is at 2.88 here, and 3s (slope = 52.8) is at 14.8 trillion.
Okay, so the question becomes: what function of N2D3P9 balances well with some version of the "K" term in George's complexity function. There's a bunch of magic numbers in George's function. Any idea what the 8.5 is (I see that that's the ATE of half a Pythagorean large diesis, which is the size category boundary between C and S)? Or why it's two to this power? Or why we add 2 to the SoPF>3 and then take the natural log? It feels incommensurate to use two different bases between N2D3P9 and the "K" part. I feel like George must have had a target he was aiming for, like he had put things in terms of the relationship between two key ratios he wanted to push to the correct side of each other. Or something along the lines of what Dave was going for when he brought "munging" into the lingo:
Yeah. I think it is something like my munging. I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.

It's messy how the SOPF>3 enters into both G and K.

Sorry I ran out of time here, and didn't get to respond to the rest of your post.
User avatar
cmloegcmluin
Site Admin
Posts: 1700
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)
Contact:

Re: Magrathean diacritics

Post by cmloegcmluin »

Dave Keenan wrote: Tue Sep 08, 2020 5:13 pm I agree with treating the monzo as the standard format.
:thumbsup:

I've still got a bunch of subtleties to work out somewhere at the intersection of the complexities of TypeScript, Sagittal, and tuning theory, but I'll spare you the grisly details.
I know you already have code to convert an integer to a monzo, but I thought you might like the puzzle of figuring out how the following works. It's a javascript translation of what I use in Excel to obtain the exponent (i.e. monzo term) for each prime p, for an integer n. It takes advantage of the fact that the largest consecutive integer in double float format is 2^53 and assumes you won't pass it anything larger.

exponent(n, p) = Math.round(math.log(math.gcd(n, p**(Math.floor(53/Math.log2(p))) ), p) )

But it's only convenient if you have an efficient (Euclidean algorithm) GCD function (as Excel does), e.g. this:
https://mathjs.org/docs/reference/functions/gcd.html
And since it was in the same library, I assumed this too:
https://mathjs.org/docs/reference/functions/log.html
Thanks for the puzzle! The colored parens were quite helpful.

I think I got it. There are two major tricks at work:
  • The `floor(\frac{53}{log_2(p)})` gives you the maximum safe exponent on `p`, i.e. the largest exponent which `p` can be raised to while keeping the power less than `2^53`.
  • So, when you take the `gcd` of `n` with this number, you’re guaranteed to be given back all the factors of `p` in `n`.
From there it’s a simple matter of taking the log base `p` to get the exponent. I’m not sure why the `\text{round}` is necessary, but that’s not really part of the puzzle.
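To convince myself, here's a self-contained sketch of those two tricks, with a plain Euclidean gcd standing in for mathjs's math.gcd and the maximum prime power built by an exact loop (both substitutions of mine, not from Dave's formula):

```javascript
// A plain Euclidean gcd, standing in for mathjs's math.gcd.
function gcd(a, b) {
  while (b !== 0) [a, b] = [b, a % b];
  return a;
}

// Largest power of p that stays below 2^53; every multiplication here is
// exact, so no rounding can creep into the power itself.
function maxPower(p) {
  let x = p;
  while (x * p < 2 ** 53) x *= p;
  return x;
}

// Exponent of prime p in integer n: gcd(n, p^max) keeps exactly the factors
// of p present in n, and the base-p log reads the exponent off. round()
// absorbs the last-bit floating-point error in the logs.
function exponent(n, p) {
  return Math.round(Math.log(gcd(n, maxPower(p))) / Math.log(p));
}

console.log(exponent(360, 2), exponent(360, 3), exponent(360, 5)); // 3 2 1
```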

Cool beans :) I could consider using this in the codebase. I think I remember that when we profiled the LFC scripts they were spending a ton of time computing monzos from integers. Indeed, 5% of the time was spent prime factorizing integers.
Dave Keenan wrote: Tue Sep 08, 2020 5:50 pm Monzo terms are only separated by spaces, no tabs.
Ah crap! Sorry about that.

Yes it's true, I do a mix of spaces and tabs to align things in that format. Somehow, Google Sheets was smart enough to sort it out automatically, and I assumed Excel would be similarly convenient. Check this out: https://docs.google.com/spreadsheets/d/ ... sp=sharing

So if it's not too much trouble, you could import to Google Sheets, and then download. Still a bit of work to clean up the ['s and ⟩'s, though. So it would be even better if I just added a new table formatting module with a TSV target. Would be pretty easy. Just trimming those aligning spaces, and special handling for monzos.

Thanks for being my first non-me user! :)
Dave Keenan wrote: Tue Sep 08, 2020 11:52 pm I think we have to replace G+J with some constant times the log of N2D3P9. Or maybe k*sqrt(N2D3P9) or k*N2D3P9. I'm not sure which.
Sure, or k^N2D3P9. Or N2D3P9^k. Or logN2D3P9k. :man_facepalming: (context here and here if you don't know what I'm alluding to)
I think it is something like my munging.
Alright, my first attempts on the problem will proceed along those lines.

I suggest again that we should bring this back to the developing a notational comma popularity metric thread.
I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.
Okay. But you're not specifically aware, then, of it being tied to something like the circle of fifths, or the most popular nominal for 1/1 being D vs. G, that kind of stuff? In other words: is it flexible, and/or might there be a psychoacoustically justifiable value for this parameter?
It's messy how the SOPF>3 enters into both G and K.
Agreed. I think that's unnecessary. The goal only seems to be to scale G and K properly in relation to each other.

I wonder how the chips would fall if we just extended N2D3P9 to include the 3's, and treated them like they were in the numerator. If we treated them like they were in the denominator they'd have zero effect since each 3 would be divided by 3, but if we treat them like they're in the numerator, then each one increases the points by a factor of 3/2.

N2D3P9(65/77n) = 200.818
NATE2D3P9(65/77n) = 200.818 * (3/2)^3 = 200.818 * 3.375 = 677.761
vs.
N2D3P9(125/13n) = 97.801
NATE2D3P9(125/13n) = 97.801 * (3/2)^9 = 97.801 * 38.443 = 3759.764

So yeah... it definitely still prefers the existing comma we have for 6 tinas (i.e. 2 minas).

And the other example:

N2D3P9(1/205n) = 233.472
NATE2D3P9(1/205n) = 233.472 * (3/2)^8 = 233.472 * 25.629 = 5983.654
vs.
N2D3P9(1/5831n) = 688.382
NATE2D3P9(1/5831n) = 688.382 * (3/2)^6 = 688.382 * 11.391 = 7841.359

Oh ho! Interesting. So by this measure, we would prefer the 1/205n.

What do you think? It's certainly simple.
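For reference, a minimal sketch of the extension (the helper name nate2d3p9 is made up here, and the base N2D3P9 inputs are copied from the numbers above rather than recomputed):

```javascript
// Hypothetical helper: extend N2D3P9 by treating each 3 as a numerator
// factor, i.e. multiply by (3/2) per step of absolute 3-exponent (ATE).
const nate2d3p9 = (n2d3p9, ate) => n2d3p9 * (3 / 2) ** ate;

// The 6-tina candidates:
console.log(nate2d3p9(200.818, 3)); // 65/77n  → ≈ 677.76
console.log(nate2d3p9(97.801, 9));  // 125/13n → ≈ 3759.80
```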

I thought it would be a quick experiment to use the test I have already set up for verifying primary commas, but I'm not sure the approach I am testing is appropriate. I know I brought this up before... where was it... checking the secondary comma zones for each symbol... ah. Back on page 6 of the Godthread... why am I not surprised. I think maybe a quarter of the content of the entire forum is constituted by that one thread now, heh...

But yeah, in any case, that condition is *not* true for NATE2D3P9 any more than it was for SoPF>3. It only takes a glance at the precision levels diagram to see that :'::|:'s secondary comma zone is an enclave of the secondary comma zone for :)|:, and since the 5s has much lower N2D3P9 than the 19s, by the logic of the test as I've written it now, the 5s should be the comma for :)|: . Which we know there's a lot more considerations involved here, and maybe it comes down to the complex relationship between the High and Ultra levels (or maybe more accurately the complex relationship between the Promethean and Herculean symbol subsets). Anywho...
Sorry I ran out of time here, and didn't get to respond to the rest of your post.
I think you addressed most of it, except for the stuff about badness. But we probably should save that stuff to later and not try to work on it simultaneously with this already-complex-enough problem of the usefulness.

What would be relevant to resolve, at least, would be the boundary between error and usefulness. Was I right to say that the point where we start considering EDO-ability is when we move on to badness (and thus that the "developing a notational comma popularity metric" topic, being a subtopic of "Just Intonation notations", is the correct home for it)?

Re: Magrathean diacritics

Post by Dave Keenan »

cmloegcmluin wrote: Wed Sep 09, 2020 5:32 am I think I got it. There are two major tricks at work:
...
From there it’s a simple matter of taking the log base `p` to get the exponent. I’m not sure why the `\text{round}` is necessary, but that’s not really part of the puzzle.
You got it. The round() is necessary because of the nature of floating point math. The result of the log may sometimes end in something like .000...1 or .999... (really an error in the least significant bit or two in the binary representation).
Cool beans :) I could consider using this in the codebase. I think I remember that when we profiled the LFC scripts they were spending a ton of time computing monzos from integers. Indeed, 5% of the time was spent prime factorizing integers.
I can't guarantee it will be faster, but to have a chance of that you'd need to precompute:

maxPrimePower[i] = prime[i]**(Math.floor(53/Math.log2(prime[i])))

so in the repeated calcs you're only doing:

exponent(n, i) = Math.round(math.log(math.gcd(n, maxPrimePower[i]), prime[i]) )

That's more like what I really do in Excel.
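Dave's precomputed form could be sketched like this in plain JavaScript (again substituting a hand-rolled gcd for mathjs, and an exact loop for the power), so each repeated lookup is just a gcd, a log, and a round:

```javascript
// Euclidean gcd, standing in for mathjs's math.gcd.
function gcd(a, b) {
  while (b !== 0) [a, b] = [b, a % b];
  return a;
}

const primes = [2, 3, 5, 7];

// Precomputed once: the largest power of each prime below 2^53.
const maxPrimePower = primes.map(p => {
  let x = p;
  while (x * p < 2 ** 53) x *= p; // exact at every step
  return x;
});

// The repeated calculation per (n, prime index) is now cheap.
const exponent = (n, i) =>
  Math.round(Math.log(gcd(n, maxPrimePower[i])) / Math.log(primes[i]));

// Monzo of 360 = 2^3 · 3^2 · 5:
const monzo = primes.map((_, i) => exponent(360, i));
console.log(monzo); // [ 3, 2, 1, 0 ]
```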
Somehow, Google Sheets was smart enough to sort it out automatically, and I assumed Excel would be similarly convenient. Check this out: https://docs.google.com/spreadsheets/d/ ... sp=sharing
Amazing.
So if it's not too much trouble, you could import to Google Sheets, and then download. Still a bit of work to clean up the ['s and ⟩'s, though. So it would be even better if I just added a new table formatting module with a TSV target. Would be pretty easy. Just trimming those aligning spaces, and special handling for monzos.
That would be great. Thanks. With the monzos, in TSV it would be best if all the closing angle-brackets ended up in the same rightward column, so we don't have columns with some numbers and some angle-brackets. And can you please put a BOM at the start of the file? Excel currently shows each angle bracket as a series of three characters (âŸ©) because it can't tell that the file is in UTF-8.

Something unrelated that I just remembered to mention: Can you please ensure your software names the Pythagorean comma as "3C", not "1C", and any other 3-limit commas similarly. I'm pretty sure we agreed that the only "comma" name that would use the lone number "1" would be the unison "1u".
Dave Keenan wrote: Tue Sep 08, 2020 11:52 pm I think we have to replace G+J with some constant times the log of N2D3P9. Or maybe k*sqrt(N2D3P9) or k*N2D3P9. I'm not sure which.
Sure, or k^N2D3P9. Or N2D3P9^k. Or logN2D3P9k. :man_facepalming: (context here and here if you don't know what I'm alluding to)
No. F those. :) I'm pretty sure that to behave anything like sopfr, the function of N2D3P9 has to be linear or compressive, not expansive.


I want to see N2D3P9 plotted against sopfr for the first hundred or so ratios in N2D3P9 order, to get a feel for the approximate relationship.
Alright, my first attempts on the problem will proceed along those lines.

I suggest again that we should bring this back to the developing a notational comma popularity metric thread.
If we took it there, I'd feel obliged to make it general for all notational commas, whereas here we can take shortcuts that rely on their small size and close spacing as tinas.
I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.
Okay. But you're not specifically aware, then, of it being tied to something like the circle of fifths, or the most popular nominal for 1/1 being D vs. G, that kind of stuff? In other words: is it flexible, and/or might there be a psychoacoustically justifiable value for this parameter?
I think it's slightly flexible. Psychoacoustics are not relevant. Yes, it's to do with the chain (not circle) of fifths and the fact that we can only go to 2 sharps or 2 flats max, and we'd really like to avoid having more than one. And since George takes the absolute value of the 3-exponent, he's assuming 1/1 is D (the point of symmetry) or not far from it, on the chain of fifths.
I wonder how the chips would fall if we just extended N2D3P9 to include the 3's, and treated them like they were in the numerator. If we treated them like they were in the denominator they'd have zero effect since each 3 would be divided by 3, but if we treat them like they're in the numerator, then each one increases the points by a factor of 3/2.
...
Oh ho! Interesting. So by this measure, we would prefer the 1/205n.

What do you think? It's certainly simple.
I definitely like its simplicity. Good thinking. Unfortunately, I don't think it penalises large 3-exponents enough. An ATE greater than 14 should be pretty much OOTQ no matter what other properties the comma has. That is the case with George's metric, which involves adding a munged 3-exponent rather than multiplying. But maybe we need to take the log of N2D3P9 before it makes sense to add a munged 3-exponent to it.
It only takes a glance at the precision levels diagram to see that :'::|:'s secondary comma zone is an enclave of the secondary comma zone for :)|:, and since the 5s has much lower N2D3P9 than the 19s, by the logic of the test as I've written it now, the 5s should be the comma for :)|: . Which we know there's a lot more considerations involved here, and maybe it comes down to the complex relationship between the High and Ultra levels (or maybe more accurately the complex relationship between the Promethean and Herculean symbol subsets). Anywho...
I don't think we have to get into stuff like that. I think it can be done with only N2D3P9 and ATE if they are combined in the right way.
I think you addressed most of it, except for the stuff about badness. But we probably should save that stuff to later and not try to work on it simultaneously with this already-complex-enough problem of the usefulness.
Agreed.
What would be relevant to resolve, at least, would be the boundary between error and usefulness. Was I right to say that the point where we start considering EDO-ability is when we move on to badness (and thus that the "developing a notational comma popularity metric" topic, being a subtopic of "Just Intonation notations", is the correct home for it)?
I think we only need to go back to DANCPM if we are developing a comma usefulness metric with general application, and yes, that would require that we consider EDO-ability. But that wouldn't need to involve error or badness. Here's how I see our current terminology: "Usefulness" would be a combination of (popularity or N2D3P9) and (3-exponent and/or slope). "Badness" would be a combination of "usefulness" and error.

Re: Magrathean diacritics

Post by Dave Keenan »

Here's N2D3P9 versus sopfr for N2D3P9 < 903.

Image

And here it is again with N2D3P9 on a log axis.

Image

Clearly we need to take the log of N2D3P9 to make it work in a modified "Secor complexity" metric.

Just reading off the graph, 4.5 × lb(N2D3P9) ≈ 15 × log10(N2D3P9) ought to just plug straight in.
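As a quick sanity check on that reading: 4.5/ln 2 ≈ 6.49 and 15/ln 10 ≈ 6.51, so the two scalings agree to well within a percent:

```javascript
// 4.5·lb(x) and 15·log10(x) are nearly the same function, since
// 4.5/ln 2 ≈ 6.49 and 15/ln 10 ≈ 6.51.
const scaledLb = x => 4.5 * Math.log2(x);
const scaledLog10 = x => 15 * Math.log10(x);

console.log(scaledLb(100));    // ≈ 29.90
console.log(scaledLog10(100)); // 30
```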
Attachments
LogN2d3p9VsSopfr.png
N2d3p9VsSopfr.png

Re: Magrathean diacritics

Post by Dave Keenan »

So I'm getting that something like

4.5×lb(N2D3P9) + 8×2^(ATE-8.5)

should be similar to Secor complexity.

Call it 4.5×lb(N2D3P9) + 9×2^(ATE-8.5), then eliminate the common factor of 4.5, since it doesn't affect ranking:

lb(N2D3P9) + 2×2^(ATE-8.5)

= lb(N2D3P9) + 2^(ATE-7.5)

Exponentiating that whole thing also doesn't affect ranking. That would be

N2D3P9 × 2^(2^(ATE-7.5))

ATE	N2D3P9 multiplier = 2^(2^(ATE-7.5))
0	1.003836474
1	1.007687666
2	1.015434433
3	1.031107087
4	1.063181825
5	1.130355594
6	1.277703768
7	1.632526919
8	2.665144143
9	7.102993301
10	50.45251384
11	2545.456153
12	6479347.025
13	4.19819E+13
14	1.76248E+27

That looks like it's doing the right kind of thing. We only have to adjust that 7.5 value, which is the ATE at which N2D3P9 gets doubled. Call it the DATE. My mapping from George's formula was all very rough and I ignored the J (balance) term. Maybe bump DATE up to 9.

ATE	N2D3P9 multiplier = 2^(2^(ATE-DATE)) where DATE = 9
0	1.00135472
1	1.002711275
2	1.005429901
3	1.010889286
4	1.021897149
5	1.044273782
6	1.090507733
7	1.189207115
8	1.414213562
9	2
10	4
11	16
12	256
13	65536
14	4294967296
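A one-liner reproduces the multiplier column in both tables (the parameter names are mine):

```javascript
// The factor an absolute 3-exponent (ATE) applies to N2D3P9, with the
// "doubling ATE" (DATE) defaulting to 9 as suggested above.
const multiplier = (ate, date = 9) => 2 ** (2 ** (ate - date));

console.log(multiplier(9));       // 2
console.log(multiplier(12));      // 256
console.log(multiplier(8, 7.5));  // ≈ 2.665144 (the DATE = 7.5 table)
```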

Re: Magrathean diacritics

Post by cmloegcmluin »

Dave Keenan wrote: Wed Sep 09, 2020 10:45 am I can't guarantee it will be faster, but to have a chance of that you'd need to precompute:

maxPrimePower[i] = prime[i]**(Math.floor(53/Math.log2(prime[i])))
That makes sense. Thanks.
With the monzos, in TSV it would be best if all the closing angle-brackets ended up in the same rightward column, so we don't have columns with some numbers and some angle-brackets.
Yup, that's what I would want too.
And can you please put a BOM at the start of the file? Excel currently shows each angle bracket as a series of three characters (âŸ©) because it can't tell that the file is in UTF-8.
Whoa. Can't say I ever saw the word "endianness" before today... Okay, can do.
Something unrelated that I just remembered to mention: Can you please ensure your software names the Pythagorean comma as "3C", not "1C", and any other 3-limit commas similarly. I'm pretty sure we agreed that the only "comma" name that would use the lone number "1" would be the unison "1u".
Yes I think you're right. Okay, yes, I'll correct that soon.
I want to see N2D3P9 plotted against sopfr for the first hundred or so ratios in N2D3P9 order, to get a feel for the approximate relationship.
Yeah, that'd be super cool to see ;)
I suggest again that we should bring this back to the developing a notational comma popularity metric thread.
If we took it there, I'd feel obliged to make it general for all notational commas, whereas here we can take shortcuts that rely on their small size and close spacing as tinas.
Fine by me. I hope we don't regret if we do want to find a general comma no pop rank and wish we'd used it on the tinas. But honestly at this point I think my hypothetical future regretful self would accept the fact that my present impatient self felt he'd spent enough time on this problem already. :)
Yes, it's to do with the chain (not circle) of fifths and the fact that we can only go to 2 sharps or 2 flats max, and we'd really like to avoid having more than one. And since George takes the absolute value of the 3-exponent, he's assuming 1/1 is D (the point of symmetry) or not far from it, on the chain of fifths.
Alright. I think that checks out with the DATE of 9 you settled on; with ATE of 10 causing a 4x factor on N2D3P9 and 11 causing a 16x factor, it seems fair to say that ATE of 10 would be the last reasonable ATE that would ever win out as a comma (the last "ITQ" comma? Or am I hyperacronymizing?), and I think 10 would be the maximum count of fifths you could go from D before requiring a double sharp/flat.
Dave Keenan wrote: Wed Sep 09, 2020 12:09 pm Here's N2D3P9 versus sopfr for N2D3P9 < 903.
That's so cool looking! The clearly visible threads running through it remind me of the patterns in the Sacks spiral: https://upload.wikimedia.org/wikipedia/ ... 100000.png
Dave Keenan wrote: Wed Sep 09, 2020 12:46 pm
ATE	N2D3P9 multiplier = 2^(2^(ATE-DATE)) where DATE = 9
Great work. Interesting that it works out that the 3 term of the monzo causes exponential growth while the 5+ terms cause power growth. Actually the 3-term causes double exponential growth. This appears to be an established concept: https://en.wikipedia.org/wiki/Double_ex ... l_function

There's something cute about 9 cropping up in both N2D3P9 and this. But I wouldn't fixate on it at the expense of utility.

If we settle on this as our usefulness metric, I can build it into the code and/or get some updated results per half-tina.

Re: Magrathean diacritics

Post by Dave Keenan »

cmloegcmluin wrote: Wed Sep 09, 2020 1:51 pm I hope we don't regret if we do want to find a general comma no pop rank and wish we'd used it on the tinas. But honestly at this point I think my hypothetical future regretful self would accept the fact that my present impatient self felt he'd spent enough time on this problem already. :)
I hate to say it, but now I'm thinking the only way to validate this comma usefulness metric, and to tune its parameters (like DATE) is to apply it to all the existing extreme-precision JI notation commas under the half-apotome, and minimise the sum-of-squares or something. That means including another multiplier: 2^(2^(AAS-DAAS)).

A much harder problem would be to optimise it to maximally-justify our existing choices of one comma over another in each extreme bucket.

An interesting case (in addition to the 5s vs 19s one you mentioned) is 3C. It has N2D3P9 of 1 and ATE of 12 that results in a multiplier of 256. But I don't know what else it's competing with for that slot. No extreme comma under the half apotome has ATE > 12.
If we settle on this as our usefulness metric, I can build it into the code and/or get some updated results per half-tina.
Actually, I can see some serious floating-point overflow and underflow issues with the form:
N2D3P9 × 2^(2^(ATE-DATE)) × 2^(2^(AAS-DAAS))

Can you build it into the code as lb(N2D3P9) + 2^(ATE-DATE) + 2^(AAS-DAAS) with switches to change DATE and DAAS (separately) from their default values of 9 and 9.
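The additive form requested here might be sketched as follows (names and defaults as suggested above; the 3C inputs come from earlier in the thread):

```javascript
// Additive usefulness metric: log-compressed N2D3P9 plus munged ATE and AAS
// terms, avoiding the overflow risk of the multiplicative form.
const usefulness = (n2d3p9, ate, aas, date = 9, daas = 9) =>
  Math.log2(n2d3p9) + 2 ** (ate - date) + 2 ** (aas - daas);

// e.g. 3C: N2D3P9 = 1, ATE = 12, AAS ≈ 10.6 — stays comfortably finite
// where the multiplicative form's 3-exponent factor alone is 256.
console.log(usefulness(1, 12, 10.6)); // ≈ 11.03
```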

OK. You were right. :) Time to go back to the developing a notational comma popularity metric thread.

Re: Magrathean diacritics

Post by cmloegcmluin »

Dave Keenan wrote: Wed Sep 09, 2020 3:09 pm OK. You were right. :) Time to go back to the developing a notational comma popularity metric thread.
We'll be right back...!
User avatar
volleo6144
Posts: 81
Joined: Mon May 18, 2020 7:03 am
Location: Earth
Contact:

Re: Magrathean diacritics

Post by volleo6144 »

Dave Keenan wrote: Wed Sep 09, 2020 3:09 pm Actually, I can see some serious floating-point overflow and underflow issues with the form:
N2D3P9 × 2^(2^(ATE-DATE)) × 2^(2^(AAS-DAAS))
float overflows at 2^128 = 2^2^7 or ATE=16*, and double overflows at 2^1024 = 2^2^10 or ATE=19*. You'll be fine—I'm sure no comma that has 3^11 as a factor on either side (177147:183920, 64.96¢, has ATE=11, AAS=14.9997, and 2^(2^(AAS-9)) × 2^(2^(ATE-9)) = 2.922e+20, which fits in a float) will be useful anyway.

Apotome slope equals 3-exponent minus the comma's size in units of 3A / 7 ≈ 16.24¢ (very, very close to 9°665), so it's essentially the same as 3-exponent for schisminas (455n has ATE = 2 and AAS = 2.026).

* or AAS, and assuming DATE = DAAS = 9
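The double-precision claim is easy to check in JavaScript, where numbers are IEEE doubles (the 32-bit float case at ATE=16 can't be tested this way):

```javascript
// With DATE = 9, the ATE multiplier 2^(2^(ATE-9)) first overflows a
// double (at 2^1024) when ATE = 19.
const multiplier = ate => 2 ** (2 ** (ate - 9));

console.log(Number.isFinite(multiplier(18))); // true  — 2^512
console.log(Number.isFinite(multiplier(19))); // false — 2^1024 → Infinity
```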
I'm in college (a CS major), but apparently there's still a decent amount of time to check this out. I wonder if the main page will ever have 59edo changed to green...