Dave Keenan wrote: ↑Tue Sep 08, 2020 5:13 pm

I agree with treating the monzo as the standard format.

I've still got a bunch of subtleties to work out somewhere at the intersection of the complexities of TypeScript, Sagittal, and tuning theory, but I'll spare you the grisly details.

I know you already have code to convert an integer to a monzo, but I thought you might like the puzzle of figuring out how the following works. It's a JavaScript translation of what I use in Excel to obtain the exponent (i.e. monzo term) for each prime p, for an integer n. It takes advantage of the fact that the largest consecutive integer in double float format is 2^53, and assumes you won't pass it anything larger.

exponent(n, p) = Math.round(math.log(math.gcd(n, p ** Math.floor(53 / Math.log2(p))), p))
But it's only convenient if you have an efficient (Euclidean algorithm) GCD function (as Excel does), e.g. this:

https://mathjs.org/docs/reference/functions/gcd.html
And since it was in the same library, I assumed this too:

https://mathjs.org/docs/reference/functions/log.html
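Here's a self-contained sketch of the same formula, with a hand-rolled Euclidean gcd standing in for mathjs's `math.gcd`, and `Math.log(x) / Math.log(p)` standing in for the two-argument `math.log(x, p)`. The power of `p` is built by repeated multiplication rather than `**` so every intermediate stays an exact integer below 2^53:

```javascript
// Euclidean gcd; assumes non-negative integers below 2**53.
function gcd(a, b) {
  while (b !== 0) [a, b] = [b, a % b];
  return a;
}

// Exponent (monzo term) of prime p in integer n (n < 2**53).
function exponent(n, p) {
  // largest e such that p**e stays within 2**53
  const maxExp = Math.floor(53 / Math.log2(p));
  // build p**maxExp by repeated multiplication so it stays exact
  let power = 1;
  for (let i = 0; i < maxExp; i++) power *= p;
  // gcd picks out exactly the factors of p in n; log base p reads off the count
  return Math.round(Math.log(gcd(n, power)) / Math.log(p));
}

// e.g. 360 = 2^3 * 3^2 * 5
// exponent(360, 2) → 3; exponent(360, 3) → 2; exponent(360, 5) → 1; exponent(360, 7) → 0
```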

Thanks for the puzzle! The colored parens were quite helpful.

I think I got it. There are two major tricks at play:

- The `floor(\frac{53}{log_2(p)})` gives you the maximum safe exponent on `p`, i.e. the largest exponent `p` can be raised to while keeping the power below `2^53`.
- So, when you take the `gcd` of `n` with this power of `p`, you're guaranteed to be given back *all* the factors of `p` in `n`.

From there it’s a simple matter of taking the log base `p` to get the exponent. I’m not sure why the `\text{round}` is necessary, but that’s not really part of the puzzle.

Cool beans

I could consider using this in the codebase. I think I remember that when we profiled the LFC scripts they were spending a ton of time computing monzos from integers.

Indeed, 5% of the time was spent prime factorizing integers.
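For comparison, here's a hypothetical sketch of the trial-division approach such codebases commonly use for integer-to-monzo conversion (the function name and default prime list are assumptions, not the codebase's actual ones):

```javascript
// Hypothetical sketch: build a monzo from an integer by trial division.
function integerToMonzo(n, primes = [2, 3, 5, 7, 11, 13]) {
  const monzo = primes.map(() => 0);
  primes.forEach((p, i) => {
    // divide out each prime repeatedly, counting how many times it divides n
    while (n % p === 0) {
      n /= p;
      monzo[i] += 1;
    }
  });
  return monzo; // any factor remaining in n is beyond the prime list
}

// e.g. 45 = 3^2 * 5, so integerToMonzo(45) → [0, 2, 1, 0, 0, 0]
```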

Dave Keenan wrote: ↑Tue Sep 08, 2020 5:50 pm

Monzo terms are *only* separated by spaces, no tabs.

Ah crap! Sorry about that.

Yes it's true, I do a mix of spaces and tabs to align things in that format. Somehow, Google Sheets was smart enough to sort it out automatically, and I assumed Excel would be similarly convenient. Check this out:

https://docs.google.com/spreadsheets/d/ ... sp=sharing
So if it's not too much trouble, you could import it to Google Sheets, and then download. It would still be a bit of work to clean up the ['s and ⟩'s, though. So it would be even better if I just added a new table formatting module with a TSV target. That would be pretty easy: just trimming those aligning spaces, plus special handling for monzos.
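A minimal sketch of what that TSV target's row handling might look like — the function name and the exact row format are assumptions, not anything from the actual module:

```javascript
// Hypothetical sketch: convert one aligned table row to a TSV row by
// stripping the monzo brackets and collapsing aligning whitespace runs
// into single tab delimiters.
function rowToTsv(row) {
  return row
    .replace(/[\[⟩]/g, "") // drop the monzo [ and ⟩ brackets
    .trim()
    .split(/\s+/)          // runs of aligning spaces/tabs → cell boundaries
    .join("\t");
}

// e.g. rowToTsv("[ 1  -2   1 ⟩") → "1\t-2\t1"
```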

Thanks for being my first non-me user!

Dave Keenan wrote: ↑Tue Sep 08, 2020 11:52 pm

I think we have to replace G+J with some constant times the log of N2D3P9. Or maybe k*sqrt(N2D3P9) or k*N2D3P9. I'm not sure which.

Sure, or k^N2D3P9. Or N2D3P9^k. Or log_{N2D3P9} k.

(context here and here if you don't know what I'm alluding to)

I think it is something like my munging.

Alright, my first attempts on the problem will proceed along those lines.

I suggest again that we should bring this back to the developing a notational comma popularity metric thread.

I think the 8.5 is just a kind of soft threshold beyond which 3 exponents are strongly penalised.

Okay. But you're not specifically aware, then, of it being tied to anything like the circle of fifths, or to whether the most popular nominal for 1/1 is D vs. G, that kind of stuff. In other words: is it flexible, and/or might there be a psychoacoustically justifiable value for this parameter?

It's messy how the SOPF>3 enters into both G and K.

Agreed. I think that's unnecessary. The goal only seems to be to scale G and K properly in relation to each other.

I wonder how the chips would fall if we just extended N2D3P9 to include the 3's, treating them as if they were in the numerator. If we treated them as if they were in the denominator, they'd have zero effect, since each 3 would be divided by 3; but treating them as if they're in the numerator, each one increases the points by a factor of 3/2.
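The extension described above could be sketched as follows — the name `nate2d3p9` comes from the examples below, but taking the base N2D3P9 value as a precomputed input is my assumption:

```javascript
// Hypothetical sketch of the proposed extension: scale a given N2D3P9
// value by (3/2) per 3 in the comma, as if the 3s were in the numerator.
// The N2D3P9 value itself is assumed to be computed elsewhere.
function nate2d3p9(n2d3p9, threeExponent) {
  return n2d3p9 * (3 / 2) ** Math.abs(threeExponent);
}

// e.g. nate2d3p9(200.818, 3) = 200.818 * 3.375 ≈ 677.761
```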

N2D3P9(65/77n) = 200.818

NATE2D3P9(65/77n) = 200.818 * (3/2)^3 = 200.818 * 3.375 = 677.761

vs.

N2D3P9(125/13n) = 97.801

NATE2D3P9(125/13n) = 97.801 * (3/2)^9 = 97.801 * 38.443 = 3759.764

So yeah... it definitely still prefers the existing comma we have for 6 tinas (i.e. 2 minas).

And the other example:

N2D3P9(1/205n) = 233.472

NATE2D3P9(1/205n) = 233.472 * (3/2)^8 = 233.472 * 25.629 = 5983.654

vs.

N2D3P9(1/5831n) = 688.382

NATE2D3P9(1/5831n) = 688.382 * (3/2)^6 = 688.382 * 11.391 = 7841.359

Oh ho! Interesting. So by this measure, we would prefer the 1/205n.

What do you think? It's certainly simple.

I thought it would be a quick experiment to use the test I have already set up for verifying primary commas, but I'm not sure the approach I am testing is appropriate. I know I brought this up before... where was it... checking the secondary comma zones for each symbol... ah.

Back on page 6 of the Godthread... why am I not surprised. I think maybe a quarter of the content of the entire forum is constituted by that one thread now, heh...

But yeah, in any case, that condition is *not* true for NATE2D3P9 any more than it was for SoPF>3. It only takes a glance at the precision levels diagram to see that [symbol]'s secondary comma zone is an enclave of the secondary comma zone for [symbol], and since the 5s has much lower N2D3P9 than the 19s, by the logic of the test as I've written it now, the 5s should be the comma for [symbol]. Of course, we know there are a lot more considerations involved here, and maybe it comes down to the complex relationship between the High and Ultra levels (or maybe more accurately the complex relationship between the Promethean and Herculean symbol subsets). Anywho...

Sorry I ran out of time here, and didn't get to respond to the rest of your post.

I think you addressed most of it, except for the stuff about badness. But we should probably save that for later and not try to work on it simultaneously with this already-complex-enough problem of usefulness.

What would be relevant to resolve, at least, would be the boundary between error and usefulness. Was I right to say that the point where we start considering EDO-ability is when we move on to badness (and thus that the "developing a notational comma popularity metric" topic, being a subtopic of "Just Intonation notations", is the correct home for it)?