developing a notational comma popularity metric

Dave Keenan
Site Admin
Posts: 2180
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia

Re: developing a notational comma popularity metric

Post by Dave Keenan »

cmloegcmluin wrote: Fri Sep 11, 2020 1:40 am Is that another puzzle? ;) I don't quite see the relationship between sqrt and the log-log plot.
Old modeller's trick. A straight line on a log-log plot means a power function, with the slope being the power, in this case 2. So sqrt is its inverse, to convert N2D3P9 back to sopfr.
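For what it's worth, here's a minimal sketch of that trick in Python, with made-up numbers standing in for the real sopfr and N2D3P9 values: fit a straight line to the data in log-log space and read the power off the slope.

import numpy as np

# Illustrative placeholder data only, roughly following y ~ x^2;
# not the actual sopfr / N2D3P9 figures.
x = np.array([5.0, 7.0, 11.0, 13.0, 35.0])        # stand-in for sopfr
y = np.array([25.0, 49.0, 122.0, 170.0, 1220.0])  # stand-in for N2D3P9

slope, intercept = np.polyfit(np.log2(x), np.log2(y), 1)
print(f"power ~= {slope:.2f}")  # a slope near 2 means y grows like x^2 ...
print(np.sqrt(y))               # ... so sqrt(y) converts back to the x scale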
cmloegcmluin
Site Admin
Posts: 1700
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Oh okay, yeah, that makes sense. Cool trick!

But would you prefer sqrt to lb for any reason? Otherwise, I don't see that we require a substitute.
Dave Keenan
Site Admin
Posts: 2180
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia

Re: developing a notational comma popularity metric

Post by Dave Keenan »

cmloegcmluin wrote: Fri Sep 11, 2020 6:31 am But would you prefer sqrt to lb for any reason? Otherwise, I don't see that we require a substitute.
I think we need to reconfigure the LHC as an electron-positron collider. :)

By which I mean you could use similar methods to those you employed in finding N2D3P9, but the set of functions to be tried would be much smaller, and we would not be minimising a sum of squared errors but maximising the count of "correct" comma assignments below the half-apotome in the Extreme JI notation.

The set of functions can be summarised as:
compress(N2D3P9) + t × expand1(ATE) + s × expand2(AAS)

We would try various functions for "compress" such as lb(N2D3P9) and sqrt(N2D3P9) and perhaps a parameterised N2D3P9^a where 0<a≤1/2. And the inverse of those functions would be candidates for "expand1" and "expand2".
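To make that search space concrete, here's a rough Python sketch of how the candidate families could be enumerated and scored. Everything in it is a placeholder: the parameter grids are arbitrary, and count_correct_assignments is a hypothetical callback standing in for whatever comparison against the Extreme JI notation's actual comma assignments we end up doing.

import math
from itertools import product

# Candidate "compress" functions for N2D3P9, as described above.
COMPRESS = {
    "lb":   math.log2,
    "sqrt": math.sqrt,
    # parameterised power N2D3P9^a with 0 < a <= 1/2, sampled coarsely
    **{f"pow_{a}": (lambda x, a=a: x ** a) for a in (0.25, 0.375, 0.5)},
}

# Candidate "expand" functions for ATE and AAS (inverses of the compressors).
EXPAND = {
    "exp2":   lambda x: 2.0 ** x,
    "square": lambda x: x * x,
    **{f"root_{a}": (lambda x, a=a: x ** (1 / a)) for a in (0.25, 0.375, 0.5)},
}

def usefulness(n2d3p9, ate, aas, compress, expand1, expand2, t, s):
    """compress(N2D3P9) + t*expand1(ATE) + s*expand2(AAS); lower = more useful."""
    return compress(n2d3p9) + t * expand1(ate) + s * expand2(aas)

def search(count_correct_assignments):
    """Brute-force the (small) space, keeping whichever combination gets the
    most comma assignments right below the half-apotome."""
    best = None
    for (cn, c), (e1n, e1), (e2n, e2) in product(COMPRESS.items(), EXPAND.items(), EXPAND.items()):
        for t, s in product((0.5, 1.0, 2.0), repeat=2):
            metric = lambda n, a3, sl: usefulness(n, a3, sl, c, e1, e2, t, s)
            score = count_correct_assignments(metric)
            if best is None or score > best[0]:
                best = (score, cn, e1n, e2n, t, s)
    return best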
cmloegcmluin
Site Admin
Posts: 1700
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Dave Keenan wrote: Fri Sep 11, 2020 5:43 pm
cmloegcmluin wrote: Fri Sep 11, 2020 6:31 am But would you prefer sqrt to lb for any reason? Otherwise, I don't see that we require a substitute.
I think we need to reconfigure the LHC as an electron-positron collider. :)
Aw man, of course that's what you would've meant. I knew this was the goal, but something about the word "substitute" threw me off. Maybe if you'd said "alternative" it would've felt more like we were still keeping the lb around too. Anyway, not your fault; if I'd thought about it just a bit more it would've been obvious.

And no big deal. Don't worry; even when I create a bit of unnecessary back-and-forth on this, I've still got plenty of work to do in the code base in the interim.
By which I mean you could use similar methods to those you employed in finding N2D3P9, but the set of functions to be tried would be much smaller, and we would not be minimising a sum of squared errors but maximising the count of "correct" comma assignments below the half-apotome in the Extreme JI notation.
Sure. I could take the straight count of corrects, or I could take some weighted correctness. Like, if you're not correct, the square of the difference between your usefulness score and the correct comma's usefulness? The latter has a bit more of the SED feel from the approach we took with the popularity metric; I think it's just going to have a more aggressively stepped shape.

Another thing to consider is whether we care more about getting the lower precision symbols right. I might suggest we proportion the penalty to the size of the symbol's zone.
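Concretely, the two scoring ideas might look something like this rough Python sketch (the Zone fields and the usefulness lookup keyed by comma are illustrative placeholders, not the real data model):

from dataclasses import dataclass

@dataclass
class Zone:
    width_cents: float  # size of the symbol's capture zone below the half-apotome
    actual: str         # comma the Extreme JI notation actually assigns
    picked: str         # comma the candidate metric would assign

def count_score(zones):
    """Straight count of zones where the candidate metric agrees with the notation."""
    return sum(1 for z in zones if z.picked == z.actual)

def weighted_penalty(zones, usefulness):
    """SED-flavoured alternative: penalise each wrong pick by the squared gap in
    usefulness, scaled by the zone's width so the big, low-precision zones count
    for more. Lower is better; 0 means every assignment agrees."""
    total = 0.0
    for z in zones:
        if z.picked != z.actual:
            gap = usefulness[z.picked] - usefulness[z.actual]
            total += z.width_cents * gap * gap
    return total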
The set of functions can be summarised as:
compress(N2D3P9) + t × expand1(ATE) + s × expand2(AAS)

We would try various functions for "compress" such as lb(N2D3P9) and sqrt(N2D3P9) and perhaps a parameterised N2D3P9^a where 0<a≤1/2. And the inverse of those functions would be candidates for "expand1" and "expand2".
Yes, that all makes sense.
Dave Keenan
Site Admin
Posts: 2180
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia

Re: developing a notational comma popularity metric

Post by Dave Keenan »

cmloegcmluin wrote: Sat Sep 12, 2020 1:09 am Sure. I could take the straight count of corrects, or I could take some weighted correctness. Like, if you're not correct, the square of the difference between your usefulness score and the correct comma's usefulness? The latter has a bit more of the SED feel from the approach we took with the popularity metric; I think it's just going to have a more aggressively stepped shape.

Another thing to consider is whether we care more about getting the lower precision symbols right. I might suggest we proportion the penalty to the size of the symbol's zone.
Those are both excellent ideas.
cmloegcmluin
Site Admin
Posts: 1700
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

No update on this front yet. I'm still buried in chores, cleaning up the code base. I could always plunge ahead and add more stuff, but sometimes I reach a tipping point and just feel like if I don't pay down some of the tech debt I'll go nuts.

I did come across this interesting number yesterday: Mills' constant. It's related to prime numbers, and similar to the actual `r` we were finding for the votes on comma popularities (we were finding something close to -1.3, but in the end went with -1). I'm not suggesting we need to reconsider any of the decisions made in developing N2D3P9, or even reference it on the wiki page, but I did think it was a curiosity worth sharing here.
Dave Keenan
Site Admin
Posts: 2180
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia

Re: developing a notational comma popularity metric

Post by Dave Keenan »

cmloegcmluin wrote: Mon Sep 14, 2020 5:23 am No update on this front yet. I'm still buried in chores, cleaning up the code base. I could always plunge ahead and add more stuff, but sometimes I reach a tipping point and just feel like if I don't pay down some of the tech debt I'll go nuts.
Fair enough.
I did come across this interesting number yesterday: Mills' constant. It's related to prime numbers, and similar to the actual `r` we were finding for the votes on comma popularities (we were finding something close to -1.3, but in the end went with -1). I'm not suggesting we need to reconsider any of the decisions made in developing N2D3P9, or even reference it on the wiki page, but I did think it was a curiosity worth sharing here.
I assume you're referring to the Zipf's law exponent, which we called "z", that you found to be approximately -1.37. While Mills' constant is fascinating, I don't see how it could relate to our "z" in any way: z is an exponent while Mills' constant is a base; Mills' constant only generates a single prime greater than 2 of any musical relevance, namely 11; and there's nothing special about the middle exponent of 3 that generates Mills' constant, so there are many similar constants.

At first Mills' and similar constants seem almost magical, but in fact they are merely a way of encoding a (very sparse) list of primes into the continuing digits of their decimal fraction, since there is no way to obtain their value without first finding the primes.

This can be made more explicit, and used to generate all primes, with the unnamed constant and recursive function described here: https://en.wikipedia.org/wiki/Formula_f ... all_primes

You might as well just list the primes. The constant does provide some data-compression, if you don't mind doing the work to unpack it. But then we already have efficient algorithms to "unpack" the primes from nothing.
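To make the "packed list of primes" point concrete, here's a small Python sketch of both constants at work. The decimal values are truncated (the second one to just the digits quoted in this thread), which is exactly why each sequence peters out after a handful of terms:

import math

# Mills' constant (truncated): floor(A ** 3**n) is prime for n = 1, 2, 3, ...
# Double precision only carries the first three such primes.
A = 1.3063778838630806904686144926
print([math.floor(A ** (3 ** n)) for n in (1, 2, 3)])   # [2, 11, 1361]

# The 2.920050977... constant mentioned above: each prime is peeled off the
# integer part, and the recurrence exposes the next one. With only the digits
# quoted here, truncation error takes over after the first several primes.
f = 2.920050977
primes = []
for _ in range(8):
    p = math.floor(f)
    primes.append(p)
    f = p * (f - p + 1)
print(primes)   # [2, 3, 5, 7, 11, 13, 17, 19]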

I wasn't aware of any of this until this morning. Thanks for an interesting excursion.
volleo6144
Posts: 81
Joined: Mon May 18, 2020 7:03 am
Location: Earth

Re: developing a notational comma popularity metric

Post by volleo6144 »

Dave Keenan wrote: Mon Sep 14, 2020 9:10 am and that there's nothing special about the middle exponent of 3 that generates Mills' constant, so there are many similar constants.
Yeah. The reason the exponent-3 version is the best known is that the exponent-2 version isn't actually proven to work. It would generate the sequence starting 2, 5, 29, 853, 727613, with each entry being the smallest prime after the square of the previous one, but only if each of those primes falls before the next square.

Perhaps there's just a ridiculously large prime gap very far out that spans two consecutive squares (which may or may not swallow one of the primes in the aforementioned sequence); the conjecture that no such gap exists, i.e. that there is always a prime between consecutive squares, is Legendre's conjecture.
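Just to spell out the sequence, here's a quick Python sketch that reproduces those five terms by brute force (trial division is plenty for numbers this small):

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime_after(n: int) -> int:
    # smallest prime strictly greater than n
    n += 1
    while not is_prime(n):
        n += 1
    return n

seq = [2]
for _ in range(4):
    seq.append(next_prime_after(seq[-1] ** 2))
print(seq)   # [2, 5, 29, 853, 727613]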
I'm in college (a CS major), but apparently there's still a decent amount of time to check this out. I wonder if the main page will ever have 59edo changed to green...
cmloegcmluin
Site Admin
Posts: 1700
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer (he/him/his)

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Thanks to you both for some interesting material re: primes to look into! :) That 2.920050977... number in particular seems exciting, but I think your description of it and Mills' constant as mere "compressions" of the primes is on point.

Well, I took the thread off-topic and I'll bring it back...

I don't suppose anyone might have a clever or subtle mark we might add to distinguish 2,3-free classes visually from run-of-the-mill ratios? Something to indicate their 5-roughness and/or their being super (or rather, not being sub, since they include 1/1) and/or being related to musical notation?

A "c" might be nice, for "comma", the note C, or "class"... but less than ideal since a "c" is already used to distinguish cents representations from plain decimals.

Anyway, this is not super pressing or anything. I was just thinking that I struggled a bit early on appreciating that we were dealing with 2,3-free classes of ratios, not actual ratios that were members of a 2,3-free subset of ratios, and that a visual signifier might have forced me to confront and recognize that idea. If we come up with something nice maybe we could add it to the wiki article.

It wouldn't even need to be designed to be distinct enough from other markings on ratios/quotients to survive out in the wider world. Maybe just an apostrophe (read "prime", of all readings...) would suffice. It looks kind of like a bare shaft of the Sagittal notation, for what that's worth, heh...
Dave Keenan
Site Admin
Posts: 2180
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia

Re: developing a notational comma popularity metric

Post by Dave Keenan »

The 2,3-equivalence-class of 7/5 can be written using standard set-builder notation as
{7/5 × 2ⁿ × 3ᵐ | n, m ∈ ℤ}
So you might try abbreviating that, preferably in a way that would also work for conventional octave-equivalent (i.e. 2-equivalent) pitch classes such as
{5/3 × 2ⁿ | n ∈ ℤ}

An obvious first step would be to reduce it to simply
7/5 × 2ⁿ × 3ᵐ
In monzo form that would be
[n m -1 1⟩
The next step might be to make the variable factors into a subscript.
7/5_(×2ⁿ×3ᵐ)
Next would be to drop the n and m because they are too small.
7/5_(×2×3)
Then drop the multiply signs and separate the 2 and 3 with a comma.
7/5₂,₃

Without the comma it could be mistaken for a radix, i.e. base 23. In the octave-equivalent case
5/3₂
it shouldn't be mistaken as indicating base 2, as there are digits greater than 1 in the fraction. In the case of
1/1₂
we'd have to rely on the context to make it clear that this is indicating the 2-equivalent class and not base-2 notation.
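For anyone who wants to typeset the progression above, here's a rough LaTeX rendering (purely illustrative, not a settled notation):

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Full set-builder form of the 2,3-equivalence class of 7/5:
\[ \left\{ \tfrac{7}{5} \times 2^{n} \times 3^{m} \;\middle|\; n, m \in \mathbb{Z} \right\} \]

% Successive abbreviations: variable factors as a subscript, then drop n and m,
% then drop the multiplication signs and separate the 2 and 3 with a comma:
\[ \tfrac{7}{5}_{\times 2^{n} \times 3^{m}}
   \quad \tfrac{7}{5}_{\times 2 \times 3}
   \quad \tfrac{7}{5}_{2,3} \]

% Octave-equivalent (2-equivalent) analogues:
\[ \tfrac{5}{3}_{2} \qquad \tfrac{1}{1}_{2} \]

\end{document}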