developing a notational comma popularity metric


Post by Dave Keenan »

If you're interested in optimisation algorithms that can avoid getting stuck in local minima, read about "simulated annealing". But if you start thinking about implementing such an algorithm yourself, for this project, then I have to think we're expending way too much effort to decide some ratios for tina symbols that almost no one will use.
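For anyone curious, the core idea is small. Here's a minimal sketch in JavaScript (not a suggestion that we actually build this for the project; sumOfSquares and the parameter vector are placeholders for whatever error function and parameters we end up with):

```javascript
// Minimal simulated annealing sketch (illustrative only).
// `sumOfSquares` stands in for whatever error function we settle on;
// `initialParams` is an array such as [k, a, y, s].
function anneal(sumOfSquares, initialParams, steps = 10000) {
  let current = initialParams.slice();
  let currentError = sumOfSquares(current);
  let best = current.slice();
  let bestError = currentError;

  for (let i = 0; i < steps; i++) {
    const temperature = 1 - i / steps; // cools from 1 toward 0

    // Perturb one randomly chosen parameter, more boldly while hot.
    const candidate = current.slice();
    const j = Math.floor(Math.random() * candidate.length);
    candidate[j] += (Math.random() - 0.5) * temperature;

    const candidateError = sumOfSquares(candidate);
    const delta = candidateError - currentError;

    // Always accept improvements; sometimes accept a worse move while hot,
    // which is what lets the search climb back out of local minima.
    if (delta < 0 || Math.random() < Math.exp(-delta / Math.max(temperature, 1e-9))) {
      current = candidate;
      currentError = candidateError;
      if (currentError < bestError) {
        best = current.slice();
        bestError = currentError;
      }
    }
  }

  return { best, bestError };
}
```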
cmloegcmluin wrote: Mon Jun 29, 2020 11:11 am
Dave Keenan wrote: Mon Jun 29, 2020 10:39 am And I agree we should call it gpf(). You said "gcp()" in one place. Was that an error?
I did that in a couple places, so it was a bit more than a typo... the part of my brain that deals in three-letter-acronyms-starting-with-g subconsciously gravitates toward Google Cloud Platform, it would seem. I've corrected the errors. Thank you for pointing them out.
Daniel Dennett coined the word "thinko" for those. :)
Dave Keenan wrote: Mon Jun 29, 2020 10:39 am What is the argument for why a composer would prefer ratios with lower sopf, as opposed to merely lower sopfr?
I agree the metric should, with little contention, represent the things composers are likely to base their decisions on. For me, sopf seems pretty reasonable (as reasonable as π did, anyway), because many scales are based on just generators which are iterated repeatedly. Does it not seem reasonable to you that we should give ratios like 625/1 even the littlest bit of a pass for using only 5's? And wouldn't we want to punish e.g. 5·11/7·13 relative to 13·13/5·5?

I feel like sopfr, sopf, and gpf are each capturing a different aspect of the harmonic content which is important to capture: sopfr the overall harmonic content, sopf the variety of it, and gpf the extremeness of it. My stats vocab is lacking here; perhaps someone else knows better terms for these abstract aspects of data than "variety" and "extremeness".

That all said, I'm not precious about sopf just because I brought it up.
You make a good case. I read this as, "When you've used a given prime once, the cost of repeating it is less than the cost of the initial use. Sopfr treats repeats as having the same cost as the initial use. Sopf treats repeats as having zero cost."

I wonder if it would make sense to instead use a single intermediate function between sopaf() and sopafr() that costed repeats at some fractional power of the repeat number (which is the absolute value of the prime exponent). I suggest, in the interests of keeping it pronounceable, we call it sopafry().
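To make the three costings concrete, here's a rough JavaScript sketch (the helper names are mine, not anything from an existing code base):

```javascript
// Prime factorization of a positive integer as { prime: exponent } pairs.
// Naive trial division; fine for the small 2,3-reduced ratios we're ranking.
function primeFactorize(n) {
  const factors = {};
  for (let p = 2; p * p <= n; p++) {
    while (n % p === 0) { factors[p] = (factors[p] || 0) + 1; n /= p; }
  }
  if (n > 1) factors[n] = (factors[n] || 0) + 1;
  return factors;
}

// sopafry: each prime p occurring `count` times contributes p^a * count^y.
// y = 1 recovers sopafr (repeats cost full price); y = 0 recovers sopaf
// (repeats cost nothing); 0 < y < 1 is the intermediate costing proposed above.
function sopafry(n, a, y) {
  return Object.entries(primeFactorize(n))
    .reduce((sum, [p, count]) => sum + Number(p) ** a * count ** y, 0);
}

const sopafr = (n, a) => sopafry(n, a, 1);
const sopaf = (n, a) => sopafry(n, a, 0);

// With a = 1: sopafr(625, 1) = 20, sopaf(625, 1) = 5, sopafry(625, 1, 0.5) = 10.
```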

Post by cmloegcmluin »

volleo6144 wrote: Mon Jun 29, 2020 12:00 pm I'm pretty sure some 2,3-reduced ratios are less popular than others because they don't really have that many full ratios that are that useful: 1:31 is useful as a 31:32 quartertone, while 1:29 ... doesn't exactly have anything like that going for it.
This is a continuation of an earlier exchange of ours:
cmloegcmluin wrote: Thu Jun 25, 2020 6:46 am
volleo6144 wrote: Thu Jun 25, 2020 4:20 am Making the sort come out the same is also going to be particularly hard, because 1:29 is less popular than 1:31.
That could mean ... the weight on primes may need to be more complex than something like what we've looked at already, p^0.9, i.e. it may need to take harmonic concepts into account. If someone has an argument for why the 31st harmonic is more useful to compose with than the 29th, and can manifest said argument in the form of a mathematical formula, we could experiment with that.
So your observation about the 31:32 quartertone is a perfect example of the type of "good argument" I was looking for. As for "manifest[ing] said argument in the form of a mathematical formula", however, I'm concerned that the prospect of generalizing these types of arguments, in order to essentially capture the usefulness of ratios in terms of satisfying the most needed harmonic functions, may be an order of magnitude more complex than the stuff we're grappling with here so far.

------
Dave Keenan wrote: Mon Jun 29, 2020 12:21 pm If you're interested in optimisation algorithms that can avoid getting stuck in local minima, read about "simulated annealing". But if you start thinking about implementing such an algorithm yourself, for this project, then I have to think we're expending way too much effort to decide some ratios for tina symbols that almost no one will use.
I'm at peace with the time we're spending here. It's about more than the specific applications, to me anyway. I probably won't implement SA myself, but learning all these concepts is a lot of fun, and fulfilling! Thanks as always for spreading the knowledge around.
Dave Keenan wrote: Mon Jun 29, 2020 12:21 pm Daniel Dennett coined the word "thinko" for those. :)
YES! I have needed this word. I shall spread the gospel of the thinko.
Dave Keenan wrote: Mon Jun 29, 2020 12:21 pm You make a good case. I read this as, "When you've used a given prime once, the cost of repeating it is less than the cost of the initial use. Sopfr treats repeats as having the same cost as the initial use. Sopf treats repeats as having zero cost."

I wonder if it would make sense to instead use a single intermediate function between sopaf() and sopafr() that costed repeats at some fractional power of the repeat number (which is the absolute value of the prime exponent). I suggest, in the interests of keeping it pronounceable, we call it sopafry(), or perhaps sopafre() since there's little reason to confuse this exponent with Euler's number ≈ 2.718.
Yes, that is another way of putting what I was trying to get at. You've understood me exactly. Your proposal is similar to something I said earlier:
cmloegcmluin wrote: Mon Jun 22, 2020 4:50 am we may want to treat the second time a prime appears in the same ratio differently (the 2nd 5 in 25/1 may have a different impact)
And now that I have the code in place, it's obvious how straightforward it would be to raise the term of the monzo to some power (or map it to some logarithmic base), just as the prime that term represents is raised to some power (or mapped to some logarithmic base). I have to go make dinner now, but I'm excited to report back on the fruits of consolidating those two elements in the metric together. Great friggin' idea.
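Concretely, the kind of change I mean is sketched below (the primes list, parameter names, and defaults are illustrative only, not the actual code):

```javascript
// Sketch of the same idea starting from a monzo (a vector of prime exponents).
const PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31];

// Weight each prime either by a sublinear power p^a or by a log to base a,
// and weight each term (the absolute value of its exponent) by a power |e|^y.
function sopafryFromMonzo(monzo, a, y, logPrimes = false) {
  return monzo.reduce((sum, exponent, i) => {
    if (exponent === 0) return sum;
    const primeWeight = logPrimes ? Math.log(PRIMES[i]) / Math.log(a) : PRIMES[i] ** a;
    return sum + primeWeight * Math.abs(exponent) ** y;
  }, 0);
}

// 25/1 as a monzo is [0, 0, 2]; with a = 1, y = 1 this gives 5 * 2 = 10,
// and lowering y below 1 is exactly what discounts that repeated 5.
```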

Post by cmloegcmluin »

I can get the sum of squares down to 0.005554718785262, which is close enough, I think, to the previous find to prefer this simplification:

5rcu(c1/c2) = max(sopafry(c1), sopafry(c2)) + 0.662 × min(sopafry(c1), sopafry(c2)) + 0.425 × gpf(c1/c2), with the subscripts a = 1.501 and y = 0.87

Where 5rcu means "5-rough comma unpopularity". We can save the "nu" (𝜈) for when we actually have the full notational unpopularity metric. Of course if anyone has any better names go ahead; I haven't given it a ton of consideration.

The subscript a in the sopafry is to say that in this case I found that a logarithmic shape for the primes fit better. But a sublinear exponent fits better for the terms.

If we really think both should be sublinear exponents, the best fit I could get was 0.0057104024991967, with k=0.66, a=0.62, y=0.858, s=0.192. In other words, when you switch to using a logarithmic weight on the primes, the prime limit has to do more work.

Part of me feels like there could be a way to fold prime limit in too. The metric does differentiate between 25/7 and 19 now... well, wait, though... it's actually in the opposite way we need. Back here we wanted 19 to be ranked worse than 25/7, but if you flatten out the primes (whether with a sublinear exponent or a logarithm) then 19 will actually beat 25/7. So the prime limit is necessary to correct for that. And yes, I am not getting near 0.005555 for any parameter settings when s=0.
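If it helps to see that operationally, here's a rough JavaScript sketch of the above (the helper names and structure are mine, not the actual code base; per the note about the subscript a above, a is used here as a log base, and the constants are just the fitted values quoted):

```javascript
// Trial-division factorization, as in the earlier sketch.
function primeFactorize(n) {
  const factors = {};
  for (let p = 2; p * p <= n; p++) {
    while (n % p === 0) { factors[p] = (factors[p] || 0) + 1; n /= p; }
  }
  if (n > 1) factors[n] = (factors[n] || 0) + 1;
  return factors;
}

// sopafry with a logarithmic weight on the primes (log base a)
// and a sublinear power y on the repeat counts.
function sopafryLogPrimes(n, a, y) {
  return Object.entries(primeFactorize(n))
    .reduce((sum, [p, count]) => sum + (Math.log(Number(p)) / Math.log(a)) * count ** y, 0);
}

// Greatest prime factor across the whole ratio, i.e. its prime limit.
function gpf(c1, c2) {
  const primes = [...Object.keys(primeFactorize(c1)), ...Object.keys(primeFactorize(c2))].map(Number);
  return Math.max(1, ...primes);
}

// 5-rough comma unpopularity; c1 and c2 are the 2,3-free numerator and denominator.
function fiveRoughCommaUnpopularity(c1, c2, a = 1.501, y = 0.87, k = 0.662, s = 0.425) {
  const x1 = sopafryLogPrimes(c1, a, y);
  const x2 = sopafryLogPrimes(c2, a, y);
  return Math.max(x1, x2) + k * Math.min(x1, x2) + s * gpf(c1, c2);
}
```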

Post by Dave Keenan »

Do you mean "u" (coefficient of gpf(n/d)) here when you say "s" (coefficient of sopf(n/d))?

"s" and "u" usage based on: viewtopic.php?p=1922#p1922

Ah but I see that was swapped relative to your earlier usage here: viewtopic.php?p=1909#p1909

And you have "5rcp" where I think you mean "5rcu".

I too found that the coefficient of gpf(n/d) (a.k.a. prime limit) came out negative, which loses psychological plausibility.

Post by Dave Keenan »

OK. I see that "s" was the coefficient for prime limit from way back here: viewtopic.php?p=1825#p1825.

So you just need to correct your usage here: viewtopic.php?p=1922#p1922

It's looking like this excursion into sopafry might be a dead-end, since you got lower error by including the sopf(n/d) term, and that had a plausible positive coefficient on the prime limit.

Post by Dave Keenan »

No! Not a dead-end. I realise now that the negative "s" coefficient on gpf(n/d) (prime-limit) only occurs when you take the log of the primes as weights (thereby making it like Tenney height).
cmloegcmluin wrote: Mon Jun 29, 2020 3:24 pm The subscript a in the sopafry is to say that in this case I found that a logarithmic shape for the primes fit better. But a sublinear exponent fits better for the terms.
It's confusing, calling that log base "a" too. I'm going to think of it as alpha "α": sopαfry. It's a pity alpha hardly looks different from "a" in the font I see it in here. But it probably won't matter because ....
cmloegcmluin wrote: Mon Jun 29, 2020 3:24 pm If we really think both should be sublinear exponents, the best fit I could get was 0.0057104024991967, with k=0.66, a=0.62, y=0.858, s=0.192. In other words, when you switch to using a logarithmic weight on the primes, the prime limit has to do more work.
Yes I really think both p and r should have sublinear power functions applied, because when you apply a log to p you need that psychologically-implausible negative coefficient on the prime limit to counteract its apparent excess.
cmloegcmluin wrote: Mon Jun 29, 2020 3:24 pm Part of me feels like there could be a way to fold prime limit in too.
I wondered about that too. But I've since convinced myself it's not possible, although I can't easily explain it.

By doing an actual sort (as a manually issued command) I have confirmed your sum of squares figure of 0.005710402 for (k=0.66, a=0.62, y=0.858, s=0.192), although I haven't confirmed that it's the minimum, or even a minimum.

Some more good news is that I am now finding values of those coefficients close to yours, by using the Excel Solver. This is the result of two major innovations.
First, I gave another degree of freedom to the monotonic function I'm using in lieu of sorting, to obtain the rank. Instead of simply est_rank = b^metric, I'm now using est_rank = m×b^metric. Typical values are m ≈ 0.5, b ≈ 1.46.
Second, I'm omitting the error for the ratio 1/1 from the sum of squared errors that the Solver is told to minimise.
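In code terms, the equivalent of those two changes looks something like this sketch (the shape of `entries` and the per-ratio error function are placeholders for whatever is already being used):

```javascript
// Monotonic stand-in for an actual sort: est_rank = m * b^metric.
function estimateRank(metric, m, b) {
  return m * b ** metric;
}

// Sum of squared errors, omitting the ratio 1/1.
// `entries` is assumed to look like [{ ratio: '5/1', actualRank: 1, metric: 3.2 }, ...],
// and `errorOf` is whatever per-ratio error the existing fits already use.
function sumOfSquaredErrors(entries, m, b, errorOf) {
  return entries
    .filter(({ ratio }) => ratio !== '1/1')
    .reduce((sum, { actualRank, metric }) =>
      sum + errorOf(estimateRank(metric, m, b), actualRank) ** 2, 0);
}

// Illustrative call only, with typical values m ≈ 0.5, b ≈ 1.46:
// sumOfSquaredErrors(entries, 0.5, 1.46, (est, actual) => Math.log(est / actual));
```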

When I do both of those things, Excel finds a minimum at (k=0.644, a=0.671, y=0.866, s=0.201). When I do an actual sort with those values, I get sum of squares = 0.006612407, showing that your values are better (k=0.66, a=0.62, y=0.858, s=0.192).

I now trust your optimisations.

So how much worse is the minimum sum-of-squares if we force s=0, or y=1, or both? And what are the optimum values of the other coefficients in those 3 cases?

You earlier claimed:
cmloegcmluin wrote: Mon Jun 29, 2020 6:16 am Without using sopf at all, the best fit I can find is 0.006905116251199162 (k=0.397, a=0.872, s=0, u=0.168, z=-1, cut-off-point=80).
But this was with usages of "s" and "u" swapped, so this should be what we would now call (k=0.397, a=0.872, y=1, s=0.168). But when I put in those values, and do the actual sort, I get sum-of-squares 0.02022529. Nearly three times what you claimed.

When I use the Solver (with my rankifying function instead of a sort), with y fixed at 1, I get a minimum at a very different place, namely (k=0.711, a=0.686, y=1, s=0.324). And when I apply a sort to this, I get sum-of-squares 0.007801175.

I similarly get (k=0.641, a=0.878, y=0.825, s=0) with sum-of-squares 0.009200898,
and (k=0.744, a=1.041, y=1, s=0) with sum-of-squares 0.008898435.

I note that good-old-fashioned sopfr(n*d) alone (i.e. k=1, a=1, y=1, s=0) has sum-of-squares 0.011375524. That's almost double what we get for (k=0.66, a=0.62, y=0.858, s=0.192), sum-of-squares 0.005710402.

The case of maximally-negative correlation (ranks exactly reversed) gives sum-of-squares 3.019814879. Randomly shuffled ranks (near zero correlation) seem to give sum-of-squares around 2.5.

Post by Dave Keenan »

I have another psychologically-plausible function of the prime exponents, that might improve the metric (reduce the sum-of-squares error), and might eliminate the need for a non-unity exponent on the repetitions, and might eliminate the need to include the prime limit. In other words, it might allow y=1 and s=0. It is the count of prime factors with repetition, copfr(), equivalent to sop0fr(). And of course you could also have copf() or an intermediate copfry(). But I suggest trying a simple copfr() first.

I tried this:

max(sopafr(num), sopafr(den)) + k * min(sopafr(num), sopafr(den)) + c * copfr(num/den)

But the optimum copfr coefficient (which I'll call "c") is implausibly negative.

I'm looking at optimum values around k=0.95, a=0.23, c=-1.25.
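For concreteness, here's a sketch of that variant (helper names are mine; I've read copfr(num/den) as counting the prime factors of the numerator and denominator together, and the constants are the rough optimum just mentioned, implausibly negative c and all):

```javascript
// copfr: count of prime factors with repetition (equivalent to sopafr with a = 0).
function copfr(n) {
  let count = 0;
  for (let p = 2; p * p <= n; p++) {
    while (n % p === 0) { count++; n /= p; }
  }
  return n > 1 ? count + 1 : count;
}

// sopafr with a sublinear power a applied to the primes.
function sopafr(n, a) {
  let sum = 0;
  for (let p = 2; p * p <= n; p++) {
    while (n % p === 0) { sum += p ** a; n /= p; }
  }
  return n > 1 ? sum + n ** a : sum;
}

// The variant tried above, with copfr applied to the ratio as a whole.
function copfrVariant(num, den, k = 0.95, a = 0.23, c = -1.25) {
  const x1 = sopafr(num, a);
  const x2 = sopafr(den, a);
  return Math.max(x1, x2) + k * Math.min(x1, x2) + c * (copfr(num) + copfr(den));
}
```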

[Edit: Changed variable name "w" to "c".]

Post by cmloegcmluin »

I had swapped s and u in that one post somehow. I corrected it. And the 5rcu. Sorry for the confusion.
Dave Keenan wrote: Mon Jun 29, 2020 4:23 pm It's looking like this excursion into sopafry might be a dead-end, since you got lower error by including the sopf(n/d) term, and that had a plausible positive coefficient on the prime limit.
Here's where I think you began misunderstanding me.

I do get a lower error by including the sopf(n/d) term, yes: 0.005358. But I was trying to say that I believe 0.005555, or even 0.005710, are close enough to that to warrant switching over to the sopafry formula. Do you agree with that statement?*

What I meant when I said "when you switch to using a logarithmic weight on the primes, the prime limit has to do more work" was that the biggest difference between my 0.005555 SoS with logarithmically weighted primes and 0.005710 SoS with sublinear exponentially weighted primes was the s, or coefficient on the prime limit, and that s was much larger (0.425) for logarithmically weighted primes and much smaller (0.192) for sublinear exponentially weighted primes.

And what I meant when I said "Part of me feels like there could be a way to fold prime limit in too. The metric does differentiate between 25/7 and 19 now... well, wait, though... it's actually in the opposite way we need. Back here we wanted 19 to be ranked worse than 25/7, but if you flatten out the primes (whether with a sublinear exponent or a logarithm) then 19 will actually beat 25/7. So the prime limit is necessary to correct for that. And yes, I am not getting near 0.005555 for any parameter settings when s=0." was that the relationship between α and s is more interesting than I had previously thought, because in some sense they are tugging against each other, not with each other.

α is shaving off relatively more from bigger primes overall, so they don't hurt as much, but then s gives a calculated punctuated punch for the one biggest prime. So by including α and s we are in a sense saying that composers are more okay using bigger primes than the size of the primes themselves as numbers would suggest, but they still make sure that their very biggest prime is not so big. In other words, "correct for that" meant the same thing as "has to do more work" in the quote from me in the previous paragraph. And my primary point in that context was that folding prime limit into sopafry could not work because of this tension between their agendas. And it looks like you've also convinced yourself it can't be done. Perhaps this is the explanation you were looking for?
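To make that 25/7 vs. 19 point concrete, here's a quick check (a sketch only, using the fitted values k = 0.66, a = 0.62, y = 0.858 from earlier with a sublinear power on the primes; helper names are mine):

```javascript
// Prime factorization as [prime, count] pairs, by trial division.
function factorCounts(n) {
  const out = [];
  for (let p = 2; p * p <= n; p++) {
    let count = 0;
    while (n % p === 0) { count++; n /= p; }
    if (count) out.push([p, count]);
  }
  if (n > 1) out.push([n, 1]);
  return out;
}

const sopafry = (n, a, y) =>
  factorCounts(n).reduce((sum, [p, count]) => sum + p ** a * count ** y, 0);

const gpf = (n, d) =>
  Math.max(1, ...factorCounts(n).map(([p]) => p), ...factorCounts(d).map(([p]) => p));

const score = (n, d, s, k = 0.66, a = 0.62, y = 0.858) => {
  const [x1, x2] = [sopafry(n, a, y), sopafry(d, a, y)];
  return Math.max(x1, x2) + k * Math.min(x1, x2) + s * gpf(n, d);
};

// With s = 0, the flattened primes let 19 score lower (better) than 25/7,
// which is the wrong way around; with s = 0.192, the prime-limit term
// pushes 19 back above 25/7.
console.log(score(19, 1, 0), score(25, 7, 0));         // ≈ 6.21 vs ≈ 7.12
console.log(score(19, 1, 0.192), score(25, 7, 0.192)); // ≈ 9.86 vs ≈ 8.47
```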

I hadn't even tried negative coefficients on the prime limit.
Dave Keenan wrote: Mon Jun 29, 2020 6:27 pm No! Not a dead-end. I realise now, that the negative "s" coefficient on gpf(n/d) (prime-limit) only occurs when you take the log of the primes as weights (thereby making it like Tenney height).
So I'm not sure where you're getting that. I still have a positive weight on the prime limit. And it gets bigger when I use a logarithmic α rather than a sublinear exponent α, because per that chart I showed earlier, logarithmic α shaves even more off bigger primes than sublinear exponent α, so s has to do even more work.
Dave Keenan wrote: Mon Jun 29, 2020 6:27 pm Yes I really think both p and r should have sublinear power functions applied, because when you apply a log to p you need that psychologically-implausible negative coefficient on the prime limit to counteract its apparent excess.
Do you still prefer sublinear exponents (should I be calling them power functions rather than exponentials?) if we don't need negative coefficients on the prime limit?
Dave Keenan wrote: I note that good-old-fashioned sopfr(n*d) alone (i.e. k=1, a=1, y=1, s=0) has sum-of-squares 0.011375524. That's almost double what we get for (k=0.66, a=0.62, y=0.858, s=0.192), sum-of-squares 0.005710402.

The case of maximally-negative correlation (ranks exactly reversed) gives sum-of-squares 3.019814879. Randomly shuffled ranks (near zero correlation) seem to give sum-of-squares around 2.5.
These are great figures to judge our efforts against. Thanks for gathering those.

My screw-up with s and u in that previous post is confusing the hell out of me. And I'm disappointed to hear that Excel is giving you such different results. I will have to redo my work.
Dave Keenan wrote: I suggest trying a simple copfr() first.
I've actually already implemented copfr in the code base in the module for finding tina commas :) I'll plug it into the mix and see if it can move the needle.

Re: the name, it made me think of a fried food I ate in Mexico. I did a web search for "sopa fry" but all I got were pictures of soup, since "sopa" is the Spanish word for soup. I finally figured out the food was called a "sope". I dunno how you feel about making that alpha into an epsilon? Then we could have some tasty fried sopes!

[image: a plate of sopes]

Unfortunately I am "firefighter" at work today so I have to be constantly monitoring our systems in case something goes wrong. I may not be able to work on this again until your tomorrow, Dave.

*One thought that is bouncing around in my head is: let's say in 25 years microtonality is huge and something like Sevish's scale workshop has a ton of usage statistics. And someone needs something like our formula here (hopefully they won't if Sagittal is the dominant notation system?!) so they run it against this much richer data set. I think we're on the right side of history if we lean toward the simplest formula we can, even if we lose a few points on the SoS score.

Post by cmloegcmluin »

Dave Keenan wrote: You earlier claimed:
cmloegcmluin wrote: Mon Jun 29, 2020 6:16 am Without using sopf at all, the best fit I can find is 0.006905116251199162 (k=0.397, a=0.872, s=0, u=0.168, z=-1, cut-off-point=80).
But this was with usages of "s" and "u" swapped, so this should be what we would now call (k=0.397, a=0.872, y=1, s=0.168). But when I put in those values, and do the actual sort, I get sum-of-squares 0.02022529. Nearly three times what you claimed.
I'm afraid that earlier claim of mine may as well be nonsense. I cannot reproduce it. And I do not have it stored in my logs anywhere. I honestly have no idea how that ended up on the page. Very sorry.
Dave Keenan wrote: When I use the Solver (with my rankifying function instead of a sort), with y fixed at 1, I get a minimum at a very different place, namely (k=0.711, a=0.686, y=1, s=0.324). And when I apply a sort to this, I get sum-of-squares 0.007801175.
I am happy to report that my code finds the exact same SoS for those parameters (0.007801175376411953, to be precise). It's a relief that we've both (you I never doubted... me, on the other hand...) got enough of a handle on this to have independently implemented tools that agree with each other.

My JavaScript can beat your Excel Solver, though: with k=0.753, a=0.517, y=1, s=0.285 I can get 0.007112287. Does that check out on your end?

Though I don't think we're much considering metrics where u=0 and y=1, right? I think we're considering either ones where u > 0 (we include sopaf along with sopafr) or y < 1 (we fold sopaf into sopafr as sopafry).

Post by Dave Keenan »

Thanks for sorting all that out. I can't check your latest because I'm hot on the trail of a prime weighting function that may eliminate the need to include prime-limit/gpf in the mix. It looks a lot like log₃(p) − 1.