developing a notational comma popularity metric

User avatar
Dave Keenan
Site Admin
Posts: 1088
Joined: Tue Sep 01, 2015 2:59 pm
Location: Brisbane, Queensland, Australia
Contact:

Re: developing a notational comma popularity metric

Post by Dave Keenan »

That's great, thanks. I concur with your final sentiment above.

No I don't still need you to do those earlier ones.

I'm now thinking I will investigate the denominator having its own independent equivalents of α, w and y. Call them A, W and Y. "c" would then not be needed, as it would be taken care of by w and W. But that would be back to 6 parameters, so I'd hope some would prove unnecessary.

Or maybe I'll try applying copfr only to the denominator (and only sopawfry to the numerator, as we do now). I note that copfr(den) = sopAWfrY(den) when A = ∞, W = 1, Y = 1, because log∞(x) = 0.

But now I'm wondering how we should define "numerator" and "denominator". Is the numerator the one with the largest sopawfry or the largest copfr?

Post by Dave Keenan »

cmloegcmluin wrote:
Tue Jun 30, 2020 3:44 pm
You're right, and I should have noticed that. Yes, let's make it w. I can go back and edit my previous post from "d" to "w" too.
Thanks. Please do. I've now edited my "w"s to "c"s where necessary.
I'm not sure what "real" name you're referring to. As soon as we went beyond sopfr and sopf, aren't these all new things without established/"real" names? Or do you mean something else by that?
Something else. By "real" names, I meant names that obey the rules for function names in math expressions, rules that allow humans to parse them correctly based on purely lexical or syntactic information rather than the detailed context-sensitive semantic information that we are only able to use because we grew it ourselves. These rules are similar to those for function names in most programming languages: names should consist only of alphabetic characters, or at least should not contain special characters that are themselves function names (like √) or brackets (like {...}), or subscripts or superscripts except at the end.
These unwieldy names are helpful during the development process. Once we reach the final step of naming these functions nicely for the outside world we can rely a lot more on the descriptions of what exactly the functions do, and reduce the name to something optimized for pronounceability, e.g. "soapfar" for "sum of adjusted prime factors (with) adjusted repetition", or something like that.
Agreed.
It might be cool if we added a plugin to the forum for LaTeX or MathJax. That might help get these formulas across in a less disgusting and/or intimidating way :)
At this point, I think it would just slow us down. We seem to be communicating with each other just fine. And most of what we've written will represent dead-ends by the time we're finished.
Do you still want me to check those, even though I've since found one with 0.004250806?
No.
Actually, scratch that. I think your method will need to fulfill the role of the "home stretch": finding the exact values down to the millionths place or whatnot. The way I'm doing things, it's not really tractable to look deeper than the thousandths place. So if you're already working with SoS-billionths, I'm not going to be able to help you get any more precise.
Not at all. The only reason I'm giving max sig figs for SoS is I'm too lazy to round them, and to give confidence we're computing the same thing. Don't forget that my spreadsheet can't find the true minima of SoS because it has to use a proxy SoS, based on a proxy for the estimated rank (the rank derived from the candidate metric), that does not require actually sorting the ratios based on the metric.

My spreadsheet can get into the right ballpark, but only your code can find the true minimum nearby. I can also find the true SoS for any set of parameters you give me (or for the minimum I find in the proxy SoS), by actually doing the sort. But I can't find where the minimum is in the true SoS.

My proxy to avoid sorting is currently to calculate the estimated rank from the metric using est_rank = m×metricᵇ plus a constant offset, where m, b and the offset get included, along with our model parameters α, w, y, c, etc., as cells for the Solver to adjust in order to minimise the proxy SoS.

The proxy SoS varies smoothly with the model parameters, while the true SoS is stepped. i.e. it consists of many small flat areas with sudden jumps between them. A jump occurs whenever a parameter change causes the metrics for two different ratios to go past each other (i.e. change their ordering) which causes the two ratios to swap ranks. Because of these flat spots, there is no point in fine-tuning the parameters beyond a certain level.
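The stepped behaviour of the true SoS can be seen in a toy example (hypothetical metrics and ranks, purely for illustration, not the actual ratio data): the true SoS stays flat as a parameter varies, until the change makes two metrics cross and the ranks swap, at which point it jumps.

```python
def true_sos(w):
    # Toy example: two ratios whose actual popularity ranks are 1 and 2.
    # Ratio 1's metric depends on the parameter w; ratio 2's does not.
    metrics = [5 + w, 6.0]
    # Estimated rank = 1-based position after sorting by the metric.
    order = sorted(range(len(metrics)), key=lambda i: metrics[i])
    est_rank = [0] * len(metrics)
    for rank, i in enumerate(order, start=1):
        est_rank[i] = rank
    actual_rank = [1, 2]
    return sum((e - a) ** 2 for e, a in zip(est_rank, actual_rank))

# The true SoS is flat on either side of w = 1, where the two metrics
# cross and the ranks swap, causing a sudden jump from 0 to 2.
```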
I can't figure out how that works. Is it using a logarithmic identity I'm not familiar with? I'm interested, certainly, since it seems like you found a way to consolidate the count of primes into their sum.
No logarithmic identity involved.

sopfr(i), where i is an integer, is the weighted sum of the terms in the monzo for i, where each prime has a weight equal to that prime.

copfr(i) is simply the sum of the terms in the monzo. But you could also say that it's the weighted sum where each prime has a weight of 1.

And so you could say that w × copfr(i) is the weighted sum where each prime has a weight of w.

So sopfr(i) + w × copfr(i) is the weighted sum where each prime has a weight of that prime plus w.

So sopfr(i) + w × copfr(i) can be called a soapfr(i) where "ap" stands for "adjusted prime" as you suggested, and the adjustment is p → p + w.

Similarly, we can define solpfr(i) as the weighted sum where the weights are the base-2-logs of the primes. Solpfr(i) is also known as the Tenney height of i.

Then solpfr(i) + w × copfr(i) can be called a soapfr(i) where the adjustment is p → log2(p) + w.
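The identity above can be sketched in code (a toy 5-rough monzo over an assumed prime list; not the project's actual code): sopfr(i) + w × copfr(i) equals the single weighted sum where each prime's weight is p + w.

```python
import math

PRIMES = [5, 7, 11, 13]   # 5-rough primes, as used in this thread
MONZO = [9, 4, 0, 1]      # exponents for a toy 5-rough ratio

def sopfr(monzo):
    # weighted sum of prime factors with repetition; weight of p is p itself
    return sum(r * p for r, p in zip(monzo, PRIMES))

def copfr(monzo):
    # count of prime factors with repetition: just the sum of the monzo terms
    return sum(monzo)

def soapfr(monzo, w):
    # weighted sum where each prime's weight is the adjusted prime p + w
    return sum(r * (p + w) for r, p in zip(monzo, PRIMES))

def solpfr(monzo):
    # weights are the base-2 logs of the primes (Tenney-style)
    return sum(r * math.log2(p) for r, p in zip(monzo, PRIMES))
```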

P.S. I didn't get to try any of the things I mentioned in the previous message.

User avatar
cmloegcmluin
Site Admin
Posts: 787
Joined: Tue Feb 11, 2020 3:10 pm
Location: San Francisco, California, USA
Real Name: Douglas Blumeyer
Contact:

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Dave Keenan wrote:
Tue Jun 30, 2020 6:59 pm
The proxy SoS varies smoothly with the model parameters, while the true SoS is stepped. i.e. it consists of many small flat areas with sudden jumps between them. A jump occurs whenever a parameter change causes the metrics for two different ratios to go past each other (i.e. change their ordering) which causes the two ratios to swap ranks. Because of these flat spots, there is no point in fine-tuning the parameters beyond a certain level.
I admit I still don't fully understand the limitations of your setup, but I do better understand their implications now. I had noticed the stepped nature of the SoS and understood its cause. Perhaps it would be more appropriate to – in the end at least – return the ranges for each parameter within which the minimum SoS is found, rather than a collection of arbitrary points within those ranges.
Then solpfr(i) + w × copfr(i) can be called a soapfr(i) where the adjustment is p → log2(p) + w.
"solpfr" is great – thanks for that.

This project was on my mind as I dozed off to sleep last night, and freed from the constraints of the screen and keyboard, I worked out this relationship in my head. So it's interesting how, on one hand, we want copfr to be positive, because psychologically speaking, the more primes in your ratio, the less popular it should be. But we were finding the best w to be negative, meaning that each individual prime in a ratio needed to do a flat amount less damage to the popularity; this part I don't have a psychological explanation for, so much as I shrug it off as basically a recognition that the best fit line will involve not only getting the logarithmic "slope" correct but also getting the intercept correct. Or maybe it all adds (or subtracts ;) ) up to copfr being psychologically explainable as a negative, in the sense not that more primes actually improve your popularity, but that adding more primes matters less, relatively speaking, than what those primes are.
Dave Keenan wrote:
Tue Jun 30, 2020 4:34 pm
I will investigate the denominator having its own independent equivalents of α, w and y. Call them A, W and Y. "c" would then not be needed as it would be taken care of by w and W. But that would be back to 6 parameters, so I'd hope some would prove unnecessary.
Back before we consolidated sopafr and sopaf together as sopafry, I did experiment with different k and a on sopaf than on sopafr. I may have mentioned this earlier. What I may not have mentioned is that I moved away from this, even removing the capacity for this from my code, replacing it with the following comment: "That they would be different doesn't feel psychologically motivated". Ever since you dropped that concept into this topic I have been trying to keep it close to top of mind.

I don't mean to discourage experimentation. I think we're finding again and again as we try to pin this elusive metric down that what works is initially surprising but then comes into focus.

I may, after all, experiment with putting the whole darn thing to a power or base, to see if that moves the needle, even though I likened it to a parade of hallucinogenic pink elephants. Might as well give it a try.
Or maybe I'll try applying copfr only to the denominator (and only sopawfry to the numerator, as we do now). I note that copfr(den) = sopAWfrY(den) when A = ∞, W = 1, Y = 1, because log(x) = 0.

But now I'm wondering how we should define "numerator" and "denominator". Is the numerator the one with the largest sopawfry or the largest copfr?
If we consolidate them in together, maybe this question is moot? Or maybe I'm a bit too confused to directly answer the question. I don't exactly know where to go from here.

We seem to have found that k=0, c≠0 gives better results. And c can be transmuted into w, which is good because it's a simplification. But if c transmutes into w, and k is still 0, then we no longer have any part of the metric that reacts in any way to what's in the smaller of the numerator and the denominator, however that's defined. (Before c transmuted to w, while it was applying to a copfr which did not split num and den, that was the extent to which we were considering the smaller of the two.)

We could also try k as a power or base, instead of a scalar.

Post by cmloegcmluin »

And on the other hand, we've experimented with a and y as powers and bases, but couldn't they be scalars too? (I'm using scalar here to mean "something we multiply by" – let me know if there's a better term for that.) I mean, if we were back to something as simple as sopfr, a scalar on p would not make a difference in ranking (changing the scalar could never cause any two ratios' ranks to switch), but couldn't there be some situations where it could work?

And have we discussed copfr vs copf? I think I was using plain old copfr, but we could even use copfar.
e.g. with ar = √r, the 5-rough monzo [9 4 0 1〉 would have

sopfr  9*5 + 4*7 + 0*11 + 1*13 = 86
sopf   1*5 + 1*7 + 0*11 + 1*13 = 25
sopfar 3*5 + 2*7 + 0*11 + 1*13 = 42
copfr  9*1 + 4*1 + 0*1  + 1*1  = 14
copf   1*1 + 1*1 + 0*1  + 1*1  = 3
copfar 3*1 + 2*1 + 0*1  + 1*1  = 6 

If we used ap = p + w where w = 1, then we'd have consolidated copfr, copf, and copfar into sopfr, sopf, and sopfar, respectively (so I'm not even using a base or a power in my ap here, but I'm just proving the point, I think, that if we use w we no longer need to speak of cop...):

soapfr  9*(5 + 1) + 4*(7 + 1) + 0*(11 + 1) + 1*(13 + 1) = 100 = 86 + 14
soapf   1*(5 + 1) + 1*(7 + 1) + 0*(11 + 1) + 1*(13 + 1) =  28 = 25 + 3
soapfar 3*(5 + 1) + 2*(7 + 1) + 0*(11 + 1) + 1*(13 + 1) =  48 = 42 + 6
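The whole table, and the consolidation via ap = p + 1, can be checked with a single generic helper (a sketch; `weighted_sum` is my name, not an established function):

```python
import math

PRIMES = [5, 7, 11, 13]
MONZO = [9, 4, 0, 1]   # the 5-rough monzo [9 4 0 1⟩ above

def weighted_sum(monzo, weight, adjust_r=lambda r: r):
    # generic form: sum over nonzero terms of adjusted-repeat × weight(p)
    return sum(adjust_r(r) * weight(p) for r, p in zip(monzo, PRIMES) if r > 0)

sqrt = math.sqrt
sopfr  = weighted_sum(MONZO, lambda p: p)                    # 86
sopf   = weighted_sum(MONZO, lambda p: p, lambda r: 1)       # 25
sopfar = weighted_sum(MONZO, lambda p: p, sqrt)              # 42
copfr  = weighted_sum(MONZO, lambda p: 1)                    # 14
copf   = weighted_sum(MONZO, lambda p: 1, lambda r: 1)       # 3
copfar = weighted_sum(MONZO, lambda p: 1, sqrt)              # 6
# with ap = p + 1 (i.e. w = 1), each cop... folds into the matching soap...:
soapfr  = weighted_sum(MONZO, lambda p: p + 1)               # 100 = 86 + 14
soapf   = weighted_sum(MONZO, lambda p: p + 1, lambda r: 1)  # 28 = 25 + 3
soapfar = weighted_sum(MONZO, lambda p: p + 1, sqrt)         # 48 = 42 + 6
```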

Post by cmloegcmluin »

Could we not also collapse soapfr, soapf, and soapfar into one function – which I'd still call "soapfar" – if we allowed r to get its own w (maybe call it v?)?

soapfar (3 + 1)*(5 + 1) + (2 + 1)*(7 + 1) + (0 + 0)*(11 + 1) + (1 + 1)*(13 + 1) = 76 = 48 + 28

In other words, it looks like we can represent every dimension we've brought up for describing prime content in a single function (well, except prime limit and the prime-counting function, which we haven't spoken about as much recently).
The only thing is that we wouldn't be able to assign different k's, a's, y's, w's, v's etc. across either of those two aspects we'd just collapsed (sum vs. count, w/ repetition vs. w/o repetition). But again I'm not sure that to have those parameters vary across those aspects would be justifiable as psychologically motivated.
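One possible form of that single function, sketched under the assumptions above (ar = √r, and absent primes contribute nothing, matching the (0 + 0) term in the worked example):

```python
import math

PRIMES = [5, 7, 11, 13]
MONZO = [9, 4, 0, 1]

def soapfar(monzo, w, v, adjust_r=math.sqrt):
    # single function covering sum vs. count, and with vs. without repetition:
    # zero terms contribute nothing; each nonzero term contributes
    # (adjusted repeat + v) × (prime + w)
    return sum((adjust_r(r) + v) * (p + w)
               for r, p in zip(monzo, PRIMES) if r > 0)
```

With w = 1, v = 1 this reproduces the 76 above; with v = 0 it falls back to the plain soapfar (48); with the identity repeat-adjustment and w = v = 0 it recovers sopfr (86).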

I know we've been saying similar things to this, so I apologize if I'm being redundant or being wishy-washy or vacillating too much.

Post by Dave Keenan »

cmloegcmluin wrote:
Wed Jul 01, 2020 9:19 am
And on the other hand, we've experimented with a and y as powers and bases, but couldn't they be scalars too (I'm using scalar here to mean "something we multiply by" – let me know if there's a better term for that).
Yeah. A scalar is just a quantity that has size but no direction, as opposed to a vector. I understand you were attracted by its similarity to "scaler", i.e. something that scales. I'd just call "something we multiply by" a "multiplier". You could also call it a "coefficient", which basically means a constant multiplier as opposed to a variable one.
I mean, if we were back to something as simple as sopfr, a scalar on p would not make a difference in ranking (changing the scalar could never cause any two ratios' ranks to switch) but couldn't there be some situations where it could work?
Maybe. But I can't think of one.
If we used ap = p + w where w = 1, then we'd have consolidated copfr, copf, and copfar into sopfr, sopf, and sopfar, respectively (so I'm not even using a base or a power in my ap here, but I'm just proving the point, I think, that if we use w we no longer need to speak of cop...):
Your "ap = p + w" would read better to me as
ap = p → p + w
similar to assigning the abbreviated form of an anonymous function in javascript, or
ap(p) = p + w.

Post by Dave Keenan »

cmloegcmluin wrote:
Wed Jul 01, 2020 6:21 am
Perhaps it would be more appropriate to – in the end at least – return the ranges for each parameter within which the minimum SoS is found, rather than a collection of arbitrary points within those ranges.
That's harder than it sounds. The region is unlikely to be a simple box (= hyperrectangle), but rather a more complex polytope (hyperpolyhedron). You'd have to at least find the coordinates of all its vertices to have a hope of characterising it. Even finding a maximum box within the polytope would be very difficult. Not worth the trouble. Just finding a nicely-rounded (or simple rational as you've been doing) set of numbers that are inside, would be good enough.
[Negative w] I don't have a psychological explanation for, so much as I shrug it off as basically a recognition of the fact that the best fit line will involve not only getting the logarithmic "slope" correct but also getting the intercept correct.
Yes! That's how I read it — ever since I found log3(p)-0.9 as a good fit to the prime weights the Solver found when I let it vary them all independently. One can also say that psychologically, primes 2 and 3 ought to have near-zero cost. 5 is kind of the "first significant prime" in terms of JI composition. But I may be indulging in a just-so story.
Back before we consolidated sopafr and sopaf together as sopafry, I did experiment with different k and a on sopaf than on sopafr. I may have mentioned this earlier. What I may not have mentioned is that I moved away from this, even removing the capacity for this from my code, replacing it with the following comment: "That they would be different doesn't feel psychologically motivated". Ever since you dropped that concept into this topic I have been trying to keep it close to top of mind.
Well yes. I didn't think there was any justification for treating the numerator (or "big" side) different from the denominator (or "small" side), except in regard to how unbalanced they were. But since the revelation of k=0, c≠0, I plan to try treating them a whole lot different. Starting with letting the Solver find the ideal weight for each prime in the numerator, separately from the ideal weight for each prime in the denominator. I'll initially decide which is numerator by using good-old-fashioned sopfr (that we used to call SoPF) as "bigness".

Then I'll see what simple function of p is a good fit to those optimal prime weights for the numerator, and separately, what simple function of p is a good fit to those optimal prime weights for the denominator.

I'll also try a different repeat-compressing-exponent for the numerator versus the denominator. It makes more sense to use uppercase for the one on the "big" side — so Y as in r^Y for the numerator and y as in r^y for the denominator.
I don't mean to discourage experimentation. I think we're finding again and again as we try to pin this elusive metric down that what works is initially surprising but then comes into focus.
I agree. But again, I worry a little about just-so stories.

Post by Dave Keenan »

cmloegcmluin wrote:
Wed Jul 01, 2020 11:42 am
I know we've been saying similar things to this, so I apologize if I'm being redundant or being wishy-washy or vacillating too much.
I'm not sure I understood what you wrote prior to that, but I'm not sure it matters. Why don't we just try our different things and report back on whether it gives a lower SoS, or fewer parameters, or simpler functions, than the best so far.

Post by cmloegcmluin »

Dave Keenan wrote:
Wed Jul 01, 2020 11:43 am
You could also call it a "coefficient"
I'll use coefficient, thanks. And I'll leave coefficients off the table with respect to adjustments of the primes and the monzo terms, then.
Your "ap = p + w" would read better to me as
ap = p → p + w
similar to assigning the abbreviated form of an anonymous function in javascript
Now you're speaking my language! Okay, I agree that's an improvement, and I'll use that here on out.
Dave Keenan wrote:
Wed Jul 01, 2020 1:27 pm
That's harder than it sounds. The region is unlikely to be a simple box (= hyperrectangle), but rather a more complex polytope (hyperpolyhedron). You'd have to at least find the coordinates of all its vertices to have a hope of characterising it. Even finding a maximum box within the polytope would be very difficult. Not worth the trouble. Just finding a nicely-rounded (or simple rational as you've been doing) set of numbers that are inside, would be good enough.
You're absolutely right. Okay, I'll stick to a nice-ish coordinate somewhere inside the hyperpolyhedron, then.
since the revelation of k=0, c≠0, I plan to try treating them a whole lot different. Starting with letting the Solver find the ideal weight for each prime in the numerator, separately from the ideal weight for each prime in the denominator. I'll initially decide which is numerator by using good-old-fashioned sopfr (that we used to call SoPF) as "bigness".
I guess I'm still a bit confused about this bit. If there's a "revelation of k=0, c≠0" is it that we actually throw away the entire denominator? Or maybe in order to avoid potential confusion with the numerator and denominator of the input ratio we should call these the "greaterator" and "lesserator", in which case I mean we throw away the lesserator?

Maybe you're a couple steps ahead of me, but I think where I am at is: I have proven to myself that in the end we can recombine sopfr, sopf, copfr, and copf all up into one big soapfar, so until then, I plan to use each of them individually with their own versions of the weights/adjustments/parameters we're discussing. And yes, I do hope as many of them come out about the same so that I can combine as many back up together.
Dave Keenan wrote:
Wed Jul 01, 2020 2:03 pm
I'm not sure I understood what you wrote prior to that, but I'm not sure it matters. Why don't we just try our different things and report back on whether it gives a lower SoS, or fewer parameters, or simpler functions, than the best so far.
I agree. The nature of where we're at now is calling for it. I basically spent my evening just refactoring and cleaning my code up so I can manipulate these possibilities more nimbly. I look forward to reporting back soon!

Post by Dave Keenan »

OK. I have a new candidate metric, with only two parameters. Call the side of the 2,3-reduced ratio having the biggest sopfr the numerator n. The other side is the denominator d.

metric(n, d) = soolpfcr(n, w, y) + mcopfr(d)

"soolpfcr" stands for "sum of offset log of prime-factors with compressed repeats".
The log is base 2. The offset is w < 0 and the compression consists in raising the repeat-counts to the power y < 1.

"mcopfr" stands for "modified count of prime factors with repeats".
The modification to the count consists of counting each factor of 5 as only a half count. All higher prime factors count as 1 as usual.

Take the monzo for the numerator to be
n = [, n₅ n₇ n₁₁ n₁₃ n₁₇ ...〉
and the monzo for the denominator to be
d = [, d₅ d₇ d₁₁ d₁₃ d₁₇ ...〉

soolpfcr(n, w, y) = sum over primes p from 5 to max_prime of ( (log₂(p) + w) × nₚʸ )

mcopfr(d) = sum over primes p from 5 to max_prime of ( if p = 5 then dₚ/2 else dₚ )

I get optimal values of w = -1.453, y = 0.863 giving SoS = 0.00651.

That's not as low as some earlier candidates, but pretty good for only 2 parameters.
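A sketch of this candidate metric in code (my own naming, not the spreadsheet's; the prime list is truncated, and monzos are given as lists of exponents starting at prime 5):

```python
import math

PRIMES = [5, 7, 11, 13, 17, 19, 23, 29, 31]

def sopfr(monzo):
    # plain sopfr, used only to decide which side is the numerator
    return sum(r * p for r, p in zip(monzo, PRIMES))

def soolpfcr(monzo, w=-1.453, y=0.863):
    # sum of offset log of prime factors with compressed repeats:
    # each nonzero repeat count r contributes (log2(p) + w) × r^y
    return sum((math.log2(p) + w) * r ** y
               for r, p in zip(monzo, PRIMES) if r > 0)

def mcopfr(monzo):
    # modified copfr: each factor of 5 counts a half; higher primes count 1
    return sum(r / 2 if p == 5 else r for r, p in zip(monzo, PRIMES))

def metric(side_a, side_b, w=-1.453, y=0.863):
    # the numerator n is the side with the larger plain sopfr
    n, d = (side_a, side_b) if sopfr(side_a) >= sopfr(side_b) else (side_b, side_a)
    return soolpfcr(n, w, y) + mcopfr(d)
```

For example, for the 2,3-reduced ratio 7/5, the 7 side has the larger sopfr, so the metric is (log₂7 − 1.453) + ½ ≈ 1.854, whichever order the sides are passed in.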
