developing a notational comma popularity metric


Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Phew, wonders what a good night's sleep will do for ya!

I was definitely a bit burnt out at the end of the day yesterday. I think what exhausted me was less the problem itself than disappointment at my own intellectual laziness. I need to stop stabbing around and just keep cleaning up my code and systematically extending it, in particular with the component I described earlier to automate and optimize the kinds of decisions I make when spelunking for fits, so I can trust what I find.

Anything worth doing is worth doing right, right? And so I'm re-dedicated to solving this.
Dave Keenan wrote: Fri Jul 03, 2020 3:40 pm So it's really r → r^y + (r?t:0)
Yes.

Before I had to actually implement it, in this post, I thought that t was the mechanism to collapse coapfar into soapfar. Upon implementing it, though, and coming across that complexity, I'm considering eliminating it.

Okay allow me to talk through this territory, perhaps mostly for my own benefit...

We've established that one can use w to consolidate a counting function into a summing function when all of its adjustments and/or modifications are the same.

Similarly, one can use t to consolidate a without-repetitions function into a with-repetitions function when all of its adjustments and/or modifications are the same. Except that in t's case, it has to be a ternarified-t.

However I think it's worth keeping w and t on the principle that we want to continue exploring combinations of counting and summing functions with different adjustments and/or modifications, as well as combinations of without-repetitions and with-repetitions functions with different adjustments and/or modifications.

But I would say, now, that combining a summing and a counting function while also including w should be suspect. Probably there is some way to handle the prime-counting aspect of the data with just one of the two: w, or c.

And I would go on to ask, with respect to t, due to its ternarification complication, why in tarnation would we use it? I think its original use as a means to consolidate a counting function into a summing function should be abandoned, and its use should be restricted to the counting functions, where the complexity of checking for presence of a term in the monzo is already a thing.
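To pin down what I mean by these adjustments, here is a rough Python sketch, with made-up helper names rather than anything from my actual code:

from math import log

def adjusted_repeats(r, y=1.0, t=0.0):
    # r -> r^y + (r?t:0): the repeat count raised to the power y,
    # plus t only when the prime is present at all (r != 0)
    return r ** y + (t if r != 0 else 0.0)

def soapfar(factors, a=2.0, w=0.0, y=1.0, t=0.0):
    # factors: dict of prime (> 3) -> repeat count
    # with y = 1 and t = 0 this is sum((log_a(p) + w) * r), i.e. the
    # log-weighted sum plus w * copfr: that's how w folds a counting
    # function into the summing function.
    # a nonzero t adds t * (log_a(p) + w) once per distinct prime present,
    # i.e. a without-repetitions ("coapf"-flavoured) contribution.
    return sum((log(p, a) + w) * adjusted_repeats(r, y, t)
               for p, r in factors.items())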
Dave Keenan wrote: Fri Jul 03, 2020 3:40 pm Some good news. I believe that when you use soapfar only, with the same soapfar for num and den, then you can always make a = 2 by dividing w by log2(a). That eliminates one parameter. You might confirm that this gives the same SoS (maybe tomorrow). :)
We may have different definitions of parameter count.

I agree that when we use both a and w, we can shift weight between them to find one (and possibly both) of them close to something that feels psychologically motivated. But I hadn't noticed it before. Good observation.

But to me, even if we lock one of these (k, a, w, y, etc.) in at some number, 2, log3, whatever... it's still a parameter.

I think our goal is to find the right balance of metric complexity and accuracy. With SoPF>3 we found (well, some ancient Sagittal demideities did anyway) an excellent local maximum. We're now looking for one that still has a good balance, but with higher accuracy.

We have an objective metric (metametric?) for accuracy: sum-of-squares. Our target was thrown out at 0.0055. This target may have been polluted by some bad code. I suggest we set the target at 0.005687762, which is half of SoPF>3's SoS, or in other words, twice as good. That's still pretty close to 0.0055, admittedly, but I think we can do it, and I would like to say at the end of all this that we at least doubled the accuracy.

And we seem to be gathering our efforts around "parameter count" as a measure of complexity. I don't think we'll ever have something as objective as SoS on this end. But I doubt we'll have significant disagreement on what constitutes a chunk of complexity. The whole >3 part is a given, so I think we can set that aside.

In which case the original SoPF>3 I think has one chunk: summing primes. Were we to make it a SoPF>3 + CoPF>3, it would have been two chunks: summing primes, and counting primes. Were we to make it a SoPF>3 + 0.5 * CoPF>3, it would have been three chunks: summing primes, counting primes, and weighting primes by a coefficient. Is this checking out for you? I'm basically measuring complexity in the number of clauses required to describe the metric. In which case that 0.004250806 SoS I found would be seven chunks (see the sketch after this list):
  1. start with sopfr
  2. but only for the numerator
  3. and primes to the log base 1.994
  4. then subtract 2.08 from each prime
  5. and raise the repetitions to the 0.455 power
  6. also include a copfr
  7. but weight it by the coefficient 0.577
(I note that were we to use ternarified-t in a formula, it would also take up 2 chunks of complexity: one for its existence/value, and the other for the rule that it only applies to non-zero terms.)
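To make those chunks concrete, here is roughly how that seven-chunk metric computes the antivotes for a single ratio (a Python sketch with made-up names, not my actual code; the factor dicts hold only primes above 3):

from math import log

def seven_chunk_antivotes(num_factors, den_factors):
    # chunks 1-5: a sopfr on the numerator only, with primes taken to
    # the log base 1.994, 2.08 subtracted from each, and repeat counts
    # raised to the 0.455 power
    soapfar_n = sum((log(p, 1.994) - 2.08) * r ** 0.455
                    for p, r in num_factors.items())
    # chunks 6-7: a plain copfr over both sides, weighted by 0.577
    copfr_nd = sum(num_factors.values()) + sum(den_factors.values())
    return soapfar_n + 0.577 * copfr_nd

# e.g. 35/11: numerator {5: 1, 7: 1}, denominator {11: 1}
print(seven_chunk_antivotes({5: 1, 7: 1}, {11: 1}))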

Do you agree with this conception of complexity, and if so, do you have a recommended target? I feel like it has to be at least 3, because we are trying to get this metric to respect
  • the difference between 5/1 and 7/1 (the fundamental bit, which SoPF>3 is doing just fine)
  • the difference between 7/5 and 35/1 (one of our new goals)
  • the difference between 25/7 and 17/1 (one of our new goals)
But realistically I don't see us halving sum of squares from SoPF>3 without at least 4 chunks of complexity.

This all sounds rather reasonable to me, but it's only once in a blue moon that I dismiss an idea of yours and you haven't had a really clear reason behind it. So what is it that you're thinking when you say that "we can set the log base, a = 2, so it's no longer a parameter"?
Dave Keenan wrote: Fri Jul 03, 2020 5:24 pm
I think I'm about ready to turn in and suggest that the one I found earlier, with only 4 parameters, is the way to go:

SoS 0.004250806
k = 0
a = 1.994
y = 0.455
w = -2.08
c = 0.577 (the weight on copfr)

It's "4" parameters, but involves 2 top-level submetrics: soapfar and copfr.
No matter how I try, I cannot reproduce anything like this SoS for this metric. At first I thought I was getting close to the same SoS, until I realised there was only one zero after the decimal point. I get a SoS a little more than 10 times greater!

I believe you only got that result because of the aforementioned bug.

I realise now, that this metric can't work, because it gives the same rank to 11/5 and 11/7, and it gives the same rank to 13/5, 13/7, 13/11, and the same to 13/25 and 13/35, and to 17/5, 17/7, 17/11, 17/13, etc.
I re-ran my code with the fractional ranks patch, but I still get the same 0.004250806 result.

It does give different results for 11/5 and 11/7, because the copfr acts on both the numerator and denominator.

Ah, I expect I wasn't completely clear. Mine is a simple copfr. It does not have k, a, y, or w applied.

I'm more okay with it having different a, y, and w than I am with it having a different k. The different k's feel unmotivated. Perhaps if I could find a different c for which both k's were the same and the SoS was still low...
Dave Keenan wrote: Fri Jul 03, 2020 10:28 pm I also found this less-fragile minimum with the same 4 parameter metric and a=2.

SoS 0.005589449
k = 0.213895488
y = 0.642099097
w = -2.048657352
c = 0.551650547
Yeah, so I can't recreate that either. I suspect it's something you're doing on the copfr side that I'm not doing. But I tried flipping a bunch of combinations of switches and can't find it.
All your base are belong to dos.
Somebody set up us the log!

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Another thing that occurred to me is that k might work better as a power or base instead of a coefficient. So I'll try that soon too. (Along with power or base weights on each soapfar/copfr/gpf submetric instead of coefficients, as I suggested before but haven't tried yet).

It's not that I'm not interested in your suggestion to weight each prime separately. But my code is definitely not set up for that sort of investigation. And I do feel like requiring our metric to carry around its own val for the first 20 or so primes, essentially, may not be in the spirit of the game. At that point, why not just carry around the top 80 ratio vote counts?

Re: developing a notational comma popularity metric

Post by Dave Keenan »

cmloegcmluin wrote: Sat Jul 04, 2020 3:33 am Phew, wonders what a good night's sleep will do for ya!
Glad to hear it. You seem to have been doing more work than me on this, despite holding down a job, which I am not.
Anything worth doing is worth doing right, right? And so I'm re-dedicated to solving this.
I'm not sure if anything like this is ever "solved". More likely we'll eventually just give up on trying to make further improvements and settle on the best so far. Reductio ad exhaustion. And as you note below, "best so far" is even hard to define and agree upon.
Before I had to actually implement it, in this post, I thought that t was the mechanism to collapse coapfar into soapfar. Upon implementing it, though, and coming across that complexity, I'm considering eliminating it.
Right. So you realise now that r → r?t:0 doesn't give you coapfar. It seems to me to only give you t × coapf, since it ignores repeats. But I note that ar = r → r?t:0 is equivalent to ar = r → t × r^0 if we define 0^0 = 0 (as is not usually done).
Okay allow me to talk through this territory, perhaps mostly for my own benefit...

We've established that one can use w to consolidate a counting function into a summing function when all of its adjustments and/or modifications are the same.

Similarly, one can use t to consolidate a without-repetitions function into a with-repetitions function when all of its adjustments and/or modifications are the same. Except that in t's case, it has to be a ternarified-t.

However I think it's worth keeping w and t on the principle that we want to continue exploring combinations of counting and summing functions with different adjustments and/or modifications, as well as combinations of without-repetitions and with-repetitions functions with different adjustments and/or modifications.
I think "t" is highly suspect, for the same reason that copf is suspect, because e.g. it treats 625 the same as 5.
But I would say, now, that combining a summing and a counting function while also including w should be suspect. Probably there is some way to handle the prime-counting aspect of the data with just one of the two: w, or c.
Agreed.
And I would go on to ask, with respect to t, due to its ternarification complication, why in tarnation would we use it? I think its original use as a means to consolidate a counting function into a summing function should be abandoned, and its use should be restricted to the counting functions, where the complexity of checking for presence of a term in the monzo is already a thing.
I totally agree.
We may have different definitions of parameter count.

I agree that when we use both a and w, we can shift weight between them to find one (and possibly both) of them close to something that feels psychologically motivated. But I hadn't noticed it before. Good observation.
I'm claiming that we can choose any base we like, and by multiplying w and c by log_new_base(old_base) we are only multiplying the whole metric by a constant, which doesn't change how it sorts.

But to me, even if we lock one of these (k, a, w, y, etc.) in at some number, 2, log3, whatever... it's still a parameter.
Perhaps a would seem less like still being a parameter if I said make it e, instead of 2. Then we can write ln(x) instead of log_e(x). But the real point is that, if we can change it to any value we like, and by changing other parameters recover the same ranking and hence the same SoS, then it's not a parameter. Or at least it's not something that has to be tuned to match the data.
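To spell the algebra out: for any two bases a and b, log_b(p) = log_b(a) × log_a(p), so

Σ (log_b(p) + w × log_b(a)) × r^y + c × log_b(a) × copfr = log_b(a) × [ Σ (log_a(p) + w) × r^y + c × copfr ]

That is, switching the base from a to b while rescaling w and c by the constant log_b(a) just multiplies the whole metric by that same constant, which can't change the ranking or the SoS.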
I think our goal is to find the right balance of metric complexity and accuracy.
Agreed.
With SoPF>3 we found (well, some ancient Sagittal demideities did anyway) an excellent local maximum. We're now looking for one that still has a good balance, but with higher accuracy.
Agreed.
We have an objective metric (metametric?) for accuracy: sum-of-squares. Our target was thrown out at 0.0055. This target may have been polluted by some bad code. I suggest we set the target at 0.005687762, which is half of SoPF>3's SoS, or in other words, twice as good. That's still pretty close to 0.0055, admittedly, but I think we can do it, and I would like to say at the end of all this that we at least doubled the accuracy.
I'm not willing to give a fixed threshold like that. As you say, I might be willing to trade lower complexity for higher error, say 0.0066.
And we seem to be gathering our efforts around "parameter count" as a measure of complexity. I don't think we'll ever have something as objective as SoS on this end. But I doubt we'll have significant disagreement on what constitutes a chunk of complexity. The whole >3 part is a given, so I think we can set that aside.

In which case the original SoPF>3 I think has one chunk: summing primes. Were we to make it a SoPF>3 + CoPF>3, it would have been two chunks: summing primes, and counting primes. Were we to make it a SoPF>3 + 0.5 * CoPF>3, it would have been three chunks: summing primes, counting primes, and weighting primes by a coefficient. Is this checking out for you? I'm basically measuring complexity in the number of clauses required to describe the metric. In which case that 0.004250806 SoS I found would be seven chunks:
  1. start with sopfr
  2. but only for the numerator
  3. and primes to the log base 1.994
  4. then subtract 2.08 from each prime
  5. and raise the repetitions to the 0.455 power
  6. also include a copfr
  7. but weight it by the coefficient 0.577
(I note that were we to use ternarified-t in a formula, it would also take up 2 chunks of complexity: one for its existence/value, and the other for the rule that it only applies to non-zero terms.)

Do you agree with this conception of complexity,
Yes. This is good thinking. I was looking too narrowly at parameter count. Thanks.
and if so, do you have a recommended target? I feel like it has to be at least 3, because we are trying to get this metric to respect
  • the difference between 5/1 and 7/1 (the fundamental bit, which SoPF>3 is doing just fine)
  • the difference between 7/5 and 35/1 (one of our new goals)
  • the difference between 25/7 and 17/1 (one of our new goals)
But realistically I don't see us halving sum of squares from SoPF>3 without at least 4 chunks of complexity.
Agreed.
This all sounds rather reasonable to me, but it's only once in a blue moon that I dismiss an idea of yours and you haven't had a really clear reason behind it. So what is it that you're thinking when you say that "we can set the log base, a = 2, so it's no longer a parameter"?
I hope I have explained that above.
Dave Keenan wrote: Fri Jul 03, 2020 5:24 pm
SoS 0.004250806
k = 0
a = 1.994
y = 0.455
w = -2.08
c = 0.577 (the weight on copfr)

It's "4" parameters, but involves 2 top-level submetrics: soapfar and copfr.
No matter how I try, I cannot reproduce anything like this SoS for this metric. At first I thought I was getting close to the same SoS, until I realised there was only one zero after the decimal point. I get a SoS a little more than 10 times greater!

I believe you only got that result because of the aforementioned bug.

I realise now, that this metric can't work, because it gives the same rank to 11/5 and 11/7, and it gives the same rank to 13/5, 13/7, 13/11, and the same to 13/25 and 13/35, and to 17/5, 17/7, 17/11, 17/13, etc.
I re-ran my code with the fractional ranks patch, but I still get the same 0.004250806 result.
The term "fractional ranks" worries me. I never have any fractional ranks when computing the true SoS. How do fractional ranks make any sense here?

I sort on the metric. If two ratios have the same value for the metric, their ordering is arbitrary*, but they are assigned different consecutive integer ranks. *I think Excel just preserves their initial ordering. I deliberately added some random noise to the metric and saw that the SoS changed on every recalc.
It does give different results for 11/5 and 11/7,
Whaaat! :shock: :o
because the copfr acts on both the numerator and denominator
Sure, but the point is that the only thing acting on the denominator is copfr. k=0 means the soapfar only acts on the numerator. So if the only difference between ratios is the choice of single prime in the denominator, then they must have the same value for the metric.
Ah, I expect I wasn't completely clear. Mine is a simple copfr. It does not have k, a, y, or w applied.
I understood that. Same here. Remember I can reproduce your SoS exactly, when I bump k up to 0.038 instead of 0.
I'm more okay with it having different a, y, and w than I am with it having a different k.
I'm unsure what you mean by "it" here. It seems it should refer to the copfr, but I am not aware of k having anything to do with copfr. copfr only has c.
The different k's feel unmotivated. Perhaps if I could find a different c for which both k's were the same and the SoS was still low...
Totally lost here. I thought k was only used in one place. I thought we were talking about a metric of this form:

soapfar(n) + k × soapfar(d) + c × copfr(n×d)
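A quick way to see the problem (a Python sketch rather than my actual spreadsheet, with made-up helper names and your parameter values):

from math import log

def soapfar(factors, a=1.994, w=-2.08, y=0.455):
    # factors: dict of prime (> 3) -> repeat count
    return sum((log(p, a) + w) * r ** y for p, r in factors.items())

def copfr(factors):
    return sum(factors.values())

def metric(num_factors, den_factors, k=0.0, c=0.577):
    # soapfar(n) + k * soapfar(d) + c * copfr(n*d)
    return (soapfar(num_factors) + k * soapfar(den_factors)
            + c * (copfr(num_factors) + copfr(den_factors)))

# with k = 0 the denominator only enters through copfr, so any
# single-prime denominator contributes the same c * 1:
print(metric({11: 1}, {5: 1}))  # 11/5
print(metric({11: 1}, {7: 1}))  # 11/7 -- identical value, hence tied rank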
Dave Keenan wrote: Fri Jul 03, 2020 10:28 pm I also found this less-fragile minimum with the same 4 parameter metric and a=2.

SoS 0.005589449
k = 0.213895488
y = 0.642099097
w = -2.048657352
c = 0.551650547
Yeah, so I can't recreate that either. I suspect it's something you're doing on the copfr side that I'm not doing. But I tried flipping a bunch of combinations of switches and can't find it.
OK. Well I think if you clear up my confusion immediately above, we might figure out what's going on here.

Re: developing a notational comma popularity metric

Post by Dave Keenan »

cmloegcmluin wrote: Sat Jul 04, 2020 3:38 am It's not that I'm not interested in your suggestion to weight each prime separately. But my code is definitely not set up for that sort of investigation. And I do feel like requiring our metric to carry around its own val for the first 20 or so primes, essentially, may not be in the spirit of the game. At that point, why not just carry around the top 80 ratio vote counts?
There would be two vals. One for the numerator and one for the denominator. Such a complex but highly accurate metric would not be the final metric, only a stage on the way.

The point is to, in effect, ask the numerator and denominator separately: If you could have any prime weights (and repeat-count compression) you like, what would they be? Then we look at the shape of those vals, as functions of p, and we say something like: Well you can't have anything that complex, but it looks like you, numerator, might be happy with something of the form µ_p = log2(p) + w. And it looks like you, denominator, could get by with something of the form δ_p = k × (p - 2). And then we optimise those w and k (and y and v) parameters.

In my spreadsheet, right from the start I had those two vals. Of course their terms were calculated from the parameters a and w. But it was easy as cake, piece of pie, to tell the Excel Solver to fiddle the terms of those vals directly, instead of fiddling a and w, in order to minimise my proxy SoS. So I've already done the above, and will post the graphs in due course. But I am interested to know what they look like when minimising the true SoS, something my setup can't do.
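In code terms, the idea is something like this (a Python sketch only; in my spreadsheet the val terms are cells that Solver adjusts directly):

from math import log

PRIMES = [5, 7, 11, 13, 17, 19, 23]  # primes above 3, as far as needed

def val_metric(num_monzo, den_monzo, num_val, den_val):
    # dot each side's monzo (absolute exponents, primes above 3 only)
    # with that side's own val of per-prime weights
    return (sum(v * abs(r) for v, r in zip(num_val, num_monzo))
            + sum(v * abs(r) for v, r in zip(den_val, den_monzo)))

# in the free fit, every entry of num_val and den_val is its own parameter;
# the constrained forms suggested above would then be, for example:
w, k = -1.4, 0.6  # illustrative values only
num_val = [log(p, 2) + w for p in PRIMES]  # mu_p = log2(p) + w
den_val = [k * (p - 2) for p in PRIMES]    # delta_p = k * (p - 2)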

Re: developing a notational comma popularity metric

Post by Dave Keenan »

Here are the popularity vals (and their logarithmic trendlines) that I get for numerator and denominator when numerator is defined simply as the largest side of the ratio, and I fit to the first 106 ratios excluding ranks 1, 91 and 101, and I allow separate powers of repeat-count for the numerator (n_p^y) and denominator (d_p^v). These fit as y = 0.86871884, v = 0.712508978.

[Attachment: popularityVals.png, the popularity vals and their logarithmic trendlines for numerator and denominator]

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Dave Keenan wrote: Sat Jul 04, 2020 8:59 am Right. So you realise now that r → r?t:0 doesn't give you coapfar. It seems to me to only give you t × coapf, since it ignores repeats. But I note that ar = r → r?t:0 is equivalent to ar = r → t × r^0 if we define 0^0 = 0 (as is not usually done).
No, what I've got isn't:

r → r?t:0

it's:

r → r^y + (r?t:0)

like you originally said.

Otherwise I agree with the points you make.

------
I'm not willing to give a fixed threshold like that. As you say, I might be willing to trade lower complexity for higher error, say 0.0066.
Perhaps we'll just know it when we see it. I keep hoping that one of these days my code will spit out something great.

------
The term "fractional ranks" worries me. I never have any fractional ranks when computing the true SoS. How do fractional ranks make any sense here?
I learned about them when reading the Wikipedia article on Spearman's rank correlation coefficient, back when you brought it to the conversation: https://en.wikipedia.org/wiki/Spearman% ... alculation
There's some other detail that was helpful to me here:
https://en.wikipedia.org/wiki/Ranking#F ... 2_ranking)

It makes sense to me. Compare what happened in my buggy (overly-benefit-of-the-doubt-giving) code to what it does now. In the buggy code, a big string of ratios that all tied for the same unpopularity/antivotes would get ranked in a sequence e.g. 7, 8, 9, 10. With fractional ranks, they all get assigned to the average: 8.5, 8.5, 8.5, 8.5. It becomes clear when you see what happens with the sum-of-squares. Let's say otherwise the metric is working out, so we'd be comparing this stretch against 7, 8, 9, 10 in the real data. So my buggy code would contribute 0 to the SoS, since all four ranks are perfect matches. Whereas the appropriate fractional-ranked code is going to give 1.5² + 0.5² + 0.5² + 1.5². Which feels right because the metric should have differentiated all of those ratios but it failed to do so.
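Here's roughly what the fractional-rank assignment does, and the contribution from that four-way tie (a sketch; my actual code is structured differently, and the SoS here is unnormalised, just to show the tie's effect):

def fractional_ranks(values):
    # ties share the average of the integer ranks they would occupy,
    # e.g. a four-way tie at ranks 7, 8, 9, 10 gets 8.5 for all four
    sorted_vals = sorted(values)
    return [(sum(i + 1 for i, v in enumerate(sorted_vals) if v == value)
             / sorted_vals.count(value))
            for value in values]

# a metric that ties its last four ratios, versus real ranks 7, 8, 9, 10:
candidate = fractional_ranks([1, 2, 3, 4, 5, 6, 9, 9, 9, 9])
real = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(sum((c - r) ** 2 for c, r in zip(candidate, real)))
# -> 5.0, i.e. 1.5^2 + 0.5^2 + 0.5^2 + 1.5^2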

I wouldn't expect you to "have" any fractional ranks unless you specifically designed your system to compute them.

Sorry if you already understood all that and I've insulted your intelligence. You could have cut me off right away if we were discussing in person :) Again, I'd bet what you were concerned about was something I've completely failed to anticipate and/or something more important. Or maybe just chalk it up to how things work differently in Excel than in my code.

But actually, just in explaining this, it has occurred to me that perhaps we should be using fractional ranks in the real data too. The first place with a tie in the Scala stats is between 17/11 and 77/5. I've got those labeled as rank 35 and 36, respectively. I'm suggesting that they should actually both be labeled as rank 35.5.

Since it's fairly unlikely that our candidate metric will nail every single tie in the data, if we did this, we'd probably nick all of our SoS numbers a bit. But I do think it'd be a truer measure. Let me know what you think.

------
Totally lost here. I thought k was only used in one place. I thought we were talking about a metric of this form:

soapfar(n) + k × soapfar(d) + c × copfr(n×d)
Sorry I confused you. I am talking about a metric of that form, yes. But there's a subtlety to the way my thinking about k is shaped by the way my code works, I think, which you do not share with me. When I said that my soapfar and copfr had different k's, I meant that copfr essentially had k=1. Every submetric in my code has its own k, a, w, y, t, etc.
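To show the shape of what I mean (a hypothetical sketch of the configuration, not my actual code):

# every submetric carries its own full set of parameters, and the metric
# is the sum of its weighted submetrics; so "c" is just the weight on the
# copfr submetric, while each submetric also has its own k
metric_config = [
    {"submetric": "soapfar", "weight": 1.0,
     "k": 0.0, "a": 1.994, "w": -2.08, "y": 0.455, "t": 0.0},
    # a plain copfr: no a, w, y, or t adjustments, but its own k (= 1 here)
    {"submetric": "copfr", "weight": 0.577, "k": 1.0},
]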

Well, on one hand that's good that we're talking about the same metric. But then I guess one or the other of us must have a bug in their code. I'd bet on it being me... (don't freak out yet... the next paragraph has already ID'd the bug)
Sure, but the point is that the only thing acting on the denominator is copfr. k=0 means the soapfar only acts on the numerator. So if the only difference between ratios is the choice of single prime in the denominator, then they must have the same value for the metric.
Jeez. Well, yes, of course, you're right. I think I must have left a 0.038 in for k somewhere when I thought I had it at 0. That 0.038 really makes a big difference!!! Sorry to have freaked us both out and wasted our time.

Does Dennett have a word for a bug that's in the programmer's brain? Well, I guess that's what we have PEBKAC for.

------
Dave Keenan wrote: Sat Jul 04, 2020 9:45 am There would be two vals. One for the numerator and one for the denominator. Such a complex but highly accurate metric would not be the final metric, only a stage on the way.
Dave Keenan wrote: Sat Jul 04, 2020 11:42 am Here are the popularity vals (and their logarithmic trendlines) that I get for numerator and denominator when numerator is defined simply as the largest side of the ratio, and I fit to the first 106 ratios excluding ranks 1, 91 and 101, and I allow separate powers of repeat-count for the numerator (n_p^y) and denominator (d_p^v). These fit as y = 0.86871884, v = 0.712508978.
They say a picture speaks a 1,000 words. Seeing this graph caused your idea to instantly click for me. Had I really understood your intent I would have strongly encouraged us to pursue this. Excellent stuff! Yes, I think we should focus on these powers and see what we can get.

Unfortunately I can't do a quick check on this, since while my code supports a different y per submetric, it isn't organized in a way where it can do what you've been calling big y and little y. Sorry. I had considered adding a new parameter, maybe "j", that would be the coefficient on the numinator, which would normally be 1, of course, but would be necessary if I ever wanted to look at only the diminuator (sorry I still prefer diminuator, haha). But even that wouldn't do the trick right away, since I've got all these assumptions built into the code about how each submetric type only appears once in the metric. Sorry to bore you with the implementation details. I'll think about how best to achieve that soon.

------

Well I began the day with high hopes to build an automation layer onto the infrastructure I laid down for jackhammering away at the possibility space... but I instead spent the whole day cleaning up and refactoring. I imagine from your end this could be a disappointment. But believe me, it was worth it.

Re: developing a notational comma popularity metric

Post by Dave Keenan »

cmloegcmluin wrote: Sat Jul 04, 2020 1:03 pm There's some other detail that was helpful to me here:
https://en.wikipedia.org/wiki/Ranking#F ... 2_ranking)
... Which feels right because the metric should have differentiated all of those ratios but it failed to do so.
...
Sorry if you already understood all that and I've insulted your intelligence. ...
But actually, just in explaining this, it has occurred to me that perhaps we should be using fractional ranks in the real data too. ...

Since it's fairly unlikely that our candidate metric will nail every single tie in the data, if we did this, we'd probably nick all of our SoS numbers a bit. But I do think it'd be a truer measure. Let me know what you think.
No. I did not already understand all that. Everything you say above makes perfect sense, including the fact that the data itself should use fractional ranks. Thanks!

And whadyaknow. I find that Excel has a built in RANK(element, array) function that doesn't even require the array to be sorted! It happens to do 1224-style ranking of ties, but that can be turned into 1 2.5 2.5 4 as follows.
=RANK(element, array)+(COUNTIF(array, element)-1)/2
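For example, a three-way tie that RANK scores 1224-style as 1, 2, 2, 2, 5: COUNTIF returns 3 for each tied element, so each becomes 2 + (3 - 1)/2 = 3, the average of the ranks 2, 3 and 4 they jointly occupy, giving 1, 3, 3, 3, 5; the untied elements get +0 and keep their ranks.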
Sorry I confused you. I am talking about a metric of [the form soapfar(n) + k × soapfar(d) + c × copfr(n×d)], yes. But there's a subtlety to the way my thinking about k is shaped by the way my code works, I think, which you do not share with me. When I said that my soapfar and copfr had different k's, I meant that copfr essentially had k=1. Every submetric in my code has its own k, a, w, y, t, etc.
Erm. I'm still confused. On the one hand you seem to be saying that you have a k_copfr(n×d) which is the same as "c" above (that we've both been using in communication on this for a long time). But on the other hand you're saying that this k_copfr(n×d) = 1, when we were talking about a case where c = 0.577.

I understand you also have a k_soapfar(n) = 1 and a k_soapfar(d) which is just our standard "k".

[Edit: Don't bother responding to this yet. Keep reading. I think I figure it out by the end of this post.]
They say a picture speaks a 1,000 words. Seeing this graph caused your idea to instantly click for me. Had I really understood your intent I would have strongly encouraged us to pursue this. Excellent stuff! Yes, I think we should focus on these powers and see what we can get.
I worry that you haven't actually understood it from the picture and maybe need to go back and re-read the words, as the picture has absolutely nothing to do with any "powers".
Unfortunately I can't do a quick check on this, since while my code supports a different y per submetric, it isn't organized in a way where it can do what you've been calling big y and little y.
I haven't called anything big Y and little y for a long time (that was a momentary aberration). I've been calling them y and v, where y is the exponent of the repeat-counts for the numinator, and v is the exponent of the repeat-counts for the deminuator.

But no matter what they are called, why isn't this an example of "a different y per submetric", with y_soapfar(n) = y and y_soapfar(d) = v?
Sorry. I had considered adding a new parameter, maybe "j", that would be the coefficient on the numinator, which would normally be 1, of course, but would be necessary if I ever wanted to look at only the diminuator (sorry I still prefer diminuator, haha). But even that wouldn't do the trick right away, since I've got all these assumptions built into the code about how each submetric type only appears once in the metric. Sorry to bore you with the implementation details. I'll think about how best to achieve that soon.
I think implementation details are exactly what I need here.

An hypothesis just came to me that would explain how we're failing to communicate here.

I'm thinking that although

soapfar(n) + k × soapfar(d) + c × copfr(n×d)

looks like 3 submetrics to me — I could even see it as 4 submetrics if I rewrite it as

soapfar(n) + k × soapfar(d) + c × copfr(n) + c × copfr(d)

 — it is only 2 submetrics to you because you see it as

[soapfar(n) + k_s × soapfar(d)] + c × [copfr(n) + k_c × copfr(d)].

Is that correct? And so your code has a single y for both soapfar(n) and soapfar(d) because you don't consider them separate submetrics.

Well it turns out that having a separate y for those is really unimportant in regard to this investigation of the ideal coefficient for each term of each monzo (numinator monzo and deminuator monzo). I can force my v to be the same as my y and the graphs of ideal coefficients (the terms of the vals) change very little. I can even force y = v = 1.

Re: developing a notational comma popularity metric

Post by Dave Keenan »

Here's what I get when I set y = v = 1. But I've also added more primes by including the data for the first 187 ratios, excluding 1/1, 211/11, 67/19, 433/125, 83/61. And I used fractional ranks for the data.

[Attachment: popularityValsY1V1.png, the popularity vals and trendlines with y = v = 1]

Re: developing a notational comma popularity metric

Post by Dave Keenan »

When you've switched your data to fractional ranks too, can we have a sanity check with the following soapfar-only parameter settings:

a = 2
w = -1.415
k = 0.632
y = 0.858 (v = y)
c = 0
t = 0
x = 0
s = 0

I hope you get
SoS = 0.008325554
and that this is the minimum SoS when only w, k, and y (=v) are allowed to vary.
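For reference, the computation I have in mind is just this (a Python sketch, not my spreadsheet; the factor dicts hold only primes above 3):

from math import log

def soapfar_only(num_factors, den_factors,
                 a=2.0, w=-1.415, k=0.632, y=0.858):
    def side(factors):
        return sum((log(p, a) + w) * r ** y for p, r in factors.items())
    # soapfar(n) + k * soapfar(d), with v forced equal to y
    return side(num_factors) + k * side(den_factors)

# e.g. 35/11: numerator {5: 1, 7: 1}, denominator {11: 1}
print(soapfar_only({5: 1, 7: 1}, {11: 1}))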

Re: developing a notational comma popularity metric

Post by cmloegcmluin »

Dave Keenan wrote: Sat Jul 04, 2020 4:37 pm No. I did not already understand all that. Everything you say above makes perfect sense, including the fact that the data itself should use fractional ranks. Thanks!
Glad I caught that, then! I'll re-run some of these numbers soon.
And whadyaknow. I find that Excel has a built in RANK(element, array) function that doesn't even require the array to be sorted! It happens to do 1224-style ranking of ties, but that can be turned into 1 2.5 2.5 4 as follows.
=RANK(element, array)+(COUNTIF(array, element)-1)/2
Clever trick. I salute you!
 — it is only 2 submetrics to you because you see it as

[soapfar(n) + k_s × soapfar(d)] + c × [copfr(n) + k_c × copfr(d)].

Is that correct? And so your code has a single y for both soapfar(n) and soapfar(d) because you don't consider them separate submetrics.
Exactly correct. I promise I'm not intentionally speaking in riddles! I really am striving for unambiguity here. It's just friggin' hard. (I think your AYBABTU reference was particularly astute, since the fidelity of translation between my language and your language about this material is still improving!)
Well it turns out that having a separate y for those is really unimportant in regard to this investigation of the ideal coefficient for each term of each monzo (numinator monzo and deminuator monzo). I can force my v to be the same as my y and the graphs of ideal coefficients (the terms of the vals) change very little. I can even force y = v = 1.
They say a picture speaks a 1,000 words. Seeing this graph caused your idea to instantly click for me. Had I really understood your intent I would have strongly encouraged us to pursue this. Excellent stuff! Yes, I think we should focus on these powers and see what we can get.
I worry that you haven't actually understood it from the picture and maybe need to go back and re-read the words, as the picture has absolutely nothing to do with any "powers".
Okay, I've reread it.

The powers I was referring to were "These fit as y = 0.86871884, v = 0.712508978." Those are powers, right? Maybe my mistake had been assuming that those were the grand conclusion of that subproject. Was the grand conclusion actually the coefficients per term of the monzo?

If I'm interpreting the charts correctly, we could say that there's something a bit "off" about 17's in the denominator; for some reason we need them to contribute fewer antivotes than they otherwise would in order for our metric to correlate best with the real data. And for 41's in the numerator, on the other hand, we need them to contribute more antivotes than they otherwise would.

I was assuming these per-term coefficients would not exist in the final metric, and that you were using them as a stepping-stone to find a best-fit line which would then give us our y (and v) powers for the final metric.