I was definitely a bit burnt out at the end of the day yesterday. I think what exhausted me was less the problem itself than disappointment at my own intellectual laziness. I need to stop stabbing around and just keep cleaning up my code and systematically extending it, in particular with the component I described earlier to automate and optimize the types of decisions I make when spelunking for fits, so I can trust what I find.

Anything worth doing is worth doing right, right? And so I'm re-dedicated to solving this.

Yes.

Before I had to actually implement it, in this post, I thought that t was the mechanism to collapse coapfar into soapfar. Upon implementing it, though, and coming across that complexity, I'm considering eliminating it.

Okay, allow me to talk through this territory, perhaps mostly for my own benefit...

We've established that one can use w to consolidate a counting function into a summing function when all of its adjustments and/or modifications are the same.

Similarly, one can use t to consolidate a without-repetitions function into a with-repetitions function when all of its adjustments and/or modifications are the same. Except that in t's case, it has to be a ternarified-t.

However I think it's worth keeping w and t on the principle that we want to continue exploring combinations of counting and summing functions with different adjustments and/or modifications, as well as combinations of without-repetitions and with-repetitions functions with different adjustments and/or modifications.

But I would say, now, that combining a summing and a counting function while also including w should be suspect. Probably there is some way to handle the prime-counting aspect of the data with just one of the two: w, or c.

And I would go on to ask, with respect to t, due to its ternarification complication, why in tarnation would we use it? I think its original use as a means to consolidate a counting function into a summing function should be abandoned, and its use should be restricted to the counting functions, where the complexity of checking for presence of a term in the monzo is already a thing.

We may have different definitions of parameter count.

Dave Keenan wrote: ↑Fri Jul 03, 2020 3:40 pm
> Some good news. I believe that when you use soapfar only, with the same soapfar for num and den, then you can always make a = 2 by dividing w by log2(a). That eliminates one parameter. You might confirm that this gives the same SoS (maybe tomorrow).

I agree that when we use both a and w, we can shift weight between them to find one (and possibly both) of them close to something that feels psychologically motivated. But I hadn't noticed it before. Good observation.

But to me, even if we lock one of these k, a, w, y, etc. in to some number, 2, log₃, whatever... it's still a parameter.

I think our goal is to find the right balance of metric complexity and accuracy. With SoPF>3 we found (well, some ancient Sagittal demideities did anyway) an excellent local maximum. We're now looking for one that still has a good balance, but with higher accuracy.

We have an objective metric (metametric?) for accuracy: sum-of-squares. Our target was thrown out at 0.0055, though that target may have been polluted by some bad code. I suggest we set the target at 0.005687762, which is half of SoPF>3's SoS, or in other words, twice as good. That's still pretty close to 0.0055, admittedly, but I think we can do it, and I would like to be able to say at the end of all this that we at least doubled the accuracy.
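For concreteness, here is roughly the shape of that sum-of-squares calculation as I understand it (a sketch with made-up names; the real code ranks the ratios by metric value, using fractional ranks for ties, compares against the actual popularity ranks, and presumably also normalizes or weights things to land in this 0.005-ish range):

```python
def fractional_ranks(values):
    """Ascending 1-based ranks; tied values share the average of their ranks."""
    sorted_vals = sorted(values)
    ranks = []
    for v in values:
        first = sorted_vals.index(v) + 1   # first rank this value occupies
        count = sorted_vals.count(v)       # number of ties at this value
        ranks.append(first + (count - 1) / 2)
    return ranks

def sum_of_squares(metric_values, actual_ranks):
    """SoS between the ranks a metric induces and the actual popularity ranks."""
    predicted = fractional_ranks(metric_values)
    return sum((p - a) ** 2 for p, a in zip(predicted, actual_ranks))
```

The fractional-ranks part matters because a metric that ties many ratios shouldn't get credit for accidentally matching some of their ranks.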

And we seem to be gathering our efforts around "parameter count" as a measure of complexity. I don't think we'll ever have something as objective as SoS on this end, but I doubt we'll have significant disagreement on what constitutes a chunk of complexity. The whole >3 part is a given, so I think we can set that aside. In which case the original SoPF>3 has one chunk: summing primes. Were we to make it SoPF>3 + CoPF>3, it would have been two chunks: summing primes, and counting primes. Were we to make it SoPF>3 + 0.5 * CoPF>3, it would have been three chunks: summing primes, counting primes, and weighting the count by a coefficient. Is this checking out for you? I'm basically measuring complexity as the number of clauses required to describe the metric. In which case that 0.004250806 SoS I found would be

*seven* chunks:

- start with sopfr
- but only for the numerator
- and primes to the log base 1.994
- then subtract 2.08 from each prime
- and raise the repetitions to the 0.455 power
- also include a copfr
- but weight it by the coefficient 0.577
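To make the chunk counting concrete, here's a rough sketch of those first few metrics (function names are mine, not from the codebase; I'm taking "with repetition" for both the sum and the count here):

```python
def prime_factors_gt3(n):
    """(prime, exponent) pairs of n, ignoring the primes 2 and 3."""
    factors = []
    for p in (2, 3):
        while n % p == 0:
            n //= p
    p = 5
    while p * p <= n:
        exponent = 0
        while n % p == 0:
            n //= p
            exponent += 1
        if exponent:
            factors.append((p, exponent))
        p += 2
    if n > 1:
        factors.append((n, 1))
    return factors

def sopfr_gt3(n):
    """SoPF>3: sum of prime factors greater than 3, with repetition."""
    return sum(p * e for p, e in prime_factors_gt3(n))

def copfr_gt3(n):
    """CoPF>3: count of prime factors greater than 3, with repetition."""
    return sum(e for _, e in prime_factors_gt3(n))

# one chunk:    sopfr_gt3(n)
# two chunks:   sopfr_gt3(n) + copfr_gt3(n)
# three chunks: sopfr_gt3(n) + 0.5 * copfr_gt3(n)
```

For 175 (= 5 * 5 * 7), say, the one-chunk metric gives 17 and the three-chunk metric gives 18.5.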

Do you agree with this conception of complexity, and if so, do you have a recommended target? I feel like it has to be at least 3, because we are trying to get this metric to respect

- the difference between 5/1 and 7/1 (the fundamental bit, which SoPF>3 is doing just fine)
- the difference between 7/5 and 35/1 (one of our new goals)
- the difference between 25/7 and 17/1 (one of our new goals)
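In fact, plain SoPF>3 can't see either of those last two differences, since it effectively works on the prime content of num times den. A quick check (the helper name is mine):

```python
def sopfr_gt3_of_ratio(num, den):
    """SoPF>3 of a ratio: sum the prime factors > 3 of num * den, with repetition."""
    n, total = num * den, 0
    for p in (2, 3):
        while n % p == 0:
            n //= p
    p = 5
    while n > 1:
        while n % p == 0:
            total += p
            n //= p
        p += 2
    return total

# 5/1 vs 7/1 is fine (5 vs 7), but the two new goals collide:
# 7/5 and 35/1 both come out to 12, and 25/7 and 17/1 both come out to 17.
```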

This all sounds rather reasonable to me, but it's only once in a blue moon that I dismiss an idea of yours and you haven't had a really clear reason behind it. So what is it that *you're* thinking when you say that "we can set the log base, a = 2, so it's no longer a parameter"?

I re-ran my code with the fractional ranks patch, but I still get the same 0.004250806 result.

Dave Keenan wrote: ↑Fri Jul 03, 2020 5:24 pm
> No matter how I try, I cannot reproduce anything like this SoS for this metric. At first I thought I was getting close to the same SoS, until I realised there was only one zero after the decimal point. I get a SoS a little more than 10 times greater!

I think I'm about ready to turn in and suggest that the one I found earlier, with only 4 parameters, is the way to go:

SoS 0.004250806

k = 0

a = 1.994

y = 0.455

w = -2.08

c = 0.577 (the weight on copfr)

It's "4" parameters, but involves 2 top-level submetrics: soapfar and copfr.
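For my own reference, here is roughly what I believe this metric computes. The names are mine, and I'm assuming that w is added after taking the log of each prime, that y is an exponent on the repetition counts, and that the simple copfr runs over both numerator and denominator, so treat this as a sketch rather than the definitive formula:

```python
from math import log

def factors_gt3(n):
    """(prime, exponent) pairs of n, for primes greater than 3."""
    pairs = []
    for p in (2, 3):
        while n % p == 0:
            n //= p
    p = 5
    while n > 1:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e:
            pairs.append((p, e))
        p += 2
    return pairs

def soapfar(n, a=1.994, w=-2.08, y=0.455):
    """Adjusted sopfr: sum of (log_a(prime) + w) * exponent**y over primes > 3."""
    return sum((log(p, a) + w) * e ** y for p, e in factors_gt3(n))

def copfr(n):
    """Simple count of prime factors > 3 of n, with repetition; no k, a, y, or w."""
    return sum(e for _, e in factors_gt3(n))

def metric(num, den, c=0.577):
    """k = 0, so the soapfar sees only the numerator; the copfr sees both sides."""
    return soapfar(num) + c * copfr(num * den)
```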

I believe you only got that result because of the aforementioned bug.

I realise now that this metric can't work, because it gives the same rank to 11/5 and 11/7, and it gives the same rank to 13/5, 13/7, 13/11, and the same to 13/25 and 13/35, and to 17/5, 17/7, 17/11, 17/13, etc.

It does give different results for 11/5 and 11/7, because the copfr acts on both the numerator and denominator.

Ah, I expect I wasn't completely clear. Mine is a simple copfr. It does not have k, a, y, or w applied.

I'm more okay with it having different a, y, and w than I am with it having a different k. The different k's feel unmotivated. Perhaps if I could find a different c for which both k's were the same and the SoS was still low...

Yeah, so I can't recreate that either. I suspect it's something you're doing on the copfr side that I'm not doing. But I tried flipping a bunch of combinations of switches and can't find it.

Dave Keenan wrote: ↑Fri Jul 03, 2020 10:28 pm
> I also found this less-fragile minimum with the same 4 parameter metric and a=2.

> SoS 0.005589449
> k = 0.213895488
> y = 0.642099097
> w = -2.048657352
> c = 0.551650547

Somebody set up us the log!

All your base are belong to dos.