Terminology for multiplicative equivalents of common additive concepts


Re: Terminology for multiplicative equivalents of common additive concepts

Post by cmloegcmluin »

I finally got around to writing this up on the xenharmonic wiki, since I use it in a few of my other theory pages there: https://en.xen.wiki/w/Undirected_value

Re: Terminology for multiplicative equivalents of common additive concepts

Post by cmloegcmluin »

I apologize, because this post goes beyond the title of this topic: instead of extrapolating concepts from the additive operation tier to the multiplicative operation tier, here I am extrapolating them one tier higher still, up to the power tier.



Lambdas

Lambdas are the power equivalent of deltas and qoppas.
  • A delta is a familiar value from the most basic tier, the additive tier: the difference between a number and the immediately preceding number in some list.
  • Earlier in this thread, Dave and I introduced the qoppa, its multiplicative equivalent: the quotient between a number and the immediately preceding number in some list.
  • Next, we can consider the power equivalent, the lambda: the logquotient between a number and the immediately preceding number in some list. (Here is my post where I define the logquotient: viewtopic.php?t=575. I recommend reviewing it for my preferred notation, which I'll use in a couple of places here.)
Viewing these from another angle, we can say:
  • a delta tells us what number we have to add to 𝑎ₙ to reach 𝑎ₙ₊₁,
  • a qoppa tells us what number we have to multiply 𝑎ₙ by to reach 𝑎ₙ₊₁, and
  • a lambda tells us what number we have to raise 𝑎ₙ to, to reach 𝑎ₙ₊₁.
Here's a table comparing deltas, qoppas, and lambdas, using the Fibonacci sequence as an example:

entry | value | Δ (delta) | Ϙ (qoppa) | Λ (lambda)
𝑎₀    |     0 |           |           |
𝑎₁    |     1 |         1 |           |
𝑎₂    |     1 |         0 |         1 |
𝑎₃    |     2 |         1 |         2 |
𝑎₄    |     3 |         1 |       1.5 |      1.585
𝑎₅    |     5 |         2 |     1.667 |      1.465
𝑎₆    |     8 |         3 |       1.6 |      1.292
𝑎₇    |    13 |         5 |     1.625 |      1.233
𝑎₈    |    21 |         8 |     1.615 |      1.187
𝑎₉    |    34 |        13 |     1.619 |      1.158

Note how the first several lambdas are not defined. The first defined lambda is 1.585 because the log base 𝑎₃ of 𝑎₄ = log₂3 (in my preferred notation, 3 /_ 2). The next lambda is 1.465 because the log base 𝑎₄ of 𝑎₅ = log₃5 (5 /_ 3), etc.
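To make the three definitions concrete, here's a minimal Python sketch (the function names are just mine for illustration, not established terminology) that computes them for the Fibonacci sequence:

```python
import math

def deltas(xs):
    # Δ: each entry minus the immediately preceding entry
    return [b - a for a, b in zip(xs, xs[1:])]

def qoppas(xs):
    # Ϙ: each entry divided by the immediately preceding entry (undefined when that entry is 0)
    return [b / a if a != 0 else None for a, b in zip(xs, xs[1:])]

def lambdas(xs):
    # Λ: the logquotient, i.e. log base the preceding entry of each entry
    # (undefined unless the preceding entry is a valid log base: positive and not 1)
    return [math.log(b, a) if a > 0 and a != 1 and b > 0 else None
            for a, b in zip(xs, xs[1:])]

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
print(deltas(fib))   # [1, 0, 1, 1, 2, 3, 5, 8, 13]
print(qoppas(fib))   # [None, 1.0, 2.0, 1.5, 1.666..., 1.6, 1.625, 1.615..., 1.619...]
print(lambdas(fib))  # [None, None, None, 1.584..., 1.464..., 1.292..., ...] — matches the Λ column above
```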
I chose the name "lambdas" following the same pattern we extended for qoppas: taking the letter 'Q' for "quotient" and finding the equivalent Greek letter 'Ϙ' and using its name, just as someone one day took the letter 'D' for "difference" and found the equivalent Greek letter 'Δ', using its name. A bonus for this choice is that the capital letter lambda Λ closely resembles the capital letter delta Δ, so we get a nice visual correspondence between the two.

At the power tier, however, there is more than one analog to deltas and qoppas. This is because — unlike with addition and multiplication — powers are non-commutative. In other words, while an augend and addend can be swapped with no change in sum, and a multiplicand and multiplier can be swapped with no change in product, a base and exponent cannot be swapped with no change in power. Thus, while it doesn't matter whether we consider a delta to be the addend or the augend, nor whether we consider a qoppa to be the multiplicand or the multiplier, it does matter whether we consider the intermediate value between numeric list entries to be the base or the exponent.

When we consider it to be the exponent (and the previous number in the list to be the base), we are looking at a specific type of exponent called a logarithm or logquotient, and so in this context it is a lambda. When we consider it to be the base (and the previous number in the list to be the exponent), we are looking at a specific type of base called a root, and so in this context it is a rho.

Here's a table extending the previous table to include rhos as well:

entry | value | Δ (delta) | Ϙ (qoppa) | Λ (lambda) | Ρ (rho)
𝑎₀    |     0 |           |           |            |
𝑎₁    |     1 |         1 |           |            |
𝑎₂    |     1 |         0 |         1 |            |       1
𝑎₃    |     2 |         1 |         2 |            |       2
𝑎₄    |     3 |         1 |       1.5 |      1.585 |   1.732
𝑎₅    |     5 |         2 |     1.667 |      1.465 |   1.710
𝑎₆    |     8 |         3 |       1.6 |      1.292 |   1.516
𝑎₇    |    13 |         5 |     1.625 |      1.233 |   1.378
𝑎₈    |    21 |         8 |     1.615 |      1.187 |   1.264
𝑎₉    |    34 |        13 |     1.619 |      1.158 |   1.183

Note how the first rho is not defined, like the first qoppa, on account of 𝑎₀ = 0. The first defined rho is 1 because the 𝑎₁ᵗʰ root of 𝑎₂ is ¹√1 = 1. The next rho is 2 because the 𝑎₂ᵗʰ root of 𝑎₃ is ¹√2 = 2. The next rho is 1.732 because the 𝑎₃ᵗʰ root of 𝑎₄ = ²√3. Etcetera.
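And a small addition to the sketch above for rhos (again, just my own illustrative naming):

```python
def rhos(xs):
    # Ρ: the (preceding entry)th root of each entry, i.e. entry ** (1 / preceding)
    return [b ** (1 / a) if a != 0 else None for a, b in zip(xs, xs[1:])]

print(rhos(fib))  # [None, 1.0, 2.0, 1.732..., 1.709..., ...] — matches the Ρ column above
```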

I chose the name "rhos" again following the same pattern, taking the letter 'R' for "root" and finding the equivalent Greek letter 'Ρ' and using its name. I do recognize that "rhos" sounds like "rows", which ain't great because table data rows will often be relevant wherever rhos come up. Another minus is that the Greek letter 'Ρ' looks just like the Latin letter 'P'. Well, I somehow feel like lambdas are much more natural than rhos anyway, so perhaps these are acceptable problems.

That said, in private correspondence, neither Dave nor I found any particular use for these analogs.



Logmod

There is also a power-tier equivalent of modulus and reduce: repeatedly logdivide by 𝑛 until the result is less than 𝑛. This would be called logmodulus, or logmod for short. (It could also have been "logreduce", but "logmod" just has such a nice ring to it.)

Example:
65 logmod 2 ≈ 1.373, because
  65    /_ 2 ≈ 6.022; 6.022 > 2, so repeat.
  6.022 /_ 2 ≈ 2.590; 2.590 > 2, so repeat.
  2.590 /_ 2 ≈ 1.373; 1.373 < 2, so done.
Or in other words, 65 /_ 2 /_ 2 /_ 2 ≈ 1.373.

As with reduce, 1 will always be the lower asymptotic bound, and 𝑛 will be the upper bound (not asymptotic).
But just as there were two power-tier analogs for deltas — lambdas and rhos — there's a second analog for modulus, too: repeatedly taking the 𝑛ᵗʰ root until the result is less than 𝑛. Maybe this'd be called "raduce", taking "rad" from "radication", the technical term for taking a root ("radix" means "root" in Latin)? Nah, that's too cute. How about rootmodulus, or rootmod for short.

Example:
65 rootmod 2 ≈ 1.685, because
  65    /^ 2 ≈ 8.062; 8.062 > 2, so repeat.
  8.062 /^ 2 ≈ 2.839; 2.839 > 2, so repeat.
  2.839 /^ 2 ≈ 1.685; 1.685 < 2, so done.
Or in other words, 65 /^ 2 /^ 2 /^ 2 ≈ 1.685.
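Here's a minimal Python sketch of both, following the definitions above (logmod and rootmod aren't established names; /_ here means log base, /^ the 𝑛th root):

```python
import math

def logmod(x, n):
    # repeatedly logdivide by n (i.e. take log base n) until the result is less than n
    while x >= n:
        x = math.log(x, n)
    return x

def rootmod(x, n):
    # repeatedly take the nth root until the result is less than n
    while x >= n:
        x = x ** (1 / n)
    return x

print(logmod(65, 2))   # ≈ 1.373
print(rootmod(65, 2))  # ≈ 1.685
```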

Here's one way to think about it, in terms of agnosticism, like how absolute and undirected value are agnostic to positivity and directedness, respectively:
  • Modulus 𝑛 gives a number agnostic to where it's found relatively within consecutive ranges whose upper bounds are found by repeated addition: 𝑛, 𝑛+𝑛, 𝑛+𝑛+𝑛, 𝑛+𝑛+𝑛+𝑛...
  • Reduce 𝑛 gives a number agnostic to where it's found relatively within consecutive ranges whose upper bounds are found by repeated multiplication: 𝑛, 𝑛×𝑛, 𝑛×𝑛×𝑛, 𝑛×𝑛×𝑛×𝑛...
  • Rootmodulus 𝑛 gives a number agnostic to where it's found relatively within consecutive ranges whose upper bounds are found by repeatedly raising to the power 𝑛 (a constant exponent): 𝑛, 𝑛^𝑛, (𝑛^𝑛)^𝑛, ((𝑛^𝑛)^𝑛)^𝑛...
  • Logmodulus 𝑛 gives a number agnostic to where it's found relatively within consecutive ranges whose upper bounds are found by repeatedly raising 𝑛 to the previous bound (a constant base): 𝑛, 𝑛^𝑛, 𝑛^(𝑛^𝑛), 𝑛^(𝑛^(𝑛^𝑛))...
As you can see, the ranges for the final two — the two power-tier modulus operations — differ only in how the copies of 𝑛 are grouped: from the right (as is the standard order of operations), or from the left.
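Since the grouping is the only difference, a quick sketch can generate those two families of upper bounds side by side (my own throwaway helper, just for illustration):

```python
def power_tier_bounds(n, count, group_from_left=True):
    # group_from_left=True  -> rootmodulus ranges: n, n^n, (n^n)^n, ((n^n)^n)^n, ...
    # group_from_left=False -> logmodulus ranges:  n, n^n, n^(n^n), n^(n^(n^n)), ...
    bounds, current = [], n
    for _ in range(count):
        bounds.append(current)
        current = current ** n if group_from_left else n ** current
    return bounds

print(power_tier_bounds(2, 5))                         # [2, 4, 16, 256, 65536]
print(power_tier_bounds(2, 4, group_from_left=False))  # [2, 4, 16, 65536]
```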
Here's the first several of those ranges visualized, for 𝑛 = 2, for all four modulus variations:


We can see that the logmodulus ranges grow even more extremely than those of the rootmodulus. If you're familiar with the concept of function growth, we can say the ranges for rootmod exhibit power growth, while the ranges for logmod exhibit an even more extreme growth known as exponential growth. Think of it this way: when we repeatedly take the previous value and raise it to the 𝑛 power (grouped-from-the-left), this is a constant exponent, like power growth; when we repeatedly take the previous value and raise 𝑛 to that power (grouped-from-the-right), it's a constant base, like exponential growth.
Here's a table showing off, for each modulus function with 𝑛 = 1.69, the series of values repeatedly being reduced:

modulus | reduce      | rootmodulus | logmodulus
  10.14 | 23.29808512 | 203856.1615 | 9708370.563
   8.45 | 13.78584918 | 1385.516624 | 30.66061426
   6.76 |  8.15730721 | 72.26730489 | 6.52333296
   5.07 |    4.826809 | 12.58825688 | 3.574010811
   3.38 |      2.8561 | 4.475764394 | 2.427328364
   1.69 |        1.69 | 2.427328364 | 1.69
      0 |           1 |        1.69 | 1

And here's that graphed:


Notice that while regular modulus reduces to between 0 and 𝑛, and reduce and logmodulus reduce to between 1 and 𝑛, rootmodulus can't reduce to a range bounded by 𝑛 and one of these identities. The logical choices are either to reduce to between 𝑛 and 𝑛^𝑛, or between 𝑛 and 𝑛/^𝑛. I think the former, for simplicity and positivity, is preferable.
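If we prefer the former, a variant of the earlier rootmod sketch would simply stop once the value drops below 𝑛^𝑛 rather than below 𝑛 (again, just an illustrative sketch):

```python
def rootmod_to_n_to_the_n(x, n):
    # reduce until the result lands in the range [n, n^n)
    while x >= n ** n:
        x = x ** (1 / n)
    return x

print(rootmod_to_n_to_the_n(65, 2))  # ≈ 2.839, one fewer root than 65 rootmod 2 above
```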

What would either of these actually get used for, though? Beats me.

I haven't thought about operator symbols for logmod or rootmod, like how % is often used for plain mod. It doesn't even look like we came up with one for reduce; only the function notation redₙ().



(others)

In the original post for this topic, Dave and I found superratio, subratio, and undirected value to be the multiplicative equivalents of positive, negative, and absolute value at the additive tier. I have not found there to be any meaningful analogs to these at the power tier. As Dave pointed out to me by email, the main problem here is that powers are non-commutative, and while there's an identity for exponents, which is 1 (like the multiplicative identity), there's no identity for bases, i.e. there is no constant that you can raise to the power \(x\) and get back \(x\). (Solve for \(y\) in \(x^y = x\) and you get \(y = \log_x{x} = \frac{\ln{x}}{\ln{x}} = 1\); solve for \(y\) in \(y^x = x\) and you just get \(\sqrt[x]{x}\) which has no fixed value, e.g. square root of 2 does not equal cube root of 3.) I'll leave it at this for now.

  • As a final note, if readers are interested in a generalization of the relationship across the additive, multiplicative, and power tiers, including the tier below addition (called the successive tier) and the tiers above power (beginning with the tetrative tier), I suggest you check out this resource: https://en.wikipedia.org/wiki/Hyperoper ... ost_common)

Re: Terminology for multiplicative equivalents of common additive concepts

Post by cmloegcmluin »

Okay, one more. Piling on the crankery. And also extending something to the power tier that has already been extended from the additive tier to the multiplicative tier.

Everyone knows about the plus-minus sign:
±
But have you heard of the analogous times-over sign?
⋇
Unfortunately, this sign is actually called the "division times" sign, a name which fails the analogy in several ways.

So while 3 plus-minus 2 gives the range of values between 3 + 2 = 5 and 3 - 2 = 1,
3 times-over 2 gives the range of values between 3 × 2 = 6 and 3 / 2 = 1.5.

This concept could be extended further, to the power tier.
But it would extend in two different ways, because commutativity breaks down at that tier.

We'd have power-root, with a symbol looking like a cross between a caret and a reversed radical:



e.g. 3 power-root 2 gives the range of values between 3² = 3 ^ 2 = 9 and 3 ^ (1/2) = ²√3 = 3 /^ 2 = 1.732.

And we'd have power-log, with a symbol looking like a cross between a caret and the L-shaped division bar for enhanced situations that I proposed for logdivision, which comes out looking like a Star of David sliced halfway across:



e.g. 3 power-log 2 gives the range of values between 3² = 3 ^ 2 = 9 and log₂3 = ₂√3 = 3 /_ 2 = 1.585.

I can see that there are many more possibilities for ranges like these.
Like 3 ? 2 which gives the range between 3² and log₂3.
But these two are the most obvious ones to me, and I'm too lazy to look any further.
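Here's a minimal Python sketch of all four range-giving operations as described so far (the function names are mine, and power_log follows the example above, where the left operand is the base of the power but the argument of the log):

```python
import math

def plus_minus(a, b):
    return (a - b, a + b)            # 3 ± 2 -> (1, 5)

def times_over(a, b):
    return (a / b, a * b)            # 3 ⋇ 2 -> (1.5, 6)

def power_root(a, b):
    return (a ** (1 / b), a ** b)    # 3 power-root 2 -> (1.732..., 9)

def power_log(a, b):
    return (math.log(a, b), a ** b)  # 3 power-log 2 -> (1.584..., 9)

print(plus_minus(3, 2), times_over(3, 2), power_root(3, 2), power_log(3, 2))
```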

Dave has noted that: "when the second operand is close to 1 (say n between 1 and 1.2) this is typically expressed as ±(n-1)×100%, e.g. ⋇1.2 is expressed as ±20%. It's not quite the same thing, but it's close."

Re: Terminology for multiplicative equivalents of common additive concepts

Post by cmloegcmluin »

I'm not quite sure why I did this, but I wrote a post to my personal blog this morning, and only just now realized that it basically could have belonged on this thread, which lately has become less about moving from the additive tier to the multiplicative tier and more about moving from any tier to a higher tier. It's about generalizing function-growth pattern names from the power tier to higher tiers: https://cmloegcmluin.wordpress.com/2024 ... nd-beyond/

Re: Terminology for multiplicative equivalents of common additive concepts

Post by רועיסיני »

cmloegcmluin wrote: Sat Mar 30, 2024 6:10 am e.g. 3 power-log 2 gives the range of values between 3² = 3 ^ 2 = 9 and log₂3 = ₂√3 = 3 /_ 2 = 1.585.
There is a slight difference between this suggestion and the three other this-or-that suggestions, which is that in the additive case of \(\pm\) the right operand is always one of the numbers added, and the question is whether the left operand is added to it or the sum, similarly in the case of \(\divideontimes\) the right operand is one thing that is multiplied by another, and you can choose whether the left operand is this other thing or the product. The power-root operation also works along these lines, as the right operand is always the exponent, and the left operand can be either the base or the power, however in your suggestion above the right operand is either the exponent or the base, depending on whether you want to interpret this sign as power or as logdivision. It would make more sense to have the right operand always mean the base and the left operand to either mean the exponent or the power, so 3 power-log 2 is either \(2^3\) or \(_2\sqrt 3\), where 2 is always the base. This may be what you intended to write anyway because you repeated this as the definition of the ? operator, but I'm really not sure. Also, this operation, applied to arbitrary numbers \(a\) and \(b\) essentially takes \(a\) one step forward or backward in an exponentiation tower with base \(b\), e.g. \(8 = 2^3\) and so 8 power-log 2 means between \(3\) and \(2^8 = 2^{2^3}\).
If we're already at it, there is a third operation that can be used: \(a\) root-or-log \(b\), where \(b\) is the power and you're either taking the \(a\)th root of it or its base-\(a\) logarithm (i.e. either \(\sqrt[a]b\) or \(_a\sqrt b\)). I'm not sure what meaning it has, if any, but it completes the triangle nicely.

Re: Terminology for multiplicative equivalents of common additive concepts

Post by cmloegcmluin »

רועיסיני wrote: Sun Apr 07, 2024 4:52 am
cmloegcmluin wrote: Sat Mar 30, 2024 6:10 am e.g. 3 power-log 2 gives the range of values between 3² = 3 ^ 2 = 9 and log₂3 = ₂√3 = 3 /_ 2 = 1.585.
There is a slight difference between this suggestion and the three other this-or-that suggestions, which is that in the additive case of \(\pm\) the right operand is always one of the numbers added, and the question is whether the left operand is added to it or the sum, similarly in the case of \(\divideontimes\) the right operand is one thing that is multiplied by another, and you can choose whether the left operand is this other thing or the product. The power-root operation also works along these lines, as the right operand is always the exponent, and the left operand can be either the base or the power, however in your suggestion above the right operand is either the exponent or the base, depending on whether you want to interpret this sign as power or as logdivision. It would make more sense to have the right operand always mean the base and the left operand to either mean the exponent or the power, so 3 power-log 2 is either \(2^3\) or \(_2\sqrt 3\), where 2 is always the base. This may be what you intended to write anyway because you repeated this as the definition of the ? operator, but I'm really not sure.
I intended to present both options. I didn't think about it too hard. You make a good case for the ? operator being the better candidate for power-log, though.
רועיסיני wrote: Sun Apr 07, 2024 4:52 am
Also, this operation, applied to arbitrary numbers \(a\) and \(b\) essentially takes \(a\) one step forward or backward in an exponentiation tower with base \(b\), e.g. \(8 = 2^3\) and so 8 power-log 2 means between \(3\) and \(2^8 = 2^{2^3}\).
If we're already at it, there is a third operation that can be used: \(a\) root-or-log \(b\), where \(b\) is the power and you're either taking the \(a\)th root of it or its base-\(a\) logarithm (i.e. either \(\sqrt[a]b\) or \(_a\sqrt b\)). I'm not sure what meaning it has, if any, but it completes the triangle nicely.
Agreed. Thanks for engaging with my crankery, hehe.