**The New Mathematics**

## Fundamental Consequences

There’s a tendency in our society to understand the history of human thought as a more or less linear progression from primitive to sophisticated. As we survey Western civilization’s technological progress, from the horse and buggy to manned space flight, it’s easy to view our revolutionary capabilities in science and technology as the pinnacle of human achievement, and to suppose that there is no way forward other than along the way we have traveled.

However, the ancient Hebrews envisioned our days and characterized them not as the pinnacle of civilization’s progress, but as the deterioration of its worth, inferior in quality to previous civilizations. This image of the relative inferiority of modern nations was explained by the Hebrew Daniel, when he saw and interpreted the king of Babylon’s dream, portraying the progressive degradation of the quality of earth’s civilizations, from that time to this.

According to this vision, the ancient Babylonian kingdom was the highest quality civilization in the world, followed by the inferior, but stronger, Persians, who were followed by the still more inferior, but stronger, Greeks, then by the vastly more inferior and stronger Romans, and finally by the remnants of the Romans, mixed in with the conquering Barbarians, the most inferior, totally fragmented, uncivilized nations of all, who were as clay mixed in with the metal of the Romans. The Romans were as iron compared to the more highly prized bronze of the Greeks, to the silver of the Persians, and to the gold of the Babylonians.

Of course, in the end, all of this is irrelevant, as the vision portrayed all of these old kingdoms as being replaced by a new Hebrew kingdom, which would come rolling forth like a stone down a mountain, smashing the image of Western civilization’s heritage, in all these old kingdoms, to dust. Consequently, the dust of the pulverized image simply blows away, like chaff in the wind, and disappears!

But what does this have to do with modern mathematics and science? We don’t know much about the mathematics and science of the Persians and Babylonians, and what we do know comes to us primarily from the Greeks, who learned from the Babylonians, the Persians, and the Egyptians (who, like the Asians, were never a world dominating nation, but nevertheless were sometimes significant players in mathematics and science).

Clearly, however, the *strength* of the Persians, relative to the Babylonians, and the Greeks, relative to the Persians, and the Romans, relative to the Greeks, and, in general, the modern nations relative to the ancient ones, is based, in part at least, on the progress of technology. Whether it is based on advanced strategic technology, such as provides greater sustenance, infrastructure and internal strength for the nation as a whole, or on advanced tactical technology, providing for improved weapons, communications and mobility to the nation’s armies, navies, and air forces, technology has always played a crucial role in the strength of civilizations.

The interesting aspect of this in the present context is that, while it shows us how understanding the simple fundamentals of mathematics and science makes a profound difference in the power and technological capabilities of nations, it also shows us that there may be nothing particularly enduring about it either. Civilizations come and go, and the particular aspect of their understanding of math and science that made them capable of great feats of organization, engineering and technological exploitation, comes and goes with them.

From the smallest of means proceeds that which is great, the ancients said. For example, who could have guessed that the ability of a few Renaissance scientists to deal with the esoteric concepts of irrational and negative numbers would eventually lead to the modern ability to transcend the technology of the ancients so dramatically? But so it is. Without the ability to abstract the square roots of 2 and -1, the whole of modern technology would be impossible.

However, knowing this, we are soon led to ask what other simple fundamentals we might be missing: fundamentals that some future civilization (perhaps the triumphant kingdom of the Hebrews foreseen by Daniel) might discover, enabling them to transcend our technology as much as we have transcended that of the ancients (or even more).

In thinking about this, one might be tempted to revisit the whole notion of irrational and imaginary numbers, the foundation of modern technology, and seek to understand what it is about this whole approach that makes it so powerful. If there is one way to do this, might there be another, maybe even better way to do it?

Of course, readers of these blogs know that here at the LRC we believe there is, and that we are taking our clues on how to proceed from the works of Hamilton, Grassmann, Clifford, Hestenes, and Larson. Hamilton showed us how the traditional, taken-for-granted definition of number leaves algebra without a suitable scientific basis. Grassmann showed us that there is an underlying connection between geometry and algebra that the Greeks couldn’t make, and Clifford showed us how the two directions of each dimension form an algebra. Thanks to the work of Hestenes, which brought the works of Grassmann and Clifford to light, we are provided with tremendous insight into the underlying nature of complex and quaternion numbers, and the imaginary numbers with which they are built.

Finally, none of it would have even caught our attention had it not been for the transcendent work of Larson. It is his brilliant recognition and intriguing development of the new and unfamiliar notions of scalar motion that provides us with the motivation for digging into all these ancient mysteries, driving us to uncover the old foundations, in search of new insight into what makes modern math and physics tick.

What we have found astounds us. Could it be that, as Thales and Pythagoras apparently learned from their predecessors, “Everything is number,” after all? This faith seemed horribly contradicted by the theorem of triangular squares, by the fact that squaring the circle could only be approximated, and by the claim that the hare could never catch up with the tortoise. Yet today, after centuries of effort, we can name irrational numbers, use them in the calculus to send robots to explore particular parcels of Martian terrain, use computers to calculate π out to a gazillion decimal places, and work with infinite sets as easily as the Greeks worked with integers, all of which appears to make the whole issue moot.

“Who cares, if the Greeks thought all was number,” one might think. “Our technology, our math and our science reach so far beyond anything ever dreamed of by the Greeks, that it’s patently clear that we have overcome their intellectual obstacles. Let’s just move on.” Ironically, however, that’s just what we can’t do, and the reason that we can’t do it is that the essence of these very same obstacles stands in our way. We now know that nature is both discrete, definitely measured, like numbers, and, at the same time, continuous, infinitely divisible, like distance.

Yet, in spite of the vaunted “work arounds” of our modern mathematics, which have served us so magnificently (irrational, transcendental, and imaginary numbers, finite and infinite sets, etc.), we still cannot do as nature does and seamlessly combine the continuous with the discrete. It is frustrating in the extreme. It appears that, if the ancients taught Thales and Pythagoras that all was number, they were probably just hopelessly naive, and the Greeks were simply beguiled by their priestly robes and their high social status. If we can’t do it today, surely the ancient Babylonians and Persians couldn’t do it either.

That may be so, but it doesn’t mean that they didn’t have a valuable insight into numbers and geometry, which has since been lost, one that might prove to be the key to doing what we so desperately want to do. For instance, even though their approximation of π might have been very rough, compared to our very refined approximation, how do we know that it doesn’t matter, in the end? Of course, barring some unexpected archeological find, we are not likely to ever know more about how the ancients thought than the ancient Greeks did, who were in direct contact with them. The point is not, however, that the ancients had the answers we seek. They probably didn’t, but they may have thought about the fundamentals in a way that hasn’t occurred to us, which could prove to be the key for finding the answers.

As it turns out, there are many intriguing clues that the way the ancients thought about numbers is close to the new way we are thinking about them here at the LRC. In the next post, we will get into some of the details of this.

## Natural Numbers

As discussed in the last post, it seems like the only consistent way to produce the natural numbers is via a natural progression of points; that is, the 0D mathematical series

1^{0}, 2^{0}, 3^{0} …

must actually be

1*2^{0}/2^{0}, 2*2^{0}/2^{0}, 3*2^{0}/2^{0}, …

because, starting with space and time only, there are no “things” to count, which implies that the natural series,

1^{1}, 2^{1}, 3^{1} …

is mathematically incorrect, as an initial condition in a space|time progression, since

1*2^{1}/2^{0}, 2*2^{1}/2^{0}, 3*2^{1}/2^{0}…,

is a natural progression of *double* magnitudes (one in each “direction”) not single magnitudes. Therefore, as a space|time progression, the natural 1D mathematical series *necessarily* begins with 2, not 1, and increases by 2, 1D, magnitudes, not 1:

2*1^{1}, 4*1^{1}, 6*1^{1}…,

while the natural series,

1^{2}, 2^{2}, 3^{2} …,

is also incorrect, because

1*2^{2}/2^{0}, 2*2^{2}/2^{0}, 3*2^{2}/2^{0}, …

is the natural mathematical progression of area, which begins with 2^{2} = 4, 2D, magnitudes, not 1, or 2, increasing the base of the series by a factor of 2:

4*1^{2}, 16*1^{2}, 36*1^{2}….

Finally, the natural 3D series:

1^{3}, 2^{3}, 3^{3} …

is also incorrect, as a space|time progression, because it is actually,

1*2^{3}/2^{0}, 2*2^{3}/2^{0}, 3*2^{3}/2^{0}, …,

which is the natural progression of volume, its magnitudes beginning with 2^{3} = 8, 3D, magnitudes, not 1, not 2, not 4, again increasing the base of the previous series by a factor of 2:

8*1^{3}, 64*1^{3}, 216*1^{3} …

All of this means, among other things, that the algebra of these numbers begins with the pseudoscalar value of an n-dimensional progression (2^{n}), not its scalar value (2^{0}); that is, each series begins with the corresponding right side of the tetraktys, not the left side. This is because one line has two directions, and one area has four directions, not two, and it is therefore incorrect to write the progression of 1D magnitudes beginning with the scalar magnitude 1 (2^{0}), or to write the progression of area beginning with the 2^{1}, or 1D, pseudoscalar magnitude. Likewise, one volume has eight directions, not two, and not four, and therefore the natural volumetric series must begin with eight cubic scalars, not one. To accurately denote this, we need to rewrite the 1D progression as

(1*2)^{1}/(1*2)^{0}, (2*2)^{1}/(2*2)^{0}, (3*2)^{1}/(3*2)^{0}, …,

the 2D progression as

(1*2)^{2}/(1*2)^{0}, (2*2)^{2}/(2*2)^{0}, (3*2)^{2}/(3*2)^{0}, …,

and the 3D progression as

(1*2)^{3}/(1*2)^{0}, (2*2)^{3}/(2*2)^{0}, (3*2)^{3}/(3*2)^{0}, ….
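For readers who like to check such series numerically, here is a quick sketch (the function name `progression` is my own shorthand, not part of the development above) confirming that the rewritten forms (n*2)^{d}/(n*2)^{0} reproduce the doubled series quoted above for dimensions 1, 2 and 3:

```python
# Sketch: the d-dimensional space|time progression, written as
# ((n*2)**d) / ((n*2)**0), for terms n = 1, 2, 3, ...

def progression(d, terms=3):
    """Return the first `terms` values of the d-dimensional series."""
    return [(n * 2) ** d // (n * 2) ** 0 for n in range(1, terms + 1)]

print(progression(1))  # [2, 4, 6]    -> 2*1^1, 4*1^1, 6*1^1
print(progression(2))  # [4, 16, 36]  -> 4*1^2, 16*1^2, 36*1^2
print(progression(3))  # [8, 64, 216] -> 8*1^3, 64*1^3, 216*1^3
```

Each series begins at 2^{d} and grows as (2n)^{d}, which is just the "factor of 2" applied once per dimension.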

It’s important to recognize that, when the uniform 3D progression is *measured* from a given point (2^{0} = 1), at t_{n} - t_{0}, the apparent one-dimensional interval characterizes the expanding volume by its 1D radius. However, to calculate the true 1D interval, which is the diameter of the volume, the radius must be doubled; to calculate the true 2D interval, the doubled radius, the diameter, must be squared, and to calculate the 3D interval, it must be cubed:

2*1^{1} = 2r = d,

4*1^{2} = d^{2},

8*1^{3} = d^{3}
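These three relations can be checked with a tiny sketch (the helper name `true_intervals` is mine), starting from a measured radius:

```python
def true_intervals(r):
    """From a measured radius r, return the true 1D, 2D and 3D
    intervals described above: the diameter, its square, and its cube."""
    d = 2 * r
    return d, d ** 2, d ** 3

print(true_intervals(1))  # (2, 4, 8) -> 2*1^1, 4*1^2, 8*1^3
```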

However, this brings us face to face with the age-old problem of the quadrature, or of “squaring the circle”: the 2D space component of the 3D space|time expansion must expand geometrically over time, or circularly, and the 3D component must expand spherically, while the algebraic square and the algebraic cube are necessarily rectilinear. Therefore, an issue of 2D and 3D numerical integration arises: quadrature and, as it’s sometimes referred to, cubature.

That this problem is related to the foundations of quantum mechanics is indicated when it’s recognized that only one point on the surface of an expanding circle, or sphere, can be measured at any given time. Special relativity makes it impossible to *simultaneously* specify t_{n} at more than one point on the 2D, or 3D, surface of the expansion, because points on these surfaces are always moving apart. Therefore, we are brought back to the physical enigma of point/wave duality, the mathematical dilemma of quadrature, and the logical challenge of unifying the concept of the discrete numbers of algebra with the concept of the smooth functions of geometry.

In the next post, we will discuss how the ancient way of dealing with these fundamental issues turns out to be remarkably congruent with our ideas of the space|time progression: the “mediato/duplatio” (halving/doubling) method of ancient reckoning, intimately associated with the notion of the tetraktys, turns out to be our “factor of 2” at play in the space|time progression series, as described above.

This topic is very interesting as it relates the modern concept of rotation, implemented with complex numbers, to our new concept of 3D expansion, implemented with scalars and pseudoscalars, which is a crucial point to understand, I believe.

## LRC Seminar - Scalar Algebra

In the previous posts, we’ve seen how to define the “directions” of natural numbers, by defining number as Hamilton did, as order in progression, instead of increased or diminished magnitude. By taking two of these progression-defined numbers, as the two, reciprocal, aspects of one progression, as Larson did, and by defining two interpretations of these reciprocal numbers, we have been able to establish two groups, one group under addition, analogous to the integers, with an identity element of 0, and one group under multiplication, analogous to the fractions, with an identity element of 1.

In the previous post, we showed how combining the unit magnitudes of the positive and negative “directions” defines a two unit “distance,” or interval, analogous to a spatial distance, with two algebraic “directions,” one negative and one positive:

1|2 + 2|1 = 3|3 = 0

(1|2) = -(2|1)

(2|1) = -(1|2)
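These sums can be sketched in a few lines of code, treating the pipe as the difference of the two reciprocal aspects (the class name `RN` is my own shorthand, not LRC notation):

```python
# Sketch of the "pipe" interpretation: the reciprocal number a|b
# is valued as the difference a - b, so any n|n acts as the
# additive identity, 0.

class RN:
    def __init__(self, num, den):
        self.num, self.den = num, den

    def __add__(self, other):
        # Addition is component-wise, as in 1|2 + 2|1 = 3|3.
        return RN(self.num + other.num, self.den + other.den)

    def value(self):
        # Pipe interpretation: sum of opposite signs.
        return self.num - self.den

total = RN(1, 2) + RN(2, 1)                # 1|2 + 2|1 = 3|3
print(total.value())                       # 0, the identity element
print(RN(1, 2).value(), RN(2, 1).value())  # -1 1, the two "directions"
```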

However, the fact that the pipe symbol indicates that the reciprocal relation is to be interpreted as the value of the difference (sum of opposite signs) between the numerator and the denominator means that any number n|n can be used as the identity element, and since the quantities in the numerator and denominator are defined in terms of order in progression, rather than as increased, or diminished, quantities, it is necessary to recognize how those quantities differ; that is, how does 1 become 2 and 2 become 3, or 1?

With non-reciprocal numbers defined as magnitudes, 1 becomes 2, when two independent magnitudes are summed:

1 + 1 = 2, 2 + 1 = 3,

which represents an arbitrary action of addition.

However, with non-reciprocal numbers defined as order in progression, 1 becomes 2 and 2 becomes 3, as the progression proceeds:

1, 2, 3, …

but what are the dimensions of these steps of progression? Ordinarily, the absence of a superscript with a number indicates that it is 1-dimensional, and we have seen that in ordinary mathematics, any number raised to the zero power is defined by the law of exponents, as the number 1, since all such numbers

n/n = n^{1}/n^{1} = n^{1-1} = n^{0} = 1.

However, as we saw in the last post, this definition is problematic, theoretically, because it means that the unit cube, 1^{3}, must be defined as

1^{4}/1^{1} = 1^{4-1} = 1^{3} = 1

and since, in a 3D system, we can’t define the four-dimensional *unit* required to do this, confusion results. Fortunately, we avoid this problem in the mathematics of reciprocal numbers, because the dimensions of the numbers express their *inherent* dual “directions” (positive and negative), which gives meaning to 1^{0}, as a number with no degree of freedom. So, we simply start with dimension 0, at the top of the tetraktys, meaning there is no, dual, degree of freedom in the initial number of the tetraktys. It simply corresponds to a geometric point.

Ordinarily, we would regard a progression of reciprocal numbers, with two, reciprocal, aspects, as an ordered series of 0-dimensional units, or scalars, which would constitute a series of points, not 1D lines. Yet, an unexpressed exponent is assumed to equal 1. So, writing the series,

1|1, 2|2, 3|3, …

implies an exponent of 1 in the numerator and the denominator, but in this case we can’t subtract the exponent of the denominator from the exponent of the numerator, because the pipe symbol indicates that the reciprocal operation of the reciprocal number is not multiplication (division), but addition (subtraction). Therefore, the exponents must be the same in both cases, because the subtraction operation (actually sum of opposite signs) wouldn’t be valid otherwise, since we can’t subtract (add) two numbers with different exponents, or dimensions.

Yet, from our knowledge of the tetraktys, we know that the reciprocal of the scalar (dimension 0) is the pseudoscalar (dimension 3, at the 3D level, or bottom of the tetraktys). So, if one of the terms in a reciprocal number is a 0D scalar, the meaning of the pipe symbol, “|”, requires the other term to be the reciprocal of the scalar, the pseudoscalar!

We can see that this makes sense, because the series of reciprocal numbers

1^{0}|1^{0}, 2^{0}|2^{0}, 3^{0}|3^{0}, … = 0^{0}, 0^{0}, 0^{0}, …

is meaningless. A point is only its own reciprocal, when no degree of freedom is present (the n^{0}:n^{0} numbers at the top of the tetraktys). Its reciprocal, with any non-zero degree of freedom, is always the pseudoscalar. Hence, the 3D pseudoscalar is the appropriate reciprocal of the 0D scalar in Euclidean space (i.e. the 2^{3} numbers at the bottom of the tetraktys).

Consequently, in the three-dimensional system of numbers (the Grassmann algebra), the progression of reciprocal numbers must take the form

1^{3}|1^{0}, 2^{3}|2^{0}, 3^{3}|3^{0}, …

which is a series of reciprocal numbers expressing a numerical progression of cubes, combined with reciprocally related points, corresponding to the geometric structure of Larson’s cube, with the 0D scalar at the intersection of the stack of 2x2x2 cubes.
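As a small illustration (a sketch of my own, not Larson’s notation), the 2x2x2 stack can be enumerated by choosing one of the two “directions” in each of the three dimensions:

```python
from itertools import product

# One unit cube for each choice of "direction" in each of the three
# dimensions, with the 0D scalar at their common intersection point.
stack = list(product((-1, +1), repeat=3))

print(len(stack))  # 8 = 2^3 unit cubes in the stack
```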

However, because the difference between the numerator and the denominator is a difference between reciprocal quantities *of different dimensions*, we can express its value as a mathematically meaningful result only if the denominator is always the 0D scalar, while the numerator is its reciprocal, the pseudoscalar, since subtracting 0 from anything is essentially meaningless, and breaking the rule of exponents has no consequences in this case. As they say in the gym, no harm, no foul.

However, if we change from the pipe operation to the slash operation, then, according to the same mathematical rules, it’s possible to express the operational result as a meaningful quantity. That is to say,

1^{3}/1^{0}, 2^{3}/2^{0}, 3^{3}/3^{0}, … = 1^{3-0}, 1^{3-0}, 1^{3-0}, … = 1^{3}, 1^{3}, 1^{3}, …

Why is this? I submit that it’s because, in the slash operation, the ratio of reciprocals, as a quotient, defines the unit of a function. So, 1^{3} is a cubic unit of the function, which equates to a cubic pseudoscalar unit per scalar unit. On the other hand, in the pipe operation, 1^{3}|1^{0}, the ratio of reciprocals defines the unit of volume, as a 3D interval, with eight directions, between the 0D point and the 3D cube.

This difference between the two operations enables us to distinguish, in an important manner, the difference between scalar magnitudes of motion, *with* two “directions,” and vector magnitudes of motion, *in* two directions. The difference in the magnitudes is the difference in the point of reference. We represent the opposite direction of a vector, by placing the arrow head at the opposite end of the line:

—————————> or <——————————

However, we represent the opposite “directions” of scalars, by placing the arrow head at both ends of a line, pointing in opposite “directions,” like this:

<————————>

This is because motion, as a 1D scalar magnitude, is an expansion from the center outward, in opposite directions, while motion, as a 1D vector magnitude, is a transference from one end of a line to the other. Thus, a scalar line always has a middle point associated with it, which is not part of a vector line. Therefore, the reciprocal number,

1^{1}|1^{0} = 1^{1}

is a numerical expression of the double headed arrow

<————-0————->

or the result, or interval, we might say, of a 1D scalar expansion outward from a point.

By the same token, the reciprocal number,

1^{2}|1^{0} = 1^{2}

is a numerical expression of the four headed arrow

or the result of a 2D expansion from a point.

Finally, the reciprocal number

1^{3}|1^{0} = 1^{3}

is a numerical expression of the six headed arrow

or the result of a 3D expansion from a point.

The important difference in scalar motion versus vector motion is that the two “directions” in one dimension of scalar motion produce two 1D scalar magnitudes (one in each “direction”), in one unit of time, the four “directions” in two dimensions of scalar motion produce four 2D scalar magnitudes, in one unit of time, while the six “directions” in three dimensions of scalar motion produce eight 3D scalar magnitudes, in one unit of time.

This means that to represent the unit progression of the RST, with reciprocal numbers, we write the series

1^{3}:1^{0}, 2^{3}:2^{0}, 3^{3}:3^{0}, …

where the colon symbol for ratio is used as a general symbol for reciprocity, which can be interpreted as either of the two operations we have defined. Consequently, this gives us two representations of the reciprocal operation: One is a geometric interval, and the other is a function, which produces that interval; that is, one is a representation of a scalar “distance” with two *fixed*, reciprocal, aspects, the scalar and pseudoscalar, while the other is a representation of a function, with two *changing*, reciprocal, aspects, the scalar and pseudoscalar.

On this basis, the 0D scalar progression, or scalar expansion of a point, is

1^{0}:1^{0}, 2^{0}:2^{0}, 3^{0}:3^{0}, … = 1^{0}, 2^{0}, 3^{0}, …

where the expanded scalar intervals, i_{n}, are

i_{n} = 1^{0}|1^{0}, 2^{0}|2^{0}, 3^{0}|3^{0}, … = 1^{0}, 2^{0}, 3^{0}, … (0, 0, 0, …)

And the function of the scalar progression, f(p^{0}), which produces them, is

f(p^{0}) = Δ1^{0}/Δ1^{0}.

The 1D scalar progression, or scalar expansion, of a line, is

1^{1}:1^{0}, 2^{1}:2^{0}, 3^{1}:3^{0}, … = 1^{1}, 2^{1}, 3^{1}, …

where the expanded scalar intervals are

i_{n} = 1^{1}|1^{0}, 2^{1}|2^{0}, 3^{1}|3^{0}, … = 1^{1}, 2^{1}, 3^{1}, … (<-0->, <—0—>, <—-0—->, …)

And the scalar function, which produces them, is

f(p^{1}) = Δ1^{1}/Δ1^{0}

However, notice that this time, due to the fact that there are TWO directions in the ONE dimension, the progression of the 1D units, as opposed to the progression of the 0D units, is an increase in multiples of two 1D units, one “positive” unit, relative to zero, and one negative unit, relative to zero: 2, 4, 6, …, or the 1D progression, P^{1}, is P^{1} = (2*1^{1}), (2*2^{1}), (2*3^{1}), …

Now, the 2D scalar progression, or scalar expansion, of an area, is

1^{2}:1^{0}, 2^{2}:2^{0}, 3^{2}:3^{0}, … = 1^{2}, 2^{2}, 3^{2}, …

where the expanded scalar intervals are

i_{n} = 1^{2}|1^{0}, 2^{2}|2^{0}, 3^{2}|3^{0}, … = 1^{2}, 2^{2}, 3^{2}, …

And the scalar function, which produces them, is

f(p^{2}) = Δ1^{2}/Δ1^{0}


Again, however, due to the fact that there are TWO directions in each of the TWO dimensions, the progression of the 2D units, as opposed to the progression of the 1D units, is an increase in multiples of four 2D units, two polarized units in two independent directions, relative to zero, and two oppositely polarized units in two opposite independent directions, relative to zero: 4, 16, 36, …, or the 2D progression, P^{2}, is P^{2} = (4*1^{2}), (4*2^{2}), (4*3^{2}), …

Finally, the 3D scalar progression, or scalar expansion, of a volume, is

1^{3}:1^{0}, 2^{3}:2^{0}, 3^{3}:3^{0}, … = 1^{3}, 2^{3}, 3^{3}, …

where the expanded scalar intervals are

i_{n} = 1^{3}|1^{0}, 2^{3}|2^{0}, 3^{3}|3^{0}, … = 1^{3}, 2^{3}, 3^{3}, …

And the scalar function, which produces them, is

f(p^{3}) = Δ1^{3}/Δ1^{0}


Now, due to the fact that there are TWO directions in each of the THREE dimensions, the progression of the 3D units, as opposed to the progression of the 2D units, is an increase in multiples of eight 3D units, four polarized units in three independent “positive” directions, relative to zero, and four polarized units in three independent “negative” directions, relative to zero: 8, 64, 216, …, or the 3D progression, P^{3}, is P^{3} = (8*1^{3}), (8*2^{3}), (8*3^{3}), …
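Putting the three cases together, a short sketch (the function names are mine) generates both the interval series i_{n} = n^{d} and the doubled progressions P^{d} described above, where the factor 2^{d} reflects the two “directions” in each of the d dimensions:

```python
def intervals(d, terms=3):
    """The expanded scalar intervals i_n = n**d for n = 1, 2, 3, ..."""
    return [n ** d for n in range(1, terms + 1)]

def doubled(d, terms=3):
    """The progression P^d = (2**d) * n**d: 2**d units per step."""
    return [(2 ** d) * n ** d for n in range(1, terms + 1)]

print(intervals(1), doubled(1))  # [1, 2, 3]  [2, 4, 6]
print(intervals(2), doubled(2))  # [1, 4, 9]  [4, 16, 36]
print(intervals(3), doubled(3))  # [1, 8, 27] [8, 64, 216]
```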

Of course, in the context of the RST, this immediately raises the possibility of the inverse of these intervals, and of the functions which produce them; that is, the progression of the temporal tetraktys, in the form of the temporal 2x2x2 stack of cubes. Would this take the form of

f(p^{-n}) = Δ1^{-n}/Δ1^{0}?

This is heavy stuff!

## LRC Seminar - Explaining the Dimensions of Scalars

In the previous two posts, I’ve sketched how, in the upcoming seminar, I will approach explaining the three properties of scalar numbers that correspond to the three properties of physical magnitudes - quantity, “direction,” and dimension. Essentially, we’ve seen that it is order in reciprocal progressions that defines numerical quantity with the dual “directions” observed in nature, and that these reciprocal quantities can be arithmetically, or algebraically, combined, by resorting to two, operational, interpretations of number, symbolized by a slash and a pipe symbol, and restricting the operations of multiplication (division) and addition (subtraction) to these interpretations, respectively.

Now we come to the question of dimension, the third property of physical magnitudes that we want to define as a corresponding property of numbers. At first this might seem impossible, because scalar quantities, even those with dual “directions,” can’t be rotated as physical quantities can. Yet, while this is certainly true, we need to remember that dimensions of physical magnitudes, i.e. length, width, and depth, are simply independent variables, and that it is through the operation of rotation that the independence of these fundamental physical dimensions is established. However, nothing precludes us from establishing independence of variable quantities through some means other than rotation.

For example, three points, separated in space, are independent points, in terms of their different locations, whether those locations are confined to a one-dimensional line, a two-dimensional plane, or a three-dimensional volume. As explained in the previous posts below, combining a positive and negative quantity, in the form of two reciprocal numbers, defines a “distance,” or an interval, between them; that is,

1|2 + 2|1 = 3|3 = 0, so it follows algebraically that

1|2 = -(2|1), and

2|1 = -(1|2).

Plotting these two quantities on a number line, we get

1|2———-0———-2|1 or -1———-0———-1

So, the difference, or interval, between them is two units

(1|2) - (2|1) = -1 - (1) = -2, or

(2|1) - (1|2) = 1 - (-1) = 2.

Hence, the algebra is non-commutative, because subtraction is non-commutative (i.e. the order of the operands matters). But the non-commutativity of the subtraction operation is tantamount to a conservation of the “direction” property of the reciprocal number, which we have defined without recourse to an imaginary number, in the form of the square root of -1.
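This can be sketched in code (the helper name `pipe` is my own), valuing a|b as the difference a - b:

```python
def pipe(a, b):
    """Value of the reciprocal number a|b: the difference a - b."""
    return a - b

left = pipe(1, 2) - pipe(2, 1)   # (1|2) - (2|1) = -1 - (+1) = -2
right = pipe(2, 1) - pipe(1, 2)  # (2|1) - (1|2) = +1 - (-1) = +2
print(left, right)               # -2 2: reversing the order reverses the sign
```

Swapping the operands flips the sign of the interval, which is exactly the conservation of “direction” noted above.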

However, the question then arises, what is the square root of the negative quantity, 1|2? Is there a reciprocal number that, when raised to the power of 2, equals 1|2? The answer is yes, there is, but to understand it requires us to delve into the meaning of raising numbers to powers and extracting roots from them. What does it mean to raise the number 1 to the power of 2, and then to extract that power from it, as its root? We are taught in middle school that

1^{0} = 1

1^{1} = 1

1^{2} = 1x1 = 1

1^{3} = 1x1x1 = 1,

and we learn to think of the base number as a factor and the exponent as the number of factors in the product equal to the exponentiation. However, we are also taught to relate these numbers to the dimensions of a coordinate system (which Hestenes likens to catching a debilitating virus). This may be a little confusing to adults (children seldom question it), because, while 1^{1} can readily be understood as a linear unit, 1^{2} as an area unit, and 1^{3} as a cubic unit, in a 3D coordinate system, how is it that 1^{0} = 1? Logically it would follow that it should be analogous to a point at the origin of the coordinate system, but the origin has to be zero, not 1.

The way this is normally explained in terms of a binary operation is that, by a law of exponents, we understand that 1^{1}/1^{1} = 1^{1-1} = 1^{0} = 1, where 1 must be understood as a dimensionless number, a unit with no dimensions (so I guess zero is defined as 0^{1}/0^{1}?!!). Yet, to a jaded adult that seems a little suspect, because 0 and 1 are quite different. Besides, if we go that route, it means that 1^{1} = 1^{2}/1^{1}, 1^{2} = 1^{3}/1^{1}, and 1^{3} = 1^{4}/1^{1}, which also means that 1^{1} * 1^{0} = 1^{1}, or a unit line times a unit point is a unit line, and 1^{1} * 1^{1} = 1^{2}, or a unit line times a unit line is a unit area, and 1^{2} * 1^{1} = 1^{3}, or a unit area times a unit line is a unit volume, and 1^{3} * 1^{1} = 1^{4}, or a unit volume times a unit line is a *what*? A hypervolume? What’s that?

The only thing that we have accomplished, with this law of exponents, is a trade-off. We had no explanation, at one end of the tetraktys, and we traded it for no explanation at the other end of it! Besides that, in what sense is a point times a line equal to a line? Nevertheless, we learn to glibly state that any number raised to the zero power is equal to one, without noting that this also requires us to believe that, in order to raise any number to the third power, we must define something as a unit that is clearly indefinable as a unit (i.e. 1^{4}). Of course, we do it anyway, because, for most uses, it doesn’t affect us, and a point magnitude, somehow becoming a scalar multiplier of a line magnitude, makes sense in practice, if not in theory.

Fortunately, however, we don’t encounter the same theoretical problem with the dimensions of reciprocal numbers, because we can define dimensions, or powers of a number, as sets of dual “directions” inherent in the numbers. On this basis, we can describe four units using four numbers with increasing sets of “directions”:

1^{0}:1^{0} = units with no dual “directions” (corresponding to geometric points)

1^{1}:1^{1} = units with one set of dual “directions” (corresponding to geometric lines)

1^{2}:1^{2} = units with two sets of dual “directions” (corresponding to geometric areas)

1^{3}:1^{3} = units with three sets of dual “directions” (corresponding to geometric volumes)

where the colon is used as a generic symbol of operation, representing either the slash or the pipe symbol of our two operational interpretations of number.

This clarification of the definition of numerical dimensions, as simply the difference in the number of sets of dual “directions” in a given number, makes it possible to identify a numerical, or scalar, “geometry” with the customary vector geometry of Euclidean three-space, when these scalar dimensions are independent variables, which is tantamount to the definition of orthogonality in spatial dimensions.

As Larson first pointed out, with what is now called Larson’s cube, there are a total of eight “directions” possible in a 3D magnitude. These “directions” are analogous to the eight vector directions in the cube, delineated by connecting the eight corners of the cube with four diagonal lines, intersecting at the origin of the cube, when it is formed from a 2x2x2 stack of unit cubes, as shown in Figure 1 below:

**Figure 1.** The Eight Directions of Larson’s Cube

In the next post, we will analyze the cube in terms of eight scalar “directions,” which, as we will see, are eight 3D scalar magnitudes, or, what is tantamount to the same thing, eight 3D numbers, completing the generalization of number as magnitude.
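The count of eight “directions” follows from one binary choice per independent dimension: 2 × 2 × 2 = 8. A small Python sketch enumerating them as sign combinations (illustrative only):

```python
# Enumerate the eight "directions" of a 3D magnitude: one sign choice
# per independent dimension gives 2^3 = 8 diagonals from the origin,
# one toward each corner of the 2x2x2 stack.
from itertools import product

directions = list(product((-1, +1), repeat=3))
for d in directions:
    print(d)
print(len(directions))  # 8
```

Each tuple names one corner of Larson’s cube relative to its origin, so the four diagonal lines through the origin pair them off into opposites, e.g. (1, 1, 1) with (-1, -1, -1).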

## LRC Seminar (cont)

In the previous post, I discussed the combination of 1/2 and 2/1 as the combining of two equal numbers with opposite “directions.” This seems strange, since, in ordinary arithmetic, the sum of these two numbers is taken as the sum of a fraction and a whole number, which, by definition, can’t be equal to one another.

However, as soon as it’s understood that the interpretation of reciprocal numbers need not be quantitative, but can be operational, and that there are two such operational interpretations, things begin to clear up. The first is the ordinary interpretation of the division of whole numbers. Under this interpretation, 1/2 = .5 and 2/1 = 2, but, under the second interpretation, 1/2 = -1 and 2/1 = +1. This shows us that, under the first interpretation of division, 2/1 = 2 is actually +.5, the inverse of 1/2 = -.5, when we take reciprocal “directions” into account.
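The two operational interpretations can be written out explicitly. A minimal Python sketch, where the function names `slash` and `pipe` are my own labels for the two interpretations, not standard notation:

```python
# Two operational interpretations of the reciprocal number a:b.
# Under the slash, a/b is the ordinary quotient; under the pipe, a|b is
# the signed difference a - b, so the "direction" is explicit in the sign.

def slash(a, b):
    return a / b   # quantitative interpretation: 1/2 -> 0.5, 2/1 -> 2.0

def pipe(a, b):
    return a - b   # "directional" interpretation: 1|2 -> -1, 2|1 -> +1

print(slash(1, 2), slash(2, 1))  # 0.5 2.0
print(pipe(1, 2), pipe(2, 1))    # -1 1
```

The point of the pipe interpretation is that 1|2 and 2|1 come out as equal magnitudes with opposite signs, which the ordinary quotient obscures.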

Of course, it’s true that we can’t ignore the difference that the “direction” of a reciprocal number makes in the relative value, because 2 (+.5 in disguise, we might say) is four times greater than -.5 in magnitude. For example, if we divide +.5 by -.5 on a calculator, we get

.5/-.5 = -1,

not 4. Hence, we have to recognize that +.5 is the operational interpretation of 2/1, but that

1/2 * 2/1 = 2/2 = 1/1 = 1,

just as

.5 * 2 = 1.

In other words, when using reciprocal numbers, it’s best not to interpret the value of the reciprocal relation until the arithmetic is completed, in order to avoid error and confusion. Indeed, to help eliminate confusion as much as possible, we use a different symbol, the pipe symbol, to indicate the reciprocity of the number when it is to be interpreted under addition,

1|2 + 2|1 = 3|3 = 1|1 = 0 (i.e. -1 + 1 = 0).

When the reciprocal relation is to be interpreted under multiplication, we use the customary symbol, the slash symbol, to indicate the reciprocity of the number,

1/2 * 2/1 = 2/2 = 1/1 = 1 (i.e. .5 * 2 = 1).

It has been suggested that we need a different symbol for the sum operation to avoid the confusion with ordinary arithmetic, where

1/2 + 2/1 = .5 + 2 = 2.5

and

-1 * 1 = -1.

However, it’s clear that this is not necessary, if we understand that the addition operation is always used with the reciprocity indicated by the pipe symbol, and the multiplication operation is always used with the reciprocity indicated by the slash symbol. In both cases, as long as numerators are combined with numerators, and denominators with denominators, under the appropriate binary operation for the indicated reciprocity, no confusion results.
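The component-wise rule just stated can be sketched in Python, representing a reciprocal number as a (numerator, denominator) pair (the pair encoding and the function names are my own illustration):

```python
# Combine reciprocal numbers component-wise: pipe numbers add, slash
# numbers multiply, and each result is interpreted only at the end.

def pipe_add(x, y):
    """1|2 + 2|1 = 3|3, interpreted as 3 - 3 = 0."""
    return (x[0] + y[0], x[1] + y[1])

def slash_mul(x, y):
    """1/2 * 2/1 = 2/2, interpreted as 2 / 2 = 1."""
    return (x[0] * y[0], x[1] * y[1])

a, b = (1, 2), (2, 1)
s = pipe_add(a, b)
p = slash_mul(a, b)
print(s, s[0] - s[1])   # (3, 3) 0
print(p, p[0] / p[1])   # (2, 2) 1.0
```

Deferring the interpretation until after the component-wise arithmetic is exactly the discipline recommended above: the pipe sum lands on the additive identity 0, and the slash product on the multiplicative identity 1.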

What about combined operations? For example, what is the meaning of

(1|2)/(2|1) = -1/1 = -1, or (1/2)|(2/1) = (.5)|(2) = -1.5?

This is problematic, because “direction” is only defined in the reciprocal relation of whole numbers, not in the numbers themselves. Since the numerators and denominators are inverses of each other, the operations should yield the appropriate identities of the respective groups (i.e. 1 and 0). However, if we do what we have always done in the ordinary arithmetic of fractions, invert and multiply the denominator, we get, for the slash reciprocity,

(1|2)/(2|1) = (1|2) * (1|2) = (-1) * (-1) = 1.

If we do the same thing for the piped reciprocity, that is, if we invert the denominator and the operation, meaning we invert and add instead of subtract, we have to recognize that the inversion doesn’t change the “direction” of the denominator,

(1/2)|(2/1) = (1/2) + (1/2) = (-.5) + (+.5) = 0,

which yields the respective identities in each case, as required.
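Both combined operations can be checked mechanically. Here is a hedged Python sketch, encoding each reciprocal number as a (numerator, denominator) pair and following the invert-and-multiply and invert-and-add rules just described (the helper names are mine, not established notation):

```python
# Combined operations on reciprocal numbers. Dividing pipe numbers inverts
# the denominator and multiplies the interpreted values; piping slash
# numbers inverts the denominator and adds, with the denominator keeping
# its own original "direction" (sign) despite the inversion.

def pipe_value(x):
    return x[0] - x[1]   # 1|2 -> -1, 2|1 -> +1

def slash_value(x):
    return x[0] / x[1]   # 1/2 -> 0.5, 2/1 -> 2.0

def divide_pipes(x, y):
    inv = (y[1], y[0])   # invert the denominator: 2|1 -> 1|2
    return pipe_value(x) * pipe_value(inv)

def pipe_slashes(x, y):
    inv = (y[1], y[0])   # invert the denominator: 2/1 -> 1/2 ...
    sx = -1 if x[0] < x[1] else 1
    sy = -1 if y[0] < y[1] else 1  # ... but it keeps its own "direction"
    return sx * slash_value(x) + sy * slash_value(inv)

print(divide_pipes((1, 2), (2, 1)))  # (-1) * (-1) = 1
print(pipe_slashes((1, 2), (2, 1)))  # (-.5) + (+.5) = 0.0
```

As required, the slash combination of pipe numbers lands on the multiplicative identity 1, and the pipe combination of slash numbers lands on the additive identity 0.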

I won’t be able to go into this level of detail in the presentation, certainly, but if the question comes up, I’ll be prepared with the answer: The “directions” of numbers are conserved in the sum and multiplication (subtraction and division) operations of reciprocal numbers. Once that is established, I will proceed to show how we find the third property of numbers, multiple dimensions, clarifying the difference between the powers of a number, as multiple factors, and the dimensions of a number, as independent sets of reciprocal, or dual, “directions.”

That will be in the next post for sure this time.