
Section 3: Inner and Outer Products and the Dual Operation

----Representation Freedom----

Modern theories of physics all tend to rely upon a belief that the laws expressed in the theory should be invariant with respect to the reference frame in which we choose to represent them. Some physicists go a step further and state that the laws should be independent of representation altogether, even if they lack a tool to write them that way. Geometric algebras provide a technique for expressing objects while postponing the choice of representation until the time comes to actually calculate some quantity. While this representation-free technique is not fundamentally revolutionary, it does lead to a great reduction in the complexity of the notation required to express physical laws.

To a physicist, a reference frame is a set of directions and associated relationships that is used as a backdrop to measure all other objects and interactions. These relations usually include statements requiring the directions to be mutually perpendicular (orthogonality) and have equal length. When a reference frame is chosen, the physical laws and objects can be rendered in that frame. Those renderings are said to be made in a representation.

A plane segment written as M does not explicitly show a reference frame, so the expression is representation-free. That same plane segment written as ∑Mijeij does show the reference frame, so it is not representation-free. Previous statements in earlier sections had equals signs between these two expressions. It would be more accurate to put in a sign that represents a rendering operation into a particular reference frame.

Writing in a representation-free style reduces the typographic complexities an author faces when writing expressions. It also reduces the notational clutter a student faces when learning a new concept. These reductions come at a price, however. When the reference frame is explicit, a reader can tell at a glance that M is a plane segment. It is not so obvious in the representation-free style.

In this section and elsewhere, the representation-free style will be used wherever possible. Reference frames will only be shown where there is a significant value added in doing so.

----Prelude----

To properly introduce the inner product, we must develop a part of trigonometry. However, the inner product is not something fundamentally new, so the reader should not be scared off yet. It is a useful combination of regular multiplication and addition, both of which the reader learned about in section one. The inner product happens to reproduce something related to what we mean by 'projection' and trigonometry helps turn geometry's qualitative statements into algebra's quantitative ones.

To start, it is necessary to translate what is meant by projection into our current language. In the purest geometric sense, projection collapses an object onto a reference object. Projecting a line segment onto a reference line produces another line segment that happens to be directed along the reference line. Hold your hand out in sunlight and look at the shadow your palm makes on the ground. The shadow is a projection of a plane segment onto a plane.

----Technical Note----

The shadow of your hand is also a projection of a three dimensional object onto a two dimensional surface. The extra complexity we could explore here won't help us much with our understanding of the inner and outer products, so we will let it drop for now.

The geometric construction for a projected line segment demonstrates a recipe that does not require a numeric measurement of the magnitude called angle. It is assumed the original line segment is drawn at some angle relative to the reference line and that right angles are defined along with some technique for knowing when two angles are of the same size. The recipe for projection will be shown here to set up our meaning for the inner product.
Recipe: Project the line segment AB onto the reference line AO. See Figure 2.

Description:

1. Draw a circle around B large enough to intersect AO at two distinct points. Label those points P and Q.
2. Pick a single length large enough to serve as a radius for circles centered on P and Q, and draw both circles so that they intersect at two distinct points. Label those two intersection points R and S.
3. The points B, R, and S are collinear. Draw the line segment connecting them and label its intersection with AO as the point Z.

The line segment AZ is the projection of AB onto AO.
The line segment BZ is the perpendicular remainder of the projection. BZ is effectively subtracted from AB to produce AZ.
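For readers who want a numeric cross-check of the recipe, the same projection can be computed with coordinates. The dot-product shortcut below is a convenience of this sketch, not part of the geometric construction, and the sample numbers are arbitrary:

```python
def project(ab, ao):
    """Return (AZ, BZ): the projection of the 2-D vector ab along ao,
    and the perpendicular remainder, both as (x, y) tuples."""
    dot = ab[0] * ao[0] + ab[1] * ao[1]
    norm2 = ao[0] ** 2 + ao[1] ** 2
    t = dot / norm2
    az = (t * ao[0], t * ao[1])            # component along AO
    bz = (ab[0] - az[0], ab[1] - az[1])    # perpendicular remainder
    return az, bz

az, bz = project((3.0, 4.0), (1.0, 0.0))   # AO along the x-axis
print(az)   # (3.0, 0.0): AZ lies along AO
print(bz)   # (0.0, 4.0): BZ is perpendicular to AO
```

Subtracting BZ from AB leaves AZ, just as the recipe describes.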

The first concept we need from trigonometry is a magnitude that geometers refer to as 'angle'. Some would argue that we have already introduced it with our recipe for projection. With trigonometry, however, we must go a step further and define angles numerically. This step is what makes trigonometry a combination of geometry and algebra. Most readers will already have an intuition for this magnitude and what may be done with it. This intuition is rarely complete, though, so more information is developed here to bring everyone to the same level.

The first departure we make from what most readers know is a change of units. We shall measure our angles in radians instead of degrees. A full circle can be thought of as an angle of 360 degrees or 2π radians, so if one should ever wish to convert radians to degrees, multiply our numbers by 360/2π.
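As a quick sanity check of the conversion factor on a couple of familiar angles:

```python
import math

def to_degrees(radians):
    """Convert radians to degrees by multiplying by 360 / (2 pi)."""
    return radians * 360 / (2 * math.pi)

print(to_degrees(2 * math.pi))   # a full circle: ~360 degrees
print(to_degrees(math.pi / 2))   # a right angle: ~90 degrees
```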

The second departure involves definitions for different kinds of angles. Almost everyone is familiar with circular angles, but few have seen hyperbolic angles. There are others, but these two will suffice for us. To ensure the reader can differentiate between the two, both will be carefully defined in the algebraic sense.

Circular Angle:

Imagine a circle of radius R. Imagine two line segments on that circle that start at the center and end at different points on the circle. The arc length between these two points on the circle has a curved length S. The angle between the two line segments is labeled with θ and is numerically equal to S/R. See the left side of Figure 1.
Hyperbolic Angle:

Imagine a hyperbola drawn with both asymptotic lines sketched in for reference. Also imagine two line segments sharing a starting point located at the intersection of the asymptotes and landing at different points on the same branch of the hyperbola. Define R as the length of a line segment from the asymptote intersection to the closest point on either branch of the hyperbola. The arc length between the two end points has a curved length S. The angle between the two line segments is called θ and is numerically equal to S/R. See the right side of Figure 1.

With our concept of a projection and our numeric definition for angles, we can define two pairs of functions that link them and provide the final pieces we need to complete our inner and outer products. Anyone familiar with trigonometry will recognize at least the first pair.
Definition: Cosine(θ)---shorthand notation is cos(θ)

Two line segments sharing a starting point and having two distinct end points can be used to represent a circular angle θ. Project one of the line segments onto the line defined by the other. The cosine of the angle is numerically equal to the length of the line segment projection divided by the length of the line segment before projection.

Note that the circular angle uses the first line segment as a radius for the defining circle and the reference line for later projections.

Definition: Sine(θ)---shorthand notation is sin(θ)

Two line segments sharing a starting point and having two distinct end points can be used to represent a circular angle θ. Project one of the line segments onto the line defined by the other. Draw the perpendicular remainder. The sine of the angle is numerically equal to the length of the perpendicular remainder divided by the length of the line segment before projection.

Note that the circular angle uses the first line segment as a radius for the defining circle and the reference line for later projections.

There are similar definitions for hyperbolic functions where hyperbolic angles are used instead of circular angles. The functions are cosh(θ) and sinh(θ) respectively. It may not be obvious to someone with only a little exposure to trigonometry why we would define hyperbolic versions of the trigonometric functions. They are needed, however, when dealing with inner and outer products of objects whose squares are mixed, some positive and some negative.
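The two pairs of functions satisfy the identities cos²(θ) + sin²(θ) = 1 and cosh²(θ) − sinh²(θ) = 1, which in the circular case is just Pythagoras applied to the projection and its perpendicular remainder. A quick numeric spot check (the angle 0.7 is arbitrary):

```python
import math

theta = 0.7   # an arbitrary angle, in radians

# circular: projection (cos) and perpendicular remainder (sin) on a unit circle
print(math.cos(theta) ** 2 + math.sin(theta) ** 2)    # ~1.0

# hyperbolic: the analogous pair built on a unit hyperbola
print(math.cosh(theta) ** 2 - math.sinh(theta) ** 2)  # ~1.0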

----Inner Product----

Now we are finally ready to describe our inner product. There are a few simple things it is expected to do, so they will be given first and then a descriptive definition will be given that satisfies them.

1. We expect the inner product of two line segments to produce a number equal to the product of their lengths times the cosine of the angle between them. This reproduces the dot product defined by users of vectors. Technical Note--- If the line segments have a mix of positive and negative squares, we use cosh(θ) instead. This doesn't happen in R(3,0), but it will in R(3,1).
2. We also expect the inner product of two plane segments to be related to the inner product of two related line segments drawn perpendicular to the plane segments in the right handed sense. This expectation is related to the right hand rule invented for cross products of vectors in a three dimensional world.
3. Finally, we expect the inner product will usually have a lower ranked result than the ranks of any of the operands. At worst, the result will have a rank equal to that of the lowest ranked operand. We expect this behavior because projections should never have a higher rank than the object being projected.
Now it is time to write out the definition for the inner product. We will use the mid-height dot (·) to represent the inner product.
Definition: A · B where A and B are from the same geometric algebra
Rule 1: If A, B are of pure rank then
A · B = 1/2 (AB + BA) if the ranks of A and B add to an even number or
A · B = 1/2 (AB - BA) if the ranks of A and B add to an odd number

Rule 2: If A or B are of mixed rank, break each into a sum of pure ranked objects and distribute (·) across each sum. After distribution use Rule 1 for each term.
A = ∑ Ai where i = 0, 1, 2, 3 and it represents geometric rank
B = ∑ Bj where j = 0, 1, 2, 3 and it represents geometric rank
So A · B = ∑ Ai ·∑ Bj
= ∑ ∑ Ai · Bj

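For readers who like to check such rules mechanically, here is a minimal sketch in Python. The encoding is invented for this sketch and is not anything from the text: a basis blade is a sorted tuple of indices, a multivector is a dictionary from blades to coefficients, and every e_i squares to +1 as in R(3,0).

```python
def bmul(a, b):
    """Product of two basis blades in R(3,0), blades given as sorted
    index tuples, e.g. bmul((2,), (1,)) -> (-1, (1, 2))."""
    idx, sign = list(a + b), 1
    for _ in range(len(idx)):                 # bubble sort the indices,
        for j in range(len(idx) - 1):         # counting transpositions
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign                  # each swap flips orientation
    out = []
    for e in idx:                             # e_i e_i = +1, so equal
        if out and out[-1] == e:              # neighbours cancel away
            out.pop()
        else:
            out.append(e)
    return sign, tuple(out)

def gmul(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = bmul(ba, bb)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {k: v for k, v in C.items() if v}

def inner(A, B, rank_a, rank_b):
    """Rule 1: A . B = (AB + BA)/2 for even rank sums, (AB - BA)/2 for odd."""
    s = 1 if (rank_a + rank_b) % 2 == 0 else -1
    AB, BA = gmul(A, B), gmul(B, A)
    C = {k: (AB.get(k, 0) + s * BA.get(k, 0)) / 2 for k in set(AB) | set(BA)}
    return {k: v for k, v in C.items() if v}

# (e1 + 3 e2) . (6 e2 - 7 e3): rank-one operands, so the symmetric formula
A = {(1,): 1, (2,): 3}
B = {(2,): 6, (3,): -7}
print(inner(A, B, 1, 1))    # {(): 18.0} -- the scalar 18
```

Only the symmetric cross terms survive, leaving the scalar 18 — the same kind of computation carried out by hand in the examples that follow.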
A few examples will demonstrate whether this definition meets the three expectations we had of it.
Example 11: Perform the inner product for A = e1 + 3e2 and B = 6e2 - 7e3
The sum of the ranks for A and B is two, so rule one requires the use of the symmetric formula.
A · B = 1/2 [ (e1 + 3e2)(6e2 - 7e3) + (6e2 - 7e3)(e1 + 3e2) ]
= 1/2 [ (6e12 + 18e - 7e13 - 21e23) + (-6e12 + 7e13 + 18e + 21e23) ]
= 1/2 [ 18e + 18e ]
= 18e
The rank lowering expectation from expectation three is met. It is not so obvious from this example that expectation one is met, so a couple of examples where the angle between the line segments is known will be given.
Example 12: Perform the inner product for A = 3e1 and B = e3
The sum of the ranks for A and B is two, so rule one requires the use of the symmetric formula.
A · B = 1/2 [ (3e1 e3) + (e3 3e1) ]
= 1/2 [ 3e13 - 3e13 ]
= 0
The magnitude of this result should be zero and it is. This is due to the fact that the two line segments are perpendicular, which makes the cosine of the angle between them vanish since there is no length to the projected line segment of A onto B. This meets expectation one.
Example 13: Perform the inner product for A = 3e1 and B = 4e1
The sum of the ranks for A and B is two, so rule one requires the use of the symmetric formula.
A · B = 1/2 [ (3e1 4e1) + (4e1 3e1) ]
= 1/2 [ 12e + 12e ]
= 12e
Note that the result is the product of the norms for A and B. The angle between A and B is zero, so the cosine of the angle is one. This demonstrates expectation one.
Example 14: Perform the inner product for A = 3e12 and B = 4e12 + 5e13
The sum of the ranks for A and B is four, so rule one requires the use of the symmetric formula.
A · B = 1/2 [ 3e12 (4e12 + 5e13) + (4e12 + 5e13) 3e12 ]
= 1/2 [ -12e + 15e1213 -12e + 15e1312 ]
= -12e + 1/2 [ - 15e23 + 15e23 ]
= -12e
Note that the part of B along e13 did not contribute to the result. This can be understood if A and B are recast as line segments. Let A' = e3 and B' = 4e3 + 5e2. The inner product of A' and B' should have no contribution from the second term of B' since it is strictly perpendicular to all of A'. This is how we meet expectation two. There is a formal way to do this once the dual operation is introduced.
Example 15: Perform the inner product for A = e3 and B = e13 + e23
The sum of the ranks for A and B is three, so rule one requires the use of the antisymmetric formula.
A · B = 1/2 [ (e3)(e13 + e23)-(e13 + e23)(e3) ]
= 1/2 [ (e313 + e323)-(e133 + e233) ]
= 1/2 [ -e1 - e2 - e1 - e2 ]
= -e1 - e2
In this example we see the first case where the projection effect of a higher rank object onto a lower rank object can be seen. The parts of B in the 3-direction were projected out. B itself is a plane segment whose cross section on the 12-plane is a line that runs diagonally between the two axes. The projection effect reduces B to a line segment on that intersection line.

With our definition for an inner product and its relationship to projection, let's look again at example three from section one. In that example, we multiplied two general line segments and wrote a result that avoided expressions that depended on any particular reference frame.

Example 3: (restarted) Multiply two line segments M and N and show the general result.
M N = 1/2 (M N + M N) + 1/2 (N M - N M) [Just adding zero]
= 1/2 (M N + N M) + 1/2 (M N - N M) [Rearranging]

Because M and N are line segments, we can use the symmetric version of our definition for the inner product to rewrite the first term.
= M · N + 1/2 (M N - N M)

The first term is a scalar that must be equal to the product of the lengths of M and N and the cosine of the angle between them. We know this because we built this expectation (#1) into the definition of the inner product.
= |M| |N| cos(θ) e + 1/2 (M N - N M)

This works because the length of Npar from section one is |N| cos(θ). We shall leave the second term as a mystery to be finished after the introduction of the outer product.

----Outer Product----

No new concepts need to be introduced here to proceed with a description of the outer product, so readers need not be concerned with figuring out how to conjure rabbits from magic hats without more training. For those who followed the discussion of projection, though, we shall now focus our attention on the perpendicular remainder labeled as line segment BZ and the shape between the two line segments. Outer products emphasize perpendicular remainders while inner products emphasize parallel projections.

There are, however, some expectations for the outer product that are best explained before the definition as we did for the inner product.

1. The first expectation we have is that the outer product will usually have a higher ranked result than the ranks of any of the operands. At worst, the result will have a rank equal to that of the highest ranked operand. Like the inner product, however, there is a limit to how far this expectation goes. In any geometric algebra there is a highest and a lowest rank object. The outer product will not create anything of a higher rank than the highest rank object. ----Technical Note---- In higher dimensional algebras above R(3,0), this first expectation will be weakened a bit. The definition given below can produce multi-ranked results in higher algebras. More advanced students will probably recognize the exterior product of forms anyway.
2. For our second expectation, we want the outer product of two line segments to be a plane segment with the same area as the parallelogram defined by the line segments. While the inner product used projection to collapse one object onto another, the outer product builds upward to higher ranks by describing the enclosed space between the two operands. Two adjacent legs and the angle between them give enough information about a parallelogram to determine the fence and the area it encloses.
Now it is time to write out the definition for the outer product. We will use the mid-height wedge (^) to represent the outer product.
Definition: A ^ B where A and B are from the same geometric algebra
Rule 1: If A, B are of pure rank then
A ^ B = 1/2 (AB - BA) if the ranks of A and B add to an even number or
A ^ B = 1/2 (AB + BA) if the ranks of A and B add to an odd number

Rule 2: If A or B are of mixed rank, break each into a sum of pure ranked objects and distribute (^) across each sum. After distribution use Rule 1 for each term.
A = ∑ Ai where i = 0, 1, 2, 3 and it represents geometric rank
B = ∑ Bj where j = 0, 1, 2, 3 and it represents geometric rank
So A ^ B = ∑ Ai ^ ∑ Bj
= ∑ ∑ Ai ^ Bj

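Rule 1 for the wedge differs from the inner product only in which half of the product it keeps. The sketch below uses an encoding invented for illustration (sorted index tuples for basis blades, dictionaries for multivectors, e_i² = +1 as in R(3,0)):

```python
def bmul(a, b):
    """Product of basis blades in R(3,0), blades as sorted index tuples."""
    idx, sign = list(a + b), 1
    for _ in range(len(idx)):                 # bubble sort, counting swaps
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign                  # each swap flips orientation
    out = []
    for e in idx:                             # e_i e_i = +1 cancels pairs
        if out and out[-1] == e:
            out.pop()
        else:
            out.append(e)
    return sign, tuple(out)

def gmul(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = bmul(ba, bb)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {k: v for k, v in C.items() if v}

def outer(A, B, rank_a, rank_b):
    """Rule 1: A ^ B = (AB - BA)/2 for even rank sums, (AB + BA)/2 for odd."""
    s = -1 if (rank_a + rank_b) % 2 == 0 else 1
    AB, BA = gmul(A, B), gmul(B, A)
    C = {k: (AB.get(k, 0) + s * BA.get(k, 0)) / 2 for k in set(AB) | set(BA)}
    return {k: v for k, v in C.items() if v}

# (3 e1) ^ (e3): two line segments spanning a plane segment of area 3
print(outer({(1,): 3}, {(3,): 1}, 1, 1))    # {(1, 3): 3.0}
```

The result is a rank-two object, so the rank-raising behavior falls out of the same machinery.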
A few examples will demonstrate how this outer product works and how it meets the expectations listed above.
Example 16: Perform the outer product for A = e1 + 3e2 and B = 6e2 - 7e3
The sum of the ranks for A and B is two, so rule one requires the use of the antisymmetric formula.
A ^ B = 1/2 [ (e1 + 3e2)(6e2 - 7e3) - (6e2 - 7e3)(e1 + 3e2) ]
= 1/2 [ (6e12 + 18e - 7e13 - 21e23) - (-6e12 + 7e13 + 18e + 21e23) ]
= 6e12 - 7e13 - 21e23
The rank raising expectation from expectation one is met. It also shows that part of our second expectation is met since A and B are line segments while the result is a plane segment. A simpler example will show that the area of the plane segment is what we want too.
Example 17: Perform the outer product for A = 3e1 and B = e3
The sum of the ranks for A and B is two, so rule one requires the use of the antisymmetric formula.
A ^ B = 1/2 [ (3e1 e3) - (e3 3e1) ]
= 1/2 [ 3e13 + 3e13 ]
= 3e13
The magnitude of this result should be 3 and it is. This is due to the fact that the two line segments define a rectangular fence that has lengths of 1 and 3 on its sides. Since the rectangle itself has an area of 3, we have met expectation two.
Example 18: Perform the outer product for A = 3e12 and B = 4e12 + 5e13
The sum of the ranks for A and B is four, so rule one requires the use of the antisymmetric formula.
A ^ B = 1/2 [ 3e12 (4e12 + 5e13) - (4e12 + 5e13) 3e12 ]
= 1/2 [ -12e + 15e1213 +12e - 15e1312 ]
= 1/2 [ -15e23 - 15e23 ]
= -15e23
In this last example we see that the first term of B did not contribute to the result of the outer product. Since the first term and A involve the same plane segment, they are effectively parallel. Outer products emphasize objects that are effectively perpendicular, so it should not be surprising that the first term of B does not contribute.

Now, let us finish example three from section one from where we left off. With our definition of the outer product, the last mysterious term can be identified.

Example 3: (finished) Multiply two line segments M and N and show the general result.
M N = 1/2 (M N + M N) + 1/2 (N M - N M) [Just adding zero]

Because M and N are line segments, we recognize the first term as the inner product.
= M · N + 1/2 (M N - N M)

Because M and N are line segments, we recognize the second term as the outer product.
= M · N + M ^ N

We have already seen that the inner product produces a scalar in this case that is equal to the product of the magnitudes of the two line segments and the cosine of the angle between them. The second term is similar, with sine replacing cosine and a plane segment replacing the scalar; this can be seen by writing a formula for the area of a parallelogram. When two adjacent sides are known along with the angle between them, the area is the product of the sides and the sine of the angle. This gives us exactly what we wrote for example three in section one.
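The split M N = M · N + M ^ N can be spot-checked with plain coordinates for two line segments in the 12-plane. The numbers below are arbitrary, and dot and wedge are just the component formulas for the scalar and e12 parts:

```python
import math

M = (3.0, 1.0)   # M = 3 e1 + 1 e2
N = (1.0, 2.0)   # N = 1 e1 + 2 e2

dot   = M[0] * N[0] + M[1] * N[1]   # scalar part of M N
wedge = M[0] * N[1] - M[1] * N[0]   # e12 part of M N

theta = math.atan2(N[1], N[0]) - math.atan2(M[1], M[0])   # angle from M to N
norms = math.hypot(*M) * math.hypot(*N)                   # |M| |N|
print(dot,   norms * math.cos(theta))    # ~5.0 each: |M||N| cos(theta)
print(wedge, norms * math.sin(theta))    # ~5.0 each: |M||N| sin(theta)
```

The scalar part reproduces the projection formula and the e12 coefficient reproduces the parallelogram area, as claimed.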

----Dual Operation----

There is one last operation to describe before we consider the introductory material to be complete. This last operation helps connect low and high ranked objects to one another and the inner and outer products together. With the 'dual' operation, the old definition of the cross product used by a student first learning about vectors can be linked to our outer product. The dual operation is useful in many ways and makes good intuitive sense to boot.

Think about the number of distinct, independent objects of each rank. In a three dimensional world there is one type of scalar, three line segments, three plane segments, and one volume segment. With these eight basis objects, one can write any other object as a linear combination as we did in section one. In a four dimensional world there is one scalar, four line segments, six plane segments, four volume segments and one hypervolume segment.

The number of independent objects of each rank is not mysterious or accidental. These numbers can be found by starting with the set of digits that represent the possible indices and counting its subsets. The number of line segments available is the same as the number of distinct one element subsets, the number of plane segments is the same as the number of distinct two element subsets, and the number of volume segments is the same as the number of distinct three element subsets.

----Technical Note----

Those who have already learned about permutations and combinations know the technical details. The number of objects of each rank happens to be the same as the appropriate binomial coefficient.
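As the note says, the counts are binomial coefficients; a one-liner confirms the 1-3-3-1 and 1-4-6-4-1 patterns quoted above:

```python
from math import comb

for n in (3, 4):   # three- and four-dimensional worlds
    print(n, [comb(n, k) for k in range(n + 1)])
# 3 [1, 3, 3, 1]
# 4 [1, 4, 6, 4, 1]
```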

There is an interesting symmetry in the number of objects of each rank. There is always one scalar and one object of the highest possible rank. The highest possible ranked object is often named a pseudo-scalar when it is convenient to do so. In a three-dimensional world, the volume segment is the pseudo-scalar.

The number of line segments is equal to the number of possible indices, and the number of objects with a rank just below the pseudo-scalar is the same. It is as if a mirror were placed in the middle of all the possible ranks: the number of objects at each high rank matches the number at the corresponding low rank when one counts down from the pseudo-scalar and up from the scalar.

An even more curious symmetry can be found between objects of a particular rank and their pseudo counterparts. If one starts with a basis object and multiplies it by the pseudo-scalar, one gets the pseudo counterpart. This lets us define the dual operation: the operation that takes one input and outputs its pseudo counterpart.

Definition: The dual of an object is the object after it has been multiplied on the left by the pseudo-scalar.
Example 19: Perform the dual operation on A = 4e12 + 5e13
The dual of A is the pseudo-scalar times A. In R(3,0) the volume segment is the pseudo-scalar.
A' = e123 A = e123 (4e12 + 5e13)
= 4e12312 + 5e12313
= -4e3 + 5e2
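The dual can be checked with the same kind of blade arithmetic used throughout this section. The encoding below (sorted index tuples for blades, dictionaries for multivectors, e_i² = +1) is shorthand invented for this sketch:

```python
def bmul(a, b):
    """Product of basis blades in R(3,0), blades as sorted index tuples."""
    idx, sign = list(a + b), 1
    for _ in range(len(idx)):                 # bubble sort, counting swaps
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    out = []
    for e in idx:                             # e_i e_i = +1 cancels pairs
        if out and out[-1] == e:
            out.pop()
        else:
            out.append(e)
    return sign, tuple(out)

def gmul(A, B):
    """Geometric product of multivectors stored as {blade: coefficient}."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = bmul(ba, bb)
            C[blade] = C.get(blade, 0) + s * ca * cb
    return {k: v for k, v in C.items() if v}

I = {(1, 2, 3): 1}                  # the pseudo-scalar of R(3,0)
A = {(1, 2): 4, (1, 3): 5}          # 4 e12 + 5 e13
print(gmul(I, A))                   # {(3,): -4, (2,): 5}, i.e. -4 e3 + 5 e2
print(gmul(I, I))                   # {(): -1}: the pseudo-scalar squares to -1
```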
The dual operation lets us clear up the last bit of mystery surrounding example 14 where we loosely talked about 'recasting plane segments as line segments.' The recasting operation implied is the dual operation just defined.
Example 14 (redone): Perform the inner product for A = 3e12 and B = 4e12 + 5e13
The sum of the ranks for A and B is four, so rule one requires the use of the symmetric formula.

A · B = 1/2 [ 3e12 (4e12 + 5e13) + (4e12 + 5e13) 3e12 ]
= 1/2 [ -12e + 15e1213 -12e + 15e1312 ]
= -12e + 1/2 [ - 15e23 + 15e23 ]
= -12e

Note that we can write A as e123 A' if A' = 3e3, and B as e123 B' if B' = 4e3 - 5e2.

So...

A · B = (e123 A') · (e123 B')
The ranks of both terms still add to four, so the same symmetric rule must be used.
= 1/2[(e123 A') (e123 B') + (e123 B') (e123 A')]

We know from the multiplication table for R(3,0) that the pseudo-scalar e123 commutes with all other basis objects in the algebra, so we can pass it around A' and B' without changing the result.

= 1/2[(e123 e123 A' B') + (e123 e123 B' A')]
= -1/2[ A' B' + B' A']
= - A' · B'

For proof, just work out - A' · B'.

= -1/2[3e3(4e3 - 5e2) + (4e3 - 5e2)3e3]
= -1/2[12e + 15e23 + 12e - 15e23]
= -12e
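The identity A · B = - A' · B' can also be spot-checked with ordinary dot-product arithmetic, writing A' and B' as coordinate triples in the (e1, e2, e3) frame:

```python
Ap = (0, 0, 3)     # A' = 3 e3
Bp = (0, -5, 4)    # B' = 4 e3 - 5 e2
print(-sum(a * b for a, b in zip(Ap, Bp)))   # -12, matching A . B above
```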

There are a lot more interesting identities one can find involving the dual, but we shall leave them to the reader or later problems. One curious identity is a link between the inner and outer products that appears when one tries to commute the pseudo-scalar through the operands.
----Technical Note----

The final thing we will say here about the dual operation is that it is a vital link between how students who learn the cross product of vectors handle multiplication and how we do it with geometric algebras. The cross product of vectors takes in two vectors and outputs one defined to be perpendicular to the first pair in a right-handed sense. In our language, the final vector they get is the dual of the plane segment they would have produced if they had used an outer product instead of a cross product. If you know the right-hand rule, your fingers sweep out the plane of the first two vectors and your thumb points in the direction of the result. To us, the fingers sweep out the plane segment and the thumb points in the direction of the dual of that plane segment.
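To make the correspondence concrete, here is a small sketch comparing the two products on plain coordinate triples. The component formulas are the standard ones; the point being illustrated is that the cross product's result vector lies along the lone basis direction missing from the wedge's plane segment (the orientation sign is fixed by whichever right-hand convention one adopts):

```python
def cross(u, v):
    """Right-handed cross product of 3-D vectors (x, y, z) = (e1, e2, e3)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def wedge_coeffs(u, v):
    """Coefficients of (e12, e13, e23) in the outer product u ^ v."""
    return (u[0] * v[1] - u[1] * v[0],
            u[0] * v[2] - u[2] * v[0],
            u[1] * v[2] - u[2] * v[1])

u, v = (2, 0, 0), (0, 3, 0)        # 2 e1 and 3 e2
print(cross(u, v))                 # (0, 0, 6): the vector 6 e3
print(wedge_coeffs(u, v))          # (6, 0, 0): the plane segment 6 e12
```

The cross product hands back a vector perpendicular to the 12-plane, while the outer product hands back the 12-plane segment itself; the dual is the bridge between them.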

Summary

In this section, the inner and outer products were described and demonstrated with examples. Both were related to numeric angles and the concept of projection even though the actual recipes for them do not require such calculations. With both of these secondary products, we cleaned up one example from section one. Finally, the dual operation was defined and demonstrated. With the dual, we cleaned up one mystery about the connection between the outer product and the cross products used for vectors.

With a bit of experience with these lessons, a student will have enough of the tools they need to begin using geometric algebras to represent physical objects. Properties and property currents form the foundations of physical theories, so experience with geometric objects and the tools used to manipulate them is necessary before traveling further into the physics we can express using geometric algebras.

----Problems for Section 3----

11: In section three, a recipe was described for projecting a line segment onto a reference line. Write up something similar describing how to project a plane segment onto a reference plane. Remember that you can't use numeric angles when you stick to standard geometric constructions.

12: Find the inner product of A = 16 e - 9 e3 and B = e12 + 4e123.

13: Find the angle between A = 5 e1 - 9 e3 and B = 2e2 - 3e3.

14: What is the length of A = 3 e1 - 4 e2 - 7 e3 after it has been projected onto the 12-plane?

15: Solve for A · (B · C) when A = e1, B = e2, and C = 15e123

16: Find the outer product of A = 16 e - 9 e3 and B = e12 + 4e123.

17: If A, B, and C are line segments, simplify the following expression. A · ( B ^ C ).

18: If A, B are line segments and C is a plane segment, simplify the following expression. A · ( B ^ C ).

19: If A is a plane segment and B, C are line segments, simplify the following expression. A ^ ( B ^ C ).

20: Does the dual operation commute through the inner and outer products in R(3,0)? If so or not, prove it. Does your proof work for other algebras besides R(3,0)?

 Introduction to Geometric Algebra (part three) | 74 comments (67 topical, 7 editorial, 1 hidden)
 Heh, that's funny. (4.00 / 1) (#6) by xriso on Sun Oct 20, 2002 at 10:59:34 AM EST

 Last night I was reading up on quaternions, and first thing I look at today on kuro5hin starts talking about an inner and outer product. :-) (By the way, quaternions seem to be extremely useful for computer simulations of rotational kinematics) -- *** Quits: xriso:#kuro5hin (Forever)
 quaternions (4.00 / 1) (#8) by adiffer on Sun Oct 20, 2002 at 03:00:35 PM EST

 The quaternion algebra is R(0,2) in case you are curious. The vector space defined by the quaternions is what most people use, though.  It is good enough to get by for rotations in 3-space. -Dream Big. --Grow Up.[ Parent ]
 What is a kinematic? (none / 0) (#11) by kholmes on Sun Oct 20, 2002 at 06:44:09 PM EST

 Sounds like one of the words they make up on Star Trek to make sure none of the viewers knows what it means. Anyway, tis not in my dictionary. If you treat people as most people treat things and treat things as most people treat people, you might be a Randian.[ Parent ]
 Kinematics (none / 0) (#12) by Cameleon on Mon Oct 21, 2002 at 05:55:53 AM EST

 From dictionary.com: The science which treats of motions considered in themselves, or apart from their causes; the comparison and relation of motions. [ Parent ]
 I've given up on dictionary.com (none / 0) (#22) by kholmes on Tue Oct 22, 2002 at 01:42:30 AM EST

 My dictionary has too many words that dictionary.com doesn't, and is actually easier to reference. But, I suppose there are the exceptions. If you treat people as most people treat things and treat things as most people treat people, you might be a Randian.[ Parent ]
 Dictionary.com (none / 0) (#26) by Cameleon on Tue Oct 22, 2002 at 02:48:50 AM EST

 Well, it's really nice as a quick reference, or when you don't have a dictionary nearby. Or, when you're like me, and you don't have a good English dictionary, just a Dutch one, and Dutch-English ones. [ Parent ]
 Explanation of "kinematics" (none / 0) (#18) by nuntius on Mon Oct 21, 2002 at 11:11:12 PM EST

Here's a working definition of kinematics:  the analysis of motions. Technical explanation: There are two types of kinematics - forward and reverse. Forward kinematics takes a set of transformations and calculates the resulting coordinate frame. Reverse kinematics takes a desired coordinate frame and determines the transforms required to obtain that frame. Examples: Forward kinematics - A robotic arm has a shoulder, elbow, wrist, and hand.  Given the angles of rotation at each joint, where is the hand? Reverse kinematics - The robotic hand needs to go to a particular point (e.g. to pick something up).  What joint angles are required to reach that location? [ Parent ]
 Excellent! (4.00 / 1) (#7) by awgsilyari on Sun Oct 20, 2002 at 01:08:13 PM EST

Now, can we have an article on tensors? It would go well to complement the particle physics columns. -------- Please direct SPAM to john@neuralnw.com
 tensors (5.00 / 2) (#9) by adiffer on Sun Oct 20, 2002 at 03:13:07 PM EST

If someone else wants to write it, that would be great.  One of my goals in life is to write a way around them with geometric algebras.  Tensors are inherently not representation-free, so while they are very powerful tools, they rub me the wrong way.  I feel they hide some of the beauty of the physics we want to represent with them and stuff up some of our creative channels that rely heavily upon intuition. Think about this for a moment or two.  With R(3,1) I can write the electromagnetic field with a bivector function.  The stress-energy of that field, though, cannot be written as a single function in that same algebra.  That tensor is a symmetric one, so rank two objects won't work.  People who write E&M in tensor notation can write both the field and the stress-energy of the field without any hitches.  Is that a problem for the followers of geometric algebra?  I think not since I think the tensor folks are guilty of data-type casting when they use that symmetric tensor.  They are doing something important and don't realize it.  My aim is to uncover what that important thing is. Tensors are an excellent tool and an excellent start, but I think there is much more going on there than most realize.  It's the same way with matrices in general.  Pick a few special ones and there could be a lot of interesting structure behind those indices like vector spaces, groups, and so on. -Dream Big. --Grow Up.[ Parent ]
 here's hoping (none / 0) (#16) by adiffer on Mon Oct 21, 2002 at 05:45:29 PM EST

 I would like to see your version of this lesson then.  I was always bothered by them, but I'm willing to accept it all as a failing of the physicists to teach the subject correctly. If you were to write this stuff up, you might help me get past that block and advance the software library on which I've been working.  I'm struggling a bit with the tensor equivalents. -Dream Big. --Grow Up.[ Parent ]
 I'd like to see it (none / 0) (#27) by adiffer on Tue Oct 22, 2002 at 04:51:20 AM EST

 I would like to see your write-up of this.  Every different version helps those of us feeling thick headed at one time or another. I was just working another problem with a three-axis magnetometer.  I'm supposed to show my friends which direction the vehicle is facing in their familiar yaw/pitch/roll coordinates.  After working it awhile, I wound up writing an object that looked like an element of a space that is a tensor product of two algebras.  What got me thinking is that the object behaves like a map from one algebra to another.  I think I'm close to what you're saying, but a few pictures and other words sure would help. -Dream Big. --Grow Up.[ Parent ]
 sorry to butt in (none / 0) (#35) by martingale on Wed Oct 23, 2002 at 01:30:58 AM EST

 You've got a nice start of an explanation, but there's a more algebraic version which may clear things up: Let X, Y be vector spaces with bases x_1,..., x_n and y_1, ..., y_m say. The tensor product of X and Y is the vector space X × Y with mn dimensions and basis given by all the possible formal products x_i × y_j. (Normally, × is written inside a circle, but HTML doesn't allow this?) In general, X × Y is different from Y × X. To get the tensor product of two arbitrary vectors v in X and w in Y, express them in their bases and pass the × through the sums in the obvious way. The result is a vector in the space X × Y whose mn components are the products v_i w_j. It's common to write it as a matrix. Because it lives in the tensor product of X and Y, it's common to call any vector in X × Y a tensor. So a tensor is a vector which lives in a space which happens to be a tensor product of two or more simpler vector spaces. The interesting tensors in physics live in the tensor product of the tangent and cotangent spaces. Now I don't want to get into what those are, but the vectors belonging to the (co)tangent spaces can operate on functions in simple ways. By building tensor products of those vectors (ie now called tensors) we can build more complicated operators on functions. So if we have a tensor T living in the tensor product of spaces X, Y, Z, U, W say, then T is a vector in X × Y × Z × U × W, which is a huge vector space with dim(X)dim(Y)dim(Z)dim(U)dim(W) dimensions, and we can list all the components of T by a set of numbers T_{ijk}^{lm} say. By convention in physics, upper indices refer to those spaces (U, W here) which are tangent spaces, while lower indices refer to the cotangent spaces. The indices can be raised and lowered by certain operations which turn the relevant tangent spaces into cotangent spaces and vice versa. Uh, I'll stop here, carry on ;-) [ Parent ]
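The component description above is easy to make concrete. A small sketch in plain Python (no libraries; the numbers are invented):

```python
# Components of the tensor product of v in X and w in Y in the basis
# {x_i (x) y_j}: the mn products v_i * w_j, flattened row by row.
def tensor_product(v, w):
    """Flattened (row-major) components of v (x) w."""
    return [vi * wj for vi in v for wj in w]

v = [1, 2, 3]              # a vector in a 3-dimensional X
w = [4, 5]                 # a vector in a 2-dimensional Y
t = tensor_product(v, w)   # lives in the 6-dimensional X (x) Y
```

Note that tensor_product(w, v) lists the same six numbers in a different order, reflecting the point above that X × Y and Y × X differ.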
 Another way to look at tensors (none / 0) (#39) by senderista on Wed Oct 23, 2002 at 01:58:23 PM EST

 I prefer to think of tensors as "scalar-valued multilinear maps" rather than "formal products". The key is to look at any vector space V as a set of scalar-valued linear maps on another vector space V*. You can do this by defining the dual space V* (the vector space of scalar-valued linear maps on V), and then demonstrating the canonical isomorphism between V and the "dual of the dual" V**. This is why every vector is a tensor - any vector v in V is a linear map on the dual space V*. In physicists' terms, every vector is a covector on the cotangent space. Then it's easy to think of tensors as "products" of vectors, since given two vector spaces U and V and bases u^i and v^j, we can show that the set of bilinear maps on U* X V* defined by uv^ij(f, g) = u^i(f) * v^j(g), for all f in U* and g in V*, is a basis for the tensor product space U (x) V. IOW, we've shown that the set of "products" of basis vectors from U and V is a basis for their tensor product. I'll stop there as I'm not sure anyone is still reading :-) "It amounts to the same thing whether one gets drunk alone, or is a leader of nations." -- Jean-Paul Sartre [ Parent ]
 duality (none / 0) (#53) by martingale on Thu Oct 24, 2002 at 01:17:28 AM EST

 That's a nicer definition than I've given, but perhaps less elementary (if that is possible in a discussion of tensors :-) It also suffers from another problem, namely that once you switch to infinite dimensional spaces, in general V is only a subspace of V^{**}, and sequences of tensor products of vectors u_n \otimes v_n, when they converge, don't necessarily converge to another product of vectors u \otimes v (depends on the topology used). [ Parent ]
 Right (none / 0) (#64) by senderista on Thu Oct 24, 2002 at 08:31:33 PM EST

 Yep, I should have said "finite-dimensional". The proof that the canonical mapping from V->V** is an isomorphism infers from dim(V) = dim(V**) that it's onto, given that it's 1-1, and this is invalid when V is infinite-dimensional. "It amounts to the same thing whether one gets drunk alone, or is a leader of nations." -- Jean-Paul Sartre [ Parent ]
 Representation-free tensors? (none / 0) (#40) by BlaisePascal on Wed Oct 23, 2002 at 02:25:33 PM EST

 Let's see if I understand this.... You have a vector space V with an orthonormal basis {vi}, and a similarly defined U with basis {uj}. You get the tensor space VU by taking, as a basis, the ordered pairs (vi,uj) of basis vectors from V and U. You get the tensor product of two vectors v in V and u in U by finding the sets {ai} and {bj} such that v = sum(ai vi), and likewise for u, then forming sum(ai bj (vi,uj)). Here's the rub.... In addition to {vi} as a basis for V, there is also {v'i}, and a corresponding basis for the tensor space on V and U given by {(v'i,uj)}, and the tensor product of v and u would be given by sum(a'i bj (v'i,uj)). Obviously, the representation of v under the first basis (i.e., {ai}) differs from the representation of v under the second basis ({a'i}), but that doesn't make v itself different. It is not clear, however, that the two forms of the tensor space VXU are equivalent, or that the tensor products vXu within the two forms of the tensor space are equivalent. In most of my dealings with physics, I've noted that the representation of vectors in terms of basis vectors (or, equivalently, decomposition into (x,y,z,...)) is mentioned briefly when introducing vectors or a new concept on them (like dot and cross products, or partial derivatives or integration, etc.), but for the most part, they deal with vectors as vectors. Tensors are different. Every treatment of tensors I've seen deals explicitly with the representation of tensors as n-dimensional arrays of values. The definition of tensor spaces and tensor products you give as the "mathematical" definitions is in terms of the bases of the component vector spaces, and vectors in those spaces represented as linear combinations of the basis vectors. I could be wrong, but what I see adiffer as doing is looking for a method of accomplishing the same thing that tensors are doing while being more representation-independent, the way vectors are. 
I'd like to see the same thing, myself. [ Parent ]
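The equivalence question raised above can at least be checked numerically in a small case: the components of a tensor product change with the basis, but reconstructing the object against either basis gives the same array. A sketch with 2-dimensional spaces and invented numbers (the primed basis is the columns of P):

```python
# A rank-one tensor v (x) u built in two different bases is the same object.
def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def outer(p, q):
    # Components of p (x) q in the ambient (standard) basis.
    return [[pi * qj for qj in q] for pi in p]

v, u = [3.0, 1.0], [2.0, 5.0]          # vectors in standard coordinates
P = [[1.0, 1.0], [0.0, 1.0]]           # columns of P are the primed basis
a = mat_vec(inv2(P), v)                # components of v in the primed basis
b = mat_vec(inv2(P), u)                # components of u in the primed basis

direct = outer(v, u)                   # v (x) u in the standard basis
recon = [[0.0, 0.0], [0.0, 0.0]]       # rebuild it from primed components
for k in range(2):
    for l in range(2):
        ek = [P[0][k], P[1][k]]        # primed basis vector e'_k
        el = [P[0][l], P[1][l]]
        for i in range(2):
            for j in range(2):
                recon[i][j] += a[k] * b[l] * ek[i] * el[j]
```

The component sets {a_i} and {a'_i} differ, but the reconstructed arrays agree entry by entry.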
 Simple definition of tensors (none / 0) (#47) by senderista on Wed Oct 23, 2002 at 05:00:06 PM EST

 Given a set of vector spaces {V_i} defined over the same field F (always the real or complex numbers in physics), their tensor product is defined as the vector space of multilinear (i.e. linear in each argument) functions from the Cartesian product V_1* X V_2* X V_3* ... of the duals of these vector spaces to F. (The dual V* of a vector space V over a field F is just the vector space of linear functions from V to F.) Although this isn't the usual physicist's definition of tensors, the better physics books (e.g. Wald's General Relativity) use this definition (and the worse ones like Weinberg's use the typical coordinate-dependent monstrosity). One more thing: don't use the word "orthonormal". You don't need an inner product to define tensors. "It amounts to the same thing whether one gets drunk alone, or is a leader of nations." -- Jean-Paul Sartre [ Parent ]
 they're equivalent (none / 0) (#54) by martingale on Thu Oct 24, 2002 at 01:25:42 AM EST

 You'll probably like senderista's explanation better. See also my other comments in this thread. BTW For another nice book using coordinate free methods to define tensors, check out either Misner, Thorne and Wheeler, or if you want a small book there's Schutz. [ Parent ]
 That's my problem.... (none / 0) (#48) by BlaisePascal on Wed Oct 23, 2002 at 06:18:53 PM EST

 I'm not trained as a mathematician (or a physicist, for that matter), so I've only run across tensors in conjunction with GR. I am a bit of a knowledge-sponge, and I'm basically trying to garner a self-education in physics and maths.  Unfortunately, access to material that's formal, but understandable, is very difficult. Since I started running into tensors, I've been looking for a good description that I can understand.  One thing that's been lacking is, as you put it, what tensors really mean.  I've seen a distinct lack of info about them except in relation to GR.   Virtually all the sources I've looked at define them in terms of contravariant and covariant indices, products, contraction, etc., but in terms I didn't quite understand -- such as contravariant and covariant indices varying in opposite ways when subjected to transformations, but with no clear examples of how that worked.  I got the impression that vectors were vertical (i.e. nx1 matrices), and covectors were horizontal (i.e. 1xn matrices), and that if you multiply a vector and a co-vector (by matrix multiplication, which in this case is equivalent to a dot-product), you get a scalar, but I had no clear understanding of what was going on. John Baez, in the GR tutorial linked to earlier, has managed to transform my understanding.  He defined a covector as a linear function f(v) from V->R (assuming a vector space over R), with additional restrictions. Since any v is a linear combination of {vi}, f(v) is a linear combination of the f(vi), so f() can be represented as a covector fi = f(vi), with f(v) = f·v (dot-product assumed). If T is a linear transformation on V, then T also transforms the covector f as well.  The other restriction on f is that f(v) = (Tf)(Tv). Or something like that. [ Parent ]
 various definitions (none / 0) (#52) by martingale on Thu Oct 24, 2002 at 12:52:49 AM EST

 The different definitions do get confusing. Perhaps the best thing to keep in mind is that they vary slightly depending on the environment. I gave you an (incomplete) algebraic definition. Physicists dealing with manifolds have more structure, so for example they'll define tensor fields over the whole manifold. It's an extension of the single vector space definition I gave you, since there's a tangent space at each point of the manifold. In the old physics and mathematics textbooks, everything is defined in terms of coordinates rather than coordinate-free. Then the index gymnastics is needed to distinguish covariant and contravariant components. In my example, if you take the tensor T = u \otimes v where u \in X and v \in X^* (dual space) say, this is a mixed covariant and contravariant tensor. In the old notation, T has components T^i_j, because it would be written T = \sum T^i_j e_i \otimes \omega^j, where e_i are basis vectors in X and \omega^j are basis vectors in X^*. The index transformation rules characterize T in terms of the tensor product space X \otimes X^*, ie if T were to live in X^*\otimes X (different order) then its components would transform differently. It gets messy once you realize that you can transform T (living in X \otimes X^*) into T' (living in X^* \otimes X^*) by making use of the duality bracket between X and X^*, which is usually defined in terms of the metric on the manifold. So you get T'_{ij} = \sum_k g_{ik}T^k_j A good rule to keep it straight is to note the location of the components. A superscript indicates a component in the vector space (tangent space for manifolds) because the basis is usually given as a list of vectors e_1,...,e_n where the subscript denotes the label of the basis vector (*), so it's possible to write T^ie_i and omit the sum. A subscript on T means it's a component in the dual space (cotangent space for manifolds) because the basis is usually written \omega^1,...,\omega^n with superscripts. 
That way it's possible to write T_i\omega^i and omit the sum sign. With this convention, it's possible to see at a glance if there's an omitted sum sign in front of an expression, ie if and only if there's a repeated index pair as a superscript and subscript. (*) The components of e_1 (basis vector) are sometimes written e^1,...,e^n, ie e_1 = \sum e^k e_k, where of course e^1 = 1, e^2 =...= e^n = 0. [ Parent ]
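The raising and lowering rule T'_{ij} = \sum_k g_{ik} T^k_j from the comment above can be spelled out in a few lines. The metric and components here are invented toy values:

```python
# Lower the upper index of a mixed tensor T^k_j with a metric g_{ik},
# then raise it back with the inverse metric.
g = [[-1, 0], [0, 1]]        # toy diagonal metric
ginv = [[-1, 0], [0, 1]]     # its inverse (diagonal entries reciprocated)
T = [[2, 3], [5, 7]]         # mixed components T^k_j

def contract(metric, tensor):
    """T'_{ij} = sum_k metric_{ik} tensor^k_j."""
    n = len(metric)
    return [[sum(metric[i][k] * tensor[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

T_low = contract(g, T)           # fully covariant components T'_{ij}
T_back = contract(ginv, T_low)   # raising the index again recovers T
```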
 okay (none / 0) (#51) by martingale on Thu Oct 24, 2002 at 12:26:17 AM EST

 In my zeal to simplify I ended up not being well defined. Of course you would need to display an isomorphism between tensor products of vector spaces defined in terms of different bases. That's trivial from the bilinearity of the tensor product. So if x, y are vectors in X, Y and x \otimes y is their tensor product, let's _define_ the linear transformation T from X \otimes Y with basis e_i \otimes f_j to X' \otimes Y' with basis e'_i \otimes f'_j by the matrix T_{(ij)(kl)} = \alpha_{ik}\beta_{jl}, ie so that T(e_i \otimes f_j) = \sum_{kl} T_{(ij)(kl)} e'_k \otimes f'_l, where \alpha and \beta are the transformations which take the e_i basis into e'_k and f_j into f'_l respectively. Since both \alpha and \beta are invertible, we can define \hat T by the matrix \hat T_{(kl)(ij)} = \alpha^{-1}_{ki} \beta^{-1}_{lj} and then \hat T is the inverse of T. But by this stage, it's kind of messy again. I shouldn't have mentioned bases in the first place. The bilinearity of \otimes is enough, ie let X \otimes Y be the vector space of pairs (x,y), written x \otimes y, such that
(x + x') \otimes y = x \otimes y + x' \otimes y
x \otimes (y + y') = x \otimes y + x \otimes y'
\alpha x \otimes y = (\alpha x)\otimes y = x \otimes (\alpha y)
for x, x' \in X, y, y' \in Y, \alpha \in \Reals. But now we have to show that such a space exists... (easy, let X = Y = \Reals and \otimes = multiplication ;-) I guess at this point senderista's definition is better, although it's more "advanced" because of the duality; however I've introduced linear transformations, so that's a sneaky way of introducing duality too. You must be a physicist or applied mathematician though your nick suggests probability theorist. And I'm curious to know what your definition of covector is too. Physicists and applied mathematicians define those weirdly too :-) My nick gives it away... I define a covector algebraically as an element of the dual space. 
But to me, deep down those things are measures anyway :-) I think the point of the various definitions used by physicists and mathematicians is the added structure. On a C^\infty manifold, the interesting vector spaces are the tangent and cotangent spaces at each point, which have a natural relation to the space of differentiable functions M \to \Reals. So in that case, the tangent vectors can be defined as derivations, and cotangent vectors introduced as forms. In settings unrelated to manifolds, "tangent" vectors don't make sense, so everything has to become abstract. [ Parent ]
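The matrix T_{(ij)(kl)} = \alpha_{ik}\beta_{jl} described above is exactly a Kronecker product, and its inverse is the Kronecker product of the two inverses, as claimed. A small check with invented 2x2 basis changes (chosen with det = 1 so the inverses are easy to write down):

```python
# The basis change on the tensor product space is the Kronecker product
# of the basis changes on the factors.
def kron(a, b):
    """Kronecker product: rows indexed by (i, j), columns by (k, l),
    entry a[i][k] * b[j][l]."""
    n, m = len(a), len(b)
    return [[a[i][k] * b[j][l] for k in range(n) for l in range(m)]
            for i in range(n) for j in range(m)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

alpha = [[2.0, 1.0], [1.0, 1.0]]
beta = [[1.0, 3.0], [0.0, 1.0]]
alpha_inv = [[1.0, -1.0], [-1.0, 2.0]]   # inverse of alpha
beta_inv = [[1.0, -3.0], [0.0, 1.0]]     # inverse of beta

T = kron(alpha, beta)             # 4x4 transformation on X (x) Y
T_hat = kron(alpha_inv, beta_inv)
I = matmul(T_hat, T)              # should be the 4x4 identity
```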
 tensors and GR (none / 0) (#38) by senderista on Wed Oct 23, 2002 at 01:23:58 PM EST

 First off, I'd like to correct the assertion that "tensors are inherently not representation-free". Tensors are geometric objects that need no preferred basis or coordinate system for their definition. In fact, they helped me understand functional programming! I learned CS after physics, and the notion of "currying" came very naturally to me since it's exactly analogous to contracting a tensor with a vector/covector, i.e. "filling a slot" with some constant argument to produce a tensor of lower degree. Back in college I started working on a purely geometric introduction to general relativity. It started with tensor products, then moved to manifolds, then introduced fiber bundles, and finally the notion of a connection on a principal bundle. Then I was going to introduce the metric connection, Riemann/Ricci tensors, etc., and explain Einstein's equation in terms of these purely geometric concepts. I never actually got that far, and unfortunately the whole thing got deleted along with my account when I graduated. But I'd be happy to start over again and post in installments to K5 if anyone is interested. John Baez also has an excellent tutorial on GR, but it's more intuitive and less mathematical than my approach would be. The basic inspiration is MTW's Gravitation, but with a more modern mathematical approach (AFAICT, principal fiber bundles are almost unknown to physicists, although vector bundles have become pretty popular). "It amounts to the same thing whether one gets drunk alone, or is a leader of nations." -- Jean-Paul Sartre [ Parent ]
 I'd like to see it (n/t) (none / 0) (#43) by adiffer on Wed Oct 23, 2002 at 03:42:47 PM EST

 -Dream Big. --Grow Up.[ Parent ]
 another very nice account (none / 0) (#55) by martingale on Thu Oct 24, 2002 at 01:34:14 AM EST

 Solely on differential geometry, though: Spivak, Differential Geometry. That's if you don't get upset by the typesetting, though. [ Parent ]
 There aren't any tensors (none / 0) (#66) by medham on Thu Oct 24, 2002 at 11:39:37 PM EST

 In Gravity's Rainbow. The real 'medham' has userid 6831.[ Parent ]
 No pictures? (2.00 / 1) (#10) by surya on Sun Oct 20, 2002 at 05:57:55 PM EST

 Articles like this remind me of Alice in Wonderland. "What good is a book if it has no pictures in it?". Either the author doesn't want to include pictures or K5 doesn't allow it.
 image tags (none / 0) (#17) by adiffer on Mon Oct 21, 2002 at 05:54:07 PM EST

 I would use them if tags were available to K5 authors.  I understand why they are not, though. -Dream Big. --Grow Up.[ Parent ]
 Twas shown a while back (none / 0) (#24) by bjlhct on Tue Oct 22, 2002 at 02:04:08 AM EST

 Goatse can be done in ascii. * kur0(or)5hin - drowning your sorrows in intellectualism[ Parent ]
 OO == Java? (4.00 / 1) (#29) by p3d0 on Tue Oct 22, 2002 at 08:54:40 AM EST

 Your poll is truly strange. When you say "OO programmer" do you mean "Java programmer"? I ask because many OO languages don't even have typecasting (eg. Smalltalk, Self), and most others are capable of expressing typesafe matrix classes (eg. C++, Eiffel). -- Patrick Doyle My comments do not reflect the opinions of my employer.
 my ignorance then... (none / 0) (#32) by adiffer on Tue Oct 22, 2002 at 05:35:06 PM EST

 I'm not experienced with enough of the other languages.  The bulk of my work is with Java, so my understanding of the terminology is a little biased. The kind of typecasting that bothers me is when a polar vector can look like an axial vector.  They don't transform the same way under reflections, so a simulations programmer should be careful in handling both types.  Most of my experience with matrix libraries is from the non-OO world, so the equations permit embedded typecasting along with the function. An angular momentum vector isn't the same kind of thing as a momentum vector, but the matrix representations look similar. -Dream Big. --Grow Up.[ Parent ]
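The polar-versus-axial pitfall described above is easy to demonstrate: under a reflection, a cross product does not transform the way a plain (polar) vector does. A sketch with vectors as plain Python lists (the mirror choice is arbitrary):

```python
# A reflection sends a polar vector v to M v, but an axial vector (like
# a cross product) picks up an extra sign det(M) = -1. Treating both
# alike is the kind of silent typecast described above.
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def reflect_z(v):
    """Mirror through the xy-plane (a reflection, det = -1)."""
    return [v[0], v[1], -v[2]]

a, b = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
naive = reflect_z(cross(a, b))                 # cross product treated as polar
physical = cross(reflect_z(a), reflect_z(b))   # what the mirrored system does
```

The two results differ by a sign, which is exactly the behavior a plain 3x1-array representation cannot express on its own.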
 Oh I see what you mean (none / 0) (#37) by p3d0 on Wed Oct 23, 2002 at 01:00:10 PM EST

 I think I see what you mean. Matrices are all just 2-D arrays of numbers, and if they have the same dimensions, they are indistinguishable until your simulation goes awry. Is that it? -- Patrick Doyle My comments do not reflect the opinions of my employer.[ Parent ]
 yes (none / 0) (#41) by adiffer on Wed Oct 23, 2002 at 03:32:48 PM EST

 Yes.  That is why I feel drawn to OO like a moth to a flame/light bulb/moon. I know I want to write my physics with objects. It is so clear to me that this is the right path for our future that it hurts. -Dream Big. --Grow Up.[ Parent ]
 Do we have to use decimal? (1.30 / 13) (#31) by Fen on Tue Oct 22, 2002 at 02:25:48 PM EST

 I don't like decimal. Why can't we all use hexadecimal, for the few times in math where examples are given with numbers? Use hexadecimal. --Self.
 Use hexadecimal (1.50 / 2) (#33) by the on Tue Oct 22, 2002 at 06:53:42 PM EST

 But with a twist. Instead of digits for 0 to 15, use digits for -8 to 8, writing the negative digits with an overbar: 8̄, 7̄, 6̄, 5̄, 4̄, 3̄, 2̄, 1̄, 0, 1, 2, 3, 4, 5, 6, 7, 8. So how do you write a number like 14₁₀? As 12̄₁₆ (one sixteen, minus two). And 255₁₀ is 101̄₁₆. And -123₁₆ = 1̄2̄3̄₁₆. Think about it. This has a great many advantages over ordinary base 16. For example you only need to learn multiplication tables up to 8 rather than up to 16. It's symmetric and doesn't have much of a preference for positive numbers over negative numbers. The algorithm for subtraction is almost identical to the algorithm for addition. Lots of benefits. But it'd be better if we used an odd base because there is some ambiguity over whether we should use 8 or 8̄. -- The Definite Article [ Parent ]
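A quick sketch of conversion to and from this balanced base-16 system, with digits kept as a list of integers in -8..8. The tie between digit 8 and -8 is resolved toward +8 here, which is exactly the ambiguity mentioned above:

```python
# Balanced base 16: digits -8..8, most significant first.
def to_balanced_hex(n):
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 16              # remainder in 0..15
        if r > 8:
            r -= 16             # fold 9..15 down to -7..-1
        n = (n - r) // 16
        digits.append(r)
    return digits[::-1]

def from_balanced_hex(digits):
    value = 0
    for d in digits:
        value = value * 16 + d
    return value

# 14 = 1*16 - 2 and 255 = 1*256 + 0*16 - 1, matching the examples above.
```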
 Heard of balanced ternary? (1.66 / 3) (#49) by Fen on Wed Oct 23, 2002 at 06:57:35 PM EST

 I think that's what you're getting at. You use -1, 0, and 1. It has lots of interesting properties--like in money you only have to have one of each coin change hands. --Self.[ Parent ]
 another nice article... (2.00 / 1) (#36) by johwsun on Wed Oct 23, 2002 at 03:04:33 AM EST

 ..but again not science relative, as long as Algebra and Geometry and Mathematics is NOT science.
 quite true (5.00 / 1) (#42) by adiffer on Wed Oct 23, 2002 at 03:37:24 PM EST

 But...science is the closest topic of all of them. I used to section these under 'Column' and felt that was appropriate, but I sensed that the vote to add Science as a section was directed at some of us writing educational articles. After a few more of these, though, I think you will see the science angle.  My intent is to drift into physics with the next chapter.  The material to be covered next concerns how some of our standard physical properties are represented, like currents, motion, force fields, and potentials. -Dream Big. --Grow Up.[ Parent ]
 Heh (3.00 / 1) (#61) by luserSPAZ on Thu Oct 24, 2002 at 01:50:55 PM EST

 Iasson has a rather distinct view of what 'Science' is, but nobody else seems to know what it is.  He's quite happy to repeat the same point over and over, however, and refer us to "The Philosophy of Science." [ Parent ]
 no problem (4.00 / 1) (#68) by adiffer on Fri Oct 25, 2002 at 05:40:25 AM EST

 Such people don't bother me any.  I used to spend a bit of time on the first day of any Astronomy class I taught trying to teach students how to distinguish sciences from non-sciences.  There were always a few subjects upon which people disagreed, but the debate usually led to a discussion of the scientific method which clarified some of their positions. For the record, I list Mathematics as a language. -Dream Big. --Grow Up.[ Parent ]
 ..maybe.. (none / 0) (#74) by johwsun on Sun Dec 29, 2002 at 03:29:08 PM EST

 ..I could explain you better in my native language. The main problem is that I cannot use the english language properly, thus I cannot express myself very clear. [ Parent ]
 Inner product with scalar component (5.00 / 1) (#50) by PhysBrain on Thu Oct 24, 2002 at 12:02:13 AM EST

 This wasn't explicitly mentioned in your post, so I thought that I might point it out: The inner product of a grade-0 vector (a.k.a. the scalar, or e, term) and any other vector/multivector is defined as zero. This makes sense from a projection standpoint since the projection of a scalar (point-like) onto any higher grade entity will still give you a null length entity. However, this is not strictly apparent from your definition of the inner product. Take problem #12 for example. If you expand out the dot product into pure ranked objects and distribute, as you suggest, then you get: A·B = (16e·e12) + (16e·4e123) + (-9e3·e12) + (-9e3·4e123) If you expand these out using the symmetric and antisymmetric definitions of the inner product in terms of the geometric product, then the second and third terms vanish due to their odd rank (antisymmetric) and we're left with: A·B = 16e12 - 36e12 from the first and fourth terms respectively. This gives you the (incorrect) result of -20e12, when the correct result should have been -36e12. In the literature, the special case of the inner product of a scalar and any other r-vector is axiomatically defined to be zero. Scalar multiplication is still allowed though, since it is not specifically excluded in the definition of the outer product. PhysBrain --- Ad Astra Per Aspera
 good catch (none / 0) (#56) by adiffer on Thu Oct 24, 2002 at 02:25:10 AM EST

 12: Find the inner product of A = 16e - 9e3 and B = e12 + 4e123.
A·B = 16e·e12 + 16e·4e123 - 9e3·e12 - 9e3·4e123
= 8[e e12 + e12 e] + 32[e e123 - e123 e] - 9/2[e3 e12 - e12 e3] - 18[e3 e123 + e123 e3]
= 16e12 + 32[0] - 9/2[0] - 36e12
= -20e12
Good catch.  If we don't define the inner product of any object with a scalar to be zero, that first term doesn't result in rank lowering.  Should it though?  It has to, as you have pointed out. I think the cleanest argument for why the inner product of a scalar with any other object has to be defined as zero comes from the fact that it would vanish for some of the objects (odd ranks) and wouldn't for others (even ranks).  That doesn't make any sense. This means my rule for outer products is equally incomplete.  What should the outer product of an object taken with the pseudo-scalar be?  We have to set those to zero too. Good catch.   If Lounesto were around, he would have charred me for this by now.  8) -Dream Big. --Grow Up.[ Parent ]
 Formal Definitions of Inner and Outer Products (none / 0) (#59) by PhysBrain on Thu Oct 24, 2002 at 11:24:26 AM EST

 The distribution of inner and outer products across arbitrary multivectors is defined just as you have stated, so the only thing that needs clarifying is the products of pure-grade multivectors. The following are the definitions for the inner and outer product of homogeneous (ie. pure-grade) multivectors from the book Clifford Algebra to Geometric Calculus by Hestenes and Sobczyk. We define the inner product of homogeneous multivectors by
Ar·Bs = ⟨Ar Bs⟩|r-s|, if r>0 and s>0
Ar·Bs = 0, if r=0 or s=0
... In a similar way, we define the outer product of homogeneous multivectors by
Ar^Bs = ⟨Ar Bs⟩r+s
The ⟨ ⟩r operator extracts the grade-r part of the multivector. So, ⟨A⟩r = A implies that A is a homogeneous multivector of grade r, which is also written as Ar. Scalar multiplication of a multivector is preserved in this way by allowing it to be part of the outer product. So, by writing the geometric product of two homogeneous multivectors as
ArBs = Ar·Bs + Ar^Bs
we see that the inner product drops out if r=0 or s=0, but the outer product is preserved. So, if r=0 and A0 = ae,
A0Bs = ae^Bs
And since e commutes with all multivectors (ie. eB = Be = B) we see that ae^Bs = Bs^ae = aBs = Bsa PhysBrain --- Ad Astra Per Aspera[ Parent ]
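For anyone who wants to check these definitions mechanically, here's a minimal sketch of a Euclidean geometric algebra with basis blades encoded as bitmasks, using the Hestenes-Sobczyk inner product rule above; it reproduces the corrected answer to problem #12 (-36e12). The encoding and names are this sketch's own, not from any library:

```python
# Basis blades are bitmasks (bit i set means a factor of e_{i+1});
# multivectors are {blade: coeff} dicts. Euclidean signature throughout.

def blade_sign(a, b):
    """Sign acquired reordering e_A e_B into canonical order."""
    s = 0
    a >>= 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def gp(x, y):
    """Geometric product of two multivectors."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            blade = ba ^ bb
            out[blade] = out.get(blade, 0) + blade_sign(ba, bb) * ca * cb
    return {b: c for b, c in out.items() if c != 0}

def grade(blade):
    return bin(blade).count("1")

def inner(x, y):
    """Hestenes-Sobczyk inner product: keep the grade |r-s| part of each
    pure-grade product, dropping any term where r = 0 or s = 0."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            r, s = grade(ba), grade(bb)
            if r == 0 or s == 0:
                continue
            blade = ba ^ bb
            if grade(blade) == abs(r - s):
                out[blade] = out.get(blade, 0) + blade_sign(ba, bb) * ca * cb
    return {b: c for b, c in out.items() if c != 0}

# Problem 12: A = 16e - 9e3 and B = e12 + 4e123.
A = {0b000: 16, 0b100: -9}
B = {0b011: 1, 0b111: 4}
AdotB = inner(A, B)        # the scalar rule gives -36 e12, not -20 e12
```

Dropping the r = 0 or s = 0 test reproduces the -20e12 arithmetic from the parent comments, which is exactly how the disagreement arises.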
 seen that one (none / 0) (#67) by adiffer on Fri Oct 25, 2002 at 05:33:36 AM EST

 I've seen those definitions and I still have a problem with them. Try an outer product of two bivectors in a four generator algebra.  Their definition has an interesting hiccup that has always bothered me.  If the two bivectors share one index, the result shouldn't be of rank four.  It should be of rank two at best. Believe it or not, there are competing definitions among the physicists.  They are all the same in three dimensions, but they diverge in four and higher. -Dream Big. --Grow Up.[ Parent ]
 Linear independence (none / 0) (#70) by PhysBrain on Fri Oct 25, 2002 at 05:38:26 PM EST

 I see what you mean, but the definitions given actually keep you out of trouble here. The outer product does not behave the same way as the geometric product. The thing to keep in mind is that the outer product is nonzero iff the operands are linearly independent. In three dimensions this effect is fairly obvious because you can't get four linearly independent basis vectors. So the outer product of two bivectors is always zero. Remember that e12 = e1^e2 and e23 = e2^e3. So, their outer product would be e12^e23 = e1^e2^e2^e3 = 0 This result is the same in a four dimensional space. However, the outer product of two linearly independent bivectors, which only exist in 4D and higher spaces, is nonzero e12^e34 = e1^e2^e3^e4 = e1234 So, what is happening when you do something like e12^e23 = ⟨e12 e23⟩4 = ⟨e1 e2 e2 e3⟩4 = ⟨e13⟩4 = 0 The geometrical interpretation that I have come to understand is that the two linearly dependent bivectors do not span a 4D space, so they cannot generate a quadvector. Now to get at the heart of the matter that I think is really bothering you. If one looks at the geometric product of the two bivectors, one sees that it is nonzero. It is also pretty straightforward to show that both the inner and outer products are zero. The outer product I have shown above. The inner product is e12·e23 = ⟨e12 e23⟩0 = ⟨e13⟩0 = 0 So what's going on? I thought our definition of the geometric product was something like AB = A·B + A^B I think I'm correct when I say that this is always true only when one of the operands is a scalar or a vector. But never fear, we have one more product to learn about. The commutator product. A X B = 1/2(AB - BA) This is, of course, the antisymmetric part of the geometric product. I get the impression that it isn't used very often, but it is necessary to have around. 
In spaces with dimension greater than three, the geometric product of two homogeneous multivectors will produce mixed grade multivectors of the form
ArBs = ⟨ArBs⟩|r-s| + ⟨ArBs⟩|r-s|+2 + ··· + ⟨ArBs⟩r+s
ArBs = ∑k=0..m ⟨ArBs⟩|r-s|+2k where m = 1/2(r+s-|r-s|).
Whew. I think that's enough of a rant for one post. And to think, I didn't start studying Geometric Algebra until I saw your first Introduction to Geometric Algebra post. I think I recognized the potential of this representation almost at once. I've been drinking this stuff up like a sponge since then. I've been shopping around for a dissertation topic for some time now, and I'm now hoping to apply GA/GC to the numerical solution of PDE's for application in computational simulations. If anyone out there has any ideas on the subject, or knows of anyone doing related work, I'd love to hear about it. PhysBrain Ad Astra Per Aspera[ Parent ]
 good to hear (none / 0) (#71) by adiffer on Fri Oct 25, 2002 at 09:02:38 PM EST

 Opinions expressed are my own... (none / 0) (#72) by PhysBrain on Sat Oct 26, 2002 at 02:54:08 AM EST

 makes sense (none / 0) (#73) by adiffer on Sat Oct 26, 2002 at 11:12:53 PM EST

 You are making sense.  You may even convince me to come around to an alternate view.  I shall think upon it for awhile. You have also shown much of what it takes to do independent research.  I wish you a lot of luck and hope you have fun doing it.  There is nothing quite like it in the whole world. -Dream Big. --Grow Up.[ Parent ]
 Why define angles numerically in radians? (none / 0) (#57) by expro on Thu Oct 24, 2002 at 09:29:55 AM EST

sine = 2a / (a^2 + 1) cosine = (1 - a^2) / (a^2 + 1) Of course, this is still somewhat impure because it forces us to rely on orthogonals, when the underlying representation has no need, and in many cases no justification, to rely on orthogonals, showing that this is tied to a specific coordinate system. If I were measuring in a tetrahedral coordinate system (axes emanating from the sides of a tetrahedron instead of from the sides of a cube), which is more optimized for natural 60-degree object orientations, I would define my angle: -1 <= a <= 1, representing equivalents of radian rotations from -Pi/3 to Pi/3, computing three different trigonometric functions (I will call them aota, bota, cota) instead of two, as follows:

aota = (6 - 2r^2) / (3r^2 + 9)
bota = (r^2 - 6r - 3) / (3r^2 + 9)
cota = (r^2 + 6r - 3) / (3r^2 + 9)

I would place these in a matrix to rotate as follows:

  aota bota cota 1/3
  cota aota bota 1/3
  bota cota aota 1/3
  0    0    0    1

...and so I have rotation without the orthogonal concept of sine that reveals dependency on a specific coordinate system. I guess that the real problem I have with this description is that, in my simple, uneducated experience, it seems that despite claims to be abstract and to rise above the computational details of the coordinate system, it is still very firmly mired in the conventional concepts of the standard cubic coordinate system and the most popular trigonometric system.
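[Editorial aside: both parametrizations are easy to sanity-check with exact rational arithmetic. The sketch below is my own addition, not part of the original comment; the function names follow the comment above. It verifies that the rational sine/cosine satisfy sine^2 + cosine^2 = 1 exactly, and that the three tetrahedral functions always sum to zero.]

```python
from fractions import Fraction

def rat_sine(a):
    return 2 * a / (a * a + 1)

def rat_cosine(a):
    return (1 - a * a) / (a * a + 1)

# The three tetrahedral functions from the comment above.
def aota(r): return (6 - 2 * r * r) / (3 * r * r + 9)
def bota(r): return (r * r - 6 * r - 3) / (3 * r * r + 9)
def cota(r): return (r * r + 6 * r - 3) / (3 * r * r + 9)

a = Fraction(1, 3)
s, c = rat_sine(a), rat_cosine(a)
print(s, c, s * s + c * c)  # 3/5 4/5 1 -- exact rationals, s^2 + c^2 == 1

r = Fraction(1, 2)
print(aota(r) + bota(r) + cota(r))  # 0 -- the three functions sum to zero
```

The zero sum holds identically: the numerators (6 - 2r^2) + (r^2 - 6r - 3) + (r^2 + 6r - 3) cancel for every r.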
 Angles in radians... (none / 0) (#58) by BlaisePascal on Thu Oct 24, 2002 at 11:18:02 AM EST

 There are some nice and convenient things about measuring angles in radians.  Most don't make sense until you get into calculus, however. Here are some features of sin(x) and cos(x) that hold when x is measured in radians, but not in degrees or grads (using ' notation for derivatives in x):

sin'(x) = cos(x)
cos'(x) = -sin(x)
sin(x) and cos(x) are independent solutions of f'' = -f
sin'(0) = 1
e^(ix) = cos x + i sin x

...and others, mostly derivable from these. Your alternative definitions of sine and cosine don't really have any of those properties.  They aren't even periodic.  Granted, you still have sine^2 + cosine^2 = 1, but that's about it. [ Parent ]
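[Editorial aside: the radian-based identities above are easy to confirm numerically. This small check is my own, not part of the original comment; it verifies Euler's formula and sin'(x) = cos(x) at an arbitrary point.]

```python
import cmath
import math

x = 0.7  # arbitrary test angle in radians

# Euler's formula: e^(ix) = cos x + i sin x
lhs = cmath.exp(1j * x)
rhs = math.cos(x) + 1j * math.sin(x)
print(abs(lhs - rhs))  # ~0, within floating-point error

# sin'(x) = cos(x), checked with a central difference
h = 1e-6
deriv = (math.sin(x + h) - math.sin(x - h)) / (2 * h)
print(abs(deriv - math.cos(x)))  # ~0, within floating-point error
```

Run the same central-difference check with x in degrees and sin'(x) = cos(x) fails by a factor of Pi/180, which is exactly the point being made.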
 Re: Angles in radians... (none / 0) (#60) by expro on Thu Oct 24, 2002 at 01:01:53 PM EST

 You have mentioned some convenient calculus-related features of angles measured in radians, which are well known to anyone who has taken basic calculus. There are also some quite inconvenient features, which is why it is not the common system in use by most non-mathematicians. You missed some nice features of the alternative angular measurement, probably because you do not have experience thinking in terms of the alternative system.

For the record, all I would have to do is insert a construct using a modulus function on the angle to make it extend periodically beyond the described range, and to make it almost indistinguishable to the unpracticed eye, since over this range it does not deviate from the radian angle multiplied by 2/Pi by more than about 20%. It always produces rational values for sine and cosine from rational values of the angle. There is also a fairly simple geometric interpretation. But we do not need to go into detail here.

The point of being able to use alternative systems is that each is convenient for a specific purpose and none deserves complete universality. None of the purposes that I saw described at the beginning of the article seemed to make it important for an angle to be measured in the particular way described, and this reinforces my point that bits of the popular trigonometric system and cubic coordinate system have been arbitrarily embedded into at least this description of geometric algebra instead of keeping it abstract.

When we talk about an angle, it seems to me that at a generic level we need to talk about concatenation of angles, one angle being greater than another, and rotation through a specific angle. We can talk about these things independent of the system mapping an angle to a number, and without forcing a specific scalar interpretation of the angle that compromises the abstractness of the features, which may be accomplished just as well in any of the systems I described. 
The sine function itself never, to me, deserved more mention than simply being one of a number of possible interesting phase shifts, as shown by the tetrahedral coordinate system that favors 60-degree rotations over 90-degree rotations. It is useful for some things, and totally useless for others, because it does not produce simple rational values even in the common case of the 60-degree angle, which seems to me to occur much more often in nature than the 90-degree angle. [ Parent ]
 Also, your calculus results are only rectangular. (none / 0) (#62) by expro on Thu Oct 24, 2002 at 03:23:42 PM EST

 I should have noted also that if you were plotting the sine wave on a triangular coordinate graph (the two-dimensional projection of tetrahedral coordinates, again emphasizing the 60-degree angle instead of the 90-degree angle), then you would not obtain the same derivatives, because you would be differentiating with respect to non-orthogonal axes. Without doing all the computations involved, the 90-degree derivative seems to again be an artifact of the coordinate system for dy/dx. Again, we see that the biases arise from the choice of coordinates, and probably should not arise from a pure geometric algebra, but I am only guessing based upon the broad principles stated. [ Parent ]
 Projection operation.... (none / 0) (#63) by BlaisePascal on Thu Oct 24, 2002 at 05:49:05 PM EST

 For each pair of vectors a and b, let's define a function f_ab(t) = a - t b. Let t_min(a, b) be defined by |f_ab(t_min(a, b))| <= |f_ab(t)| for all t. Because I don't want to do the proofs now, I'll simply assert that t_min is uniquely determined by a and b, is proportional to |a|, and is inversely proportional to |b|.

For the next step, let's define P_b(a) = t_min(a, b) b. P_b(a) has these properties:

it's idempotent (P_b(P_b(a)) = P_b(a) for all a)
it doesn't depend on the scale of b (P_b(a) = P_cb(a) for all scalars c != 0)
it's linear

This rather convoluted definition of P_b was specifically done to avoid any mention of a basis for the vector space, any (explicit) mention of orthogonal or normal vectors, and is representation-independent.  All that it requires is that the vector space is a vector space with a magnitude operation defined -- which is the case in geometric algebra.  Geometric algebra doesn't require any particular basis to exist, only that a^2 be a scalar for all vectors a.

I have avoided any mention of vector products, only using products of scalars and vectors, vector sums and differences, and vector magnitudes.  The tricky part is defining t_min, which relies on the concept of finding the minimum of a function.  But again, I can find the minimum of that function even without resorting to coordinates or basis vectors (derivatives on parameters in a vector space can be defined without referring to coordinates, and when the derivative is 0, the vector-valued function is at a minimum).

Note, however, that P_0 is not defined.  So the next step is: define D(a, b) = |b|^2 t_min(a, b) if b != 0, and 0 if b = 0.  (The signed quantity |b|^2 t_min, rather than the magnitude |b| |P_b(a)|, is needed so D can change sign.)  Now D(a, b) has the following properties:

D(a, a) = |a|^2
D(a, b) = D(b, a)
D(s a, b) = s D(a, b) for scalars s
D(a + b, c) = D(a, c) + D(b, c)

The previous three together imply D is commutative and bilinear. 
Assuming I did all my math right, not once have I introduced a coordinate system or any system of basis vectors, nor did I mention the dimensionality of the vector space or any other aspect of the representation of the vector space. I assert that D(a, b) = a dot b, and therefore it forms an alternative, coordinate-free definition of the inner product of two vectors. I'll continue this line of discussion in another reply after I get home from work. [ Parent ]
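[Editorial aside: the construction above can be checked numerically. The sketch below is my own, not BlaisePascal's: it finds t_min by ternary search on the convex function |a - t b| instead of the derivative argument, uses only vector sums, scalar multiples, and magnitudes, and takes the signed form D = |b|^2 t_min. The result agrees with the coordinate dot product it never mentions.]

```python
import math

def mag(v):
    # The only geometric primitive we allow ourselves: vector magnitude.
    return math.sqrt(sum(x * x for x in v))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def scale(t, v):
    return [t * x for x in v]

def t_min(a, b, lo=-1000.0, hi=1000.0, iters=200):
    # Minimize |a - t b| over t by ternary search; the function is convex in t.
    f = lambda t: mag(sub(a, scale(t, b)))
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def D(a, b):
    # Signed form |b|^2 * t_min(a, b); for b = 0, define D = 0.
    if mag(b) == 0.0:
        return 0.0
    return mag(b) ** 2 * t_min(a, b)

a = [1.0, 2.0, 3.0]
b = [4.0, -1.0, 2.0]
print(D(a, b))  # close to 8.0, matching the dot product 1*4 + 2*(-1) + 3*2
```

No basis, dimensionality, or component product appears anywhere except in the final comparison comment; the code works unchanged for vectors of any length.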
 It's about definitions (none / 0) (#69) by adiffer on Fri Oct 25, 2002 at 05:55:56 AM EST

 In order to define what I mean when I say we are measuring an angle in a numeric sense, I have to define how to do it.  That is why the definitions for circular and hyperbolic angles are there.  Without these definitions, the trigonometric functions don't mean much, and people begin to think that their inclusion in the inner and outer products is an example of circular reasoning.

I actually don't care all that much about radians.  I use them due to training and the fact that the definitions I listed for angles are unitless.  Radians aren't technically the same kind of unit as the others we use.

I also don't mean to imply that we are above calculation in any sense.  I just try to keep to a representation-free approach until the very last moment, when I actually want to calculate something.  I've taught the multiplication tables in terms of a particular representation (rectangular, as you have noted), but in practice I stay away from a coordinate system choice until I have to make it.  The many other coordinate systems work just as well if you keep track of the usual little conversion factors.

Full representations aren't always needed to calculate things anyway.  Look at the inner product and you will have an example of what I mean.  Inner products are projections, so the only parts of the reference frame I really need are those associated with the direction of the thing to be projected and the direction of the reference object.  That may very well be less than the full reference frame. -Dream Big. --Grow Up.[ Parent ]
 Introduction to Geometric Algebra (part three) | 74 comments (67 topical, 7 editorial, 1 hidden)