
Anders Madsen

OPERATIONS AND STRUCTURES

ABSTRACT ALGEBRAIC STRUCTURES



This is a booklet in a series with the title "Abstract algebraic structures", which I have written for the algebra courses (E1, BE2) in the mathematics education at RUC.

These booklets should be seen in connection with another series with the title "Concrete algebraic structures". The two series should be viewed as the jaws of a pair of pincers. It would obviously be futile (and frustrating) to teach abstract algebra without involving concrete examples, and it would be poor (and devoid of perspective) only to build concrete examples without involving the underlying abstract structures.

And then again, there is a certain beauty in emphasizing the abstract character by isolating it and letting its top-down nature appear clearly, as is done in the controversial idol "The Elements of Mathematics" by Bourbaki: starting with the coarsest structures and then refining step by step by adding structural elements. Every result which occurs in many circumstances is proved once and for all in its most elementary form.

In the same way there is some satisfaction in letting each concrete structure stand as simply as possible without too much ado, das Ding an sich, and pleasure in recognizing essentially the same type of argument recurring over and over in different disguises.

Besides the aesthetic satisfaction from the pure abstraction and the pure pleasure from the concrete details, both perspectives have a great cognitive influence and contribute to the development of competencies which are essential to any mathematician. I have chosen to emphasize these two oppositely directed but coordinated perspectives, which is what the two series embody.

The individual concrete structures are displayed in separate expositions without mutual references. Subjects which are needed in several places are repeated at each place, but the choice of details is made so as to best supply material for the examples in the abstract part.

The text of the abstract part is not printed but can be found at

http://milne.ruc.dk/~am/algebra

Anders Madsen, May 2012


Table of contents

1 Introduction

2 Operations
  1 Background and motivation 8
  2 Background terminology 8
  3 Definition 9
  4 Homomorphisms 10
  5 Induced operations 14
  6 Induced operation on subset 14
  7 Induced operation on function spaces 16
  8 Induced operation on product spaces 16
  9 Induced operation on quotient 19
  10 Axioms for operations 23

3 Algebraic structures
  1 Definition 26
  2 Homomorphisms and isomorphisms 26
  3 Induced structures on subset 27
  4 Induced structure on function spaces 29
  5 Induced structure on quotient 31

4 Appendix: Tables
  1 Symbols for sets 32
  2 Tables with operations 34
  3 Tables with homomorphisms 35

5 Examples and exercises

6 Appendix: Classification

7 Semigroups
  1 Definition 46
  2 Induced semigroups 47
  3 Powers 48

8 Monoids
  1 Definitions 48
  2 Induced monoids 49
  3 Examples 51
  4 Monoid homomorphisms 51
  5 Inverse 52
  6 Powers 52
  7 Translations 53

9 Groups
  1 Definitions 54
  2 Induced group structures 55
  3 Subgroup 55
  4 Quotient groups 59
  5 Group homomorphisms 61

10 Rings
  1 Definitions and rules 62
  2 Subring 62
  3 Induced rings 63
  4 Quotient rings and ideals 64
  5 Integral domains 66

11 Fields
  1 Definitions 68
  2 Quotient ring over a maximal ideal 69
  3 Fields of fractions 69

12 Examples and exercises

13 Index


1: Introduction

The word algebra is of Arabic origin, meaning to take into parts and put together, in short cut and paste: an equation is cut into pieces which are gathered into a new equation. So algebra is about expressions put together using operations, originally the simple operations addition, subtraction, multiplication and division, and built together to create equations.

Algebra is about manipulating such algebraic equations, with the intention of doing it in a way which makes them solvable. The typical algebraic equation is an equation of the n-th degree in one unknown, or a system of equations with several unknowns. The challenge of finding methods of solution has been a main driving force in the development of mathematics.

The specification of what is considered an algebraic problem has widened in the course of this development. It has been recognized that a lot of specific methods can be subsumed as special cases of more general points of view. And today algebra is treated as an abstract discipline, where you do not necessarily define the objects which are the building blocks or the operations used to connect them. Instead you make some assumptions about their properties and develop a theory which can be applied to all mathematical objects which have these properties. It is obvious that the more assumptions you make, the more results may be deduced. On the other hand, more assumptions will reduce the applicability. Therefore there exists a plethora of algebraic structures, and it is part of the sport to keep account of what precisely can be achieved with a specific choice of assumptions.

So we start out with very few assumptions and gradually extend their number. In this way you construct hierarchies of algebraic structures. Here we are going to focus on a single hierarchy, starting with the simplest possible configuration and then successively adding structure. This refinement of the structure can be accomplished by taking more operations into account and demanding more conditions on the operations.

And now a survey of the text, which may be appropriate to bear in mind during reading, since it highlights the system behind the text; it might otherwise be a little more difficult to extract the main lines.

The introductory chapters exhibit the fundamental notions for a number of structures defined using operations in a set. There are the following categories of structures.


  Category             Operations   Axioms
1 semigroups           a + b        + is associative
2 monoids              0            0 neutral wrt +
3 groups               -a           -a inverse wrt +
4 rings                a ⋆ b, 1     ⋆ is assoc., ⋆ is distr. wrt +, 1 neutral wrt ⋆
5 integral domains                  no zero divisors wrt ⋆
6 fields

We have successive refinements of structures, generated by adding new operations and/or sharpening the conditions on them. In the operations column of a category's row you find the operations which have been added at that stage; the same holds for the column of axioms.

For each of the mentioned categories you can construct new structures within the same category by carrying the structure you have on some set over to another set which is suitably related to the first; it may for instance be a subset. We say that such structures are induced.

We shall consider the following types of induced structures:

- substructures (the structure is induced on a subset of a set on which you already have a structure);

- structures on function spaces (the structure is induced on a set of functions with images in a set which already has a structure; so addition of real numbers induces addition of real functions);

- product structures (the structure is induced on a product of sets, all of which already have a structure; for instance addition of real numbers induces addition of pairs of reals);

- quotient structures (the structure is induced on a quotient of a set which already has a structure; for instance addition of integers induces addition of remainder classes of integers).

Most algebraic structures are in fact induced structures, induced from a few fundamental structures in each category, which can be considered a sort of basic building blocks.

So constructing new structures from old ones is one important issue. The other important issue is to study the mappings between structures in the same category which relate the structures.


This is connected to the fact that a number of phenomena which only depend on the structure may be studied in another structure, sometimes in a more convenient way. So multiplications can be carried out by means of additions by using logarithm functions. (This was of practical importance in the days prior to electronic calculating machines.)

Mappings of this type, which respect the structures, are called homomorphisms. Among them you will find isomorphisms, which identify the structures completely.

These three main aspects (the categories, induced structures, homomorphisms) are mirrored in the architecture of the text:

First part: Operations is about single operations. After the definition of an operation, homomorphisms are treated in relation to single operations.

Second part: Structures is about collections of operations. After the definition of structures, homomorphisms are treated in relation to structures.

Third part: Categories of structures successively introduces the above mentioned categories. A category is a certain type of structure together with a set of rules (axioms) to be satisfied by the operations.

For each category the definition is followed by results concerning the structures induced by structures within the category. These induced structures will often themselves be in the category, although this is not always the case. Especially when it comes to quotient structures, you have to deal with the problem of making the defining equivalence relation compatible with the structure. This is the motivation for some special substructures, namely what are called normal subgroups and ideals.


2: Operations

2. 1: Background and motivation

In this part we have collected material about operations as such, without considering them as part of some algebraic structure.

An operation is a way of composing a number of operands to a result, as you know addition as an operation which takes two summands and gives their sum.

When we are dealing with operations we are mainly thinking of binary operations, which have two operands. But we are also going to consider operations with other numbers of operands, for instance one operand. So we give a definition which covers all cases at once.

We are not in this context going to handle operations with more than two operands. But operations with one operand will turn out to be important. And we are even going to consider operations with zero operands (to be defined next).

The rationale of this more general view of operations is, as usual, to make it possible to handle separate cases in one sweep. This may be considered utterly affected and a sign of the mathematician's ubiquitous urge for formalism and abstraction. I just find it smart! And not that difficult after all, maybe a little inconvenient and strange in the beginning.

2. 2: Background terminology

We are going to use the Cartesian product of sets extensively. Let's therefore recall that we call the set A1 × ... × An the Cartesian product, or simply the product, of the factors A1, ..., An. This set consists by definition of all the ordered tuples (a1, ..., an) for which ai ∈ Ai for all i ∈ In, where In = {i ∈ Z : 1 ≤ i ≤ n}. Notice that this definition includes I0 = ∅.

The mapping pi : A1 × ... × An → Ai defined by pi(a1, ..., an) = ai is called the projection on the i-th factor. Many things concerning the product can be conveniently formulated using the projections.

Two elements a, b of the product are equal if pi(a) = pi(b) for all i ∈ In.


Let f : B → A1 × ... × An be a mapping into the product; then we call the mapping pi ∘ f the i-th component of f, or the component of f of index i. Then f is completely determined by the collection of its components, so to define f it suffices to define all the pi ∘ f.

So given a map fi : B → Ai for each i ∈ In, there exists a unique map f with pi ∘ f = fi for all i ∈ In.

If all the factors are the same we shall use power notation: A^n = A × ... × A. For a map f : A → B we define the associated multi map f̄ : A^n → B^n by

f̄(a1, ..., an) = (f(a1), ..., f(an)).

Occasionally we shall just write f for f̄.

These definitions are also meaningful when n = 1. It even turns out to be appropriate to make a convention which extends them to the case n = 0, that is, the case where there are no factors at all. We do this by defining the empty tuple, the ordered set with 0 elements, and denoting it by (). Then we let the product set of 0 factors be the set with the only element ().

You should remember that this is just a convention, which is a useful way to avoid a lot of special cases in some formulations. With this convention we also have that

A^0 = {()}.
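The tuple and projection machinery above is easy to model concretely. Here is a minimal Python sketch (the sets A1, A2 are arbitrary illustrative choices); note that Python's itertools.product called with no arguments yields exactly one empty tuple, matching the convention A^0 = {()}.

```python
from itertools import product

# A finite model of the Cartesian product A1 x A2 as the set of pairs.
A1, A2 = {0, 1}, {"x", "y"}
prod = set(product(A1, A2))

def p(i):
    """Projection on the i-th factor (1-indexed): p_i(a1, ..., an) = ai."""
    return lambda t: t[i - 1]

p1, p2 = p(1), p(2)

# Two tuples of the product are equal iff all their projections agree.
a, b = (0, "x"), (0, "x")
all_projections_agree = p1(a) == p1(b) and p2(a) == p2(b)

# The convention for zero factors: the empty product has the single element ().
empty_product = set(product())  # {()}
```

Evaluating `empty_product` shows the one-element set containing the empty tuple, which is exactly the set A^0 described in the text.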

2. 3: Definition<br />

1. Definition: Operation

Let A be a set and let n ≥ 0 be an integer. An operation ♢ in A with n operands is a mapping

♢ : A^n → A.

An operation with n operands is also called an operation of arity n, or an n-ary operation. For n = 0, 1, 2, 3, 4 one uses the special terms 0-ary, unary, binary, ternary and quaternary.

2. Remark: Constant operations

An operation ♢ in A with 0 operands is uniquely determined by ♢(()), a certain element of A with which the operation can be identified. Operations with 0 operands are therefore sometimes called constant operations. So we shall throughout think of an operation with 0 operands as an element of A.
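The definition of an n-ary operation as a map A^n → A can be sketched directly in Python by writing each operation as a function of one n-tuple; the function names below are illustrative choices, not from the text.

```python
# A binary operation (n = 2): addition of integers, as a map on pairs.
def add(t):
    return t[0] + t[1]

# A unary operation (n = 1): negation, as a map on 1-tuples.
def neg(t):
    return -t[0]

# A 0-ary (constant) operation: it takes the empty tuple () and always
# returns the same element of A, so it can be identified with that element.
def zero(t):
    return 0

result = add((3, 4))
constant = zero(())  # the element the 0-ary operation "is"
```

The 0-ary case makes the remark concrete: `zero` carries no information beyond the single element it returns.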

3. Remark: Operation symbols

To underline the general aspect we are going to use operation symbols which do not already have a definite meaning, such as x ♢ y, x ♡ y.

4. Definition: Prefix, Infix, Postfix

We have a number of alternative ways to denote an operation, other than ♢(a1, ..., an), namely

♢ a1 ... an,   a1 ♢ ... ♢ an,   a1 ... an ♢

called prefix, infix and postfix notation respectively. In some contexts we do not state which notation we use, in the hope that it is easily understood from the context. For unary operations only prefix and postfix notation are relevant. Generally we use the operator notation ♢(a1, ..., an); infix notation is mostly used for binary operations.

5. Remark: Reverse Polish notation

Postfix notation ("backwards Polish") is in principle the simplest and is used in programming contexts, for instance on some pocket calculators, simply because you can avoid parentheses. E1 X2
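A tiny evaluator makes the remark concrete: in postfix notation the position of the operator symbols alone determines the grouping, so no parentheses are needed. This is a minimal sketch restricted to binary operations on integers.

```python
def eval_postfix(tokens):
    """Evaluate a postfix (reverse Polish) expression given as a token list."""
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # operands come off the stack in reverse order
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(int(tok))
    return stack.pop()

# (2 + 3) * 4 in postfix: 2 3 + 4 *
value = eval_postfix("2 3 + 4 *".split())
```

The infix expression (2 + 3) * 4 needs parentheses; its postfix form `2 3 + 4 *` does not.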

2. 4: Homomorphisms

We know from the real numbers the rule exp(x + y) = exp(x) · exp(y). For a linear mapping L we have that L(x + y) = L(x) + L(y). If we let M(X) denote the matrix which belongs to a certain rotation X, then M(X ∘ Y) = M(X) · M(Y). In each of these three cases we are dealing with a mapping with a particularly nice behaviour with respect to a couple of operations.

This may be generalized to arbitrary pairs of operations, provided they have the same arity:

6. Definition: Homomorphism wrt a pair of operations

Let ♢ and ♡ be operations on A and B respectively, both having n operands. Let f : A → B be a mapping such that for all (a1, ..., an) ∈ A^n we have

f(♢(a1, ..., an)) = ♡(f(a1), ..., f(an)).

Then we say that f is homomorphic wrt (♢, ♡).

7. Remark: Homomorphisms wrt unary and 0-ary operations

In infix notation the defining equation may be written

f(a1 ♢ ... ♢ an) = f(a1) ♡ ... ♡ f(an)

which in the binary case is

f(a1 ♢ a2) = f(a1) ♡ f(a2).

In the unary case it means that f(♢a) = ♡f(a), and for n = 0 we get

f(♢) = ♡.

Loosely we can say that it does not matter whether we first carry out the operation and then apply the mapping, or first map all the operands and then apply the operation to the results. This is stated most elegantly and compactly by the equation

(⋆)   f ∘ ♢ = ♡ ∘ f̄

which is displayed in the diagram

            f̄
   A^n ---------→ B^n
    |              |
  ♢ |              | ♡
    ↓       f      ↓
    A  ---------→  B
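The first example of the section, exp(x + y) = exp(x) · exp(y), can be checked numerically. This is a sketch that tests the binary homomorphism equation on a few sample points (the sample values and helper name are illustrative choices); floating point forces an approximate comparison.

```python
import math

def check_homomorphic(f, diamond, heart, samples):
    """Check f(a1 ◇ a2) == f(a1) ♡ f(a2) on all sample pairs."""
    return all(math.isclose(f(diamond(a, b)), heart(f(a), f(b)))
               for a in samples for b in samples)

# exp is homomorphic wrt (+, ·): it turns sums into products.
ok = check_homomorphic(math.exp,
                       lambda a, b: a + b,   # ◇ on the domain
                       lambda a, b: a * b,   # ♡ on the codomain
                       samples=[-1.5, 0.0, 0.7, 2.0])
```

This is of course only evidence on finitely many points, not a proof; the identity itself holds for all reals.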

8. Definition: Isomorphism wrt a pair of operations

Let ♢ and ♡ be operations in A and B, both with n operands. Let f : A → B be a bijective mapping such that f is a homomorphism wrt (♢, ♡) and f⁻¹ is a homomorphism wrt (♡, ♢). Then we say that f is an isomorphism wrt (♢, ♡).


You should realize that you have already met a number of homomorphisms by studying the following examples and exercises: E3 E4 E5 E6 E7 E8 X9 X10

We could say that a homomorphism preserves the algebraic structure, and it should then not be surprising that a composition of homomorphisms is again a homomorphism. There are other rules for combining homomorphisms; these are the subject of the next theorem:

9. Theorem: Composition and homomorphisms

Let ♢, ♡ and ♠ be operations on A, B and C respectively, all with n operands, and let f : A → B, g : B → C be mappings with h = g ∘ f, which we summarize in the diagram

        f         g
   A ------→ B ------→ C
    \________________↗
            h

Then

1) If f and g are homomorphisms then so is h.

2) If f and h are homomorphisms and f is surjective, then g is a homomorphism.

3) If g and h are homomorphisms and g is injective, then f is a homomorphism.

Proof: We use the compact homomorphism equation (⋆) in D6.

Then the proof of (1) takes the form

h ∘ ♢ = g ∘ f ∘ ♢ = g ∘ ♡ ∘ f̄ = ♠ ∘ ḡ ∘ f̄ = ♠ ∘ (g ∘ f)‾ = ♠ ∘ h̄.

The proof of (2) consists in combining the two equations

g ∘ ♡ ∘ f̄ = g ∘ f ∘ ♢ = h ∘ ♢ = ♠ ∘ h̄,
♠ ∘ ḡ ∘ f̄ = ♠ ∘ h̄

to yield g ∘ ♡ = ♠ ∘ ḡ, using that f, and hence f̄, is surjective.

Regarding (3) we combine the equations

g ∘ f ∘ ♢ = h ∘ ♢ = ♠ ∘ h̄,
g ∘ ♡ ∘ f̄ = ♠ ∘ ḡ ∘ f̄ = ♠ ∘ h̄

to deduce that f ∘ ♢ = ♡ ∘ f̄, since g is injective.

Since these proofs may seem very formalistic and condensed, we repeat them for the binary case in a simpler form.

Proof of (1): Let a1, a2 ∈ A. Then

h(a1 ♢ a2) = g(f(a1 ♢ a2)) = g(f(a1) ♡ f(a2)) = g(f(a1)) ♠ g(f(a2)) = h(a1) ♠ h(a2).

Proof of (2): Let b1, b2 ∈ B. As f is surjective we can choose a1, a2 ∈ A such that f(a1) = b1, f(a2) = b2. We then have that

g(b1 ♡ b2) = g(f(a1) ♡ f(a2)) = g(f(a1 ♢ a2)) = h(a1 ♢ a2) = h(a1) ♠ h(a2) = g(f(a1)) ♠ g(f(a2)) = g(b1) ♠ g(b2).

Proof of (3): Let a1, a2 ∈ A. First we see that

g(f(a1 ♢ a2)) = h(a1 ♢ a2) = h(a1) ♠ h(a2)

and next that

g(f(a1) ♡ f(a2)) = g(f(a1)) ♠ g(f(a2)) = h(a1) ♠ h(a2)

which gives that

g(f(a1 ♢ a2)) = g(f(a1) ♡ f(a2))

and since g is injective it then follows that

f(a1 ♢ a2) = f(a1) ♡ f(a2).
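Part (1) of the theorem can be watched in action on a small numerical example: reduction mod 12 is homomorphic wrt (+, +), reduction from mod 12 to mod 4 is homomorphic because 4 divides 12, and their composition is again a homomorphism. The concrete maps chosen here are illustrative, not from the text.

```python
def f(x):          # Z -> Z12, homomorphic wrt (+, addition mod 12)
    return x % 12

def g(x):          # Z12 -> Z4, well defined since 4 divides 12
    return x % 4

def h(x):          # the composition g ∘ f : Z -> Z4
    return g(f(x))

# By T9(1), h is again a homomorphism:
# h(a + b) == (h(a) + h(b)) mod 4 for all integers a, b.
composition_is_hom = all(h(a + b) == (h(a) + h(b)) % 4
                         for a in range(-20, 21) for b in range(-20, 21))
```

The finite check over a range of integers is of course only an illustration of the general statement proved above.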

10. Theorem: A bijective homomorphism is automatically an isomorphism

A bijective homomorphism is an isomorphism.

Proof: Follows from T9(2) applied to (f, g, h) = (f, f⁻¹, Id).

11. Theorem: Iterated composition of homomorphisms gives a homomorphism

If we have a number of sets, each with its own operation, combined in a chain by homomorphisms, then their composition is a homomorphism between the start and the end of the chain.

Proof: Follows by induction from point (1) of T9.

2. 5: Induced operations

There now follow some ways of constructing new operations from given ones, so-called induced operations: the suboperation, the function space operation, the product operation and the quotient operation.

2. 6: Induced operation on subset

It will prove convenient to extend an operation on a set A to act also on subsets of A, in the following obvious way:

12. Definition: Extension of an operation to subsets

Let ♢ be an operation on A with n operands and let B1, ..., Bn be subsets of A. Then we define

♢(B1, ..., Bn) = {♢(b1, ..., bn) : b1 ∈ B1, ..., bn ∈ Bn}

which in the binary case may be written

B1 ♢ B2 = {b1 ♢ b2 : b1 ∈ B1, b2 ∈ B2}.

If any of the subsets is a singleton, that is a set of the form {b}, then we shall write b ♢ B2 instead of {b} ♢ B2.

13. Definition: Closed or invariant subset

A subset B of A is called closed (or invariant) wrt the operation ♢ on A if

♢(B, ..., B) ⊆ B.

14. Remark: Closed wrt a constant operation

B is closed wrt the constant operation ♢ if and only if ♢ ∈ B. X11 X12

15. Definition: Induced operation on a subset

Suppose that B is a subset of A which is closed wrt the operation ♢ on A. Then the restriction of ♢ to B^n is an operation on B with the same arity. We call it the operation induced by ♢ on B. We use the same symbol for the induced operation; when necessary we write ♢_B.

If we let i : B → A denote the inclusion map i(x) = x, x ∈ B, we can summarize the definition in the diagram

             ī
   B^n ---------→ A^n
     |             |
 ♢_B |             | ♢
     ↓       i     ↓
     B  ---------→ A

which just states that ♢_B is the operation on B which makes i a homomorphism.
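Closedness of a subset, and hence the existence of the induced operation, is easy to test on a finite example. The choice of carrier below (the residues mod 10 under addition) is illustrative: the even residues are closed, the odd ones are not.

```python
A = set(range(10))
add = lambda a, b: (a + b) % 10          # a binary operation on A

def op_on_subsets(op, B1, B2):
    """The extension of a binary operation to subsets: B1 ◇ B2 (D12)."""
    return {op(b1, b2) for b1 in B1 for b2 in B2}

def is_closed(op, B):
    """B is closed wrt ◇ iff ◇(B, B) ⊆ B (D13)."""
    return op_on_subsets(op, B, B) <= B

evens = {0, 2, 4, 6, 8}
odds = {1, 3, 5, 7, 9}

evens_closed = is_closed(add, evens)     # the induced operation exists on evens
odds_closed = is_closed(add, odds)       # odd + odd is even, so not closed
```

When the check succeeds, restricting `add` to the subset is exactly the induced operation ♢_B of D15.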

16. Definition: Translations

Let ♢ be a binary operation on A and let a ∈ A. The mapping of A into itself defined by x ↦ a ♢ x (that is, [A ∋ x ↦ a ♢ x ∈ A]) is called the left translation by a. Right translation is defined analogously.

17. Remark:

This is inspired by addition of vectors (e.g. in space).

2. 7: Induced operation on function spaces

For any sets M and A we let F(M, A) denote the set of mappings from M to A. We are going to use an operation on A to define an operation on F(M, A). This is

18. Definition: Induced operation on function spaces

If ♢ is an operation on A with n operands and M is a set, then we can define an operation ♢_M in F(M, A) by the formula

♢_M(f1, ..., fn)(x) = ♢(f1(x), ..., fn(x))

and we call it the operation on F(M, A) induced by ♢. Usually we shall just use the same symbol for the induced operation, which hopefully will cause no serious ambiguity; if necessary we denote it ♢_M. For binary operations the definition may be written

(f1 ♢ f2)(x) = f1(x) ♢ f2(x).

For each x ∈ M we let δx denote the mapping from F(M, A) to A defined by δx(f) = f(x). This mapping is called evaluation at x. Then we can summarize the definition of ♢_M in the diagram

                  δ̄x
  F(M, A)^n ---------→ A^n
      |                 |
  ♢_M |                 | ♢
      ↓         δx      ↓
   F(M, A) ---------→   A

from which we see that ♢_M is exactly the operation which makes δx a homomorphism for all x.
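The pointwise construction of D18 and the evaluation maps δx translate directly into Python; the helper names `induced` and `delta` are illustrative choices.

```python
def induced(op):
    """Turn a binary operation on A into the pointwise operation on F(M, A)."""
    return lambda f1, f2: (lambda x: op(f1(x), f2(x)))

add = lambda a, b: a + b                 # ◇ on A = numbers
add_M = induced(add)                     # ◇_M on functions M -> A

f1 = lambda x: x * x
f2 = lambda x: 2 * x + 1
g = add_M(f1, f2)                        # the pointwise sum x^2 + 2x + 1

def delta(x):
    """Evaluation at x: δx(f) = f(x)."""
    return lambda f: f(x)

# δ3 is a homomorphism: δ3(f1 ◇_M f2) == δ3(f1) ◇ δ3(f2)
d3 = delta(3)
hom_at_3 = d3(g) == add(d3(f1), d3(f2))
```

This is exactly how addition of real numbers induces addition of real functions, as mentioned in the introduction.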


2. 8: Induced operation on product spaces

On the linear space R^n we have defined addition componentwise:

(x1, ..., xn) + (y1, ..., yn) = (x1 + y1, ..., xn + yn)

Here we have a set R^n = R × ... × R, a product of factors on each of which an addition + is known, and from these component additions we create an addition on the product.

To create this componentwise operation it is, however, not necessary that the factors or the operations are the same. So we generalize by allowing each factor to have its own operation. The definition may look a little scary at first.

19. Definition: Product operation. Direct product

Let there be given r sets Ai, i = 1, ..., r, and on each of these an operation ♢_i with n operands. We then define the product operation ♢ = ♢_1 × ... × ♢_r on A = A1 × ... × Ar to be the operation uniquely defined by the formula

(⋆)   pi ∘ ♢ = ♢_i ∘ p̄i

(a map into a product is determined by its components, as mentioned in the background terminology), which simply states that to find the component with index i of the result of the operation, you just take the component with index i of each operand and apply the operation with that index to those.

To see a more explicit formula we restrict to the case where the operations are binary, and we get

(a1, ..., ar) ♢ (b1, ..., br) = (a1 ♢_1 b1, ..., ar ♢_r br).

The definition may also be summarized in the diagram


            p̄i
   A^n ---------→ Ai^n
    |              |
  ♢ |              | ♢_i
    ↓       pi     ↓
    A  ---------→  Ai

and we see that we have defined the operation in such a way that all the projections are homomorphisms.

In the special case where all the factors are the same, Ai = A, and also the operations are the same, ♢_i = ♢ for all i, we shall use the notation ♢ for ♢_1 × ... × ♢_r. So in the case of binary operations we have

(a1, ..., ar) ♢ (b1, ..., br) = (a1 ♢ b1, ..., ar ♢ br)

where we recognize vector addition as a special case, ♢ being + in a linear space.

The case with the same factor repeated is the most common (you can practice it in X13), though it may also be relevant with different factors, as in X14.

Next consider the question of how the properties of the factor operations are reflected in the product operation. First we have:

20. Theorem: Projection on a factor is a homomorphism

With the notation of D19 we have that the projections pi are homomorphisms wrt the pair (♢, ♢_i).

Proof: This follows by comparing the definition of the product operation ((⋆) in D19) with the definition of homomorphism ((⋆) in D6). See also the diagram in the definition.

The following theorem shows that for a mapping with values in a product, the question of being a homomorphism may be answered by looking at its components:

21. Theorem: The components are homomorphisms, and conversely

Suppose that ♢ is an operation on A and that ♡_1, ..., ♡_r are operations on B1, ..., Br, all with n operands. Further let B = B1 × ... × Br and ♡ = ♡_1 × ... × ♡_r, and assume that fi : A → Bi are the components of f : A → B.

Then f is a homomorphism wrt ♢ and ♡ if and only if fi is a homomorphism wrt ♢ and ♡_i for all i.

Proof: Since fi = pi ∘ f, the "only if" part follows from T9(1), and the "if" part follows from T9(2), since pi is surjective.

This is used to prove a very famous theorem; see the example E15.
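A product operation with different factors and different operations can be sketched concretely; the factors chosen here (integers with addition, strings with concatenation) are illustrative. The check at the end is T20: the projections are homomorphisms.

```python
def product_op(op1, op2):
    """The product operation ◇ = ◇1 x ◇2 on A1 x A2, defined componentwise."""
    return lambda a, b: (op1(a[0], b[0]), op2(a[1], b[1]))

add = lambda x, y: x + y                 # ◇1 on A1 = integers
concat = lambda s, t: s + t              # ◇2 on A2 = strings
diamond = product_op(add, concat)

a, b = (2, "ab"), (5, "cd")
r = diamond(a, b)                        # (7, "abcd")

p1 = lambda t: t[0]
p2 = lambda t: t[1]

# T20: p_i(a ◇ b) == p_i(a) ◇_i p_i(b), so the projections are homomorphisms.
p1_hom = p1(diamond(a, b)) == add(p1(a), p1(b))
p2_hom = p2(diamond(a, b)) == concat(p2(a), p2(b))
```

Taking both factors and both operations equal recovers the vector-addition special case mentioned above.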

2. 9: Induced operation on quotient

While building product sets is about creating new objects with many properties from objects with simpler properties, building quotients is about forgetting properties which are not relevant in the context.

A radical example is to forget all properties of a permutation except its parity. We are then left with only two elements, "even" and "odd". When we transfer some operation on permutations to these objects, we must assure ourselves that the operation is insensitive to all other properties of the operands.

This is the case with multiplication of permutations, since the parity of the product only depends on the parity of the factors.

The general framework for this is the notion of a partition K of a set X into classes according to some criterion. Then K is the set of classes. Each class is nonempty and every element must be in exactly one class. If K is a class and x ∈ K, we say that x is a representative of K and that K is the class of x, which is also denoted [x]_K.

The mapping of X into K which to x assigns its class [x]_K is called the canonical projection, and we denote it by k_K.

Any mapping f of X into some set Y which is constant on the classes can in an obvious way be considered as defined on K; let's call it f_K. Then we have the diagram

          f
   X ---------→ Y
    \          ↗
 k_K ↘       ↗ f_K
         K

The most usual way to specify a partition is based on an equivalence relation. Let's recall that an equivalence relation ∼ in a set X is characterized by the three properties

1) reflexivity (x ∼ x)
2) symmetry (x ∼ y ⇒ y ∼ x)
3) transitivity (x ∼ y, y ∼ z ⇒ x ∼ z)

For each x ∈ X we define the equivalence class of x to consist of all elements equivalent to x. The resulting partition is denoted X∼, and we simply write [x]∼ for the class of x. This partition is called the quotient of X modulo ∼. The canonical map is then denoted by k∼.

Often an equivalence relation is based on a mapping f of X into some set Y, by defining x1 ∼ x2 to mean f(x1) = f(x2). Then f is constant on the classes and the situation is described by the diagram

          f
   X ---------→ Y
    \          ↗
  k∼ ↘       ↗ f∼
         X∼

Now we return to operations. We have been dealing with the task of transferring a map on a set X to a quotient of X, and we are interested in extending this idea to operations, that is, to obtain an operation on the quotient. Operations for which this is possible will be said to be compatible with the partition. If the partition is defined by an equivalence relation, we shall say that the relation is compatible.

This is made precise in the following, where the quotient is defined by an equivalence relation.
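Building the quotient X∼ from a mapping f (with x1 ∼ x2 meaning f(x1) = f(x2)) amounts to grouping elements by their f-value. A minimal sketch, with illustrative choices X = {0, ..., 9} and f = parity, echoing the permutation example:

```python
def quotient_by(f, X):
    """Partition X into the classes of the relation x1 ~ x2 iff f(x1) == f(x2)."""
    classes = {}
    for x in X:
        classes.setdefault(f(x), set()).add(x)   # the class [x] of x
    return classes

X = range(10)
parity = lambda x: x % 2
classes = quotient_by(parity, X)

# f is constant on each class, so f~ on the quotient is well defined.
f_constant_on_classes = all(len({parity(x) for x in K}) == 1
                            for K in classes.values())
```

The two classes here play the roles of "even" and "odd" from the permutation example at the start of the section.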


22. Definition: Compatibility<br />

Suppose that ♢ is an operation and ∼ an equivalence relation in A, such that<br />

a1 ∼ b1, . . . , an ∼ bn ⇒ ♢ (a1, . . . , an) ∼ ♢ (b1, . . . , bn)<br />

for all (a1, . . . , an) and (b1, . . . , bn). Then we say that the equivalence relation<br />

and the operation are compatible<br />

Lets for convenience also state the definition for binary operations:<br />

a1 ∼ b1, a2 ∼ b2 ⇒ a1 ♢ a2 ∼ b1 ♢ b2<br />

You have most certainly met this notion previously, for instance in the examples:<br />

X16, X17, X18, X19. More examples (which may be new to you) are in<br />
E20, E21.<br />
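Definition 22 can also be checked mechanically on a finite sample. The sketch below (helper names are my own, not from the text) tests the binary condition for the relation "same remainder modulo 3" and ordinary addition, and shows that the same relation is not compatible with the operation max.

```python
# Brute-force check of the compatibility condition in Definition 22
# (illustrative helper names; a sample-based sketch, not a proof).

def equivalent(x, y, n=3):
    """x ~ y  iff  x and y have the same remainder modulo n."""
    return x % n == y % n

def compatible(op, equiv, sample):
    """a1 ~ b1 and a2 ~ b2 must imply op(a1, a2) ~ op(b1, b2)."""
    return all(
        equiv(op(a1, a2), op(b1, b2))
        for a1 in sample for b1 in sample if equiv(a1, b1)
        for a2 in sample for b2 in sample if equiv(a2, b2)
    )

sample = range(-6, 7)
print(compatible(lambda x, y: x + y, equivalent, sample))  # True
print(compatible(max, equivalent, sample))                 # False
```

The failure of max is witnessed by 0 ∼ 3 and 2 ∼ 2, since max(0, 2) = 2 while max(3, 2) = 3 and 2 ̸∼ 3.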

The reason for introducing compatibility is, as mentioned, that it enables us to<br />
induce an operation on the equivalence classes. This is clarified in the next<br />
theorem:<br />

23. Theorem: Calculating with representatives<br />

Suppose that ♢ is an n-ary operation on A and ∼ is an equivalence<br />

relation on A. If they are compatible, then the class of the result by the operation<br />

is determined by the classes of the operands. More formally<br />

Let K1, . . . , Kn ∈ A∼ and let a1, . . . , an ∈ A be representatives for the respective<br />

classes. Then the equivalence class<br />

[ ♢ (a1, . . . , an)]∼<br />

is independent of the way the representatives are chosen.<br />



This makes the following definition meaningful:<br />

24. Definition: The quotient operation<br />

The formula<br />
<br />
♢ ∼([a1]∼, . . . , [an]∼) = [ ♢ (a1, . . . , an)]∼<br />
<br />
defines an operation on the set of equivalence classes. We call it the quotient<br />
operation induced by ♢ modulo ∼, and we denote it ♢ ∼.<br />

The definition is summarized by the diagram (k is applied componentwise in<br />
the top row)<br />
<br />
               k<br />
        A^n ------> (A∼)^n<br />
         |             |<br />
       ♢ |             | ♢ ∼<br />
         v             v<br />
         A  -------->  A∼<br />
               k<br />

which shows that the quotient operation is the one making the canonical projection<br />

a homomorphism.<br />
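On a finite model one can watch Theorem 23 and Definition 24 in action. The sketch below (my own construction, not from the text) forms the classes of {0, . . . , 8} modulo "same remainder mod 3" and computes the quotient operation by picking arbitrary representatives; the assertion confirms that the result does not depend on the representatives chosen.

```python
# Quotient operation via representatives, on the carrier {0,...,8} with the
# relation "same remainder modulo 3" (a sketch with illustrative names).

def cls(x, n=3):
    """The equivalence class [x]~ inside {0,...,8}."""
    return frozenset(y for y in range(9) if y % n == x % n)

def quotient_add(K1, K2):
    a1, a2 = next(iter(K1)), next(iter(K2))   # arbitrary representatives
    return cls(a1 + a2)

# Well-definedness (Theorem 23): any representatives give the same class.
assert all(quotient_add(cls(a), cls(b)) == cls(a + b)
           for a in range(9) for b in range(9))
print(sorted(quotient_add(cls(1), cls(2))))   # [0, 3, 6]
```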

25. Theorem: The canonical projection is a homomorphism.<br />

Let ♢ be an operation on A and let ∼ be an equivalence relation compatible<br />

with ♢ and let k be canonical projection of A on A∼ associated with ∼. Then<br />

k is a homomorphism wrt ♢ and ♢ ∼.<br />

Proof : Since k is the canonical projection, we have that k(a) = [a], the<br />

equivalence class containing a. Then the diagram above proves the theorem.<br />

Let's give a less formalistic proof in the case of a binary operation:<br />

k(a1) ♢ k(a2) = [a1] ♢ [a2] = [a1 ♢ a2] = k(a1 ♢ a2)<br />

which is exactly the condition for a homomorphism (D6).<br />

The previous theorem shows that an equivalence relation which is compatible<br />
with an operation gives rise to a homomorphism. The theorems below show the<br />
converse: any homomorphism generates a compatible equivalence relation.<br />



26. Theorem: Criterion for homomorphy on the quotient<br />

Let ∼ be an equivalence relation on A compatible with the operation ♢ . Let k<br />

denote the canonical projection of A on A∼. Suppose that ♡ is an operation<br />

on B and f is a mapping of A∼ into B. Then f is a homomorphism wrt<br />

( ♢ ∼, ♡ ) if and only if f ◦ k is a homomorphism wrt ( ♢ , ♡ )<br />

Proof : Consider the diagram<br />
<br />
             k<br />
        A ------> A∼<br />
         \        /<br />
      f◦k \      / f<br />
           v    v<br />
             B<br />

The if part follows from T9:2 (since k is surjective) and the only if part follows<br />
from T9:1.<br />

27. Theorem: Homomorphism and compatibility.<br />

Suppose that f : A → B is a homomorphism wrt ♢ and ♡ . Let ∼ be the<br />

equivalence relation induced by f, (that means a ∼ b ⇔ f(a) = f(b)). Then ∼<br />

will be compatible with ♢ .<br />

Proof : Suppose that a1 ∼ a ′ 1, . . . , an ∼ a ′ n. Then<br />
<br />
f( ♢ (a1, . . . , an)) = ♡ (f(a1), . . . , f(an))<br />
= ♡ (f(a ′ 1), . . . , f(a ′ n)) = f( ♢ (a ′ 1, . . . , a ′ n))<br />
<br />
and so<br />
<br />
♢ (a1, . . . , an) ∼ ♢ (a ′ 1, . . . , a ′ n)<br />

2. 10: Axioms for operations.<br />

28. Definition: Associative operation<br />

A binary operation ♢ on A is said to be associative if<br />

a ♢ (b ♢ c) = (a ♢ b) ♢ c<br />



holds for all a, b, c ∈ A<br />

29. Theorem: Parentheses are redundant for associative operations<br />

Easy to understand what it means but clumsy to formulate precisely.<br />

30. Definition: Commutative operation<br />

A binary operation ♢ on A is said to be commutative if<br />

a ♢ b = b ♢ a<br />

holds generally, that is for all a, b ∈ A<br />

31. Definition: Distributive operation<br />

A binary operation ♢ is said to be distributive wrt a binary operation ♡ if<br />
<br />
a ♢ (b1 ♡ b2) = (a ♢ b1) ♡ (a ♢ b2)<br />
<br />
holds generally, that is for all a, b1, b2 ∈ A<br />

32. Theorem: Associativity, commutativity and distributivity are hereditary<br />
<br />
If an operation ♢ on A is associative, the same holds for the operations induced<br />
on subsets, function spaces, product spaces and quotient spaces. The same holds<br />
for commutativity and distributivity.<br />

Proof : An easy, maybe tedious, exercise.<br />

33. Theorem: Associativity, commutativity and distributivity are preserved<br />

by homomorphisms<br />

Let ♢ be an operation in A and ♡ an operation in B and let f be a homomorphism<br />

of A into B. If f is surjective and ♢ is associative then ♡<br />

is associative. If f is injective and ♡ is associative then ♢ is associative.<br />

Obvious analogues hold for commutativity and distributivity.<br />



Proof : Assume f surjective and ♢ associative. Let b1, b2, b3 ∈ B. We can<br />

choose a1, a2, a3 ∈ A such that f(a1) = b1, f(a2) = b2, f(a3) = b3. Then<br />

b1 ♡ (b2 ♡ b3) = f(a1) ♡ (f(a2) ♡ f(a3)) = f(a1 ♢ (a2 ♢ a3))<br />

(b1 ♡ b2) ♡ b3 = (f(a1) ♡ f(a2)) ♡ f(a3) = f((a1 ♢ a2) ♢ a3)<br />

Since the right-hand sides of the two lines are equal (by associativity of ♢ ),<br />
the left-hand sides are equal as well, showing that ♡ is associative.<br />

The remaining part of the proof is left as an exercise.<br />
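The first claim of Theorem 33 can be spot-checked numerically: x ↦ x mod 4 is a surjective homomorphism from (Z, +) onto (Z4, + mod 4), so addition mod 4 must be associative. A sketch (the helper names are mine):

```python
# Exhaustive associativity check on a finite carrier (illustrative sketch).

def assoc(op, elems):
    return all(op(a, op(b, c)) == op(op(a, b), c)
               for a in elems for b in elems for c in elems)

add4 = lambda a, b: (a + b) % 4       # the image operation under x -> x % 4
print(assoc(add4, range(4)))          # True, as Theorem 33 predicts
print(assoc(lambda a, b: a - b, range(4)))   # False: subtraction is not associative
```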



3: Algebraic structures.<br />

3. 1: Definition.<br />

34. Definition: Algebraic structure<br />

An algebraic structure is a set equipped with operations in the set. If the set<br />

itself is A and if the operations are the ordered tuple ( ♢ 1<br />

, . . . , ♢ n<br />

) we shall use<br />

(A, ♢ 1<br />

, . . . , ♢ n<br />

) to denote the structure. We call A the underlying set of the<br />

structure. Often it will cause no confusion to simply let A denote the structure.<br />

35. Remark: Infinitely many operations<br />

In general one often allows infinitely many operations. We do not<br />
do so here, for notational convenience.<br />

36. Definition: Type of structure<br />

By the type of an algebraic structure we understand the ordered tuple (i1, . . . , ik)<br />

of the arities of the operations.<br />

We are now going to extend all the notions we have for a single operation to<br />
structures. This simply consists in letting claims and constructions apply to all<br />
the operations of the structure. For instance, a mapping is a homomorphism<br />
for a pair of structures if it is a homomorphism for all the pairs of operations<br />
defining the structures. A product of structures consists of the products of all<br />
the defining operations, and so on for all the other notions. This might be enough<br />
to know, but we carry out the details for completeness and reference.<br />

3. 2: Homomorphisms and isomorphisms<br />

37. Definition: Homomorphism<br />
<br />
Let (A, ♢ 1, . . . , ♢ n) and (B, ♡ 1, . . . , ♡ n) be algebraic structures of the same<br />
type and let f : A → B be a mapping, which is a homomorphism wrt ( ♢ i, ♡ i)<br />
for all i = 1, . . . , n. Then we say that f is a homomorphism wrt the pair of<br />
structures A and B.<br />

38. Definition: Isomorphism<br />
<br />
Let (A, ♢ 1, . . . , ♢ n) and (B, ♡ 1, . . . , ♡ n) be algebraic structures of the same<br />
type and let f : A → B be a mapping, which is an isomorphism wrt ( ♢ i, ♡ i)<br />
for all i = 1, . . . , n. Then f is an isomorphism wrt the pair of structures.<br />

39. Theorem: Composition of homomorphisms<br />
<br />
Let (A, ♢ 1, . . . , ♢ n), (B, ♡ 1, . . . , ♡ n) and (C, ♠ 1, . . . , ♠ n) be algebraic<br />
structures of the same type, and let f : A → B, g : B → C be mappings with<br />
h = g ◦ f.<br />

Then we have that<br />

1) If f and g are homomorphisms then so is h<br />

2) If f and h are homomorphisms and f is surjective then also g is a<br />

homomorphism<br />

3) If g and h are homomorphisms and g is injective then f is a homomorphism<br />

Proof : Follows from the related theorem T9 about mappings which are homomorphisms<br />

wrt a single pair of operations.<br />

40. Theorem: Bijectivity implies automatically isomorphism<br />

A bijective homomorphism is an isomorphism.<br />

Proof : Follows from T39 (3) applied to the scenario (f, g, h) = (f, f −1 , Id)<br />

41. Theorem: Composition of homomorphisms is homomorphism.<br />

Analogously for isomorphisms<br />

If we have a chain of structures linked by homomorphisms, then the composition<br />
of these homomorphisms is itself a homomorphism.<br />

Proof : Follows by induction from the previous theorem<br />

3. 3: Induced structures on subset<br />

42. Definition: Substructure<br />

By a substructure of a structure is meant a (non empty) subset which is invariant<br />
wrt all of the operations of the structure.<br />



43. Theorem: Substructure is a structure of same type<br />

Let B be a substructure of A. When B is equipped with the induced operations<br />

you get a structure of same type as A. We shall always let B also denote this<br />

structure if no other structure is explicitly specified.<br />

Substructures occur in a lot of situations.<br />

44. Theorem: The generated substructure<br />

Let (A, ♢ 1, . . . , ♢ n) be an algebraic structure and let D ⊂ A. Then there exists<br />
a substructure B that contains D and is the least one to do so, which means<br />
that it is contained in any other substructure containing D.<br />

It is the intersection of all substructures containing D and can be explicitly<br />

written as<br />

B = ∩ {C ∈ A : D ⊆ C}<br />

where A denotes the set of substructures of (A, ♢ 1, . . . , ♢ n).<br />

Proof : We define B by the formula and start by showing that B is in fact a<br />
substructure. (Note that A ∈ A, so A is not empty.) To do so we must<br />

show that B is invariant wrt any operation.<br />

So let ♢ be any operation of the structure, say with arity k. Let (b1, . . . , bk) ∈<br />

B k . We must show that ♢ (b1, . . . , bk) ∈ B. So let C be any substructure<br />
containing D. Since B ⊆ C we have b1, . . . , bk ∈ C, and since C is a<br />
substructure it follows that ♢ (b1, . . . , bk) ∈ C.<br />

Since this is true for any C we have that ♢ (b1, . . . , bk) must also be in the<br />

intersection of all these C which is B.<br />

By construction B is contained in any substructure containing D.<br />

45. Definition: The generated substructure, the algebraic closure<br />

The structure B in the preceding theorem is said to be the subalgebra generated<br />

by D, or the algebraic closure of D. It is denoted by ⟨D⟩. If D = {d1, . . . , dn}<br />

we shall also write ⟨D⟩ = ⟨d1, . . . , dn⟩<br />

46. Theorem: A characterization of the algebraic closure<br />

The algebraic closure ⟨D⟩ consists of all elements which can be obtained by<br />

successive application of the operations on elements of D<br />



Proof : It follows from the definitions that any substructure C containing D<br />

also must contain the elements constructed as above, since C must be invariant.<br />

So all of these elements must be in ⟨D⟩. On the other hand these elements will<br />

constitute a substructure.<br />
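For a finite carrier the characterization in Theorem 46 translates directly into an algorithm: apply the operations repeatedly until nothing new appears. A sketch, with function names of my own choosing:

```python
# Compute the generated substructure <D> by closing D under the operations
# (a sketch for finite structures; illustrative names).
from itertools import product

def generated(ops, D):
    """ops: list of (arity, function) pairs; D: the generating set."""
    closure = set(D)
    changed = True
    while changed:
        changed = False
        for arity, op in ops:
            for args in product(closure, repeat=arity):
                v = op(*args)
                if v not in closure:
                    closure.add(v)
                    changed = True
    return closure

add_mod12 = (2, lambda a, b: (a + b) % 12)    # binary addition on Z12
print(sorted(generated([add_mod12], {4})))    # [0, 4, 8]
```

With only the binary addition (no 0-ary constant, no inverse), the element 4 generates exactly {0, 4, 8} in Z12, while 1 generates all of Z12.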

47. Remark: What is meant by generation<br />

The characterization in the theorem more clearly expresses the meaning of generation.<br />

And we could as well have taken this as a definition.<br />

48. Theorem: The image by a homomorphism is a substructure<br />

Let f be a homomorphism wrt (A, ♢ 1<br />

C be a substructure of (A, ♢ 1<br />

, . . . , ♢ n<br />

Then f(C) a substructure of (B, ♡ 1<br />

Proof : Obvious.<br />

, . . . , ♢ n<br />

).<br />

, . . . , ♡ n<br />

).<br />

) and (B, ♡ 1<br />

, . . . , ♡ n<br />

), and let<br />

49. Theorem: The inverse image by a homomorphism is a substructure<br />

Let f be a homomorphism wrt (A, ♢ 1, . . . , ♢ n) and (B, ♡ 1, . . . , ♡ n), and let<br />
C be a substructure of (B, ♡ 1, . . . , ♡ n). Then f −1 (C) is a substructure of<br />
(A, ♢ 1, . . . , ♢ n).<br />

Proof : Let ( ♢ , ♡ ) be a pair of coupled operations with n operands. Let<br />

a1, . . . , an ∈ f −1 (C), which means that f(a1), . . . , f(an) ∈ C. It is to be shown<br />

that ♢ (a1 . . . , an) ∈ f −1 (C), which means that f( ♢ (a1, . . . , an)) ∈ C. But<br />

since f is a homomorphism we have that f( ♢ (a1, . . . , an)) = ♡ (f(a1), . . . , f(an)),<br />

and this expression is in C, since f(a1), . . . , f(an) are so per assumption, and<br />

C as a substructure is closed wrt ♡ .<br />
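Theorem 49 can be illustrated on a sample (my own example, not the text's): take the homomorphism f(x) = x mod 6 from (Z, +) to (Z6, + mod 6) and the substructure C = {0, 2, 4} of Z6. The inverse image f −1 (C) is the set of even integers, which is indeed closed under addition.

```python
# Sample-based illustration of Theorem 49 (illustrative names).

f = lambda x: x % 6
C = {0, 2, 4}
assert all((a + b) % 6 in C for a in C for b in C)   # C is a substructure

sample = range(-20, 21)
preimage = {x for x in sample if f(x) in C}          # the even numbers here
assert all(f(a + b) in C for a in preimage for b in preimage)
print(sorted(preimage)[:4])   # [-20, -18, -16, -14]
```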

3. 4: Induced structure on function spaces<br />

For any sets M and A we will use F(M, A) to denote the set of all mappings<br />
from M to A. For each x ∈ M we define the map ex from F(M, A) to A by<br />

ex(f) = f(x) and we call this map the evaluation at x. It so to speak changes<br />

the roles of map and argument.<br />
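The induced (pointwise) structure of this section, and the fact that each evaluation ex is then a homomorphism, can be sketched as follows (helper names are mine):

```python
# Lift a binary operation on A to F(M, A) pointwise (a sketch).

def induced(op):
    """(f <> g)(x) = f(x) <> g(x) for f, g in F(M, A)."""
    return lambda f, g: (lambda x: op(f(x), g(x)))

add = lambda a, b: a + b
plus = induced(add)                          # the induced operation on F(M, A)
h = plus(lambda x: x * x, lambda x: 2 * x)   # x -> x^2 + 2x, pointwise
print(h(3))                                  # 15

# evaluation at a point is a homomorphism: ex(f <> g) = ex(f) <> ex(g)
f1, f2 = (lambda x: x + 1), (lambda x: 3 * x)
assert plus(f1, f2)(5) == add(f1(5), f2(5))
```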



We are now going to move any algebraic structure on A to a structure<br />
on F(M, A) (see the introduction to D 18), which we shall call the induced<br />
structure:<br />

50. Definition: Induced structure on function spaces<br />
<br />
Suppose that (A, ♢ 1, . . . , ♢ n) is an algebraic structure and that M is a set.<br />
Then we call the algebraic structure on F(M, A) whose operations are the<br />
induced operations the induced structure. We always think of F(M, A), without<br />
needing to point it out, as equipped with this structure.<br />

51. Theorem: The structure on the function space is of the same type<br />
<br />
Using the notation from the previous definition we have that F(M, A) is of the<br />
same type as A.<br />
<br />
Proof : Obvious.<br />

3.4.1: Induced structure on product.<br />

Since we can construct the product of a number of operations we can also<br />

construct the product of structures by taking the product of the involved operations.<br />

This is made precise in the following<br />

52. Definition: Direct product of structures<br />
<br />
We take the case of two factors first: Suppose that (A, ♢ 1, . . . , ♢ n) and<br />
(B, ♡ 1, . . . , ♡ n) are two structures of the same type. Then we define their<br />
product as the structure<br />
<br />
(A × B, ♢ 1 × ♡ 1, . . . , ♢ n × ♡ n)<br />
<br />
Next let's take m factors, each of which has n operations:<br />
Suppose that (Ai, ♢ i 1, . . . , ♢ i n) for i = 1, . . . , m are algebraic structures, all<br />
of the same type. Then we define the product structure as (A, ♢ 1, . . . , ♢ n),<br />
where A = A1 × · · · × Am and where ♢ j is the product ♢ 1 j × · · · × ♢ m j of<br />
the j-th operations of the factors.<br />
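The componentwise operation of Definition 52 is easy to realize in code. A sketch (names are mine) for two factors, used to build the operation of (Z2 × Z3, + × +):

```python
# Direct product of two binary operations, acting componentwise (a sketch).

def product_op(op1, op2):
    return lambda p, q: (op1(p[0], q[0]), op2(p[1], q[1]))

add2 = lambda a, b: (a + b) % 2
add3 = lambda a, b: (a + b) % 3
op = product_op(add2, add3)       # the operation of (Z2 x Z3, + x +)
print(op((1, 2), (1, 2)))         # (0, 1)
```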

53. Theorem: The product structure is of same type<br />

The product structure has the same type as the factors.<br />



Proof : The factors have by definition the same type, say with arities (k1, . . . , kn).<br />
But each of the product operations has the same arity as its factors. Therefore<br />
the arities of the product structure are also (k1, . . . , kn).<br />

3. 5: Induced structure on quotient<br />

54. Definition: Quotient structure modulo an equivalence relation.<br />
Canonical projection.<br />
<br />
Let there be given an algebraic structure (A, ♢ 1, . . . , ♢ n), where the underlying<br />
set A is equipped with an equivalence relation ∼ compatible with all the<br />
operations.<br />
<br />
We then define an algebraic structure, called the quotient structure of<br />
(A, ♢ 1, . . . , ♢ n) modulo ∼, to be the algebraic structure (A∼, ♢ 1 ∼, . . . , ♢ n ∼).<br />
<br />
55. Theorem: Quotient structure type.<br />
<br />
The quotient structure is of the same type.<br />
<br />
Proof : Obvious.<br />
<br />
56. Theorem: The canonical projection is a homomorphism.<br />
<br />
The canonical projection is a homomorphism.<br />
<br />
Proof : Obvious.<br />
<br />
57. Theorem: Homomorphism gives equivalence.<br />
<br />
Assume that f is a homomorphism of the algebraic structure (A, ♢ 1, . . . , ♢ n)<br />
into the algebraic structure (B, ♡ 1, . . . , ♡ n), and let the relation ∼ in A be the<br />
equivalence relation induced by f (that is: a ∼ b ⇔ f(a) = f(b)). Then we<br />
have that ∼ is compatible with the structure on A.<br />



58. Definition: Quotient induced by a homomorphism.<br />

The equivalence relation mentioned in the preceding theorem is said to be induced<br />

by f and we denote it by ∼f . The quotient structure induced by this<br />

relation will be called the quotient structure induced by f and will be denoted<br />

by A/f<br />

4: Appendix: Tables.<br />

4. 1: Symbols for sets.<br />

In the table below X and Y are sets, G, H are groups, M monoids, R rings,<br />
L fields and V, W linear spaces. If needed in the context we let X and Y be<br />
equipped with topologies and we let V, W have inner products.<br />

Notation Set<br />

1 N natural numbers<br />

2 Z integers<br />

3 Q rational numbers<br />

4 R real numbers<br />

5 C complex numbers<br />

6 M ∗ invertible elements in M<br />

7 Zn classes of remainders Z/nZ<br />

8 Un invertible elements in Zn, (Z ∗ n)<br />

9 Lin(V, W ) linear mappings of V into W .<br />

10 Iso(V, W ) isometries of V into W<br />

11 Rot(2) rotations in the plane<br />

12 Rot(3) rotations in space<br />

13 Rfl(2) reflections in the plane<br />

14 Rfl(3) reflections in space<br />

15 T classes of remainders R/Z<br />

16 Mat(m, n, L) m × n matrices over L<br />

17 GL(n, L) invertible elements in Mat(n, n, L)<br />

18 SL(n, L) {A ∈ GL(n, L)| det(A) = 1}<br />

19 En unity matrix of dimension n<br />

20 O(n) orthogonal matrices<br />

21 SO(n) special orthogonal matrices<br />

22 U(n) unitary matrices<br />



23 SU(n) special unitary matrices<br />

24 Pern(R) permutation matrices over R<br />

25 F(X, Y ) mappings from X into Y<br />

26 C(X, Y ) continuous mappings from X into Y<br />

27 D(X) subsets of X.<br />

28 O(X) open subsets of X.<br />

29 F(X, Y ) ∗ bijective mappings from X into Y<br />

30 C(X, Y ) ∗ homeomorphisms from X into Y<br />

31 ⟨a, b, c, . . .⟩ words over {a, b, c, . . .}<br />

32 Sn permutations of n objects<br />

33 An even permutations of n objects<br />

34 Gx elements in G with x as fixed point<br />

35 R[X] polynomials in X over the ring R<br />

36 R(X) the canonical field extension of R[X]<br />

37 L(α) minimal field extension of L, containing α<br />

38 Fix(G, X) {g ∈ G|g(X) ⊆ X}<br />

39 Mb Möbius transformations<br />

40 Rot(X) rotations invariant for X<br />

41 V Klein’s Vierer-Gruppe (Z2 × Z2)<br />

42 T tetrahedron group<br />

43 H hexahedron group<br />

44 I icosahedron group<br />

45 S n unit hypersphere in R n+1<br />



4. 2: Tables with operations<br />


Operations with 0 operands<br />

Set operation Closed subsets<br />

1 N 0 kN<br />

2 C 0 R, Q, Z<br />

3 C 1 R, Q, Z, N<br />

4 C i<br />

5 Mn(L) En, E SLn(L), On(L), Pern(L)<br />

6 Mn(C) En, E Mn(R), Mn(Q), Mn(Z)<br />

7 D(X) ∅<br />

8 F(X, X) I<br />

9 ⟨a, b, c, . . .⟩ (), empty word<br />

Unary operations<br />

Set operand operation comment closed subsets<br />

1 C a −a Opposite R, Q, Z, N<br />

2 C ⋆ a a −1 Reciprocal<br />

3 C a ā conjugate<br />

4 L n x −x Opposite<br />

5 Mn(L) A A † ,A t ,A ⊤ transposed<br />

6 Mn(C) A A ⋆ adjoint<br />

7 F(X, X) f f −1 inverse<br />

8 D(X) A X \ A complement<br />

Binary operations<br />

Set operands operation name closed subsets<br />

1 C a, b a + b R, Q, Z, N<br />

2 C ⋆ a, b ab<br />

3 Mn(L) A, B A + B<br />

4 Sn s, t st<br />

5 D(X) A,B A ∩ B<br />

6 D(X) A,B A ∪ B<br />



4. 3: Tables with homomorphisms<br />

In the following tables A, X, S, M, G, R, L, V denote an arbitrary structure<br />
of the following types respectively: general structure, set, semigroup, monoid,<br />
group, ring, field, linear space.<br />
<br />
If we have more structures of the same type we shall use indexed letters in the<br />
usual way, so that M1, M2, . . . etc denote monoids. We let M ′ denote a subset<br />
of M and so on.<br />

The set of homomorphisms of a structure A into a structure B is denoted by<br />

Hom(A, B).<br />

Homomorphisms wrt binary operations.<br />

x ♢ y A x f(x) B u ♡ v symbo<br />

1 x + y Rn x Ax Rm u + v<br />

2 m + n Z n an R uv exp. m<br />

3 A + B Mat(m, n, L) A L(A, V, W) Lin(V, W ) u + v ass. lin. m<br />

4 L + M Lin(V, W ) L M(L, V, W) Mat(m, n, L) u + v ass. ma<br />

5 LM Lin(V, W ) L M(L, V, W) Mat(m, n, L) u ◦ v ass. ma<br />

6 P + Q L[X] P P (L) Lin(V, V ) u + v<br />

7 x + y Rn x Ax Rm u + v mult. m.<br />

8 xy N x σ(x) N0 N u + v prime spec<br />

9 xy OrdA x |x| N u + v word len<br />

10 x × y Finite sets x |x| N0 uv ant. ele<br />

11 x ◦ y Rot(3) x ˆx Mb u ◦ v ass. Möbius tran<br />

12 x ◦ y Mb x ˆx Rot(3) u ◦ v ass. ro<br />

13 xy SU(2) x ˆx SO(3) uv ass. ro<br />


5: Examples and exercises<br />

Example 1: Different ways of notation. (see nr 5)<br />

We have that +(a, ·(b, c)) = + a · b c = a + b · c = a b c · +<br />

Exercise 2: Backward polish. (see nr 5)<br />

10 2 · 5 + √ 2 · 10 : = 1<br />

Example 3: Linear mappings are homomorphisms (see nr 8)<br />



Let L be a linear mapping from the vector space V to the vector space W .<br />

Then L is a homomorphism wrt the pair (+, +).<br />

Let Λ denote multiplication with the scalar λ, that is Λ(x) = λx for any vector<br />

x. Then Λ is a unary operator and L is a homomorphism wrt the pair (Λ, Λ)<br />

Example 4: Presenting a linear mapping by a matrix is homomorphic (see nr<br />

8)<br />

If L is a linear mapping from the linear space V to the linear space W and<br />

bases are chosen, then there exists a unique matrix A such that if w = L(v)<br />

then y = Ax where x are the coordinates of v and y are the coordinates of w.<br />

We say that A represents L (or is associated with or belongs to L).<br />

The mapping that sends L to A is a homomorphism wrt the pair of binary operations<br />

(◦, ·) consisting of composition of maps and multiplication of matrices.<br />

If we restrict to invertible mappings and invertible matrices then it is also<br />

a homomorphism wrt the pair of unary operations (inversion of mappings,<br />

inversion of matrices).<br />

Example 5: Associating a Möbius transformation with a matrix is homomorphic<br />

(see nr 8)<br />

To any invertible complex matrix<br />
<br />
    ( a  b )<br />
    ( c  d )<br />
<br />
we associate the complex mapping hA given by<br />
<br />
    hA(z) = (az + b) / (cz + d),<br />

The mapping that takes A to hA is a homomorphism wrt the pair (·, ◦) of<br />

matrix multiplication and composition of mappings.<br />
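A numeric spot-check of Example 5 (a sketch; the helper names and the sample matrices are my own): composing the Möbius transformations of A and B gives the transformation of the product AB.

```python
# h_{AB}(z) == (h_A ∘ h_B)(z), checked at one point (illustrative sketch).

def mobius(M):
    (a, b), (c, d) = M
    return lambda z: (a * z + b) / (c * z + d)

def matmul(M, N):
    (a, b), (c, d) = M
    (e, f), (g, h) = N
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

A = ((1, 2j), (0, 1))          # invertible: det A = 1
B = ((2, 1), (1, 1))           # invertible: det B = 1
z = 0.5 + 0.25j
lhs = mobius(matmul(A, B))(z)  # h_{AB}(z)
rhs = mobius(A)(mobius(B)(z))  # (h_A ∘ h_B)(z)
print(abs(lhs - rhs) < 1e-9)   # True
```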

Example 6: The dependence of the power on the exponent is a homomorphy<br />

(see nr 8)<br />

Let x be a real number. Then the mapping N ∋ n ↦→ x n ∈ R is a homomorphism<br />

wrt the pair (+, ·).<br />

Example 7: The dependence of the matrix power on the exponent is a homomorphy<br />

(see nr 8)<br />

Let A be a square matrix. Then the mapping N ∋ n ↦→ A n is a homomorphism<br />

wrt the pair (+, ·).<br />

Example 8: Homomorphisms for unary operations (see nr 8)<br />



We consider the unary operation − on R and want to determine those mappings<br />
f of R into itself which are homomorphisms. The condition is that<br />

f(−x) = −f(x),<br />

which is the condition for f to be an odd function<br />

This result can be generalized to general linear spaces.<br />
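Example 8 admits a quick sample-based check (helper names are mine): a map commutes with negation exactly when it is odd.

```python
# f is a homomorphism wrt the unary operation x -> -x  iff  f(-x) == -f(x).

def is_hom_wrt_negation(f, sample):
    return all(f(-x) == -f(x) for x in sample)

sample = range(-10, 11)
print(is_hom_wrt_negation(lambda x: x ** 3, sample))      # True: x^3 is odd
print(is_hom_wrt_negation(lambda x: x ** 2 + 1, sample))  # False: not odd
```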

Exercise 9: Homomorphisms for unary operations (see nr 8)<br />

Write the condition for f : R → R to be a homomorphism wrt the operations<br />

x ↦→ −x and x ↦→ x −1 , and find such a homomorphism. (Hint: f(x) = a x )<br />

Exercise 10: The four fundamental isomorphisms concerning the algebraic operations.<br />

(see nr 8)<br />

Let a be a real constant. Study the four function definitions ax, a x , log a(x), x a<br />

and explain that they define homomorphisms for suitable choices of domain<br />

and codomain. Notice that all combinations of addition and multiplication are<br />

possible.<br />

Exercise 11: Closed wrt addition. (see nr 14)<br />

Construct an increasing chain of subsets of C which are closed wrt addition in<br />

C<br />

Exercise 12: Show that the set {x+y √ 3 : x, y ∈ Q} is closed both wrt addition<br />

and multiplication.<br />

Show that the same is not true for {x + y 3√ 2 : x, y ∈ Q}. (see nr 14)<br />

Exercise 13: Product of equal factors (see nr 19)<br />

Show that Z2 × Z2 = {(0, 0), (0, 1), (1, 0), (1, 1)}. Show that the product<br />
operation + × + is partially given in the following table, and complete it:<br />

(+,+) (0,0) (0,1) (1,0) (1,1)<br />

(0,0) (0,0) (0,1) (1,0) (1,1)<br />

(0,1) (0,1) (0,0) (1,1) (1,0)<br />

(1,0) (1,0) (1,1)<br />

(1,1) (1,1) (0,1)<br />

(·, ·) (0,0) (0,1) (1,0) (1,1)<br />

(0,0) (0,0) (0,0) (0,0) (0,0)<br />

(0,1) (0,0) (0,1) (0,0) (0,1)<br />

(1,0) (0,0) (0,0)<br />

(1,1) (0,0) (1,0)<br />
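Tables like these can also be generated mechanically, which is handy for checking your completed entries. A sketch (my own helper names) for (Z2 × Z2, + × +):

```python
# Build a full operation table as a dictionary keyed by (row, column).
from itertools import product

def op_table(op, elems):
    return {(p, q): op(p, q) for p in elems for q in elems}

add22 = lambda p, q: ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)
Z2xZ2 = list(product(range(2), repeat=2))
t = op_table(add22, Z2xZ2)
print(t[(1, 0), (1, 1)])   # (0, 1)
```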

Exercise 14: Product of different factors (see nr 19)<br />

Make tables for (Z2, +) × (Z3, +) and (Z2, ·) × (Z3, ·), in the same way as in<br />

X13<br />

Example 15: Chinese remainder theorem (see nr 21)<br />



Assume that p divides n. Then the mapping Zn ∋ x ↦→ x mod p ∈ Zp is a<br />
homomorphism (with obvious choices of operations). Prove it.<br />

Assume that n = pq. The mapping Zn ∋ x ↦→ (x mod p, x mod q) ∈ Zp × Zq is<br />

then according to T21 a homomorphism. If p and q are coprime then this is<br />

even an isomorphism. The latter claim is a way of formulating the so called<br />

Chinese remainder theorem. The most common way of formulating it is to say<br />

that the system of equations<br />

x mod p = u x mod q = v<br />

always has solutions and that they together constitute a class of remainders<br />

modulo n. This formulation corresponds to the claim that the above mapping<br />

is surjective (since the domain and the codomain have the same number of<br />
elements).<br />

From this you can see that Z2 × Z3 is isomorphic with Z6 with obvious modular<br />
additions. Compare with the results in X13 and X14.<br />

The Chinese remainder theorem is proved as follows :<br />

Since p and q are coprime we can use the extended Euclidean algorithm to find<br />

integers k and l such that kp + lq = 1, and then u − v = (kp + lq)(u − v). This<br />

gives<br />

u + k(v − u)p = v + l(u − v)q.<br />

And so x = u + k(v − u)p is a solution. Prove this.<br />
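The construction just given can be run directly. The sketch below (function names are mine) computes k and l with the extended Euclidean algorithm and returns the solution x = u + k(v − u)p reduced modulo pq:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and x*a + y*b = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(u, p, v, q):
    """x with x ≡ u (mod p) and x ≡ v (mod q), for coprime p and q."""
    g, k, l = extended_gcd(p, q)
    assert g == 1, "p and q must be coprime"
    return (u + k * (v - u) * p) % (p * q)

print(crt(2, 3, 1, 5))   # 11: indeed 11 % 3 == 2 and 11 % 5 == 1
```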

Exercise 16: Permutations with the same sign (see nr 22)<br />

Show that the relation ”x and y have the same sign” is an equivalence relation<br />
on Sn. Find the equivalence classes.<br />

Show that this relation is compatible with product of permutations.<br />

Show that the sign considered as a mapping is a homomorphism wrt a suitable<br />

pair of operations.<br />

Exercise 17: Orthogonal matrices with the same determinant (see nr 22)<br />

Show that the relation ”the determinant of x and y is the same” is an equivalence<br />

relation on On. Find the equivalence classes.<br />

Show that this relation is compatible with product of matrices.<br />



Show that the determinant considered as a mapping is a homomorphism wrt<br />

suitable choices of operations.<br />

Exercise 18: Similar matrices (see nr 22)<br />

By definition two square matrices A and B are similar if they are matrices<br />
(each with respect to some basis of its own) for the same linear mapping. Show<br />

that this relation is an equivalence relation which is compatible with product<br />

of matrices. This is easily proven by using the definition given here, but may<br />

also be proven by using the characterization that A and B are similar if and<br />

only if there exists an invertible matrix S such that AS = SB.<br />

Exercise 19: Integers with same remainder (see nr 22)<br />

Show that the relation ”the principal remainder modulo n of x and y is the<br />

same” is an equivalence relation on Z. Find the equivalence classes.<br />

Show that this relation is compatible with addition.<br />

Show that x mod n is a homomorphism wrt suitable choices of operations.<br />

Example 20: Colour compatibility (see nr 22)<br />

A cube is coloured with three colours, the same colour on opposite faces. We<br />
define two rotations of the cube to be colour equivalent if they have the same<br />
action when you only consider how the colours are placed after the rotation.<br />
<br />
Show that this relation is an equivalence relation on the set of rotations of the<br />
cube and is compatible with composition.<br />

Example 21: Möbius equivalence (see nr 22)<br />

Two 2 × 2-matrices over C which induce the same Möbius transformation are<br />

said to be Möbius equivalent. This relation is compatible with multiplication<br />

of matrices.<br />



6: Appendix : Classification<br />

Mathematicians are very fond of classifying and have developed a terminology<br />
which is well suited to describe the act of classifying.<br />

A given universe is partitioned into classes in such a way that any element is<br />

in a class and any class contains an element.<br />

Formally a partition is a set X (the universe) together with a set K of subsets<br />

of X. The sets which are members of K are said to be classes and an element<br />

in a class is said to be a representative of this class.<br />

It shall be so that each element in X is a representative for exactly one class<br />

and each class has at least one representative. In other words the classes are<br />

mutually disjoint and non empty and their union is all of X. The class to which<br />

x belongs is denoted [x]K.<br />

We shall use two examples throughout :<br />

Example 1: The set K = {3Z, 1 + 3Z, 2 + 3Z} is a partition of Z with [x]K =<br />

x + 3Z. These are the remainder classes modulo 3.<br />

Example 2: The set of lines that are parallel with the y-axis is a partition of the<br />
plane.<br />

Up to now we have been dealing with the result of a partition. The process of<br />
partitioning must be based on some specific properties of the members of X.<br />

One typical method is to have a rule which tells you if two members go into<br />

the same class or not.<br />

Formally this takes the form of a relation x ∼ y which tells that x and y<br />

are in the same class. For such a relation to lead to a partition it is necessary<br />

and sufficient that it is an equivalence relation. On the basis of such a relation<br />

the classes are constructed in the following (obvious) way: For each x ∈ X we<br />
define the class Kx = {y: y ∼ x}. Then we get the set K = {Kx: x ∈ X} of<br />
classes, which we can check actually to be a partition of X (if ∼ happens to<br />
be an equivalence relation), and we have that [x]K = Kx.<br />

We shall say that this partition is generated by (or induced by or associated<br />

with ) ∼ and call it the quotient of X modulo ∼ and denote it X/∼ or simply<br />

X∼. We shall return to the reason for calling it quotient.<br />

We are going to use [x]∼ to denote the class for which x is a representative.<br />
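For a finite universe the passage from an equivalence relation to its quotient can be carried out literally. A sketch (helper names are mine), using the remainder-classes example on a finite window of Z:

```python
# The set K = {Kx : x in X} of classes of an equivalence relation (a sketch).

def partition(X, equiv):
    classes = []
    for x in X:
        Kx = frozenset(y for y in X if equiv(y, x))
        if Kx not in classes:
            classes.append(Kx)
    return classes

window = range(-6, 7)                  # a finite window of Z
mod3 = lambda x, y: (x - y) % 3 == 0   # congruence modulo 3
K = partition(window, mod3)
print(len(K))                          # 3 remainder classes
```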



Let's take a look at the two examples:<br />

If we define x to be congruent with y modulo 3 if x − y is a multiple of 3,<br />

then we have an equivalence relation ≡ which is the one which generates the<br />

partition into remainder classes.<br />

For the next example we shall interpret the coordinates (x, y) to record that<br />
some specific event takes place at time x at the location y. Then the relation<br />
”simultaneously” is an equivalence relation in the set of events and will generate<br />
the partition into vertical lines.<br />

Another method of classification is based on some property of the elements<br />

which determines its class. Formally we can model a property as a mapping f<br />

of X into some set I, such that f(x) is the instance of the property that characterizes

x. Then for each possible property, that is each member i ∈ I, we can assign

the class K i = {x: f(x) = i}.<br />

We shall say that this partition has f as its indicator. And we shall speak of<br />

the quotient of X modulo f and denote it X/f or simply Xf, and write [x]f for the class

of x.<br />

Let's take another look at the examples:

We get the partition into remainder classes modulo 3 by using I = {0, 1, 2} and<br />

f(x) = x mod 3.<br />

We get the partition into vertical lines for each of the following choices of<br />

indicator function:<br />

1) f is the projection on the x-axis<br />

2) f(x, y) = x<br />

3) f is the time of the event<br />

Having an indicator f there is an obvious equivalence relation which tells you<br />

if two elements belong to the same class for this indicator. We just say that<br />

x ∼ y if f(x) = f(y). We shall say that ∼ is generated by f. It is then obvious<br />

that X/∼= X/f and that [x]∼ = [x]f .<br />
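The indicator method and the relation it generates can be compared directly; here is a small Python sketch (our own illustration, with f as in the example above) checking that the two constructions give the same classes:

```python
# Partition by the indicator f(x) = x mod 3, with K_i = {x : f(x) = i},
# versus the partition generated by the relation x ~ y iff f(x) = f(y).
X = range(-6, 7)

def f(x):
    return x % 3  # indicator into I = {0, 1, 2}

# X/f : one class per attained value i of the indicator
K_f = {i: frozenset(x for x in X if f(x) == i) for i in {f(x) for x in X}}

# X/~ : classes of the relation generated by f
K_sim = {frozenset(y for y in X if f(y) == f(x)) for x in X}

# X/~ = X/f : both methods yield the same classes
assert set(K_f.values()) == K_sim
```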

To any partition you can assign an equivalence relation ∼ which generates it by<br />

defining x ∼ y to mean that x and y are in the same class, that is [x]K = [y]K .<br />

We can also find a function which is the indicator for the partition, namely

the so called canonical indicator or canonical projection k, by letting I = K<br />

and k(x) = [x]K. To check that k is an indicator for K you have to show that all



classes in K have the form {x: k(x) = K} for some K ∈ K. But this is clear<br />

from the definitions.<br />

In our examples we see that<br />

x ↦→ [x]r is the canonical indicator for the partition into remainder classes<br />

(x, y) ↦→ the vertical line through (x, y) is the canonical indicator for the partition into

vertical lines<br />

Many indicators are made by selecting a special well suited representative for<br />

each class to represent all the members of the class, so to speak some sort of<br />

principal representative.

Some examples:<br />

1) The principal remainder modulo some integer<br />

2) The projection on the x-axis (that is (x, 0))<br />

3) Rn in the class of linear spaces of dimension n in the partitioning of

the set of finite dimensional linear spaces with the dimension as the<br />

indicator.<br />

4) The principal argument of a complex number. To be more explicit:
in the real numbers you have an equivalence relation which makes two
real numbers equivalent if their difference is a multiple of 2π. As
the indicator of a class you often choose the (unique) member in the
interval [0, 2π[.

5) The principal value of arcsin, by the partition which has sin as its
indicator function. (The principal value of arcsin y is that solution x to
the equation sin x = y which is in the interval [−π/2, π/2]. This is the
one which computer programs choose if not otherwise instructed.
For mathematicians arcsin is the equivalence class itself, since this
leads to the simplest rules for calculation.)

We promised to motivate, explain, excuse the terminology quotient. We do it<br />

by an example: Let Z = X × Y , and let pX and pY denote the projections on<br />

the factors. If we use pX as the indicator function we get a partition into classes<br />

of the form Kx = {(x, y): y ∈ Y} and the mapping X ∋ x ↦→ Kx ∈ X/pX is a
bijection. We use this to identify X/pX with X, and so Z = K × Y. This could
be expressed by saying that K is Z divided by Y.


Anders Madsen

CATEGORIES
OF
STRUCTURES

ABSTRACT
ALGEBRAIC
STRUCTURES



Besides having the aesthetic satisfaction from the pure abstraction and the pure pleasure
from the concrete details, both perspectives have a great cognitive influence and contribute
to the development of competencies which are essential to any mathematician.

I have chosen to emphasize these two oppositely directed but coordinated perspectives, which
is implied in the two series.

The individual concrete structures are displayed in separate expositions without mutual
references. Subjects which are needed in more places are repeated at each place. But the
choice of details is made in such a way as to best deliver material to be used in examples
in the abstract part.

The text of the abstract part is not printed but can be found at<br />

http://milne.ruc.dk/~am/algebra<br />

Anders Madsen, May 2012



Use the table of contents in the first part.



In the previous part the subject was general algebraic structures. We made no<br />

assumptions on the number or the character of the operations. Nor did we use<br />

any laws of calculation. Now we turn to specific algebraic structures. These<br />

are defined by specifying which types of operations are in play and which rules<br />

(also known as axioms) they must obey. For each such specification there will<br />

be many concrete algebraic structures answering the specification. We shall<br />

speak about a category of algebraic structures when we consider the collection<br />

of all structures determined by a given specification.<br />

We shall advance gradually through different categories, in each step adding<br />

some extra operation or some extra axiom.<br />

Consequently we start out with the category of semigroups, where there is only<br />

one operation and only one axiom, associativity.<br />

In the next step we pick out an element (considered to be a 0-ary operation)<br />

and claim as an axiom that this element is a neutral element. This is the<br />

category of monoids.<br />

We continue this way adding more operations and more axioms through the<br />

categories of groups, rings and fields.<br />

For each of these categories we study the induced structures and check if they
themselves belong to the category. Since they have the same type it is enough

to check if the axioms are satisfied.<br />

Then we study homomorphisms between structures of the same category.

7: Semigroups<br />

7. 1: Definition<br />

59. Definition: Semigroup<br />

An algebraic structure (A, ♢ ), where ♢ is binary and associative, is said to be

a semigroup.<br />

60. Remark: Semigroups are half groups<br />

The name semigroup refers to the fact that many semigroups make up half (so<br />

to speak) of some group, for instance the natural numbers with addition, which<br />

is (roughly) half of the group of integers with addition. As a matter of fact a
commutative semigroup with cancellation can always be extended (doubled) to a
genuine group, but we are not going to show how.

Almost all the binary operations appearing in algebraic structures are associative.
Semigroups are thus at the foundation of most algebraic structures.

Accordingly most examples of semigroups can be embedded in more refined
structures. In the sequel we shall meet monoids and groups as examples of this
aspect. But semigroups which are essentially just semigroups exist, and are treated
in E23, X28 and X33.

7. 2: Induced semigroups<br />

61. Definition: Subsemigroup.<br />

A substructure of a semigroup is called a subsemigroup if the induced structure

is a semigroup.<br />

62. Theorem: All substructures of a semigroup are subsemigroups<br />

A substructure of a semigroup is a subsemigroup<br />

Proof : Associativity is inherited (T32).<br />

This may be stated more operationally:<br />

63. Theorem: Criterion for subsemigroup<br />

A subset B of a semigroup (A, ♢ ) is a subsemigroup if and only if it is closed
wrt the operation (B ♢ B ⊆ B).

64. Theorem: Structures induced by a semigroup are semigroups.<br />

If (S, ♢ ) is a semigroup and M is a set, then F(M, S) is a semigroup with the<br />

induced operation.<br />

If S1, . . . , Sn are semigroups then S1 × . . . × Sn is a semigroup.<br />

If ∼ is an equivalence relation compatible with the semigroup S then the quotient<br />

structure S∼ is a semigroup.<br />



Proof : Associativity is inherited by the induced structures, and no more is<br />

needed.<br />

7. 3: Powers<br />

The notion of power with positive integer exponent can be defined and the<br />

power rules proved at this very primitive level. And then we have dealt with<br />

this aspect once and for always. Notice that the definitions and the proofs are<br />

the ones you learned very early in your life.<br />

65. Definition: Powers with positive exponent.<br />

In a semigroup parentheses are superfluous and so powers with positive integer<br />

exponent of an element a may be immediately defined by the following recursive
procedure: a^1 = a, a^(n+1) = a a^n.

66. Theorem: Rules for calculating with Powers<br />

a^n a^m = a^(n+m), (a^n)^m = a^(mn) for all a and all n, m > 0.
If ab = ba then (ab)^n = a^n b^n.

Proof : Induction. Example: let m be fixed. We show the formula for this m
and all n by induction on n.
For n = 1 we have a^1 a^m = a a^m = a^(m+1) per definition of power.
Suppose then that the formula has been proved for n = k. Then we have that
a^(k+1) a^m = a a^k a^m = a a^(m+k) = a^(m+k+1), and so the formula is also proved for
n = k + 1.
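The recursive definition and the power rules translate directly into code. Here is a Python sketch (our own illustration, not the author's): the semigroup is strings under concatenation, which is associative but carries nothing beyond the binary operation.

```python
# a^1 = a,  a^(n+1) = a ◇ a^n, for any associative binary operation op
def power(a, n, op):
    if n == 1:
        return a
    return op(a, power(a, n - 1, op))

def concat(x, y):
    # the semigroup operation: string concatenation (associative)
    return x + y

a = "ab"
# a^n a^m = a^(n+m)
assert concat(power(a, 2, concat), power(a, 3, concat)) == power(a, 5, concat)
# (a^n)^m = a^(nm)
assert power(power(a, 2, concat), 3, concat) == power(a, 6, concat)
```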

8: Monoids<br />

8. 1: Definitions<br />

A very large part of the binary operations used in algebraic structures have a
neutral element, an element without effect when used in the operation.

To study the pure effect of introducing a neutral element we introduce the<br />

notion of a monoid, a structure which differs from a semigroup only by admitting<br />

a neutral element. Actually there are a lot of interesting monoids without<br />

further structure. For the sake of completeness we take the following<br />



67. Definition: Neutral element for a binary operation<br />

The element e is said to be a neutral element wrt the binary operation ♢ on

A, if for all a ∈ A we have that a ♢ e = e ♢ a = a. We identify e with the<br />

constant operation (with 0 operands) for which e is the constant value.<br />

The prototype of a ”genuine” monoid, one without further ado, is the set

F(X, X) of all mappings of X into itself with composition as binary operation<br />

and the identity mapping as the neutral element. Other examples are seen<br />

in X33<br />
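For a finite X this monoid is easy to enumerate. A Python sketch (our illustration): maps of X = {0, 1, 2} into itself, encoded as tuples, with composition as the operation and the identity map as the neutral element.

```python
from itertools import product

X = (0, 1, 2)
# a map f: X -> X is encoded as the tuple (f(0), f(1), f(2))
maps = list(product(X, repeat=len(X)))

def compose(f, g):
    # (f ∘ g)(x) = f(g(x))
    return tuple(f[g[x]] for x in X)

identity = X  # the identity map: x -> x
# the identity is neutral wrt composition
assert all(compose(f, identity) == f == compose(identity, f) for f in maps)
print(len(maps))  # |F(X, X)| = 3^3 = 27
```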

68. Theorem: Uniqueness of neutral element<br />

Any operation has at most one neutral element.<br />

Proof : If e and e′ are neutral elements, then e = e ♢ e′ = e′.

69. Definition: Monoid<br />

A monoid is an algebraic structure (A, ♢ , e) for which (A, ♢ ) is a semigroup<br />

(the underlying semigroup) and e is a neutral element wrt ♢ . Since the neutral<br />

element is uniquely determined we shall also let (A, ♢ ) denote the monoid.<br />

8. 2: Induced monoids<br />

70. Definition: Submonoid<br />

A substructure of a monoid which is itself a monoid with the induced structure,<br />

is said to be a submonoid.<br />

71. Theorem: Each substructure of a monoid is a submonoid<br />

A substructure of a monoid is a submonoid.



Proof : The first thing to show is that the substructure is a semigroup. This<br />

follows from T62. Next we show that there is a neutral element. A substructure<br />

is per definition closed wrt each operation in the structure, and the neutral<br />

element is (considered to be) an operation. The neutral element will therefore

be a member of the substructure.<br />

A somewhat more operational way is<br />

72. Theorem: Criterion for submonoid<br />

A subset B of a monoid (A, ♢ , e) is a submonoid if and only if B is closed

wrt the operation (B ♢ B ⊆ B) and e ∈ B.<br />

Even though the following result may seem somewhat meager it is useful to<br />

have it coined for later reference (T77). But logically it belongs right here:<br />

73. Theorem: The trivial submonoid<br />

The singleton {e} is a submonoid<br />

Proof : It is obviously a substructure and therefore a submonoid.<br />

74. Theorem: Structures induced by a monoid.<br />

If (A, ♢ , e) is a monoid and M is a set, then F(M, A) is a monoid with the<br />

induced operations. The neutral element is the constant mapping M ∋ x ↦→ e.

Product structures of monoids (Ai, ♢i, ei) are monoids. The neutral element is
e = (e1, . . . , en).

A quotient structure (A, ♢ , e) modulo ∼ of a monoid is a monoid. The neutral<br />

element is [e]∼.<br />

Proof : We only need to check the axioms: In all three cases we know that the<br />

underlying structure is a semigroup, which is the first axiom. What remains
is to check the neutral element. This is straightforward in all the cases. Let's
do it.

For F(M, A) the induced neutral element is the constant mapping defined on M with
the constant value e. This is seen to be neutral by simple inspection.



For product structures we must inspect e1 × . . . × en, which is the constant
operation (e1, . . . , en), and is easily seen to be neutral:
(a1, . . . , an) (♢1 × · · · × ♢n) (e1, . . . , en) = (a1 ♢1 e1, . . . , an ♢n en) = (a1, . . . , an).

For quotient structures we must inspect e/∼, which is the constant operation
[e]. To show it to be neutral we have the following simple calculation:
[a][e] = [ae] = [a].

Examples of function spaces are X31, X32. Examples of quotient monoids are found
in E22.

8. 3: Examples<br />

Deal with the following examples and exercises. E23 E24 E25 E26 X27<br />

X28 X29 X30 X31<br />

8. 4: Monoid homomorphisms<br />

75. Definition: Monoid homomorphism<br />

A homomorphism between monoids is said to be a monoid homomorphism.
Examples: X31, X32, X33.

76. Definition: The kernel for a monoid homomorphism<br />

By the kernel for a monoid homomorphism we mean the inverse image of the<br />

neutral element. So if (A, ♢ , eA) and (B, ♡ , eB) are monoids and f : A → B<br />

is a monoid homomorphism, then f^-1({eB}) is called the kernel for f (wrt
the structures). It is denoted by ker f.

77. Theorem: The kernel of a monoid homomorphism is a submonoid<br />

Let (A, ♢ , eA) and (B, ♡ , eB) be monoids and f : A → B a monoid homomorphism.
Then ker f is a submonoid.



Proof : The kernel is the inverse image of a substructure, since the neutral
element (as a singleton) is a submonoid (T73). A general result (T49) states
that the inverse image of a substructure is a substructure.

You can meet more monoids in X33 E34 E35 E36<br />

8. 5: Inverse<br />

78. Definition: Invertible element. Inverse.<br />

Suppose that (A, ♢ , e) is a monoid, and let a ∈ A. We shall say that a is<br />

invertible if there exists b ∈ A such that a ♢ b = e and b ♢ a = e. Then b is<br />

said to be an inverse element to a wrt ♢ .<br />

79. Theorem: Inverse is unique.<br />

An invertible element has only one inverse element.<br />

Proof : Let b1 and b2 be inverses of a. Then by associativity b1 = b1 ♢ e =<br />

b1 ♢ (a ♢ b2) = (b1 ♢ a) ♢ b2 = e ♢ b2 = b2.<br />

80. Definition: The inverse element.<br />

The unique inverse is said to be the inverse element for a.<br />

If you insist on having a general way of denoting the inverse you may write<br />

a ♢ −1 . But usually the context can tell you what operation is involved and then<br />

we simply write a^-1.

8. 6: Powers<br />

81. Definition: Powers with negative exponents in monoids<br />

Let a be an invertible element. For n ∈ N we define a^-n to be (a^-1)^n, and a^0
to be e (the operation being known from context).

82. Theorem: Power rules<br />

For any invertible element a and for all m, n ∈ Z we have that a^m ♢ a^n =
a^(m+n) and (a^m)^n = a^(mn).



Proof : The theorem is already proved for n, m > 0 when dealing with semigroups
(T66). It is easily seen also to hold for n = 0. And by using that we
have a^n a^-n = a^n (a^-1)^n = (a a^-1)^n = e, for all m ≥ n ≥ 0 you then get
a^m a^-n = a^(m-n) a^n a^-n = a^(m-n). It is left to the reader to
fill in the remaining details of the proof.

The following theorem is known in a lot of special cases, such as functions and<br />

matrices:

83. Theorem: Inverse of combinations<br />

If a and b are invertible elements of a monoid (M, ♢ ) then so are a ♢ b and a^-1.
Their inverses can be computed as (a ♢ b)^-1 = b^-1 ♢ a^-1 and (a^-1)^-1 = a.

Proof : Exercise<br />
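As an aid for the exercise, here is one concrete check (our own illustration): permutations of {0, 1, 2} under composition form a monoid (indeed a group), and the rule (a ♢ b)^-1 = b^-1 ♢ a^-1 can be verified exhaustively.

```python
from itertools import permutations

X = range(3)

def compose(f, g):
    # (f ∘ g)(x) = f(g(x)); permutations encoded as tuples
    return tuple(f[g[x]] for x in X)

def inverse(f):
    # the inverse permutation: f[x] -> x
    inv = [0] * len(f)
    for x in X:
        inv[f[x]] = x
    return tuple(inv)

# (a ∘ b)^-1 = b^-1 ∘ a^-1, for all 6 * 6 pairs
for a in permutations(X):
    for b in permutations(X):
        assert inverse(compose(a, b)) == compose(inverse(b), inverse(a))
```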

8. 7: Translations<br />

84. Theorem: The monoids of left translations<br />

Let La denote left translation by a in the monoid (M, ♢ ). Let LM denote<br />

the set of all such left translations. Then the mapping a ↦→ La is a monoid<br />

homomorphism from M to LM wrt ♢ and composition of functions, that is

L(a ♢ b) = La ◦ Lb.

Proof : Exercise<br />

85. Theorem: A translation with invertible element is invertible,<br />

that is a bijection<br />

If a is an invertible element in a monoid, then left translation with a is a<br />

bijection of the monoid on itself.<br />

Proof : Using the terminology from T84 we have that La ◦ La^-1 = L(a ♢ a^-1) =
Le = I (and similarly from the right) and therefore La is a bijection with
inverse La^-1.

This has the following corollary:<br />

86. Theorem: Cancellation in monoids


If a is invertible then for all x, y in the monoid you have that<br />

ax = ay ⇒ x = y,
meaning that you can cancel multiplication with invertible elements.

Proof : Exercise<br />

87. Theorem: A monoid homomorphism respects inversion<br />

For a monoid homomorphism f you have that if a is invertible then so
is f(a), and its inverse is given by f(a)^-1 = f(a^-1).

Proof : Just check that the candidate for an inverse is ok: f(a^-1)f(a) =
f(a^-1 a) = f(eA) = eB.

9: Groups<br />

And now we take one step up in the hierarchy by assuming all elements to have
inverses, which is equivalent to adding a new operation, namely taking inverses.

9. 1: Definitions<br />

88. Definition: Group<br />

An algebraic structure (G, ♢ , ⋆, e) is said to be a group if (G, ♢ , e) is a monoid<br />

(the underlying monoid) and if every a ∈ G is invertible wrt ♢ with ⋆a as inverse

element. So the axioms are<br />

1) x ♢ (y ♢ z) = (x ♢ y) ♢ z

2) e ♢ x = x ♢ e = x<br />

3) a ♢ (⋆a) = (⋆a) ♢ a = e<br />

89. Remark: Short notation<br />

Since the neutral element and all inverses are uniquely determined by the binary<br />

operation ♢ it is common practice to let the group be denoted by the short form<br />

(G, ♢ )<br />

90. Remark: Multiplicative notation<br />



When dealing with abstract groups one often chooses to use the common multiplication<br />

symbol (the invisible dot) to denote the binary operation, that is ab.<br />

Similarly we use a −1 for ⋆a and 1 for e. If we say that G is a group without<br />

further specification of the operations this notation is understood. If there is<br />

more than one group in the context it may be relevant to denote the neutral<br />

element of G by 1G.<br />

9. 2: Induced group structures<br />

91. Theorem: The induced structures are group structures<br />

The structures that a group induces on subsets, products, function spaces and<br />

quotients are groups<br />

Proof : The only thing left over from the analogous theorem for monoids is to
check for invertibility, which is left to the patient reader.

9. 3: Subgroups

92. Definition: Subgroup<br />

A substructure of a group is called a subgroup if it is a group when it is equipped<br />

with the induced operations.

It is easy to check for subgroups by the following characterization<br />

93. Theorem: Any substructure of a group is a subgroup<br />

Let H be a substructure of the group G. Then H is a subgroup.

Proof : Since H is a substructure, H will be closed wrt all the operations,<br />

especially the operations in the underlying monoid. So H is a submonoid, and<br />

so a monoid.<br />

The operation of taking inverses in G induces an operation on H, which is<br />

obviously the operation of taking inverses in H.<br />

The next theorem is a more operative formulation of the same fact<br />

94. Theorem: Criterion for subgroup.



A subset H of a group G is a subgroup if and only if HH ⊆ H, H^-1 ⊆ H and
1 ∈ H.

95. Definition: Group homomorphism<br />

A homomorphism between two groups is said to be a group homomorphism<br />

96. Theorem: Images and inverse images of subgroups<br />

Let f : G1 → G2 be a group homomorphism and let H1 be a subgroup of G1.<br />

Put H2 = f(H1). Then H2 is a subgroup of G2.

Let H2 be a subgroup of G2. Put H1 = f^-1(H2). Then H1 is a subgroup of G1.

Proof : A general result (T48) states that the image of a substructure is itself<br />

a substructure, so H2 is a substructure of G2, and therefore a subgroup. The<br />

proof for inverse image is similar.<br />

97. Definition: The Kernel<br />

The kernel for a group homomorphism f is the kernel for the underlying monoid<br />

homomorphism, and is denoted ker(f).

98. Theorem: The trivial subgroup<br />

{1} is a subgroup (called the trivial subgroup).

Proof : It is obvious that {1} is closed wrt all the operations, therefore a substructure
and therefore a subgroup.

99. Theorem: The kernel for a group homomorphism is a subgroup<br />

Let f : A → B be a homomorphism. Then ker(f) is a subgroup of A.

Proof : The set {1B} is a subgroup. Its inverse image therefore is also a
subgroup. But this is per definition the kernel.
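As a concrete instance (our illustration, not the text's): reduction modulo 4 is a group homomorphism from (Z12, +) to (Z4, +), because 4 divides 12, and its kernel is the subgroup {0, 4, 8}.

```python
def f(x):
    return x % 4  # homomorphism (Z_12, +) -> (Z_4, +)

G = range(12)
kernel = {x for x in G if f(x) == 0}  # inverse image of the neutral element
assert kernel == {0, 4, 8}

# the kernel is a subgroup: closed under the operation and under inverses
assert all((a + b) % 12 in kernel for a in kernel for b in kernel)
assert all((-a) % 12 in kernel for a in kernel)
```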

The following theorem is useful for proving that a homomorphism is injective.<br />

The technique is to move questions to the neutral element by translation. It is<br />

then sufficient to have injectivity at the neutral element.<br />

100. Theorem: Characterization of injective homomorphisms<br />



A homomorphism f : A → B is injective precisely when its kernel is trivial (it
consists of the neutral element alone).

Proof : Necessity is obvious. To show the sufficiency suppose that the only
element in the kernel is 1A. Then if f(a) = f(b) it follows that f(a^-1 b) =
f(a)^-1 f(b) = 1B, from which it follows that a^-1 b = 1A, and therefore a = b.
This proves the injectivity.

A well known special case of this theorem is about linear spaces, and is recalled
in E37. Another useful special case, E38, is about matrices.

The preceding examples are most probably already well known to you from
past experience. Maybe X39 presents a new example.

Now we are ready for the important subject of quotient groups. So we must
consider equivalence relations compatible with the group structure.

Our first step is to introduce a way of creating such equivalence relations by<br />

using a subgroup.

101. Theorem: A subgroup induces an equivalence relation<br />

Let H be a subgroup of the group G. The relation ∼ defined by<br />

a ∼ b ⇔ b ∈ aH<br />

is an equivalence relation.<br />

Proof : Reflexivity:<br />

a ∼ a, since a = a1 and 1 ∈ H, and so a ∈ aH.<br />

Symmetry:<br />

If a ∼ b then b ∈ aH per definition. Therefore there exists h ∈ H with b = ah.
From this you can deduce that a = bh^-1 and since h^-1 ∈ H you have that
a ∈ bH. Therefore b ∼ a.

Transitivity :<br />

Let x ∼ y and y ∼ z, which per definition means that y ∈ xH and z ∈ yH.<br />

Therefore there exist h1 and h2 with y = xh1 and z = yh2, and therefore

z = yh2 = xh1h2 ∈ xH, and consequently x ∼ z.<br />



102. Definition: The equivalence relation induced by a subgroup<br />

The equivalence relation established in the preceding theorem is said to be the<br />

left equivalence relation induced by H. The equivalence classes are said to be<br />

the left cosets for H.

It is obvious to define right equivalence relation and right cosets analogously.<br />

Let's give some interpretation of what is going on here. Imagine that the elements
in G have some property (call it their colour). You should think of the
elements h in the subgroup H as being such that their action doesn't change the
colour, by which we mean that ah has the same colour as a. Then the coset aH
consists of the elements with the same colour. If H consists of all the ”neutral”
elements then each coset represents a specific colour. Here are two examples:
X40 and X41
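In code, cosets and the partition they form are immediate. A Python sketch of ours, in the additive group Z12 with the subgroup H = {0, 4, 8}: a coset a + H collects all elements with the same "colour", here the remainder of a modulo 4.

```python
n = 12
H = frozenset({0, 4, 8})  # a subgroup of (Z_12, +)

def coset(a):
    # the left coset a + H
    return frozenset((a + h) % n for h in H)

cosets = {coset(a) for a in range(n)}
assert len(cosets) == 4                           # four classes
assert set().union(*cosets) == set(range(n))      # they cover the group
# all elements in one coset share the same "colour": the remainder mod 4
assert all(len({x % 4 for x in C}) == 1 for C in cosets)
```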

103. Theorem: A left coset for a subgroup is the image of H by a

bijection.<br />

The left cosets have the form aH and are the result of a left translation of H.<br />

The mapping h ↦→ ah is a bijection of H on aH.<br />

Similarly for right cosets.

Proof : Let ∼ denote the left equivalence corresponding to H. This means<br />

per definition that b ∼ a ⇔ b ∈ aH. This shows that [a] = aH. Let La denote<br />

left translation with a, that means that La(x) = ax. Then La(H) = aH. Left
translation with an invertible element in a monoid is injective (T85). And in
a group all elements are invertible.

You have maybe met this result in the situation in X42, X43.

104. Definition: Index for subgroup<br />

If H is a subgroup in G then we define the index for H to be the number<br />

(possibly ∞) of left cosets and we denote it by [G : H]. Notice that we would
have the same number if we had used the right cosets.

X44 X45<br />

105. Theorem: Lagrange’s theorem<br />



If H is a subgroup of the finite group G then |G| = [G : H]|H| and consequently
|H| is a divisor of |G|.

Proof : Each coset has the same number of members as the subgroup, since<br />

the coset is the result of left translation of the subgroup and the left translation is

bijective. Since G is the disjoint union of the cosets the result follows.<br />

A useful application of this theorem is exhibited in E46.<br />
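Lagrange's theorem can also be checked mechanically for a small group. A Python sketch (our illustration): G = S3, the permutations of {0, 1, 2} under composition, and H the cyclic subgroup consisting of the identity and the two 3-cycles.

```python
from itertools import permutations

X = range(3)
G = set(permutations(X))  # S_3, |G| = 6

def compose(f, g):
    # (f ∘ g)(x) = f(g(x)); permutations encoded as tuples
    return tuple(f[g[x]] for x in X)

H = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}  # the rotation subgroup, |H| = 3
cosets = {frozenset(compose(a, h) for h in H) for a in G}

assert len(G) == len(cosets) * len(H)  # |G| = [G : H] |H|, i.e. 6 = 2 * 3
```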

9. 4: Quotient groups<br />

As seen a subgroup will give rise to a partition into classes, and the resulting<br />

quotient offers interesting information on the action of the subgroup on the<br />

group structure.<br />

However it is not always possible to transfer the group structure to the quotient<br />

since multiplication of two cosets does not always produce another coset, X41.<br />

To ensure that possibility, more is needed. Exactly what is needed is the content
of the following notions and results.

106. Definition: Normal subgroup<br />

Let N be a subgroup of the group G. We shall say that N is normal if the<br />

induced equivalence relations are compatible with the group structure.<br />

We have the following convenient way to test if we have a normal subgroup<br />

107. Theorem: Criteria for normality<br />

The following conditions are equivalent<br />

(1) N is normal<br />

(2) (aN)(bN) = (ab)N and (Na)(Nb) = N(ab)<br />

(3) x^-1 N x = N for all x ∈ G

(4) xN = Nx for all x ∈ G<br />



Proof :<br />

(1) ⇒ (2): If N is normal you have per definition that the associated left equivalence<br />

relation is compatible with the group operation, and as a result of this,
calculation with the classes can be carried out by calculation with representatives,

which is what is stated in (2).<br />

(2) ⇒ (3): Let x be an arbitrary member of G. Then by (2) we have that
(xN)(x^-1 N) = (x x^-1)N = N. For an arbitrary member h ∈ N we then have
that (x h)(x^-1 e) ∈ N. Therefore x h x^-1 ∈ N, and since h was arbitrary it
follows that x N x^-1 ⊆ N. The opposite inclusion is obvious.

(3) ⇒ (4): Exercise.<br />

(4) ⇒ (1): Assuming (4) we must show that the left equivalence relation is<br />

compatible with the group operation. Let a ∼ b and c ∼ d. Per definition this<br />

means that b ∈ aN and d ∈ cN. So we may choose h, k ∈ N with b = ah and<br />

d = ck. Since hc ∈ Nc and since Nc = cN per assumption it is possible to find<br />

p ∈ N with hc = cp. Therefore bd = ahck = acpk and pk ∈ N. This shows that<br />

bd ∈ acN which means that bd ∼ ac, establishing the compatibility. To show<br />

that also inversion is compatible, let a ∼ b, that is a ∈ bN = Nb. So you may
find h ∈ N with a = hb. Then a^-1 = b^-1 h^-1, which shows that a^-1 ∈ b^-1 N
and so a^-1 ∼ b^-1, which establishes the compatibility. These criteria are used

in X47 X48<br />
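Criterion (3) is easy to test by brute force in a small group. A Python sketch of ours, again in S3: the rotation subgroup is normal (it has index 2), while a two-element subgroup generated by a transposition is not.

```python
from itertools import permutations

X = range(3)
G = set(permutations(X))  # S_3

def compose(f, g):
    # (f ∘ g)(x) = f(g(x)); permutations encoded as tuples
    return tuple(f[g[x]] for x in X)

def inverse(f):
    inv = [0] * len(f)
    for x in X:
        inv[f[x]] = x
    return tuple(inv)

def is_normal(N):
    # criterion (3): x^-1 N x = N for all x in G
    return all({compose(inverse(x), compose(h, x)) for h in N} == N
               for x in G)

assert is_normal({(0, 1, 2), (1, 2, 0), (2, 0, 1)})  # rotations: normal
assert not is_normal({(0, 1, 2), (1, 0, 2)})         # a transposition: not
```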

108. Theorem: Commutativity guarantees normality

Any subgroup of a commutative group is normal.<br />

Proof : Trivial.<br />

109. Theorem: Same left and right coset in case of normality<br />

If N is a normal subgroup, then the right equivalence relation is identical with
the left, and so this relation is just said to be the equivalence relation induced
by N and is denoted by ∼N .

110. Theorem: Quotient group.<br />

Let N be a normal subgroup of the group G. The induced quotient structure is<br />

a group structure.<br />



Proof : This is just T91.

111. Definition: Quotient group.<br />

Let N be a normal subgroup of the group G. The induced quotient structure
is said to be the quotient of G by N and is denoted G/N or, in the earlier notation,
G∼N.

9. 5: Group homomorphisms<br />

112. Theorem: The kernel of a group homomorphism is a normal<br />

subgroup .<br />

The kernel of a group homomorphism f : G1 → G2 is a normal subgroup and
the cosets are the level sets of f. The equivalence relation induced by f is denoted
∼f and is identical with the one induced by the kernel, ∼ker(f).

Proof : Let N denote the kernel of f. We use one of the criteria for normality<br />

(T 107), namely x −1 Nx ⊆ N. Let h ∈ N and x ∈ G be arbitrary. Then<br />

f(x −1 hx) = f(x −1 )f(h)f(x) = f(x) −1 1 f(x) = 1, where 1 denotes the<br />
neutral element of G2, and so x −1 hx ∈ N.<br />
Let now aN be an arbitrary coset. Then we have that<br />
x ∈ aN ⇔ a −1 x ∈ N ⇔ f(a −1 x) = 1 ⇔ f(a) −1 f(x) = 1 ⇔ f(x) = f(a),<br />

so that aN = {x|f(x) = f(a)}. From this it also follows that the cosets of N<br />

are the same as the equivalence classes for f.<br />
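This can be illustrated by a small computation (a Python sketch of our own; the<br />
homomorphism x ↦→ x mod 6 from (Z, +) to (Z6, +) is an illustrative choice, not<br />
taken from the text):<br />

```python
# Illustration of T112: for f(x) = x mod 6, an additive homomorphism,
# the coset a + N of the kernel N = 6Z equals the level set of f at a.
def f(x):
    return x % 6

N = [x for x in range(-30, 31) if f(x) == 0]   # the kernel 6Z, in a finite window
a = 2
coset = sorted(a + h for h in N)               # the coset a + N
level_set = sorted(x for x in range(-28, 33) if f(x) == f(a))
print(coset == level_set)  # True
```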

113. Definition: The homomorphism induced by a quotient group.<br />

The quotient group induced by the kernel of some homomorphism f is also<br />

called the quotient group induced by f and we can denote it by G/f.<br />

A normal subgroup can thus be constructed from a homomorphism. Perhaps<br />
more surprising is that any normal subgroup may be constructed this way.<br />
But that is what is said in<br />

114. Theorem: Any normal subgroup is the kernel of some homomorphism<br />

(for instance the canonical projection)<br />

Let N be a normal subgroup in G. The canonical projection kN of G on G/N is<br />

a group homomorphism with kernel N.<br />



Proof : Exercise<br />

These results are used in X49, X50, E51<br />

X52<br />

10: Rings<br />

10. 1: Definitions and rules<br />

115. Definition: Ring<br />

An algebraic structure (R, +, −, 0, ∗, 1) is said to be a ring if (R, +, −, 0) is a<br />

commutative group and (R, ∗, 1) is a monoid and if ∗ is both left and right<br />

distributive wrt +. Furthermore 0 ̸= 1. If ∗ is commutative the ring is called<br />

commutative. We speak in an obvious way about the underlying additive group<br />

and the underlying multiplicative monoid.<br />

There are very many sorts of rings, see for instance X53 X54 X55<br />

There are a lot of obvious rules that nevertheless need proof :<br />

116. Theorem: Rules for calculation in rings<br />

(1) a ∗ 0 = 0 ∗ a = 0<br />

(2) (−a) ∗ b = a ∗ (−b) = −(a ∗ b)<br />

(3) (−a) ∗ (−b) = a ∗ b<br />

(4) (−1) ∗ a = −a<br />

Proof : Let us start by recalling that −(−a) = a follows by considering the<br />
underlying additive group.<br />

(1) We have that a = 1 ∗ a = (1 + 0) ∗ a = 1 ∗ a + 0 ∗ a = a + 0 ∗ a, and so<br />
a = a + 0 ∗ a and then 0 ∗ a = 0. The proof of a ∗ 0 = 0 is analogous.<br />

(2) We also have that (−a) ∗ b + a ∗ b = ((−a) + a) ∗ b = 0 ∗ b = 0, and so<br />
(−a) ∗ b + a ∗ b = 0 and therefore (−a) ∗ b = −(a ∗ b). The identity<br />
a ∗ (−b) = −(a ∗ b) follows analogously.<br />

(3) We have, using (2) twice, that (−a)∗(−b) = −(a∗(−b)) = −(−(a∗b)) = a∗b<br />

and therefore (−a) ∗ (−b) = a ∗ b.<br />

(4) We have that (−1) ∗ a = −(1 ∗ a) = −a.<br />



10. 2: Subring<br />

117. Definition: Subring<br />

A substructure of a ring which is a ring in its own right is said to be a subring.<br />

118. Theorem: Any substructure of a ring is a subring<br />

Let D be a substructure of the ring R. Then D is a subring.<br />

Proof : When D is a substructure of R as a ring, then it is also a substructure of<br />
the underlying additive group and so (D, +, 0, −) is itself a group, obviously<br />
commutative. In a similar way we see that (D, ∗, 1) is a monoid. Distributivity<br />
follows since this property is hereditary.<br />

This may be formulated as the following criterion<br />

119. Theorem: Subring criterion<br />

A subset D of a ring (R, +, −, 0, ∗, 1) is a subring if and only if D + D ⊆ D,<br />
D ∗ D ⊆ D, −D ⊆ D and 1 ∈ D.<br />

Proof : The conditions are clearly necessary. And they explicitly guarantee<br />
the closure of all the operations except 0. To see that 0 ∈ D notice that<br />
0 = 1 − 1.<br />
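As a small computation (our own illustration in the finite ring Z12, which is not<br />
discussed here in the text), the criterion can be checked by brute force:<br />

```python
# Brute-force check of the subring criterion (T119) for subsets of Z_12.
def is_subring(D, n):
    """Check D + D in D, D * D in D, -D in D and 1 in D, all modulo n."""
    D = set(D)
    closed_add = all((a + b) % n in D for a in D for b in D)
    closed_mul = all((a * b) % n in D for a in D for b in D)
    closed_neg = all((-a) % n in D for a in D)
    return closed_add and closed_mul and closed_neg and 1 in D

print(is_subring(range(12), 12))   # True: the whole ring
print(is_subring({0, 4, 8}, 12))   # False: closed under the operations, but 1 is missing
print(is_subring({0, 1, 5}, 12))   # False: 1 + 5 = 6 is not in the set
```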

10. 3: Induced rings<br />

120. Theorem: Function spaces<br />

Suppose that A is a ring and that M is a set. The algebraic structure on<br />

F(M, A) induced by A is a ring.<br />

Proof : (F(M, A), +, −, 0) is a commutative group since it is induced by the<br />

underlying additive group. In the same way we see that (F(M, A), ∗, 1) is a<br />

monoid. Distributivity is inherited.<br />

The preceding theorem furnishes many important examples of rings, the rings<br />

of functions: X56, with some important subrings, rings of polynomials E57.<br />

121. Theorem: The product of rings is a ring<br />



Suppose that A and B are rings then A×B is a ring. Analogously for a product<br />

with several factors.<br />

Proof : (A × B, + × +, 0 × 0, − × −) is the product of the underlying additive<br />

groups and is therefore a group , and obviously commutative. In a similar way<br />

we have that (A×B, ∗×∗, 1×1) is a monoid. Again distributivity is inherited.<br />

10. 4: Quotient rings and ideals<br />

122. Theorem: Quotient of a ring is a ring.<br />

The quotient of a ring wrt a compatible equivalence relation is a ring.<br />

Proof : The underlying additive group in the quotient is the quotient of the<br />

underlying additive group and therefore a group, obviously commutative. The<br />

underlying multiplicative monoid of the quotient is the quotient of the underlying<br />

multiplicative monoid and therefore a monoid. Distributivity is inherited<br />

by quotients.<br />

123. Theorem: The canonical projection is a homomorphism<br />

Let ∼ be an equivalence relation on a ring R, which is compatible with the<br />

structure. The canonical projection k∼ of R onto the quotient R∼ is a ring<br />

homomorphism<br />

Proof : This is an immediate consequence of the general result for quotient<br />

structures (T56).<br />

124. Definition: Kernel of a ring homomorphism<br />

The kernel of a homomorphism f between rings is the kernel of f considered as<br />
a homomorphism of the underlying additive groups.<br />

We are now going to introduce the analogue of a normal subgroup. A normal<br />
subgroup induces an equivalence relation which is compatible with the<br />
underlying additive group. But when we want to transfer the<br />

operations to the quotient ring we also need the relation to be compatible with<br />

multiplication. This special type of substructure is called an ideal. An ideal<br />

will always be the kernel of a homomorphism of the ring structures.<br />

125. Definition: Ideal.<br />



A subset I is said to be an ideal of the ring R, if I is a subgroup of the underlying<br />
additive structure and is closed wrt multiplication by any element in the ring.<br />

This can be expressed by<br />

I + I = I<br />

RI = I<br />

IR = I<br />

126. Theorem: The kernel is an ideal.<br />

Let f be a homomorphism of the ring R1 into the ring R2. Then the kernel of<br />

f is an ideal<br />

Proof : Let K denote the kernel of f. Then f is a homomorphism between the<br />
underlying additive groups with kernel K, which therefore is a subgroup of the<br />
underlying additive group in R1.<br />

This means that the first condition in the definition of ideals is fulfilled.<br />

The second condition: Let k ∈ K, r ∈ R. Then f(kr) = f(k)f(r) = 0f(r) = 0.<br />
Consequently kr ∈ K. Analogously rk ∈ K.<br />

127. Theorem: The quotient wrt an ideal is a ring<br />

Let I be an ideal in the ring R. Then I is a normal subgroup of the underlying<br />

additive group R. Let R/I denote the quotient group. Then the equivalence<br />

relation associated with I is compatible with the ring structure on R and the<br />

quotient structure R/I is a ring.<br />

Proof : We must show that the multiplication is compatible with the equivalence<br />

relation associated with I. Therefore let x1 ∼ y1 and x2 ∼ y2. This<br />

means per definition that x1 − y1 ∈ I and that x2 − y2 ∈ I. We use the good<br />

old trick to express a difference between two products by the difference of the<br />

factors to get<br />

x1x2−y1y2 = x1x2−x1y2+x1y2−y1y2 = x1(x2−y2)+(x1−y1)y2 ∈ RI+IR<br />
and by the properties of an ideal we have that RI + IR = I + I = I and<br />

so we have proved that x1x2 ∼ y1y2. Therefore the quotient structure is well<br />

defined and the product is given by the formula [x][y] = [xy] or in other words<br />

(x + I)(y + I) = (xy) + I. It is easy to check that this product makes the<br />

quotient a monoid with [1] as neutral element for multiplication. To check that<br />

this multiplication is distributive wrt addition is again routine. So the quotient<br />

is a ring.<br />
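The compatibility argument can also be observed numerically (a sketch of ours for<br />
R = Z and I = 6Z; replacing representatives by equivalent ones does not change<br />
the class of the product):<br />

```python
# Well-definedness of the product in R/I (T127) for R = Z, I = 6Z:
# x1 ~ x2 and y1 ~ y2 imply x1*y1 ~ x2*y2.
I = 6
for x1 in range(-10, 11):
    for y1 in range(-10, 11):
        x2, y2 = x1 + 2 * I, y1 - 3 * I        # equivalent representatives
        assert (x1 * y1) % I == (x2 * y2) % I
print("the product on Z/6Z is well defined on classes")
```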



128. Remark: Motivation for ideals.<br />

The preceding result is the announced motivation for the notion of an ideal.<br />

129. Definition: Quotient ring wrt an ideal<br />

The ring constructed in the previous theorem is said to be the quotient ring wrt<br />
the ideal I and is denoted R/I.<br />

E58<br />

130. Theorem: An ideal is the kernel of a homomorphism.<br />

Let I be an ideal in the ring R and let kI denote the canonical projection of R<br />

onto the quotient ring R/I. Then kI is a homomorphism with the ideal as its<br />

kernel.<br />

Proof : The projection kI on a quotient is a homomorphism in any algebraic<br />

structure. So it only remains to show that the kernel of kI is I. For all a ∈ R<br />

we have that kI(a) = [a]I = a + I. The 0-element in R/I is kI(0) = 0 + I = I.<br />

It therefore holds that a ∈ ker(kI) ⇔ kI(a) = I ⇔ a ∈ I. And so I = ker(kI).<br />

10. 5: Integral domains<br />

Our next goal is to study fields which are rings where division is always possible<br />

except by 0. There is however an interesting and important intermediate step. In<br />
fields you can always cancel by dividing by a non zero factor. But there exist<br />
rings that are not fields and where cancelation is possible. Such rings are called<br />

integral domains and they are important stepping stones for constructing fields<br />

from rings. We are going to see this in use in two cases.<br />

The cancelation property is prepared by a couple of definitions.<br />

131. Definition: Proper element.<br />

An element in a ring is called proper if it is not the zero element.<br />

132. Definition: Zero divisor<br />

A proper element a in a ring is said to be a zero divisor, if 0 is a non trivial<br />

multiple of a, that is if there exists a proper element k such that ka = 0 or<br />

ak = 0<br />



133. Definition: Integral domain<br />

A ring without zero divisors is said to be an integral domain<br />

This is obviously equivalent to<br />

134. Theorem: Equivalent definition<br />

Integral domains are the rings in which the set of proper elements is closed wrt<br />

multiplication.<br />

135. Theorem: Cancelation in integral domains<br />

In an integral domain you may cancel proper elements: If a is proper and<br />

ax = ay then x = y.<br />

Proof : If ax = ay then a(x − y) = 0 and since a is proper and there are no<br />
zero divisors, we must have that x − y = 0, that is x = y.<br />
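A small computation contrasts the two situations (our own example; the moduli<br />
7 and 6 are illustrative choices):<br />

```python
# Cancelation (T135) holds in Z/7Z, which has no zero divisors, but fails
# in Z/6Z, where 2 is a zero divisor: 2*3 = 2*0 = 0 (mod 6).
def can_cancel(n):
    """True if a*x = a*y (mod n) with a a proper element forces x = y."""
    return not any((a * x) % n == (a * y) % n and x != y
                   for a in range(1, n) for x in range(n) for y in range(n))

print(can_cancel(7))   # True
print(can_cancel(6))   # False, e.g. 2*3 = 2*0 = 0 (mod 6)
```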

We are not going to study structures induced by integral domains.<br />

We are next going to consider those ideals for which the quotient ring is a field:<br />

136. Definition: Proper ideal<br />

An ideal which is a proper subset of the ring is said to be a proper ideal.<br />

137. Theorem: The ideal closure.<br />

Let D be a subset of the ring R. Then the intersection of all ideals containing<br />

D is an ideal, called the ideal closure of D.<br />

Proof : Let us first show that the intersection I is again an ideal; the subgroup<br />
property follows since an intersection of subgroups is a subgroup. So let r ∈ R,<br />
i ∈ I and let I1 be an arbitrary ideal containing D. Then ri ∈ I1. Since this<br />
holds for all I1 we get ri ∈ I, and analogously ir ∈ I.<br />
Next, I is the least ideal containing D, since it contains D and is per construction<br />
part of all the others.<br />

138. Definition: The ideal closure<br />

The ideal constructed in the previous theorem is said to be the ideal generated<br />

by D or the ideal closure of D.<br />



139. Theorem: Description of the ideal closure.<br />

Let D be a subset of the ring R. Then the ideal closure of D is the set<br />

{x1a1 + . . . + xnan : x1, . . . , xn ∈ R, a1, . . . , an ∈ D}<br />

Proof : Let B denote the set above. It is readily checked that it is an ideal<br />

according to the definition of ideals. It also contains D since any element<br />

a ∈ D may be written in the form 1a, where 1 denotes the 1-element of the<br />

ring. And any ideal containing D must contain all elements of B. And so B<br />

must be the least ideal containing D.<br />
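For R = Z this description can be tested directly (a sketch of ours; the generating<br />
set {4, 6} is an illustrative choice, and in Z the resulting ideal is gcd(4, 6)Z = 2Z):<br />

```python
# T139 for R = Z, D = {4, 6}: the Z-linear combinations x1*4 + x2*6
# form the ideal generated by D, which in Z is gcd(4, 6) * Z = 2Z.
from math import gcd
from itertools import product

D = [4, 6]
combos = {x1 * D[0] + x2 * D[1] for x1, x2 in product(range(-10, 11), repeat=2)}
g = gcd(D[0], D[1])
window = set(range(-20, 21))
print(sorted(combos & window) == sorted(m for m in window if m % g == 0))  # True
```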

140. Definition: Maximal ideal<br />

An ideal, which is maximal among all proper ideals, is said to be a maximal<br />

ideal<br />

E59<br />

Now we are ready for<br />

11: Fields<br />

11. 1: Definitions<br />

141. Definition: Field<br />

A ring, with commutative multiplication, for which any proper element is invertible<br />

wrt multiplication is called a field.<br />

This means an algebraic structure (L, +, −, 0, ·, ⋆, 1) satisfying the following<br />

axioms<br />

1) · is commutative<br />

2) (L, +, −, 0, ·, 1) is a ring<br />

3) (L ∗ , ·, 1, ⋆) is a group , where L ∗ = L \ {0}.<br />

142. Definition: Subfield<br />

A substructure of a field, being itself a field, is said to be a subfield.<br />



143. Theorem: Any substructure of a field is a subfield<br />

Any substructure of a field is a subfield<br />

Proof : You have to check that the substructure satisfies the axioms, which<br />

results from the analogous results for subrings and subgroups (T118 and T93).<br />

11. 2: Quotient ring over a maximal ideal<br />

Now we are prepared for one of the main results in algebra:<br />

144. Theorem: The Quotient ring wrt a maximal ideal is a field<br />

Let I be a maximal ideal in the ring R. Then the quotient ring R/I is a field<br />

Proof : Let L denote R/I. We know then that L is a ring (T 127). What<br />

remains is then to show that any proper element x in L is invertible. Let us<br />
choose a ∈ R with x = [a] and so x = a + I. Since x is proper we have that<br />
a /∈ I. Now let J be the ideal generated by I ∪ {a}. Since the maximal ideal I<br />
is a proper subset of J, it is not possible for J to be a proper ideal, therefore J = R, and so<br />

1 ∈ J. This means according to T139 that you can find a1, . . . , an ∈ I ∪ {a}<br />

and r1, . . . , rn ∈ R such that<br />

1 = r1a1 + . . . + rnan,<br />

which after some change of order also may be written<br />

1 = r1a1 + . . . + rpap + rp+1a + . . . + rna = ra + r ′ ,<br />

where we have put r = rp+1 + . . . + rn and r ′ = r1a1 + . . . + rpap. This implies,<br />

since r ′ ∈ I, that [ar] = [1]. Putting now y = [r] then yields xy = [a][r] =<br />

[ar] = [1], which says that x has y as its inverse. E60<br />
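For R = Z and the maximal ideal I = 5Z (an example of ours, not from the text)<br />
the theorem can be observed directly: every proper class in Z/5Z is invertible.<br />

```python
# T144 in a concrete case: every nonzero class in Z/5Z has an inverse,
# found by brute force.
p = 5
for x in range(1, p):
    inv = next(y for y in range(1, p) if (x * y) % p == 1)
    print(x, "has inverse", inv, "modulo", p)
```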

11. 3: Fields of fractions<br />

Now we come to yet another way to construct fields from rings. We shall show<br />

how to construct all the fractions.<br />

The rational numbers constitutes a field, which moreover is the least to contain<br />

the integers Z as a subring. Any integral domain like Z, can in the same<br />

way be considered to be situated inside some field. Next follows a way to<br />

construct such a field. You can also consider this to be a way to construct the<br />



rationals from the integers. This makes it possible to take the stand that you<br />
can define the rationals if only you take the integers for granted. This fits into<br />

a program where you want to have all concepts defined. This includes a way of<br />

constructing the integers from the natural numbers. And so on till you reach<br />

the foundations of mathematics based on set theory. But this is quite another<br />
story to be told in another section.<br />

It is then important to say that the construction can be used in a lot of other<br />

cases, where we don’t have the result of the construction already.<br />

First we have to clarify the notions with the following<br />

145. Definition: Field of fractions<br />

Let R be a ring and let L be a field. Suppose there exists an injective homomorphism<br />
i : R → L into the underlying ring of L, with the property that any<br />

x ∈ L may be written in the form i(p)/i(q) where p and q are in R. Now we<br />

use i to identify R with i(R), that is we consider p and i(p) to be the same.<br />

Then R is a subring of L and any element in L can be written in the form p/q,<br />

where p and q belongs to R. We shall say that L via i is a field of fractions for<br />

R. We shall call i the associated embedding of R into L<br />

Now follows the construction, which has the form of a<br />

146. Theorem: Existence of field of fractions<br />

There exists a field of fractions for the ring R if and only if R is an integral<br />

domain<br />

Proof : The necessity of the condition follows from the fact that L as a field<br />

is also an integral domain and so the i(R) as a substructure of an integral<br />

domain is itself an integral domain, and then of course this also holds for R by<br />

isomorphism.<br />

To prove the sufficiency assume that R is an integral domain.<br />

We define the set S = {(p, q) ∈ R × R : q ̸= 0}. On S we define an addition<br />

+ by the assignment (a, b) + (c, d) = (ad + bc, bd) and a multiplication by<br />

the assignment (a, b)(c, d) = (ac, bd). That this in fact defines operations relies<br />
heavily on R being an integral domain, since the non existence of zero divisors<br />

ensures that bd ̸= 0. We also define the opposite operation −(a, b) = (−a, b)<br />

and a reciprocal operation (a, b) −1 = (b, a). Finally we define a zero element<br />

(0, 1) and a unit element (1, 1).<br />



Then it is routine to check that S equipped with these operations is a ring,<br />
with one caveat: (a, b) + (−a, b) = (0, b 2 ), so (−a, b) is an additive inverse of<br />
(a, b) only modulo elements of the form (0, b); this defect disappears in the<br />
quotient taken below.<br />

Now the set I consisting of all elements of the form (0, b) can be checked to be<br />

an ideal.<br />

We let L denote the quotient ring of S wrt this ideal. Let ∼ denote the associated<br />

equivalence relation, then the following calculation<br />

(x, y) ∼ (u, v) ⇔ (x, y) − (u, v) ∈ I ⇔ (xv − yu, yv) ∈ I ⇔ xv − yu = 0<br />

shows that<br />

(x, y) ∼ (u, v) ⇔ xv = yu<br />

In what follows we shall use [p/q] to denote the equivalence class containing<br />

(p, q) ∈ S.<br />

We notice that [1/1] is the class which contains the unit element (1, 1) in the<br />

ring S, and so is the unit element, 1, in the quotient L. Furthermore this class<br />

also contains all the elements (a, a) because (a, a) ∼ (1, 1) since a · 1 = 1 · a.<br />

Analogously you see that [0/1] is the zero element, 0.<br />

If [a/b] ̸= 0 then a ̸= 0, and then (b, a) ∈ S, and [b/a][a/b] = [ab/ab] = 1, and<br />
this shows that [a/b] is invertible with [a/b] −1 = [b/a]. And so L is a field.<br />

The mapping defined by h(a) = (a, 1) is a homomorphism of R into S, since<br />

h(a) + h(b) = (a, 1) + (b, 1) = (a · 1 + b · 1, 1 · 1) = (a + b, 1) = h(a + b)<br />

h(a)h(b) = (a, 1)(b, 1) = (ab, 1) = h(ab)<br />

The mapping i : R → L defined by i(a) = [h(a)] = [a/1] is a composition of<br />

two homomorphisms and therefore itself a homomorphism. To show that it is<br />

injective lets assume that i(a) = i(b). Using the definitions we see that<br />

i(a) = i(b) ⇔ [a/1] = [b/1] ⇔ (a, 1) ∼ (b, 1) ⇔ a · 1 = 1 · b ⇔ a = b.<br />

which proves injectivity.<br />
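The equivalence (x, y) ∼ (u, v) ⇔ xv = yu at the heart of the construction can<br />
be compared with Python's Fraction type, which keeps one canonical representative<br />
per class (an illustration of ours, not part of the text):<br />

```python
# The equivalence from the construction of the field of fractions,
# compared with fractions.Fraction, which normalizes each class.
from fractions import Fraction

def equiv(p, q):
    """(x, y) ~ (u, v) if and only if x*v == y*u (y, v nonzero)."""
    return p[0] * q[1] == p[1] * q[0]

print(equiv((2, 4), (1, 2)))             # True: (2, 4) and (1, 2) give the same class
print(Fraction(2, 4) == Fraction(1, 2))  # True: same canonical representative
print(equiv((2, 4), (3, 5)))             # False
```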

As a result of the identification of a with i(a) we have justified the following<br />

147. Definition: Canonical field of fractions for an integral domain<br />

The field constructed in the previous theorem is said to be the canonical field of<br />

fractions for R. The mapping i is said to be the canonical embedding of R.<br />



The canonical field of fractions L for an integral domain R has the property<br />

that it is a field which is an extension of R. Furthermore it is minimal wrt<br />

this property in the sense that it does not contain more than absolutely<br />

necessary, namely the fractions made from elements in R.<br />

In some sense it is also the only one possible. This is stated in the following<br />

148. Theorem: ”Uniqueness” of field of fractions<br />

Assume that L, M are fields of fractions for the ring R via i, j. Then there<br />

exists an isomorphism F between the fields L and M such that j = F ◦ i, which<br />

in particular means that the fields are isomorphic.<br />

Proof : It is enough to show the theorem when L is the canonical field of<br />

fractions for R. We use the notation from the proof of the existence part.<br />

So let M be any field of fractions for R and let j be the embedding of R into<br />

M. We define the map f of S into M by<br />

f(p, q) = j(p)/j(q)<br />

The first step is to show that f is a homomorphism (between rings). This is<br />
straightforward using that j is a homomorphism and M is a field:<br />
f((p, q) + (r, s)) = f(ps + qr, qs) = j(ps + qr)/j(qs)<br />
= (j(p)j(s) + j(q)j(r))/(j(q)j(s)) = j(p)/j(q) + j(r)/j(s) = f(p, q) + f(r, s),<br />
and similarly f((p, q)(r, s)) = f(pr, qs) = j(pr)/j(qs) = f(p, q)f(r, s).<br />

Next we note that the equivalence relation ∼ is generated by f in the sense that<br />

(p, q) ∼ (r, s) ⇔ f(p, q) = f(r, s)<br />

This means that we can define a function F on S/I by F ([p/q]) = f(p, q)<br />

since the value does not depend on the chosen representative of the class. Then<br />

f = F ◦k and therefore F is a homomorphism by a general theorem, T392, since<br />

f is surjective as a result of the definition D145 and since k is a homomorphism<br />

(T56).<br />

Next we show that F is injective. So assume that F (a) = F (b) with a =<br />

[p/q], b = [r/s]. Then f(p, q) = F (a) = F (b) = f(r, s) and so (p, q) ∼ (r, s)<br />

which means that a = b. Since f is surjective, so is F . Now we have shown<br />

that F is an isomorphism between fields.<br />



The prototype example of field extension is the extension from Z to Q (see<br />

E61)<br />

A very important field of fractions is the one resulting from the ring of<br />
polynomials: E62.<br />

12: Examples and exercises<br />

Example 22: Words of equal length (see nr 74)<br />

The relation ”x and y have the same length” is an equivalence relation on the<br />
monoid of words and it is compatible with concatenation. An equivalence class<br />
consists of all words of a certain length. This classification is induced by the<br />
homomorphism which maps a word to its length.<br />

Example 23: The semigroup of words (see nr 74)<br />

The set S of words over a given alphabet with concatenation as operation is a<br />

semigroup.<br />

Example 24: The semigroup of words is not a monoid, but ... (see nr 74)<br />

If the semigroup of words is extended with the empty word you will get a<br />

monoid with the empty word as the neutral element.<br />

Example 25: The set of naturals with addition is a semigroup (see nr 74)<br />

The set of naturals with addition is a semigroup<br />

Example 26: The word length is a homomorphism (see nr 74)<br />

The mapping x ↦→ |x| which to a word assigns its length (number of letters) is a<br />

semigroup homomorphism. It can in an obvious way be extended to a monoid<br />

homomorphism<br />

Exercise 27: Greatest common divisor gives a semigroup (see nr 74)<br />

On the integers you have the binary operation x ⊓ y which to x and y assigns<br />

the greatest common divisor of x and y.<br />

Show that Z with this operation is a semigroup.<br />

Exercise 28: Greatest common divisor does not yield a monoid (see nr 74)<br />

Show that the semigroup in X27 is not (the underlying semigroup of) a monoid.<br />

Exercise 29: Minimum gives a semigroup (see nr 74)<br />

We define x ∧ y as the minimum of x and y.<br />



Show that ∧ is a binary operation on Z, making it a semigroup and that N is<br />

a subsemigroup.<br />

Exercise 30: Minimum does not give a monoid (see nr 74)<br />

Show that the semigroup in X29 is not (the underlying semigroup of) a monoid.<br />

Exercise 31: Minimum induces an operation on sequences (see nr 74)<br />

The set of sequences of integers may be identified with F(N, Z).<br />

We let ∧ (also) denote the operation on F(N, Z) which is induced by ∧. This<br />

operation is the component wise minimum, that is (a ∧ b)(n) = a(n) ∧ b(n).<br />

Show that F(N, Z) with this operation is a semigroup.<br />

We shall say about a sequence of integers that it has finite support if only<br />

finitely many of its elements are different from zero. We denote the set of<br />

sequences with finite support by Z ∞ 0 .<br />

Show that Z ∞ 0 is a subsemigroup.<br />

Exercise 32: The prime spectrum as a homomorphism (see nr 75)<br />

Continuation of X27 and X31. To any natural number n we assign the sequence<br />

e with finite support, where e(i) is defined by the property that<br />

n = p(1) e(1) p(2) e(2) · · ·<br />

is the factorization of n into primes. Here p(i) is the prime nr i. So e(i) = 0<br />

if the prime p(i) does not appear in the factorization. We denote this sequence<br />

with σ(n) and call it the prime spectrum of n.<br />

Show that σ is a homomorphism from N into Z ∞ 0 for a suitable choice of<br />

operations.<br />

Show also that σ is a monoid homomorphism, when the operation on N is<br />

multiplication and the operation on Z ∞ 0 is componentwise addition.<br />
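The homomorphism property σ(mn) = σ(m) + σ(n) can be observed numerically<br />
(a sketch of ours with a truncated prime list; a numerical illustration, not a proof,<br />
so it does not replace the exercise):<br />

```python
# The prime spectrum of X32, truncated to the first six primes: sigma maps
# n to its exponent sequence, and multiplication becomes componentwise addition.
def sigma(n):
    e = []
    for p in [2, 3, 5, 7, 11, 13]:
        k = 0
        while n % p == 0:
            n //= p
            k += 1
        e.append(k)
    return e

m, n = 12, 45   # 12 = 2^2 * 3 and 45 = 3^2 * 5
print(sigma(m))                # [2, 1, 0, 0, 0, 0]
print(sigma(n))                # [0, 2, 1, 0, 0, 0]
print(sigma(m * n) == [a + b for a, b in zip(sigma(m), sigma(n))])  # True
```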

Exercise 33: Operations on subsets (see nr 77)<br />

Let X be a set and let P(X) be the set of subsets of X. Show that (P(X), ∩, X)<br />
and (P(X), ∪, ∅) are monoids. Let Xe denote the set of finite subsets of X and<br />
Xc denote the set of subsets with finite complement.<br />

Show that (A, ♢ ) is a semigroup when A is Xe or Xc and ♢ is ∩ or ∪.<br />

Some of these semigroups may be extended to monoids. Decide which.<br />



Consider the possibility of the mapping A ↦→ X \ A being a homomorphism or<br />

an isomorphism.<br />

Example 34: Remainder classes with multiplication is a monoid (see nr 77)<br />

The relation x ≡ y on Z defined by demanding x and y to be in the same<br />

remainder class modulo n is an equivalence relation which is compatible with<br />

multiplication. Since (Z, ·, 1) is a monoid so is (Z/≡, ·, 1), where 1 also is used<br />

to denote the remainder class containing 1.<br />

Example 35: The monoid of remainder classes is isomorphic with Zn (see nr<br />

77)<br />

The relation x ≡ y in E34 is induced by the mapping f(x) = x mod n with<br />
values in Zn = {0, 1, . . . , n − 1}. By defining a multiplication ⊙ on Zn by the<br />

formula x ⊙ y = xy mod n, you see that f becomes an isomorphism between<br />

the monoids (Z/≡, ·, 1) and (Zn, ⊙, 1). You often see Zn defined as Z/≡.<br />

Example 36: Condition for existence of modular reciprocal (see nr 77)<br />

Continuation of E34. You can prove by using the extended Euclidean algorithm<br />

that the remainder class [x] is invertible if and only if x and n are coprime<br />

Example 37: Criterion for injectivity of a linear mapping (see nr 100)<br />

A linear mapping is injective if its kernel (null space) only contains the null<br />

vector. You can consider a linear space as a group with u + v, 0 and −u as<br />
operations.<br />

Example 38: The matrix representing a linear mapping with respect to given<br />

bases is uniquely determined (see nr 100)<br />

The mapping A ↦→ LA, which to a matrix A assigns the linear mapping LA<br />

represented by A is bijective. The mapping is obviously linear. Its kernel<br />

consists of the matrices which represent the zero mapping. This means that<br />
each column is the coordinate vector of the 0 vector and therefore itself the 0<br />
vector. So the matrix is the 0 matrix, which is thus the only element<br />

of the kernel.<br />

Exercise 39: Short proof of the Chinese remainder theorem (see nr 100)<br />

Use that the mapping in E15 is a group homomorphism to show it is injective.<br />

This gives another proof of the Chinese remainder theorem (but not a way of<br />

finding the solution).<br />

Exercise 40: A subgroup of the group of rotations of the tetrahedron (see nr<br />

102)<br />

Let G be the group of rotations of a regular tetrahedron with corners A, B,<br />

C and D. Let H be the set of rotations with axis through A. It can be seen<br />



that H is a subgroup with the members I, a and a −1 , where a is a rotation<br />

of 120 ◦ about the axis through A. Determine the partition in left and right<br />

cosets.<br />

Information: a = (BCD), b = (ADC), c = (DAB), d = (ACB).<br />

Exercise 41: A subgroup of the group of isometries of the triangle (see nr<br />

102)<br />

Let G be the group of rotations and reflections of a regular triangle with corners<br />

A, B, C. Let d denote rotation 120 ◦ about the midpoint of the triangle and<br />

let a, b, c denote the reflections in the lines through A, B, and C, respectively.<br />

Let H be the set {I, a}. It may be seen that H is a subgroup. By a direct<br />

calculation (for instance by calculation with permutations of the corners ) you<br />

get that bH = {b, d} and cH = {c, d}. But (bH)(cH) = {b, c, d, d −1 }. This is<br />

an example showing that the product of two left cosets is not necessarily again<br />
a left coset.<br />

Exercise 42: There are the same number of even and odd permutations (see<br />

nr 103)<br />

Show that the set of even permutations constitutes a subgroup with exactly<br />

one more coset, namely the set of odd permutations. Use this to prove that there<br />

is the same number of even and odd permutations.<br />

Exercise 43: Any subgroup containing half the elements is normal (see nr 103)<br />

Show that any subgroup which contains half of the elements is normal. Use it<br />

to solve the preceding exercise<br />

Exercise 44: Index (see nr 104)<br />

Determine the index of the subgroup H in X40.<br />

Exercise 45: Index for the remainder class (see nr 104)<br />

Determine the index of 3Z as a subgroup of (Z, +).<br />

Example 46: The cosets of the stabilizer. A counting formula (see nr 105)<br />

Let X be a set and F a set of bijections of X into itself. Assume further that<br />

F is a group with composition of mappings as operation. Then we say that we<br />

have a group of transformations acting on X.<br />

For x ∈ X we define the stabilizer of x to be the subset of F consisting of those<br />

members that fix x. We denote it by Fx. To summarize:<br />

h ∈ Fx ⇔ h(x) = x<br />



Now it is easily checked that Fx is a subgroup of F. Let f ∈ F and let us<br />

determine the left coset fFx, (we use multiplicative notation for composition).<br />

From the definition of coset we get that<br />

g ∈ fFx ⇔ f −1 g ∈ Fx ⇔ f −1 g(x) = x ⇔ g(x) = f(x)<br />

Letting y = f(x) we see that the coset<br />

fFx = {g : g(x) = y},<br />

let us denote it by Fx→y. Therefore we have a coset for each y which is f(x) for<br />
some f ∈ F, that is for each y in the set {f(x) : f ∈ F}. This set is usually<br />
called the orbit of x wrt F and may be denoted by F(x). Therefore the index of<br />
the stabilizer Fx is |F(x)|, the number of elements in the orbit.<br />
Then it follows from Lagranges formula that<br />
|F| = |Fx| · |F(x)|,<br />

a formula that may serve to determine the number of elements in a group of<br />

transformations if you have a suitable stabilizer.<br />
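The counting formula can be checked in a small case (our own example: the full<br />
permutation group of {0, 1, 2} acting on that set):<br />

```python
# |F| = |F_x| * |orbit of x| for F = all permutations of {0, 1, 2},
# acting by x -> g[x].
from itertools import permutations

F = list(permutations(range(3)))
x = 0
stabilizer = [g for g in F if g[x] == x]   # the subgroup F_x
orbit = {g[x] for g in F}                  # the orbit of x
print(len(F) == len(stabilizer) * len(orbit))   # True: 6 = 2 * 3
```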

Exercise 47: A non normal subgroup of the tetrahedron group (see nr 107)<br />

Show that the subgroup H in X40 not is normal<br />

Exercise 48: A normal subgroup of the tetrahedron group (see nr 107)<br />

Let G be the group of rotations of a regular tetrahedron. Let H be the set<br />

consisting of the three edge preserving rotations (that is having axis through<br />

midpoints of edges) and the identity mapping. Show that H is a normal subgroup.<br />

Determine the cosets and set up a table for calculation with the classes.<br />

Exercise 49: Same argument (see nr 114)<br />

Let C ∗ denote the set of invertible complex numbers, that is all complex numbers<br />

except 0. Then C ∗ is a group with complex multiplication. Let ∼ be the<br />

relation z1 ∼ z2 defined by z1 = rz2 for some positive real number r. Show that<br />

this is a compatible equivalence relation. Determine the equivalence classes,<br />

and the quotient group. Find a function f of C ∗ into a group that generates<br />

this relation. Show that it is a homomorphism and find its kernel.<br />

Exercise 50: Same modulus (see nr 114)<br />

Let C ∗ denote the set of invertible complex numbers, that is all complex numbers<br />

except 0. Then C ∗ is a group with complex multiplication. Let ∼ be<br />

the relation z1 ∼ z2 defined by z1 = uz2 for some complex number u on the<br />



unit circle. Show that this is a compatible equivalence relation. Determine the<br />

equivalence classes, and the quotient group. Find a function f of C ∗ into a<br />

group that generates this relation. Show that it is a homomorphism and find<br />

its kernel.<br />

Example 51: Definition of argument (see nr 114)<br />

We let Rot denote the set of rotations of the plane around the origin.<br />

We let SO2 denote the set of orthogonal orientation preserving matrices of<br />

order 2.<br />

For any t ∈ R we let R(t) denote the member of Rot with angle t.<br />

For any t ∈ R we let A(t) denote the matrix<br />
( cos t  −sin t )<br />
( sin t   cos t )<br />

For any t ∈ R we let E(t) denote the number e^{it}.<br />

For any u ∈ U we let L(u) denote the mapping of C into itself defined by<br />

L(u)(z) = uz. Then L(u) ∈ Rot.<br />

For any u = a + ib ∈ U we let U(u) denote the matrix<br />
( a  −b )<br />
( b   a )<br />

For any 2×2 real matrix A we let M(A) be the mapping of the plane into itself<br />
given by M(A)(x) = Ax.<br />

Show that the following diagram reflects these definitions and argue that all<br />

the arrows are homomorphisms. Tell which are isomorphisms and find the<br />

quotients modulo the kernels for those which are not.<br />

(R, +) --R--> (Rot, ◦)<br />
  |E             ^<br />
  v              |M<br />
(U, ·) --U--> (SO2, ·)<br />
together with the diagonals A : (R, +) → (SO2, ·) and L : (U, ·) → (Rot, ◦).<br />

Exercise 52: The stabilizer is usually not normal (see nr 114)<br />

Continuation of E46. When is the stabilizer normal? Not very often. We are<br />
going to use the criterion (3) in T107. We have seen that g ∈ fFx if and only<br />
if g(x) = f(x). In a similar way we can show that g ∈ Fxf if and only if<br />
g −1 (x) = f −1 (x) or equivalently g(f −1 (x)) = x.<br />



We shall use this to illustrate an important example:<br />

Let G be the group of rotations of the unit sphere and let H be the subgroup<br />

of rotations having the z-axis as their rotation axis. Then H is the stabilizer<br />
of the north pole (0, 0, 1). Let us choose f to be the rotation 90 degrees about<br />
the y-axis; then g ∈ fH if and only if g(0, 0, 1) = f(0, 0, 1) = (1, 0, 0) and<br />
g ∈ Hf if and only if g −1 (0, 0, 1) = f −1 (0, 0, 1) = (−1, 0, 0), which means that<br />
g(−1, 0, 0) = (0, 0, 1). It is an interesting exercise to show that this implies<br />
that g = f (g and f coincide in two points and so everywhere!) and so fH<br />
and Hf only have f in common and so cannot coincide.<br />
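The coset computation can be double-checked with explicit rotation matrices. The following Python sketch (the names and the particular choice of h are ours) takes f as the 90-degree rotation about the y-axis and h as a 90-degree rotation about the z-axis, so h ∈ H, and verifies that g = h∘f lies in Hf but not in fH.<br />

```python
def matvec(M, v):
    # apply a 3x3 matrix to a vector
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def matmul(A, B):
    # product of 3x3 matrices, so that (A·B)v = A(Bv)
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

Ry = [[0, 0, 1], [0, 1, 0], [-1, 0, 0]]   # f: 90-degree rotation about the y-axis
Rz = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # h: 90-degree rotation about the z-axis (h ∈ H)

north = (0, 0, 1)
assert matvec(Ry, north) == (1, 0, 0)       # f sends the north pole to (1, 0, 0)

g = matmul(Rz, Ry)                          # g = h∘f, a member of the right coset Hf
assert matvec(g, (-1, 0, 0)) == (0, 0, 1)   # the criterion for g ∈ Hf holds
assert matvec(g, north) != (1, 0, 0)        # but g does not send the pole to (1, 0, 0), so g ∉ fH
```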

Let us take a deeper look at this example. Each coset is determined by the value<br />
at the north pole which is common for all its members. So one class could<br />
consist of all the rotations which rotate the north pole to (1, 0, 0), independent<br />
of axis and angle. Any point on the sphere is a possible image (the orbit of<br />
the north pole is the whole sphere). So the quotient can be identified with the<br />
sphere. With the customary notation this is often written as the formula:<br />
SO3/SO2 = S^2<br />

Exercise 53: Z is the prototype ring (see nr 115)<br />

Convince yourself that Z with obvious operations is a ring<br />

Exercise 54: Ring of matrices (see nr 115)<br />

Convince yourself that the set of n × n matrices is a ring with the operations<br />

matrix addition, matrix subtraction, zero matrix, opposite matrix, matrix multiplication<br />

and identity matrix.<br />

Show that this also can be given meaning and holds when the elements of the<br />

matrices are taken from an arbitrary ring.<br />

Exercise 55: Ring of endomorphisms (see nr 115)<br />

This is a generalization of X54. Let V be a linear space and let R denote<br />
Lin(V, V ), the set of linear mappings of V into itself. Show that it is possible<br />
to equip R as a ring with addition being the usual addition of functions with<br />
values in a linear space and multiplication being composition of functions.<br />

Exercise 56: Function rings (see nr 120)<br />

Show that the following sets (with obvious operations) are rings:<br />

1) F(X, R), the set of real functions on an arbitrary set X<br />

2) F(X, Z), the set of functions with integer values on a set X<br />



Example 57: Polynomials with real coefficients (see nr 120)<br />

The set of real functions on R is a ring.<br />

The set of polynomials is a subring and so a ring. This ring is usually denoted<br />

by R[X]. In the sequel we shall denote it P.<br />

In P we have a lot of results which are analogous to results valid in the ring of<br />

integers when we replace the order of the integers by the order of polynomials<br />

according to degree: one polynomial is considered to be less than another one<br />

if its degree is less:<br />
You can carry out division with remainder. A well-known algorithm teaches you<br />
how. The resulting remainder will be less than the divisor. And so you can<br />

use a Euclidean algorithm to find a greatest common divisor. And you can<br />

define an extended algorithm: for any two given polynomials a and b you can<br />
express their greatest common divisor c in the form xa + yb, where x and y are<br />

polynomials.<br />
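Division with remainder can be sketched in a few lines of Python (the function name and the coefficient-list convention, lowest degree first, are our own, not from the text):<br />

```python
from fractions import Fraction

def poly_divmod(a, b):
    """Divide polynomial a by b (coefficient lists, lowest degree first).
    Returns (q, r) with a = q*b + r and deg r < deg b."""
    b = [Fraction(c) for c in b]
    r = [Fraction(c) for c in a]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(r) >= len(b):
        shift = len(r) - len(b)
        coef = r[-1] / b[-1]          # eliminate the leading term of r
        q[shift] += coef
        for i, c in enumerate(b):
            r[shift + i] -= coef * c
        while r and r[-1] == 0:
            r.pop()                   # drop leading zeros so the degree decreases
    return q, r

# (x^3 - 1) = (x^2 + x + 1)(x - 1) + 0
q, r = poly_divmod([-1, 0, 0, 1], [-1, 1])
assert q == [1, 1, 1] and r == []
```

Iterating on the pair (b, r) gives the Euclidean algorithm, and keeping track of the quotients gives the extended version producing x and y with xa + yb = gcd(a, b).<br />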

Let p be a polynomial. Put I = pP, then I is an ideal consisting of all multiples<br />

of p. Therefore P/I is a ring, the ring of remainder classes modulo p.<br />

Let us see how multiplication works when p(x) = 1 + x^2:<br />
[a + bx][c + dx] = [(a + bx)(c + dx)] = [ac + (ad + bc)x + bdx^2]<br />
= [ac − bd + (ad + bc)x + bd(1 + x^2)] = [ac − bd + (ad + bc)x]<br />

From which we see that the mapping (a+ib) → [a+bx] is a homomorphism from<br />

the complex numbers with multiplication. It is actually a field isomorphism.<br />
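The class multiplication matches complex multiplication, and this can be checked directly; in the Python sketch below (our own naming) a class [a + bx] is represented by the pair (a, b):<br />

```python
def mul_classes(u, v):
    # product of classes [a + bx] in P/I with p(x) = 1 + x^2, where x^2 ≡ -1
    a, b = u
    c, d = v
    return (a * c - b * d, a * d + b * c)

# agrees with complex multiplication: (2 + 3i)(4 - i) = 11 + 10i
z = complex(2, 3) * complex(4, -1)
assert mul_classes((2, 3), (4, -1)) == (z.real, z.imag)
```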

If q divides p then I = pP must be a subideal of J = qP. If moreover p does<br />
not divide q then the subideal is a proper subideal.<br />

The polynomial p is said to be irreducible if it cannot be factored into two<br />
polynomials unless one of the factors is a constant. This is the polynomial<br />

analogue of a prime number.<br />

If p is irreducible then I is a maximal ideal (show it!) and the quotient ring<br />

consequently a field.<br />

The polynomial p(x) = 1 + x^2 is irreducible and the associated field is isomorphic<br />

with the field of complex numbers.<br />



What is said above may be generalized to polynomials whose coefficients are<br />

taken from an arbitrary ring. This may lead to finite fields.<br />

Example 58: Rings of remainder classes are the prototypes of a quotient ring.<br />

(see nr 129)<br />

Let n ∈ N with n > 1.<br />

Let I = nZ, and notice that I is an ideal in the ring (Z, +, ·). Two elements<br />

x and y are equivalent if y ∈ x + I, that is when y − x ∈ nZ. Let k be the<br />

canonical projection of Z on Z/I, defined by k(x) = [x] = x + I = x + nZ,<br />

which means that k(x) is the remainder class modulo n which contains x.<br />

We shall here see an alternative way to construct this ring:<br />

Let Zn = {0, . . . , n − 1}. We define an addition on Zn by letting u ⊕ v =<br />

(u + v) mod n. Analogously we define subtraction and multiplication. There<br />

are also obvious candidates for 0-element and 1-element. We are going to show<br />

that this defines a ring isomorphic with Z/nZ.<br />

It is seen that the mapping f : Z → Zn, defined by f(z) = z mod n, is a<br />

homomorphism with respect to the underlying algebraic structure. To check<br />

the axioms we use that f is surjective to see that addition and multiplication<br />

are associative. The remaining axioms are straightforward.<br />

The mapping g : Zn → Z/I defined by g(x) = [x] is a bijection. Its inverse<br />

h = g −1 is given by h([x]) = x mod n.<br />

Direct check gives that f = h ◦ k. Then f and k are homomorphisms and since<br />
k is surjective it follows that h is a homomorphism and consequently an isomorphism.<br />
It may be noticed that instead of using the classes we are using the<br />
principal representatives, and instead of calculating with classes we are doing<br />
calculations with the representatives. You can see an example of operation<br />
tables in the figure. In practice you won't look in the table for each operation<br />
but instead carry out the operations in the usual way and then take the remainder<br />
at the end or at certain practical moments.<br />

⊕ | 0 1 2 3<br />
0 | 0 1 2 3<br />
1 | 1 2 3 0<br />
2 | 2 3 0 1<br />
3 | 3 0 1 2<br />
<br />
⊗ | 0 1 2 3<br />
0 | 0 0 0 0<br />
1 | 0 1 2 3<br />
2 | 0 2 0 2<br />
3 | 0 3 2 1<br />
<br />
Table 1. Addition table (⊕) and multiplication table (⊗) in Z4<br />
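The tables can be generated directly from the principal representatives; a minimal Python sketch (the function name is ours):<br />

```python
def tables(n):
    # addition and multiplication tables for Zn = {0, ..., n-1},
    # computed on representatives and reduced mod n at the end
    add = [[(u + v) % n for v in range(n)] for u in range(n)]
    mul = [[(u * v) % n for v in range(n)] for u in range(n)]
    return add, mul

add4, mul4 = tables(4)
assert add4[1] == [1, 2, 3, 0]   # the row for 1 in the addition table
assert mul4[2] == [0, 2, 0, 2]   # the row for 2 in the multiplication table
```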

Example 59: Prototype maximal ideal (see nr 140)<br />

The ideal nZ in the ring Z is maximal if and only if n is prime.<br />

If n is not prime, n = pq with 1 < p, q < n, then nZ is a proper subset of the<br />
proper ideal pZ and is then of course not maximal. This proves the only if part.<br />

To prove the if part assume that n is prime and I is an ideal containing nZ as<br />

a proper subset. Then I must contain a number m which is coprime with n.<br />

By the properties of ideals I must then also contain all numbers of the form<br />

xm + yn. By the extended Euclidean algorithm then also the greatest common<br />

divisor of n and m must be in I. Since m and n are coprime, I then contains<br />
1. And so I = Z. This proves that nZ is maximal.<br />
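For small n the statement can be tested by brute force: nZ is maximal exactly when Z/nZ is a field, that is when every nonzero class has a multiplicative inverse. A small Python check (illustrative only):<br />

```python
def is_field(n):
    # every nonzero class in Z/nZ must have a multiplicative inverse
    return all(any(a * b % n == 1 for b in range(1, n)) for a in range(1, n))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Z/nZ is a field if and only if n is prime
assert all(is_field(n) == is_prime(n) for n in range(2, 50))
```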

Example 60: A finite field with 4 elements (see nr 144)<br />

Let P be the ring of polynomials over Z2. Let p be the polynomial x^2 + x + 1,<br />

which is irreducible. The ideal I = pP is therefore maximal, and the quotient<br />

ring P/I a field.<br />

By using the Euclidean algorithm you can choose a polynomial of degree less<br />

than 2 in each remainder class. There are only four such polynomials, namely<br />
0, 1, x and x + 1. These must be in different classes and so the field has four<br />
elements. As an example of multiplication of classes we compute [x][x + 1],<br />
which gives [x^2 + x] = [x^2 + x + 1 − 1] = [x^2 + x + 1] + [−1] = [1]. And so the<br />
product is [1]. We can summarize the operations in the following tables, where<br />
i is the class which contains x and j is the class which contains x + 1.<br />

+ | 0 1 i j<br />
0 | 0 1 i j<br />
1 | 1 0 j i<br />
i | i j 0 1<br />
j | j i 1 0<br />
<br />
· | 0 1 i j<br />
0 | 0 0 0 0<br />
1 | 0 1 i j<br />
i | 0 i j 1<br />
j | 0 j 1 i<br />
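The multiplication table can be reproduced by computing with the representatives of degree less than 2; in this Python sketch (our own encoding) a class [a0 + a1 x] is the pair (a0, a1), x^2 is replaced by x + 1, and all coefficients are taken mod 2:<br />

```python
def mul_gf4(u, v):
    # product in Z2[x]/(x^2 + x + 1); (a0, a1) stands for the class of a0 + a1*x
    a0, a1 = u
    b0, b1 = v
    # (a0 + a1 x)(b0 + b1 x) = a0 b0 + (a0 b1 + a1 b0) x + a1 b1 x^2,
    # and x^2 ≡ x + 1 modulo the ideal, with coefficients mod 2
    return ((a0 * b0 + a1 * b1) % 2, (a0 * b1 + a1 * b0 + a1 * b1) % 2)

one, i, j = (1, 0), (0, 1), (1, 1)
assert mul_gf4(i, j) == one   # [x][x + 1] = [1], as computed above
assert mul_gf4(i, i) == j     # i·i = j, as in the table
```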

Example 61: Extension from Z to Q (see nr 148)<br />

We take the stand that the good Lord has offered us the integers as a gift. If<br />

we furthermore think that he also gave us the rationals, then we can observe<br />
that the rationals are a field of fractions for the integers. If God had been<br />

less generous we could have constructed the rationals as the canonical field of<br />

fractions for Z.<br />

Example 62: Power series as fractions (see nr 148)<br />



The field of fractions for the ring of polynomials may be identified with a set<br />

of formal power series with finitely many negative exponents.<br />

We define a formal power series to be a sequence a ∈ F(Z, R), that is a doubly<br />
infinite sequence, for which there exists an integer N such that a_n = 0 for all<br />
n < N. We write the series as ∑_{n∈Z} a_n x^n, but this is only formal or symbolic;<br />
we are not thinking of it as a proper sum, and no notion of convergence is<br />
involved.<br />
A polynomial a_0 + a_1 X + a_2 X^2 + . . . + a_M X^M may be identified with its<br />
sequence of coefficients . . . , 0, 0, a_0, a_1, . . . , a_M, 0, 0, . . . , which may be<br />
considered as a formal power series.<br />

We define addition and multiplication by<br />
(∑_{n∈Z} a_n x^n) + (∑_{n∈Z} b_n x^n) = ∑_{n∈Z} c_n x^n<br />
(∑_{n∈Z} a_n x^n) · (∑_{n∈Z} b_n x^n) = ∑_{n∈Z} d_n x^n<br />
where<br />
c_n = a_n + b_n and d_n = ∑_{i∈Z} a_i b_{n−i}<br />

It is straightforward to check that these are operations in the set of formal<br />
power series and that you get a ring. When restricted to polynomials they are<br />
the usual operations on polynomials.<br />

And it is actually a field! This takes some effort to show. Moreover it is a field<br />

of fractions for the ring of polynomials. This is harder to show.<br />

You should check that<br />
1/(1 − x) = 1 + x + x^2 + . . .<br />
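The identity can be verified on truncations: computing the Cauchy product of 1 − x and 1 + x + x^2 + · · · up to degree N leaves only the constant term. A short Python sketch (the truncation degree N is our own choice):<br />

```python
def series_mul(a, b, N):
    # Cauchy product of truncated power series; a[i] is the coefficient of x^i
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N + 1)]

N = 10
one_minus_x = [1, -1] + [0] * (N - 1)   # 1 - x, padded with zeros up to degree N
geometric = [1] * (N + 1)               # 1 + x + x^2 + ... + x^N
assert series_mul(one_minus_x, geometric, N) == [1] + [0] * N
```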


13: Index<br />
