Publications catalogue - books



Basic Algebra

Anthony W. Knapp

Abstract/Description – provided by the publisher

Not available.

Keywords – provided by the publisher

Not available.

Availability

Detected institution: not detected
Year of publication: 2006
Browse: SpringerLink

Information

Resource type:

books

Print ISBN

978-0-8176-3248-9

Electronic ISBN

978-0-8176-4529-8

Publisher

Springer Nature

Country of publication

United Kingdom

Publication date

2006

Publication rights information

© Birkhäuser Boston 2006

Subject coverage

Table of contents

Preliminaries about the Integers, Polynomials, and Matrices

Anthony W. Knapp

This chapter is mostly a review, discussing unique factorization of positive integers, unique factorization of polynomials whose coefficients are rational or real or complex, signs of permutations, and matrix algebra.

Sections 1–2 concern unique factorization of positive integers. Section 1 proves the division and Euclidean algorithms, used to compute greatest common divisors. Section 2 establishes unique factorization as a consequence and gives several number-theoretic consequences, including the Chinese Remainder Theorem and the evaluation of the Euler function.
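
As a small computational illustration of this material (a sketch of my own, not code from the book; the function names gcd and euler_phi are mine):

    def gcd(a, b):
        # Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b).
        while b != 0:
            a, b = b, a % b
        return a

    def euler_phi(n):
        # Count the integers 1 <= k <= n with gcd(k, n) = 1.
        return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

    assert gcd(252, 198) == 18
    assert euler_phi(12) == 4   # 1, 5, 7, 11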

Section 3 develops unique factorization of rational and real and complex polynomials in one indeterminate completely analogously, and it derives the complete factorization of complex polynomials from the Fundamental Theorem of Algebra. The proof of the fundamental theorem is postponed to Chapter IX.

Section 4 discusses permutations of a finite set, establishing the decomposition of each permutation as a disjoint product of cycles. The sign of a permutation is introduced, and it is proved that the sign of a product is the product of the signs.
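
The cycle decomposition and the sign can be illustrated by a short sketch (my own, not the book's; a permutation of {0, ..., n-1} is represented as a list perm with perm[i] the image of i):

    def cycles(perm):
        # Return the disjoint cycles of the permutation.
        seen, result = set(), []
        for start in range(len(perm)):
            if start not in seen:
                cycle, x = [], start
                while x not in seen:
                    seen.add(x)
                    cycle.append(x)
                    x = perm[x]
                result.append(cycle)
        return result

    def sign(perm):
        # A cycle of length L contributes (-1)**(L-1); signs multiply.
        s = 1
        for c in cycles(perm):
            s *= (-1) ** (len(c) - 1)
        return s

    assert sign([1, 0, 2]) == -1   # a transposition
    assert sign([1, 2, 0]) == 1    # a 3-cycle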

Sections 5–6 concern matrix algebra. Section 5 reviews row reduction and its role in the solution of simultaneous linear equations. Section 6 defines the arithmetic operations of addition, scalar multiplication, and multiplication of matrices. The process of matrix inversion is related to the method of row reduction, and it is shown that a square matrix with a one-sided inverse automatically has a two-sided inverse that is computable via row reduction.
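
A minimal sketch of matrix inversion by row reduction of the augmented matrix [A | I], in the spirit of Sections 5-6 (an illustration of the standard Gauss-Jordan procedure, not code from the book; it assumes A is square and invertible and is given as a list of rows of floats):

    def invert(A):
        # Gauss-Jordan elimination on the augmented matrix [A | I].
        n = len(A)
        M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
             for i, row in enumerate(A)]
        for col in range(n):
            pivot = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
            M[col], M[pivot] = M[pivot], M[col]
            p = M[col][col]
            M[col] = [x / p for x in M[col]]
            for r in range(n):
                if r != col:
                    factor = M[r][col]
                    M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
        return [row[n:] for row in M]

    print(invert([[2.0, 1.0], [1.0, 1.0]]))   # [[1.0, -1.0], [-1.0, 2.0]]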

Pp. 1-32

Vector Spaces over ℚ, ℝ, and ℂ

Anthony W. Knapp

This chapter introduces vector spaces and linear maps between them, and it goes on to develop certain constructions of new vector spaces out of old, as well as various properties of determinants.

Sections 1–2 define vector spaces, spanning, linear independence, bases, and dimension. The sections make use of row reduction to establish dimension formulas for certain vector spaces associated with matrices. They conclude by stressing methods of calculation that have quietly been developed in proofs.

Section 3 relates matrices and linear maps to each other, first in the case that the linear map carries column vectors to column vectors and then in the general finite-dimensional case. Techniques are developed for working with the matrix of a linear map relative to specified bases and for changing bases. The section concludes with a discussion of isomorphisms of vector spaces.
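
For the square case, the change-of-basis rule alluded to here takes the familiar form (standard notation, which may differ from the book's): if A and A' are the matrices of the same linear map in two ordered bases and C is the matrix whose columns express the new basis vectors in terms of the old, then

\[
A' = C^{-1} A C .
\]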

Sections 4–6 take up constructions of new vector spaces out of old ones, together with corresponding constructions for linear maps. The four constructions of vector spaces in these sections are those of the dual of a vector space, the quotient of two vector spaces, and the direct sum and direct product of two or more vector spaces.

Section 7 introduces determinants of square matrices, together with their calculation and properties. Some of the results that are established are expansion in cofactors, Cramer’s rule, and the value of the determinant of a Vandermonde matrix. It is shown that the determinant function is well defined on any linear map from a finite-dimensional vector space to itself.
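
For reference, the Vandermonde determinant mentioned above has the standard evaluation (stated in common notation, not necessarily the book's):

\[
\det \begin{pmatrix}
1 & x_1 & \cdots & x_1^{\,n-1} \\
1 & x_2 & \cdots & x_2^{\,n-1} \\
\vdots & \vdots & & \vdots \\
1 & x_n & \cdots & x_n^{\,n-1}
\end{pmatrix}
= \prod_{1 \le i < j \le n} (x_j - x_i).
\]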

Section 8 introduces eigenvectors and eigenvalues for matrices, along with their computation. Also, in this section the characteristic polynomial and the trace of a square matrix are defined, and all these notions are reinterpreted in terms of linear maps.
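
A small worked example of these notions (my own, not one from the text): for

\[
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad
\det(\lambda I - A) = \lambda^{2} - 4\lambda + 3 = (\lambda - 1)(\lambda - 3),
\]

the eigenvalues are 1 and 3, with eigenvectors (1, -1) and (1, 1); the trace 4 is the sum of the eigenvalues and the determinant 3 is their product.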

Section 9 proves the existence of bases for infinite-dimensional vector spaces and discusses the extent to which the material of the first eight sections extends from the finite-dimensional case to be valid in the infinite-dimensional case.

Pp. 33-88

Inner-Product Spaces

Anthony W. Knapp

This chapter investigates the effects of adding the additional structure of an inner product to a finite-dimensional real or complex vector space.

Section 1 concerns the effect on the vector space itself, defining inner products and their corresponding norms and giving a number of examples and formulas for the computation of norms. Vector-space bases that are orthonormal play a special role.

Section 2 concerns the effect on linear maps. The inner product makes itself felt partly through the notion of the adjoint of a linear map. The section pays special attention to linear maps that are self-adjoint, i.e., are equal to their own adjoints, and to those that are unitary, i.e., preserve norms of vectors.
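
In standard notation (which may differ from the book's), the adjoint L* of a linear map L and the two special classes are characterized by

\[
\langle L u, v \rangle = \langle u, L^{*} v \rangle \ \text{for all } u, v; \qquad
L \ \text{self-adjoint} \iff L^{*} = L; \qquad
L \ \text{unitary} \iff L^{*}L = I \iff \|L v\| = \|v\| \ \text{for all } v.
\]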

Section 3 proves the Spectral Theorem for self-adjoint linear maps on finite-dimensional inner-product spaces. The theorem says in part that any self-adjoint linear map has an orthonormal basis of eigenvectors. The Spectral Theorem has several important consequences, one of which is the existence of a unique positive semidefinite square root for any positive semidefinite linear map. The section concludes with the polar decomposition, showing that any linear map factors as the product of a unitary linear map and a positive semidefinite one.
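
In common notation (an illustration, not a quotation of the book's formulas): if A is positive semidefinite and A = U D U* with U unitary and D = diag(λ₁, ..., λₙ) with each λᵢ ≥ 0, then

\[
\sqrt{A} = U \,\mathrm{diag}\bigl(\sqrt{\lambda_{1}}, \ldots, \sqrt{\lambda_{n}}\bigr)\, U^{*},
\]

and for an invertible linear map L the polar decomposition can be written L = W P with P = √(L*L) positive semidefinite and W = L P⁻¹ unitary.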

Pp. 89-116

Groups and Group Actions

Anthony W. Knapp

This chapter develops the basics of group theory, with particular attention to the role of group actions of various kinds. The emphasis is on groups in Sections 1–3 and on group actions starting in Section 6. In between is a two-section digression that introduces rings, fields, vector spaces over general fields, and polynomial rings over commutative rings with identity.

Section 1 introduces groups and a number of examples, and it establishes some easy results. Most of the examples arise either from number-theoretic settings or from geometric situations in which some auxiliary space plays a role. The direct product of two groups is discussed briefly so that it can be used in a table of some groups of low order.

Section 2 defines coset spaces, normal subgroups, homomorphisms, quotient groups, and quotient mappings. Lagrange’s Theorem is a simple but key result. Another simple but key result is the construction of a homomorphism with domain a quotient group G/H when a given homomorphism of G is trivial on H. The section concludes with two standard isomorphism theorems.
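
Stated in common notation (as an illustration, not a quotation of the text), the two key results read

\[
|G| = |H| \, [G : H] \quad \text{for a subgroup } H \subseteq G \ \text{(Lagrange's Theorem)},
\]
\[
G / \ker \varphi \;\cong\; \operatorname{image} \varphi \quad \text{for any homomorphism } \varphi : G \to G',
\]

the second arising because a homomorphism that is trivial on H = ker φ descends to the quotient G/H.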

Section 3 introduces general direct products of groups and direct sums of abelian groups, together with their concrete “external” versions and their universal mapping properties.

Sections 4–5 are a digression to define rings, fields, and ring homomorphisms, and to extend the theories concerning polynomials and vector spaces as presented in Chapters I–II. The immediate purpose of the digression is to make prime fields and the notion of characteristic available for the remainder of the chapter. The definitions of polynomials are extended to allow coefficients from any commutative ring with identity and to allow more than one indeterminate, and universal mapping properties for polynomial rings are proved.

Sections 6–7 introduce group actions. Section 6 gives some geometric examples beyond those in Section 1, it establishes a counting formula concerning orbits and isotropy subgroups, and it develops some structure theory of groups by examining specific group actions on the group and its coset spaces. Section 7 uses a group action by automorphisms to define the semidirect product of two groups. This construction, in combination with results from Sections 5–6, allows one to form several new finite groups of interest.

Section 8 defines simple groups, proves that alternating groups on five or more letters are simple, and then establishes the Jordan-Hölder Theorem concerning the consecutive quotients that arise from composition series.

Section 9 deals with finitely generated abelian groups. It is proved that “rank” is well defined for any finitely generated free abelian group, that a subgroup of a free abelian group of finite rank is always free abelian, and that any finitely generated abelian group is the direct sum of cyclic groups.

Section 10 returns to structure theory for finite groups. It begins with the Sylow Theorems, which produce subgroups of prime-power order, and it gives two sample applications. One of these classifies the groups of order pq, where p and q are distinct primes, and the other provides the information necessary to classify the groups of order 12.

Section 11 introduces the language of “categories” and “functors.” The notion of category is a precise version of what is sometimes called a “context” at points in the book before this section, and some of the “constructions” in the book are examples of “functors.” The section treats in this language the notions of “product” and “coproduct,” which are abstractions of “direct product” and “direct sum.”

Pp. 117-210

Theory of a Single Linear Transformation

Anthony W. Knapp

The goal of this chapter is to find finitely many canonical representatives of each similarity class of square matrices with entries in a field and correspondingly of each isomorphism class of linear maps from a finite-dimensional vector space to itself.

Section 1 frames the problem in more detail. Section 2 develops the theory of determinants over a commutative ring with identity in order to be able to work easily with characteristic polynomials det(λI − A). The discussion is built around the principle of “permanence of identities,” which allows for passage from certain identities with integer coefficients to identities with coefficients in the ring in question.

Section 3 introduces the minimal polynomial of a square matrix or linear map. The Cayley-Hamilton Theorem establishes that such a matrix satisfies its characteristic equation, and it follows that the minimal polynomial divides the characteristic polynomial. It is proved that a matrix is similar to a diagonal matrix if and only if its minimal polynomial is the product of distinct factors of degree 1. In combination with the fact that two diagonal matrices are similar if and only if their diagonal entries are permutations of one another, this result solves the canonical-form problem for matrices whose minimal polynomial is the product of distinct factors of degree 1.
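
A small illustration of the diagonalizability criterion (my example, not one from the text):

\[
A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad
\text{characteristic and minimal polynomial } (\lambda - 1)^{2},
\]

so the minimal polynomial is not a product of distinct degree-1 factors and A is not similar to any diagonal matrix, whereas the identity matrix has minimal polynomial λ − 1 and is already diagonal.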

Section 4 introduces general projection operators from a vector space to itself and relates them to vector-space direct-sum decompositions with finitely many summands. The summands of a direct-sum decomposition are invariant under a linear map if and only if the linear map commutes with each of the projections associated to the direct-sum decomposition.

Section 5 concerns the Primary Decomposition Theorem, whose subject is the operation of a linear map L : V → V with V finite-dimensional. The statement is that if the minimal polynomial of L factors as a product of powers of distinct monic prime polynomials, then V has a unique direct-sum decomposition in which the respective summands are the kernels of the corresponding prime powers evaluated at L, and moreover the minimal polynomial of the restriction of L to each summand is the corresponding prime power.

Sections 6–7 concern Jordan canonical form. For the case that the prime factors of the minimal polynomial of a square matrix all have degree 1, the main theorem gives a canonical form under similarity, saying that a given matrix is similar to one in “Jordan form” and that the Jordan form is completely determined up to permutation of the constituent blocks. The theorem applies to all square matrices if the field is algebraically closed, as is the case for ℂ. The theorem is stated and proved in Section 6, and Section 7 shows how to make computations in two different ways.
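
As an illustration of the shape of a Jordan form (my example, with the 1’s placed on the superdiagonal, one common convention):

\[
J_{3}(\lambda) = \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix},
\qquad
\begin{pmatrix} 5 & 1 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 7 \end{pmatrix} = J_{2}(5) \oplus J_{1}(7),
\]

and the theorem asserts that the multiset of blocks determines, and is determined by, the similarity class.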

Pp. 211-247

Multilinear Algebra

Anthony W. Knapp

This chapter studies, in the setting of vector spaces over a field, the basics concerning multilinear functions, tensor products, spaces of linear functions, and algebras related to tensor products.

Sections 1–5 concern special properties of bilinear forms, all vector spaces being assumed to be finite-dimensional. Section 1 associates a matrix to each bilinear form in the presence of an ordered basis, and the section shows the effect on the matrix of changing the ordered basis. It then addresses the extent to which the notion of “orthogonal complement” in the theory of inner-product spaces applies to nondegenerate bilinear forms. Sections 2–3 treat symmetric and alternating bilinear forms, producing bases for which the matrix of such a form is particularly simple. Section 4 treats a related subject, Hermitian forms when the field is the complex numbers. Section 5 discusses the groups that leave some particular bilinear and Hermitian forms invariant.

Section 6 introduces the tensor product of two vector spaces, working with it in a way that does not depend on a choice of basis. The tensor product has a universal mapping property—that bilinear functions on the product of the two vector spaces extend uniquely to linear functions on the tensor product. The tensor product turns out to be a vector space whose dual is the vector space of all bilinear forms. One particular application is that tensor products provide a basis-independent way of extending scalars for a vector space from a field to a larger field. The section includes a number of results about the vector space of linear mappings from one vector space to another that go hand in hand with results about tensor products. These have convenient formulations in the language of category theory as “natural isomorphisms.”
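
In the usual formulation (notation mine, not necessarily the book's), the universal mapping property says that every bilinear map b : U × V → W factors uniquely through the tensor product:

\[
B(u \otimes v) = b(u, v) \quad \text{for a unique linear map } B : U \otimes V \to W,
\]

and in the finite-dimensional case dim(U ⊗ V) = (dim U)(dim V), with the dual of U ⊗ V identified with the space of bilinear forms on U × V.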

Section 7 begins with the tensor product of three and then n vector spaces, carefully considering the universal mapping property and the question of associativity. The section defines an algebra over a field as a vector space with a bilinear multiplication, not necessarily associative. If E is a vector space, the tensor algebra T(E) of E is the direct sum over n ≥ 0 of the n-fold tensor product of E with itself. This is an associative algebra with a universal mapping property relative to any linear mapping of E into an associative algebra A with identity: the linear map extends to an algebra homomorphism of T(E) into A carrying 1 into 1.

Sections 8–9 define the symmetric and exterior algebras of a vector space E. The symmetric algebra S(E) is a quotient of T(E) with the following universal mapping property: any linear mapping of E into a commutative associative algebra A with identity extends to an algebra homomorphism of S(E) into A carrying 1 into 1. The symmetric algebra is commutative. Similarly the exterior algebra ⋀(E) is a quotient of T(E) with this universal mapping property: any linear mapping l of E into an associative algebra A with identity such that l(v)² = 0 for all v in E extends to an algebra homomorphism of ⋀(E) into A carrying 1 into 1.
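
For orientation, the homogeneous pieces of these algebras have the standard dimension counts (stated in common notation, not necessarily the book's): if dim E = d, then

\[
\dim S^{n}(E) = \binom{n + d - 1}{n}, \qquad \dim {\textstyle\bigwedge^{n}}(E) = \binom{d}{n},
\]

so in particular the nth exterior power vanishes for n > d.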

The problems at the end of the chapter introduce some other algebras that are of importance in applications, and the problems relate some of these algebras to tensor, symmetric, and exterior algebras. Among the objects studied are Lie algebras, universal enveloping algebras, Clifford algebras, Weyl algebras, Jordan algebras, and the division algebra of octonions.

Pp. 248-305

Advanced Group Theory

Anthony W. Knapp

This chapter continues the development of group theory begun in Chapter IV, the main topics being the use of generators and relations, representation theory for finite groups, and group extensions. Representation theory uses linear algebra and inner-product spaces in an essential way, and a structure-theory theorem for finite groups is obtained as a consequence. Group extensions introduce the subject of cohomology of groups.

Sections 1–3 concern generators and relations. The context for generators and relations is that of a free group on the set of generators, and the relations indicate passage to a quotient of this free group by a normal subgroup. Section 1 constructs free groups in terms of words built from an alphabet and shows that free groups are characterized by a certain universal mapping property. This universal mapping property implies that any group may be defined by generators and relations. Computations with free groups are aided by the fact that two reduced words yield the same element of a free group if and only if the reduced words are identical. Section 2 obtains the Nielsen-Schreier Theorem that subgroups of free groups are free. Section 3 enlarges the construction of free groups to the notion of the free product of an arbitrary set of groups. Free product is what coproduct is for the category of groups; free groups themselves may be regarded as free products of copies of the integers.
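
The fact that two reduced words represent the same element of a free group exactly when they are identical suggests a small computational sketch (my own, not from the book; a word is a list of (generator, ±1) pairs):

    def reduce_word(word):
        # Cancel adjacent inverse pairs, e.g. ("a", 1) followed by ("a", -1).
        stack = []
        for letter in word:
            if stack and stack[-1][0] == letter[0] and stack[-1][1] == -letter[1]:
                stack.pop()          # x x^{-1} or x^{-1} x cancels
            else:
                stack.append(letter)
        return stack

    # a a^{-1} b reduces to b; two words are equal in the free group
    # exactly when their reduced forms coincide.
    assert reduce_word([("a", 1), ("a", -1), ("b", 1)]) == [("b", 1)]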

Sections 4–5 introduce representation theory for finite groups and give an example of an important application whose statement lies outside representation theory. Section 4 contains various results giving an analysis of the space C(G, ℂ) of all complex-valued functions on a finite group G. In this analysis those functions that are constant on conjugacy classes are shown to be linear combinations of the characters of the irreducible representations. Section 5 proves Burnside’s Theorem as an application of this theory: that any finite group of order p^a q^b with p and q prime and with a + b > 1 has a nontrivial normal subgroup.
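
The analysis of C(G, ℂ) rests on the standard inner product and the orthogonality of irreducible characters (stated here as an illustration, in common notation):

\[
\langle f_{1}, f_{2} \rangle = \frac{1}{|G|} \sum_{g \in G} f_{1}(g) \overline{f_{2}(g)}, \qquad
\langle \chi_{i}, \chi_{j} \rangle = \delta_{ij},
\]

and the irreducible characters form an orthonormal basis of the subspace of functions constant on conjugacy classes.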

Section 6 introduces cohomology of groups in connection with group extensions. If N is to be a normal subgroup of G and Q is to be isomorphic to G/N, the first question is to parametrize the possibilities for G up to isomorphism. A second question is to parametrize the possibilities for G if G is to be a semidirect product of N and Q.

Pp. 306-369

Commutative Rings and Their Modules

Anthony W. Knapp

This chapter amplifies the theory of commutative rings that was begun in Chapter IV, and it introduces modules for any ring. Emphasis is on the topic of unique factorization.

Section 1 gives many examples of rings, some commutative and some noncommutative, and introduces the notion of a module for a ring.

Sections 2–4 discuss some of the tools related to questions of factorization in integral domains. Section 2 defines the field of fractions for an integral domain and gives its universal mapping property. Section 3 defines prime and maximal ideals and relates quotients of them to integral domains and fields. Section 4 introduces principal ideal domains, which are shown to have unique factorization, and it defines Euclidean domains as a special kind of principal ideal domain for which greatest common divisors can be obtained constructively.
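
For the prototype Euclidean domain ℤ, the constructive computation of greatest common divisors can be sketched by the extended Euclidean algorithm, which also exhibits gcd(a, b) as an element of the ideal generated by a and b (a sketch of mine, not code from the book; the function name is mine):

    def extended_gcd(a, b):
        # Return (g, x, y) with g = gcd(a, b) and g = a*x + b*y.
        old_r, r = a, b
        old_x, x = 1, 0
        old_y, y = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_x, x = x, old_x - q * x
            old_y, y = y, old_y - q * y
        return old_r, old_x, old_y

    g, x, y = extended_gcd(240, 46)
    assert g == 2 and 240 * x + 46 * y == g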

Section 5 proves that if R is an integral domain with unique factorization, then so is the polynomial ring R[X]. This result is a consequence of Gauss’s Lemma, which addresses what happens to the greatest common divisor of the coefficients when one multiplies two members of R[X]. Gauss’s Lemma has several other consequences that relate factorization in R[X] to factorization in F[X], where F is the field of fractions of R. Still another consequence is Eisenstein’s irreducibility criterion, which gives a sufficient condition for a member of R[X] to be irreducible.
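
A standard illustration of Eisenstein’s criterion (my example, not the book’s):

\[
X^{n} - p \ \text{is irreducible over } \mathbb{Q} \ \text{for any prime } p \ \text{and any } n \ge 1,
\]

since p divides every coefficient except the leading one and p² does not divide the constant term; in particular X³ − 2 is irreducible over ℚ.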

Section 6 contains the theorem that every finitely generated unital module over a principal ideal domain is a direct sum of cyclic modules. The cyclic modules may be assumed to be primary in a suitable sense, and then the isomorphism types of the modules appearing in the direct-sum decomposition, together with their multiplicities, are uniquely determined. The main results transparently generalize the Fundamental Theorem for Finitely Generated Abelian Groups, and less transparently they generalize the existence and uniqueness of Jordan canonical form for square matrices with entries in an algebraically closed field.
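
A concrete instance of the abelian-group case (my illustration, in standard notation):

\[
\mathbb{Z}/6 \oplus \mathbb{Z}/4 \;\cong\; \mathbb{Z}/2 \oplus \mathbb{Z}/4 \oplus \mathbb{Z}/3 \;\cong\; \mathbb{Z}/2 \oplus \mathbb{Z}/12,
\]

where the middle expression displays the primary (prime-power) cyclic summands and the right-hand expression displays the invariant factors 2 | 12.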

Sections 7–11 contain foundational material related to factorization for the two subjects of algebraic number theory and algebraic geometry. Both these subjects rely heavily on the theory of commutative rings. Section 7 is a section of motivation, showing the analogy between a situation in algebraic number theory and a situation in algebraic geometry. Sections 8–10 introduce Noetherian rings, integral closures, and localizations. Section 11 uses this material to establish unique factorization of ideals for Dedekind domains, as well as some other properties.

Pp. 370-451

Fields and Galois Theory

Anthony W. Knapp

This chapter develops some general theory for field extensions and then goes on to study Galois groups and their uses. More than half the chapter illustrates by example the power and usefulness of the theory of Galois groups. Prerequisite material from Chapter VIII consists of Sections 1–6 for Sections 1–13 of the present chapter, and it consists of all of Chapter VIII for Sections 14–17 of the present chapter.

Sections 1–2 introduce field extensions. These are inclusions of a base field in a larger field. The fundamental construction is of a simple extension, algebraic or transcendental, and the next construction is of a splitting field. An algebraic simple extension is made by adjoining a root of an irreducible polynomial over the base field, and a splitting field is made by adjoining all the roots of such a polynomial. For both constructions, there are existence and uniqueness theorems.

Section 3 classifies finite fields. For each integer q that is a power of some prime number, there exists one and only one finite field of order q, up to isomorphism. One finite field is an extension of another, apart from isomorphisms, if and only if the order of the first field is a power of the order of the second field.
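
For example (an illustration in standard notation, not taken from the text):

\[
\mathbb{F}_{4} = \mathbb{F}_{2}[X] / (X^{2} + X + 1),
\]

and 𝔽₁₆ contains a copy of 𝔽₄ because 16 = 4², while 𝔽₈ does not, because 8 is not a power of 4.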

Section 4 concerns algebraic closure. Any field has an algebraic extension in which each nonconstant polynomial over the extension field has a root. Such a field exists and is unique up to isomorphism.

Section 5 applies the theory of Sections 1–2 to the problem of constructibility with straightedge and compass. First the problem is translated into the language of field theory. Then it is shown that three desired constructions from antiquity are impossible: “doubling a cube,” trisecting an arbitrary constructible angle, and “squaring a circle.” The full proof of the impossibility of squaring a circle uses the fact that π is transcendental over the rationals, and the proof of this property of π is deferred to Section 14. Section 5 concludes with a statement of the theorem of Gauss identifying the integers n such that a regular n-gon is constructible and with some preliminary steps toward its proof.
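
The flavor of the impossibility arguments can be indicated in standard terms (a sketch of mine, not a quotation): every constructible length lies in a field obtained from ℚ by a tower of quadratic extensions, so its degree over ℚ is a power of 2, whereas doubling the cube would require constructing the cube root of 2 and

\[
[\mathbb{Q}(\sqrt[3]{2}) : \mathbb{Q}] = 3,
\]

which is not a power of 2.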

Sections 6–8 introduce Galois groups and develop their theory. The theory applies to a field extension with three properties—that it is finite-dimensional, separable, and normal. Such an extension is called a “finite Galois extension.” The Fundamental Theorem of Galois Theory says in this case that the intermediate extensions are in one-one correspondence with subgroups of the Galois group, and it gives formulas relating the corresponding intermediate fields and Galois subgroups.
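
A standard example of the correspondence (my illustration, in common notation): for K = ℚ(√2, √3),

\[
\mathrm{Gal}(K/\mathbb{Q}) \cong \mathbb{Z}/2 \times \mathbb{Z}/2,
\]

and its three subgroups of order 2 correspond to the three intermediate fields ℚ(√2), ℚ(√3), and ℚ(√6), with larger subgroups matching smaller fields.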

Sections 9–11 give three standard initial applications of Galois groups. The first is to proving the theorem of Gauss about constructibility of regular n-gons, the second is to deriving the Fundamental Theorem of Algebra from the Intermediate Value Theorem, and the third is to proving the necessity of the condition of Abel and Galois for solvability of polynomial equations by radicals: that the Galois group of the splitting field of the polynomial have a composition series with abelian quotients.

Sections 12–13 begin to derive quantitative information, rather than qualitative information, from Galois groups. Section 12 shows how an appropriate Galois group points to the specific steps in the construction of a regular n-gon when the construction is possible. Section 13 introduces a tool known as Lagrange resolvents, a precursor of modern harmonic analysis. Lagrange resolvents are used first to show that Galois extensions in characteristic 0 with cyclic Galois group of prime order p are simple extensions obtained by adjoining a pth root, provided all the pth roots of 1 lie in the base field. Lagrange resolvents and this theorem about cyclic Galois groups combine to yield a derivation of Cardan’s formula for solving general cubic equations.
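
For reference, Cardan’s formula in one common normalization, for the depressed cubic x³ + px + q = 0 (the book’s normalization may differ, and the two cube roots must be chosen so that their product is −p/3):

\[
x = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}} \;+\; \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^{2}}{4} + \frac{p^{3}}{27}}}.
\]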

Section 14 begins the part of the chapter that depends on results in the later sections of Chapter VIII. Section 14 itself contains a proof that π is transcendental; the proof is a nice illustration of the interplay of algebra and elementary real analysis.

Section 15 introduces the field polynomial of an element in a finite-dimensional extension field. The norm and trace of the element are defined in terms of this polynomial. The section gives various formulas for the norm and trace, including formulas involving Galois groups. With these formulas in hand, the section concludes by completing the proof of Theorem 8.54 about extending Dedekind domains, part of the proof having been deferred from Section VIII.11.
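
For a finite Galois extension K/F with Galois group G, the formulas involving Galois groups take the familiar form (stated as an illustration; the book's notation may differ):

\[
N_{K/F}(x) = \prod_{\sigma \in G} \sigma(x), \qquad \operatorname{Tr}_{K/F}(x) = \sum_{\sigma \in G} \sigma(x),
\]

so that, for example, N(a + b√2) = a² − 2b² and Tr(a + b√2) = 2a in ℚ(√2)/ℚ.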

Section 16 discusses how prime ideals split when one passes, for example, from the integers to the algebraic integers in a number field. The topic here was broached in the motivating examples for algebraic number theory and algebraic geometry as introduced in Section VIII.7, and it was the main topic of concern in that section. The present results put matters into a wider context.

Section 17 gives two tools that sometimes help in identifying Galois groups, particularly of splitting fields of monic polynomials with integer coefficients. One tool uses the discriminant of the polynomial. The other uses reduction of the coefficients modulo various primes.

Pp. 452-552

Modules over Noncommutative Rings

Anthony W. Knapp

This chapter contains two sets of tools for working with modules over a ring with identity. The first set concerns finiteness conditions on modules, and the second set concerns the Hom and tensor product functors.

Sections 1–3 concern finiteness conditions on modules. Section 1 deals with simple and semisimple modules. A simple module over a ring is a nonzero unital module with no proper nonzero submodules, and a semisimple module is a module generated by simple modules. It is proved that semisimple modules are direct sums of simple modules and that any quotient or submodule of a semisimple module is semisimple. Section 2 establishes an analog for modules of the Jordan-Hölder Theorem for groups that was proved in Chapter IV; the theorem says that any two composition series have matching consecutive quotients, apart from the order in which they appear. Section 3 shows that a module has a composition series if and only if it satisfies both the ascending chain condition and the descending chain condition for its submodules.

Sections 4–6 concern the Hom and tensor product functors. Section 4 regards Hom_R(M, N), where M and N are unital left R modules, as a contravariant functor of the M variable and as a covariant functor of the N variable. The section examines the interaction of these functors with the direct sum and direct product functors, the relationship between Hom and matrices, the role of bimodules, and the use of Hom to change the underlying ring. Section 5 introduces the tensor product M ⊗_R N of a unital right R module M and a unital left R module N, regarding tensor product as a covariant functor of either variable. The section examines the effect of interchanging M and N, the interaction of tensor product with direct sum, an associativity formula for triple tensor products, an associativity formula involving a mixture of Hom and tensor product, and the use of tensor product to change the underlying ring. Section 6 introduces the notions of a complex and an exact sequence in the category of all unital left R modules and in the category of all unital right R modules. It shows the extent to which the Hom and tensor product functors respect exactness for part of a short exact sequence, and it gives examples of how Hom and tensor product may fail to respect exactness completely.
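
A standard example of this partial exactness and of its failure (my illustration, not the book's): applying ℤ/2 ⊗_ℤ (−) to the exact sequence

\[
0 \to \mathbb{Z} \xrightarrow{\times 2} \mathbb{Z} \to \mathbb{Z}/2 \to 0
\]

yields ℤ/2 → ℤ/2 → ℤ/2 → 0 in which the first map is multiplication by 2, hence zero: exactness survives on the right because tensoring is right exact, but injectivity on the left is lost; dually, Hom is left exact but need not preserve surjectivity.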

Pp. 553-591