Basic Terminology


Introduction

Rings are ubiquitous throughout mathematics and its applications. As students of mathematics, our first examples are commutative rings such as $\ZZ$; later on, we are introduced to matrix rings as canonical examples of noncommutative rings. Our focus will be on a general theory that does not assume commutativity, with emphasis on the noncommutative case.

Ideals and Quotient Rings

[definition] [deftitle]Definition %counter% (Ideals)[/deftitle]

Let $R$ be a ring. A left ideal $I$ of $R$ is a subset of $R$ such that:

  • $0 \in I$.
  • For every $x, y\in I$ and $r,s\in R$, we have $rx + sy \in I$.

We say that $I \subseteq R$ is a right ideal of $R$ provided that:

  • $0\in I$.
  • For every $x, y\in I$ and $r, s\in R$, we have $xr + ys \in I$.

Finally, if $I$ is a 2-sided ideal (both a left and right ideal), we say that $I$ is an ideal. [/definition]

We define the quotient $\overline{R} \coloneqq R/I$ in the same way we define quotients of commutative rings.

[definition] [deftitle]Definition %counter% (Quotient rings)[/deftitle]

Let $R$ be a ring and $I$ be an ideal in $R$. Then we define the quotient ring

\[\overline{R} \coloneqq R/I \coloneqq \{\overline{r} \coloneqq r + I \mid r \in R\}\]

to be the set of cosets of $I$. We define $\overline{r} + \overline{s}$ to be $\overline{r+s}$ and $\overline{r}\cdot\overline{s}$ to be $\overline{rs}$. [/definition]

[example] [extitle]Remark %counter%[/extitle]

The requirement that $I$ be two-sided is needed for the multiplication on $R/I$ to be well defined. If

\[(r + I)(s + I) = rs + I,\]

then $(r+x)(s+y)$ (where $x,y\in I$) needs to be an element of $rs + I$. Note

\[(r + x)(s + y) = rs + ry + xs + xy.\]

For $ry \in I$ we need $I$ to be a left ideal, and for $xs \in I$, we need $I$ to be a right ideal. Whether $I$ is a left or a right ideal, we automatically get $xy \in I$. [/example]
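For a concrete illustration (a quick sympy sketch; the particular left ideal is chosen only for this purpose), take $I \subseteq M_2(\RR)$ to be the set of matrices whose second column is zero. This is a left ideal that is not a right ideal, and coset multiplication modulo $I$ is not well defined:

```python
from sympy import Matrix

# I = {matrices in M_2(R) whose second column is zero} is a left ideal:
# left multiplication preserves a zero column.  It is not a right ideal,
# and multiplication of cosets mod I is not well defined.

def in_I(m):
    """Membership test for I: second column identically zero."""
    return m[0, 1] == 0 and m[1, 1] == 0

r = Matrix([[1, 2], [3, 4]])            # arbitrary representatives
s = Matrix([[5, 6], [7, 8]])
x = Matrix([[1, 0], [0, 0]])            # x lies in I, so r + x and r give the same coset

print(in_I(x))                          # True
print(in_I((r + x) * s - r * s))        # False: (r+x)s and rs lie in different cosets
```

Here the offending term is exactly $xs \notin I$, matching the remark above.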

It is clear that we have a natural surjective ring homomorphism $R \to R/I$ that sends $r \mapsto \overline{r}$. The kernel of this map is $I$.

Simple Rings

[definition] [deftitle]Definition %counter% (Simple rings)[/deftitle]

Let $R$ be a nonzero ring. Then $R$ is simple if its only ideals are $0$ and $R$. [/definition]

[theorem] [thmtitle]Proposition %counter% (Commutative simple rings are fields)[/thmtitle]

Let $R$ be a commutative ring. Then $R$ is simple if and only if $R$ is a field. [/theorem]

[proof] ($\implies$) If $R$ is simple, then for every nonzero $a \in R$ the ideal $\generator{a} = R$, which implies that there is some $r\in R$ for which $ra = 1$. So every nonzero $a$ is a unit and $R$ is a field.

($\impliedby$) Let $I$ be an ideal of $R$. If $I = 0$, then there is nothing to show. If $I \neq 0$, then any nonzero $r\in I$ is a unit, so $I$ contains $r^{-1}r = 1$, which means that $I = R$. [/proof]

There is an alternative way to characterize simplicity of a ring:

[theorem] [thmtitle]Proposition %counter%[/thmtitle]

Let $R$ be a nonzero ring. Then $R$ is simple if and only if for any nonzero $a \in R$, there are finitely many $b_i, c_i\in R$ such that

\[\sum_{i} b_i a c_i = 1.\]

[/theorem]

[proof] ($\implies$) If $R$ is simple and $a \neq 0$, then the two-sided ideal $RaR$ generated by $a$ equals $R$. So $1\in RaR$, which gives an expression of the desired form.

($\impliedby$) Let $I$ be a nonzero ideal and pick a nonzero $a \in I$. Then $1 = \sum_i b_i a c_i \in I$, so $I = R$. Thus $R$ is simple. [/proof]
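To see the proposition in action, recall that $M_n(k)$ is simple for any field $k$. The following sympy sketch (the sample matrix $a$ and the helper $E$ are just for illustration) produces an explicit expression $\sum_i b_i a c_i = 1$ in $M_2(\RR)$ using the matrix units $E_{ij}$: if $a_{kl} \neq 0$, take $b_i = a_{kl}^{-1}E_{ik}$ and $c_i = E_{li}$.

```python
from sympy import Matrix, eye, zeros

def E(i, j, n=2):
    """Matrix unit E_{ij}: 1 in position (i, j), zeros elsewhere."""
    m = zeros(n)
    m[i, j] = 1
    return m

a = Matrix([[0, 3], [5, 7]])            # any nonzero matrix works
# locate a nonzero entry a[k, l]
k, l = next((i, j) for i in range(2) for j in range(2) if a[i, j] != 0)

total = zeros(2)
for i in range(2):
    b_i = E(i, k) / a[k, l]             # b_i = a_{kl}^{-1} E_{ik}
    c_i = E(l, i)                       # c_i = E_{li}
    total += b_i * a * c_i              # each summand equals E_{ii}

print(total == eye(2))                  # True
```

The identity behind the code is $E_{ik}\, a\, E_{li} = a_{kl} E_{ii}$, so $\sum_i E_{ik}\, a\, E_{li} = a_{kl}\cdot 1$.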

Zero-divisors and Domains

[definition] [deftitle]Definition %counter% (Zero-divisors)[/deftitle]

Let $R$ be a ring.

  • A nonzero element $a\in R$ is a left $0$-divisor provided that there is some nonzero $b\in R$ such that $ab = 0$.
  • Similarly, a nonzero element $a \in R$ is a right $0$-divisor provided that there is some nonzero $b \in R$ such that $ba = 0$. [/definition]

In commutative rings, there is no need to distinguish left and right $0$-divisors. In noncommutative ring theory, however, the distinction is important: it is possible to have left $0$-divisors that are not right $0$-divisors.

[example] [extitle]Example %counter% (Left $0$-divisor that is not a right $0$-divisor in $\End(\ZZ^\NN)$)[/extitle]

Let $\End(\ZZ^\NN)$ be the endomorphism ring of the abelian group $\ZZ^\NN$. Addition in this ring is defined pointwise and multiplication is functional composition. Let $L, R, P \in \End(\ZZ^\NN)$ be defined by

\[L(n_1, n_2, n_3, \ldots) \coloneqq (n_2, n_3, n_4, \ldots)\]

and

\[R(n_1, n_2, n_3, \ldots) \coloneqq (0, n_1, n_2, \ldots)\]

and

\[P(n_1, n_2, n_3, \ldots) \coloneqq (n_1, 0, 0, \ldots).\]

Then $LP = 0$ and $PR = 0$, which implies that $L$ is a left $0$-divisor and $R$ is a right $0$-divisor; however, $L$ is not a right $0$-divisor (it is surjective, so $TL = 0$ forces $T = 0$) and $R$ is not a left $0$-divisor (it is injective, so $RT = 0$ forces $T = 0$). [/example]
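These identities are also easy to check in a finite model: the Python sketch below represents a sequence by a list of its first few coordinates (a truncation, so it only illustrates the identities on finitely many coordinates).

```python
# A finite-prefix model of Z^N: a sequence is a Python list of its first
# few coordinates; trailing zeros are simply dropped.

def L(seq):                     # left shift: drop the first entry
    return seq[1:]

def R(seq):                     # right shift: prepend a zero
    return [0] + seq

def P(seq):                     # keep the first coordinate, zero out the rest
    return [seq[0]] + [0] * (len(seq) - 1)

x = [1, 2, 3, 4, 5]

print(L(P(x)))                  # LP = 0: all zeros
print(P(R(x)))                  # PR = 0: all zeros
```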

[example] [extitle]Example %counter% (Left $0$-divisor that is not a right $0$-divisor in a triangular matrix ring)[/extitle]

An example that Lam gives is the ring

\[R = \mqty[\ZZ & \ZZ/2\ZZ \\ 0 & \ZZ].\]

In particular, if

\[A = \mqty[2 & 0 \\ 0 & 1] \quad\text{ and }\quad B = \mqty[0 & \overline{1} \\ 0 & 0],\]

then $A$ is a left $0$-divisor (because $AB = 0$ and $B \neq 0$) but not a right $0$-divisor. To see the latter claim, suppose

\[0 = \mqty[x & y \\ 0 & z]\mqty[2 & 0 \\ 0 & 1] = \mqty[2x & y \\ 0 & z].\]

Then $2x = 0$, $y = \overline{0}$, and $z = 0$, so the only solution is the zero matrix. Thus, $A$ is not a right $0$-divisor. It is straightforward to see that $B^2 = 0$, so $B$ is both a left and a right $0$-divisor. [/example]
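The arithmetic in this ring can be modeled by triples $(a, \overline{b}, d)$ with $a, d \in \ZZ$ and $\overline{b} \in \ZZ/2\ZZ$, multiplied like upper-triangular matrices. The Python sketch below (the brute-force search range is arbitrary) verifies $AB = 0$ and finds no small nonzero $X$ with $XA = 0$, consistent with the computation above:

```python
from itertools import product

# Elements of R = [[Z, Z/2Z], [0, Z]] as triples (a, b, d): a, d in Z, b mod 2.
# Upper-triangular multiplication: (a, b, d)(a', b', d') = (aa', ab' + bd' mod 2, dd').

def mul(x, y):
    a, b, d = x
    ap, bp, dp = y
    return (a * ap, (a * bp + b * dp) % 2, d * dp)

A = (2, 0, 1)                   # the matrix diag(2, 1)
B = (0, 1, 0)                   # the matrix with bar{1} in the corner

print(mul(A, B))                # (0, 0, 0): AB = 0, so A is a left 0-divisor

# Search a small box for a nonzero X with XA = 0; none exists.
witnesses = [x for x in product(range(-3, 4), range(2), range(-3, 4))
             if x != (0, 0, 0) and mul(x, A) == (0, 0, 0)]
print(witnesses)                # []
```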

[definition] [deftitle]Definition %counter% (Domain)[/deftitle]

A ring $R$ is called a domain if it is nonzero and $ab = 0$ implies $a = 0$ or $b = 0$. Equivalently, $R$ has no $0$-divisors (left or right). [/definition]

[example] [extitle]Remark %counter%[/extitle]

If $R$ is a commutative domain, then we say that $R$ is an integral domain, which already supplies a wealth of examples. [/example]

Reduced Rings

[definition] [deftitle]Definition %counter% (Reduced Rings)[/deftitle]

A ring $R$ is called reduced if it has no nonzero nilpotent elements. [/definition]

[example] [extitle]Example %counter%[/extitle]

The direct product of a family of domains is reduced: if $x = (a_i)$ satisfies $x^n = 0$, then $a_i^n = 0$ in each domain, which forces every $a_i = 0$. [/example]

Inverses, Units, and Division Rings

[definition] [deftitle]Definition %counter% (Left/right-invertible elements)[/deftitle]

Let $R$ be a ring. We say that $a\in R$ is right-invertible if there is some $b\in R$ such that $ab = 1$ and $b$ is said to be a right-inverse of $a$. Left-invertible elements and left inverses are defined similarly. [/definition]

[theorem] [thmtitle]Proposition %counter%[/thmtitle]

Let $R$ be a ring with an element $a$ that has right inverse $b$ and left inverse $b'$. Then $b = b'$. [/theorem]

[proof] As $ab = 1 = b’a$, it follows that

\[b' = b'(ab) = (b'a)b = b.\]

[/proof]

[definition] [deftitle]Definition %counter% (Invertible elements)[/deftitle]

Let $R$ be a ring. If $a$ has both a left inverse and a right inverse, then they coincide by the previous proposition; we call this common element $b$ the inverse of $a$ and say that $a$ is invertible (or a unit). We denote the set of units in $R$ by $U(R)$. [/definition]

[definition] [deftitle]Definition %counter% (Dedekind-finite rings)[/deftitle]

We say a ring $R$ is Dedekind-finite (or von Neumann-finite) provided that every right-invertible element is left-invertible; equivalently, $ab = 1$ implies $ba = 1$. [/definition]

[example] [extitle]Example %counter% ($\End_k(V)$ is Dedekind-finite if $\dim_k(V) < \infty$)[/extitle]

Let $V$ be a finite-dimensional $k$-vector space. Then we know that $\End_k(V)$ is a Dedekind-finite ring. In particular, this says that a right-invertible operator has a left inverse, which is one of the first results we learn about linear operators in linear algebra.

Say that $A, B \in \End_k(V)$ with $AB = I$ — that is, $A$ is right-invertible. This implies that $A$ is a surjection since if $x\in V$, then

\[x = Ix = (AB)x = A(Bx).\]

By the Rank-Nullity Theorem, $A$ has a $0$-dimensional kernel so $A$ is also injective. Thus, $A$ is a bijection which implies two-sided invertibility. [/example]
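As a quick sanity check of this fact (using sympy; the particular matrix is arbitrary, and its right inverse is computed directly), a right inverse of a square matrix over a field is automatically a left inverse:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 0], [0, 1, 3], [4, 0, 1]])   # an arbitrary invertible matrix
B = A.inv()                                      # its (unique) right inverse

print(A * B == eye(3))                           # True: B is a right inverse
print(B * A == eye(3))                           # True: B is also a left inverse
```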

[example] [extitle]Example %counter% ($\End(\ZZ^\NN)$ is not Dedekind-finite)[/extitle]

Let $\End(\ZZ^\NN)$ be the endomorphism ring of the abelian group $\ZZ^\NN$. Addition in this ring is defined pointwise and multiplication is functional composition. Let $L, R \in \End(\ZZ^\NN)$ be defined by

\[L(n_1, n_2, n_3, \ldots) \coloneqq (n_2, n_3, n_4, \ldots)\]

and

\[R(n_1, n_2, n_3, \ldots) \coloneqq (0, n_1, n_2, \ldots).\]

Clearly, $LR = 1$ but $RL \neq 1$. So $L$ is right-invertible but not left-invertible (if it were, its left inverse $T$ would satisfy $T = T(LR) = (TL)R = R$, forcing $RL = 1$). [/example]
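In the finite-prefix model from before, both phenomena are visible immediately (again, this only models finitely many coordinates):

```python
def L(seq):                     # left shift: drop the first entry
    return seq[1:]

def R(seq):                     # right shift: prepend a zero
    return [0] + seq

x = [1, 2, 3, 4, 5]

print(L(R(x)) == x)             # True:  LR acts as the identity
print(R(L(x)) == x)             # False: RL zeroes out the first coordinate
```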

[definition] [deftitle]Definition %counter% (Division rings)[/deftitle]

A ring $R$ is a division ring if it is nonzero and $U(R) = R\setminus \{0\}$. [/definition]

[theorem] [thmtitle]Proposition %counter%[/thmtitle]

Let $R$ be a nonzero ring. Then $R$ is a division ring if and only if its only right ideals are $\{0\}$ and $R$. [/theorem]

[proof] Note that $R$ is a division ring if and only if every nonzero $a \in R$ is right-invertible: if every nonzero element has a right inverse, then from $ab = 1$ we get a right inverse $c$ of $b$, and $a = a(bc) = (ab)c = c$, so $ba = 1$ as well. This latter property holds if and only if $aR = R$ for every nonzero $a$, which holds if and only if the only right ideals of $R$ are $\{0\}$ and $R$ (a nonzero right ideal contains some nonzero $a$, hence contains $aR = R$). [/proof]

Opposite Ring

[definition] [deftitle]Definition %counter% (Opposite ring)[/deftitle]

Let $R$ be a ring. The opposite ring $\oppring{R}$ has the same underlying abelian group as $R$, with each element $a \in R$ written as $\oppring{a} \in \oppring{R}$, and multiplication defined by

\[\oppring{a} \cdot \oppring{b} = \oppring{(ba)}.\]

[/definition]
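One way to internalize the definition is to wrap elements of a concrete ring and reverse the order of multiplication. The sympy sketch below does this for $2\times 2$ integer matrices (the wrapper class Opp is just for illustration):

```python
from sympy import Matrix

class Opp:
    """An element a^op of the opposite ring, wrapping an element a of R."""
    def __init__(self, a):
        self.a = a
    def __mul__(self, other):
        return Opp(other.a * self.a)        # a^op * b^op := (b a)^op
    def __eq__(self, other):
        return self.a == other.a

a = Matrix([[1, 2], [3, 4]])
b = Matrix([[0, 1], [1, 1]])

print(Opp(a) * Opp(b) == Opp(b * a))        # True, by definition
print(Opp(a) * Opp(b) == Opp(a * b))        # False here, since ab != ba
```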

Generally speaking, if we have results for $R$ “on the right,” then we can obtain analogous results “on the left” by applying the known results to $\oppring{R}$. Here’s a quick example:

[theorem] [thmtitle]Proposition %counter%[/thmtitle]

Let $R$ be a nonzero ring. Then $R$ is a division ring if and only if its only left ideals are $\{0\}$ and $R$. [/theorem]

[proof] The previous proposition tells us that $\oppring{R}$ is a division ring if and only if its only right ideals are $\{\oppring{0}\}$ and $\oppring{R}$. Since $R$ is a division ring if and only if $\oppring{R}$ is, and the left ideals of $R$ correspond to the right ideals of $\oppring{R}$, the claim follows. [/proof]