Bra–ket notation

From Wikipedia, the free encyclopedia

In quantum mechanics, bra–ket notation is a standard notation for describing quantum states, composed of angle brackets and vertical bars. It can also be used to denote abstract vectors and linear functionals in mathematics. It is so called because the inner product (or dot product on a complex vector space) of two states is denoted by a ⟨bra|ket⟩,

\langle\phi|\psi\rangle ,

consisting of a left part, ⟨φ|, called the bra /brɑː/, and a right part, |ψ⟩, called the ket /kɛt/. The notation was introduced in 1939 by Paul Dirac[1] and is also known as Dirac notation, though the notation has precursors in Grassmann's use of the notation [φ|ψ] for his inner products nearly 100 years previously.[2][3]

Bra–ket notation is widespread in quantum mechanics: almost every phenomenon that is explained using quantum mechanics — including a large portion of modern physics — is usually explained with the help of bra–ket notation. Part of the appeal of the notation is the abstract representation-independence it encodes, together with its versatility in producing a specific representation (e.g. x, or p, or eigenfunction base) without much ado, or excessive reliance on the nature of the linear spaces involved. The overlap expression ⟨φ|ψ⟩ is typically interpreted as the probability amplitude for the state ψ to collapse into the state φ.

Vector spaces

Background: Vector spaces

In physics, basis vectors allow any Euclidean vector to be represented geometrically in terms of lengths and directions, i.e. spatial orientations. It is simplest to see the notational equivalences between ordinary notation and bra–ket notation in a concrete case; so, for now, consider a vector A starting at the origin and ending at a point of 3-d Euclidean space. The vector is then specified by this end-point, a triplet of elements of the field of real numbers, written symbolically as A ∈ ℝ³.

The vector A can be written using any set of basis vectors and corresponding coordinate system. Informally basis vectors are like "building blocks of a vector": they are added together to compose a vector, and the coordinates are the numerical coefficients of basis vectors in each direction. Two useful representations of a vector are simply a linear combination of basis vectors, and column matrices. Using the familiar Cartesian basis, a vector A may be written as

Figure: 3-d real vector components and basis projections; similarities between vector calculus notation and Dirac notation. Projection is an important feature of the Dirac notation.
 \begin{align}
\mathbf{A} & = A_x \mathbf{e}_x + A_y \mathbf{e}_y + A_z \mathbf{e}_z \\
& = A_x \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} +
A_y \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} +
A_z \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \\
& = \begin{pmatrix} A_x \\ 0 \\ 0 \end{pmatrix} +
\begin{pmatrix} 0 \\ A_y \\ 0 \end{pmatrix} +
\begin{pmatrix} 0 \\ 0 \\ A_z \end{pmatrix} \\
& = \begin{pmatrix}
A_x \\
A_y \\
A_z \\
\end{pmatrix}
\end{align}

respectively, where ex, ey, ez denote the Cartesian basis vectors (all of which are orthogonal unit vectors) and Ax, Ay, Az are the corresponding coordinates in the x, y, z directions. In a more general notation, for any basis in 3-d space one writes

\mathbf{A} = A_1 \mathbf{e}_1 + A_2 \mathbf{e}_2 + A_3 \mathbf{e}_3 = \begin{pmatrix}
A_1 \\
A_2 \\
A_3 \\
\end{pmatrix}

Generalizing further, consider a vector A in an N-dimensional vector space over the field of complex numbers ℂ, symbolically stated as A ∈ ℂ^N. The vector A is still conventionally represented by a linear combination of basis vectors or a column matrix:

\mathbf{A} = \sum_{n=1}^N A_n \mathbf{e}_n = \begin{pmatrix}
A_1 \\
A_2 \\
\vdots \\
A_N \\
\end{pmatrix}

though the coordinates are now all complex-valued.

Even more generally, A can be a vector in a complex Hilbert space. Some Hilbert spaces, like ℂ^N, have finite dimension, while others have infinite dimension. In an infinite-dimensional space, the column-vector representation of A would be a list of infinitely many complex numbers.

Ket notation for vectors

Rather than the boldface, over-arrows, underscores, etc. conventionally used elsewhere, \mathbf{A},\,\vec{A},\,\underline{A}, Dirac's notation for a vector uses vertical bars and angle brackets: |A⟩. When this notation is used, the vector is called a "ket", read as "ket-A".[4] This applies to all vectors, both the resultant vector and the basis vectors. The previous vectors are now written

 |A \rangle = A_x|e_x \rangle + A_y|e_y \rangle + A_z|e_z \rangle =
\begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix},

or in a more easily generalized notation,

 |A \rangle = A_1|e_1 \rangle + A_2|e_2 \rangle + A_3|e_3 \rangle =
\begin{pmatrix} A_1 \\ A_2 \\ A_3 \end{pmatrix},

The last one may be written in short as

|A \rangle = A_1|1 \rangle + A_2|2 \rangle + A_3|3 \rangle

Notice how any symbols, letters, numbers, or even words — whatever serves as a convenient label — can be used as the label inside a ket. In other words, the symbol |A⟩ has a specific and universal mathematical meaning, but just the "A" by itself does not. Nevertheless, for convenience, there is usually some logical scheme behind the labels inside kets, such as the common practice of labeling energy eigenkets in quantum mechanics with a list of their quantum numbers.

Inner products and bras

An inner product is a generalization of the dot product. The inner product of two vectors is a complex number. Bra–ket notation uses a specific notation for inner products:

 \langle A | B \rangle = \text{the inner product of ket } | A \rangle \text{ with ket } | B \rangle

For example, in three-dimensional complex Euclidean space,

\langle A | B \rangle = A_x^*B_x + A_y^*B_y + A_z^*B_z

where A_i^* denotes the complex conjugate of Ai. A special case is the inner product of a vector with itself, which is the square of its norm (magnitude):

\langle A | A \rangle = |A_x|^2 + |A_y|^2 + |A_z|^2
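
These formulas can be checked with a short numerical sketch (Python with NumPy; the component values below are arbitrary illustrations, not taken from the article):

```python
import numpy as np

# Two vectors in C^3 with arbitrary, purely illustrative components
A = np.array([1 + 2j, 0.5j, -1.0])
B = np.array([2.0, 1 - 1j, 3j])

# <A|B> = A_x* B_x + A_y* B_y + A_z* B_z ; np.vdot conjugates its first argument
inner = np.vdot(A, B)

# <A|A> = |A_x|^2 + |A_y|^2 + |A_z|^2, the squared norm (magnitude) of A
norm_sq = np.vdot(A, A)

print(inner)          # a complex number in general
print(norm_sq.real)   # real and non-negative
```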

Bra–ket notation splits this inner product (also called a "bracket") into two pieces, the "bra" and the "ket":

 \langle A | B \rangle = \left( \, \langle A | \, \right) \,\, \left( \, | B \rangle \, \right)

where ⟨A| is called a bra, read as "bra-A", and |B⟩ is a ket as above.

The purpose of "splitting" the inner product into a bra and a ket is that both the bra ⟨A| and the ket |B⟩ are meaningful on their own, and can be used in other contexts besides within an inner product. There are two main ways to think about the meanings of separate bras and kets:

Bras and kets as row and column vectors

For a finite-dimensional vector space, using a fixed orthonormal basis, the inner product can be written as a matrix multiplication of a row vector with a column vector:

 \langle A | B \rangle = A_1^* B_1 + A_2^* B_2 + \cdots + A_N^* B_N =
\begin{pmatrix} A_1^* & A_2^* & \cdots & A_N^* \end{pmatrix}
\begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_N \end{pmatrix}

Based on this, the bras and kets can be defined as:

 \langle A | = \begin{pmatrix} A_1^* & A_2^* & \cdots & A_N^* \end{pmatrix}
 | B \rangle = \begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_N \end{pmatrix}

and then it is understood that a bra next to a ket implies matrix multiplication.

The conjugate transpose (also called Hermitian conjugate) of a bra is the corresponding ket and vice versa:

\langle A |^\dagger = |A \rangle, \quad |A \rangle^\dagger = \langle A |

because if one starts with the bra

\begin{pmatrix} A_1^* & A_2^* & \cdots & A_N^* \end{pmatrix},

then performs a complex conjugation, and then a matrix transpose, one ends up with the ket

\begin{pmatrix} A_1 \\ A_2 \\ \vdots \\ A_N \end{pmatrix}
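
A minimal numerical sketch of this correspondence (Python with NumPy; the components are made-up illustrations): the bra is the conjugate transpose of the ket, and a bra next to a ket is ordinary matrix multiplication.

```python
import numpy as np

# Kets |A> and |B> as N x 1 column vectors (here N = 3, illustrative values)
ket_A = np.array([[1 + 1j], [2j], [0.5]])
ket_B = np.array([[1.0], [-1j], [2 + 1j]])

# The bra <A| is the conjugate transpose (Hermitian conjugate) of the ket |A>
bra_A = ket_A.conj().T            # a 1 x N row vector

# <A|B> is the matrix product of a 1 x N row with an N x 1 column
inner = (bra_A @ ket_B).item()
print(inner)

# Conjugate-transposing twice recovers the original ket
assert np.allclose(bra_A.conj().T, ket_A)
```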

Bras as linear operators on kets

A more abstract definition, which is equivalent but more easily generalized to infinite-dimensional spaces, is to say that bras are linear functionals on kets, i.e. operators that input a ket and output a complex number. The bra operators are defined to be consistent with the inner product.

In mathematics terminology, the vector space of bras is the dual space to the vector space of kets, and corresponding bras and kets are related by the Riesz representation theorem.

Non-normalizable states and non-Hilbert spaces

Bra–ket notation can be used even if the vector space is not a Hilbert space.

In quantum mechanics, it is common practice to write down kets which have infinite norm, i.e. non-normalizable wavefunctions. Examples include states whose wavefunctions are Dirac delta functions or infinite plane waves. These do not, technically, belong to the Hilbert space itself. However, the definition of "Hilbert space" can be broadened to accommodate these states (see the Gelfand–Naimark–Segal construction or rigged Hilbert spaces). The bra–ket notation continues to work in an analogous way in this broader context.

For a rigorous treatment of the Dirac inner product of non-normalizable states, see the definition given by D. Carfì.[5][6] For a rigorous definition of basis with a continuous set of indices and consequently for a rigorous definition of position and momentum basis, see.[7] For a rigorous statement of the expansion of an S-diagonalizable operator, or observable, in its eigenbasis or in another basis, see.[8]

Banach spaces are a different generalization of Hilbert spaces. In a Banach space B, the vectors may be notated by kets and the continuous linear functionals by bras. Over any vector space without topology, we may also notate the vectors by kets and the linear functionals by bras. In these more general contexts, the bracket does not have the meaning of an inner product, because the Riesz representation theorem does not apply.

Usage in quantum mechanics

The mathematical structure of quantum mechanics is based in large part on linear algebra:

  • Wave functions and other quantum states can be represented as vectors in a complex Hilbert space. (The exact structure of this Hilbert space depends on the situation.) In bra–ket notation, for example, an electron might be in the "state" |ψ⟩. (Technically, the quantum states are rays of vectors in the Hilbert space, as c|ψ⟩ corresponds to the same state for any nonzero complex number c.)
  • Quantum superpositions can be described as vector sums of the constituent states. For example, an electron in the state |1⟩+i |2⟩ is in a quantum superposition of the states |1⟩ and |2⟩.
  • Measurements are associated with linear operators (called observables) on the Hilbert space of quantum states.
  • Dynamics are also described by linear operators on the Hilbert space. For example, in the Schrödinger picture, there is a linear time evolution operator U with the property that if an electron is in state |ψ⟩ right now, then in one second it will be in the state U|ψ⟩, the same U for every possible |ψ⟩.
  • Wave function normalization is scaling a wave function so that its norm is 1.

Since virtually every calculation in quantum mechanics involves vectors and linear operators, it can involve, and often does involve, bra–ket notation. A few examples follow:

Spinless position–space wave function

Figure: Discrete components A_k of a complex vector |A⟩ = ∑_k A_k|e_k⟩, which belongs to a countably infinite-dimensional Hilbert space; there are countably infinitely many k values and basis vectors |e_k⟩.
Figure: Continuous components ψ(x) of a complex vector |ψ⟩ = ∫dx ψ(x)|x⟩, which belongs to an uncountably infinite-dimensional Hilbert space; there are infinitely many x values and basis vectors |x⟩.
In both figures, components of complex vectors are plotted against index number (discrete k and continuous x); two particular components out of infinitely many are highlighted.

The Hilbert space of a spin-0 point particle is spanned by a "position basis" \{ \, |\mathbf{r}\rangle \,\} , where the label r extends over the set of all points in position space. Since there are uncountably infinitely many vectors in the basis, this is an uncountably infinite-dimensional Hilbert space. The dimensions of the Hilbert space (usually infinite) and position space (usually 1, 2 or 3) are not to be conflated.

Starting from any ket |Ψ⟩ in this Hilbert space, we can define a complex scalar function of r, known as a wavefunction:

\Psi(\mathbf{r}) \ \stackrel{\text{def}}{=}\ \lang \mathbf{r}|\Psi\rang .

On the left side, Ψ(r) is a function mapping any point in space to a complex number; on the right side, |Ψ⟩ = ∫ d³r Ψ(r) |r⟩ is a ket.

It is then customary to define linear operators acting on wavefunctions in terms of linear operators acting on kets, by

A \Psi(\mathbf{r}) \ \stackrel{\text{def}}{=}\ \lang \mathbf{r}|A|\Psi\rang .

For instance, the momentum operator p has the following form,

\mathbf{p} \Psi(\mathbf{r}) \ \stackrel{\text{def}}{=}\ \lang \mathbf{r} |\mathbf{p}|\Psi\rang = - i \hbar \nabla \Psi(\mathbf{r}) .

One occasionally encounters a sloppy expression like

\nabla |\Psi\rang ,

though this is something of a (common) abuse of notation. The differential operator must be understood to be an abstract operator, acting on kets, that has the effect of differentiating wavefunctions once the expression is projected into the position basis,

\nabla \lang\mathbf{r}|\Psi\rang ,

even though, in the momentum basis, the operator amounts to a mere multiplication operator (by ip/ħ).
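
As an illustrative sketch only (Python with NumPy; the grid, the units with ħ = 1, and the plane-wave test function are assumptions made for the example), the position-space wavefunction can be sampled on a grid and the momentum operator applied as −iħ times a numerical derivative; for e^{ikx} the result is approximately ħk times the wavefunction:

```python
import numpy as np

hbar = 1.0                      # work in units where hbar = 1 (assumed for simplicity)
k = 2.0                         # illustrative wavenumber
x = np.linspace(0, 10, 2001)    # position grid, assumed only for discretization
dx = x[1] - x[0]

psi = np.exp(1j * k * x)        # plane-wave wavefunction psi(x) = e^{ikx}

# Momentum operator in the position representation: p psi = -i hbar d(psi)/dx,
# approximated here with a finite-difference gradient
p_psi = -1j * hbar * np.gradient(psi, dx)

# For a plane wave, p psi should be close to (hbar k) psi away from the grid edges
print(np.allclose(p_psi[10:-10], hbar * k * psi[10:-10], atol=1e-3))
```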

Overlap of states

In quantum mechanics the expression ⟨φ|ψ⟩ is typically interpreted as the probability amplitude for the state ψ to collapse into the state φ. Mathematically, this means the coefficient for the projection of ψ onto φ. It is also described as the projection of state ψ onto state φ.

Changing basis for a spin-1/2 particle

A stationary spin-½ particle has a two-dimensional Hilbert space. One orthonormal basis is:

|\uparrow_z \rangle, \; |\downarrow_z \rangle

where |\uparrow_z \rangle is the state with a definite value of the spin operator Sz equal to +1/2 and |\downarrow_z \rangle is the state with a definite value of the spin operator Sz equal to −1/2.

Since these are a basis, any quantum state of the particle can be expressed as a linear combination (i.e., quantum superposition) of these two states:

|\psi \rangle = a_{\psi} |\uparrow_z \rangle + b_{\psi} |\downarrow_z \rangle

where aψ, bψ are complex numbers.

A different basis for the same Hilbert space is:

|\uparrow_x \rangle, \; |\downarrow_x \rangle

defined in terms of Sx rather than Sz.

Again, any state of the particle can be expressed as a linear combination of these two:

|\psi \rangle = c_{\psi} |\uparrow_x \rangle + d_{\psi} |\downarrow_x \rangle

In vector form, you might write

|\psi\rangle = \begin{pmatrix} a_\psi \\ b_\psi \end{pmatrix}, \;\; \text{OR} \;\; |\psi\rangle = \begin{pmatrix} c_\psi \\ d_\psi \end{pmatrix}

depending on which basis you are using. In other words, the "coordinates" of a vector depend on the basis used.

There is a mathematical relationship between aψ, bψ, cψ, and dψ; see change of basis.
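
A minimal sketch of such a change of basis (Python with NumPy), using the standard relations |↑x⟩ = (|↑z⟩ + |↓z⟩)/√2 and |↓x⟩ = (|↑z⟩ − |↓z⟩)/√2 and arbitrary illustrative values for aψ, bψ:

```python
import numpy as np

# Components of |psi> in the S_z basis (arbitrary illustrative values), normalized
a, b = 0.6, 0.8j
coords_z = np.array([a, b])

# Columns of U are the S_x basis kets expressed in the S_z basis:
# |up_x> = (|up_z> + |down_z>)/sqrt(2),  |down_x> = (|up_z> - |down_z>)/sqrt(2)
U = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# New coordinates: c = <up_x|psi>, d = <down_x|psi>, i.e. apply U^dagger
coords_x = U.conj().T @ coords_z
c, d = coords_x

# The norm of the state is basis-independent
print(np.vdot(coords_z, coords_z).real, np.vdot(coords_x, coords_x).real)
```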

Misleading uses

There are a few conventions and abuses of notation that are generally accepted by the physics community, but which might confuse the non-initiated.

It is common among physicists to use the same symbol for labels and constants in the same equation. This supposedly makes it easier to identify that the constant is related to the labeled object, and it is claimed that the different nature of each eliminates any ambiguity, so that no further differentiation is required. For example, α̂|α⟩ = α|α⟩, where the symbol α is used simultaneously as the name of the operator α̂, its eigenvector |α⟩ and the associated eigenvalue α.

Something similar occurs in component notation of vectors. While Ψ (uppercase) is traditionally associated with wavefunctions, ψ (lowercase) may be used to denote a label, a wave function or complex constant in the same context, usually differentiated only by a subscript.

The main abuse is including operations inside the vector labels. This is usually done as a quick notation for scaled vectors. E.g. if the vector |α⟩ is scaled by 1/√2, it might be denoted | \alpha/\sqrt{2} \rangle, which makes no sense, since α is a label, not a function or a number, so no operations can be performed on it.

This is especially common when denoting vectors as tensor products, where part of the labels are moved outside the designated slot. E.g.  |\alpha\rangle = |\alpha/\sqrt{2} \rangle_1 \otimes |\alpha/\sqrt{2} \rangle_2 . Here part of the labeling that should state that all three vectors are different has been moved outside the kets, as the subscripts 1 and 2. A further abuse occurs, since α is here meant to refer to the norm of the first vector, so a label is being used to denote a value.

Linear operators

Linear operators acting on kets

A linear operator is a map that inputs a ket and outputs a ket. (In order to be called "linear", it is required to have certain properties.) In other words, if A is a linear operator and |ψ⟩ is a ket, then A|ψ⟩ is another ket.

In an N-dimensional Hilbert space, |ψ⟩ can be written as an N×1 column vector, and then A is an N×N matrix with complex entries. The ket A|ψ⟩ can be computed by normal matrix multiplication.
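
A minimal sketch of this matrix picture (Python with NumPy; the 2×2 matrix and the ket are arbitrary illustrations):

```python
import numpy as np

# An arbitrary linear operator A on a 2-dimensional Hilbert space
A = np.array([[0, 1],
              [1, 0]], dtype=complex)     # illustrative choice (a "flip" matrix)

# A ket |psi> as a column vector
psi = np.array([[1.0], [0.0]], dtype=complex)

# A|psi> is ordinary matrix multiplication and is again a ket (column vector)
A_psi = A @ psi
print(A_psi.ravel())    # [0.+0.j, 1.+0.j]
```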

Linear operators are ubiquitous in the theory of quantum mechanics. For example, observable physical quantities are represented by self-adjoint operators, such as energy or momentum, whereas transformative processes are represented by unitary linear operators such as rotation or the progression of time.

Linear operators acting on bras

Operators can also be viewed as acting on bras from the right hand side. Specifically, if A is a linear operator and ⟨φ| is a bra, then ⟨φ|A is another bra defined by the rule

\bigg(\langle\phi|A\bigg) \; |\psi\rangle = \langle\phi| \; \bigg(A|\psi\rangle\bigg) ,

(in other words, a function composition). This expression is commonly written as (cf. energy inner product)

\langle\phi|A|\psi\rangle .

In an N-dimensional Hilbert space, ⟨φ| can be written as a 1×N row vector, and A (as in the previous section) is an N×N matrix. Then the bra ⟨φ|A can be computed by normal matrix multiplication.

If the same state vector appears on both bra and ket side,

\langle\psi|A|\psi\rangle ,

then this expression gives the expectation value, or mean or average value, of the observable represented by operator A for the physical system in the state |ψ⟩.
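
These matrix-element and expectation-value expressions reduce to plain matrix algebra in a finite-dimensional space; a short sketch (Python with NumPy; the Hermitian matrix and normalized state below are arbitrary illustrations):

```python
import numpy as np

# Arbitrary Hermitian operator (illustrative) and two normalized kets
A = np.array([[1.0, 1j],
              [-1j, 2.0]])
psi = np.array([[1.0], [1j]]) / np.sqrt(2)
phi = np.array([[1.0], [0.0]])

bra_phi = phi.conj().T

# <phi|A is another bra (a 1 x N row vector)
bra_phi_A = bra_phi @ A

# <phi|A|psi>: associativity lets us group either way
matrix_element = (bra_phi_A @ psi).item()

# Expectation value of A in the state |psi>
expectation = (psi.conj().T @ A @ psi).item()
print(matrix_element, expectation.real)
```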

Outer products

A convenient way to define linear operators on H is given by the outer product: if ⟨φ| is a bra and |ψ⟩ is a ket, the outer product

 |\phi\rang \lang \psi|

denotes the rank-one operator that maps the ket |ρ⟩ to the ket |φ⟩⟨ψ|ρ⟩ (where ⟨ψ|ρ⟩ is a scalar multiplying the vector |φ⟩).

For a finite-dimensional vector space, the outer product can be understood as simple matrix multiplication:

 |\phi \rangle \, \langle \psi | =
\begin{pmatrix} \phi_1 \\ \phi_2 \\ \vdots \\ \phi_N \end{pmatrix}
\begin{pmatrix} \psi_1^* & \psi_2^* & \cdots & \psi_N^* \end{pmatrix}
= \begin{pmatrix}
\phi_1 \psi_1^* & \phi_1 \psi_2^* & \cdots & \phi_1 \psi_N^* \\
\phi_2 \psi_1^* & \phi_2 \psi_2^* & \cdots & \phi_2 \psi_N^* \\
\vdots & \vdots & \ddots & \vdots \\
\phi_N \psi_1^* & \phi_N \psi_2^* & \cdots & \phi_N \psi_N^* \end{pmatrix}

The outer product is an N×N matrix, as expected for a linear operator.

One of the uses of the outer product is to construct projection operators. Given a ket |ψ⟩ of norm 1, the orthogonal projection onto the subspace spanned by |ψ⟩ is

|\psi\rangle\langle\psi|.
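
A minimal sketch of the outer product and the resulting projection operator (Python with NumPy; the component values are arbitrary illustrations):

```python
import numpy as np

# Two kets with illustrative components
phi = np.array([1.0, 2j, 0.5])
psi = np.array([1.0, 1j, 0.0]) / np.sqrt(2)     # normalized: <psi|psi> = 1

# Outer product |phi><psi| as an N x N matrix (conjugate the bra side)
outer = np.outer(phi, psi.conj())

# Acting on an arbitrary ket |rho>, the outer product gives |phi> times <psi|rho>
rho = np.array([0.3, -1j, 2.0])
assert np.allclose(outer @ rho, phi * np.vdot(psi, rho))

# Projector onto the span of |psi>: P = |psi><psi|, satisfying P @ P = P
P = np.outer(psi, psi.conj())
assert np.allclose(P @ P, P)
```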

Hermitian conjugate operator

Just as kets and bras can be transformed into each other (making |\psi\rangle into \langle\psi|), the element from the dual space corresponding to A|\psi\rangle is \langle \psi | A^\dagger, where A^\dagger denotes the Hermitian conjugate (or adjoint) of the operator A. In other words,

 |\phi\rangle = A |\psi\rangle if and only if  \langle\phi| = \langle \psi | A^\dagger.

If A is expressed as an N×N matrix, then A^\dagger is its conjugate transpose.

Self-adjoint operators, where A = A^\dagger, play an important role in quantum mechanics; for example, an observable is always described by a self-adjoint operator. If A is a self-adjoint operator, then  \langle \psi | A | \psi \rangle is always a real number (not complex). This implies that expectation values of observables are real.
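
Both statements can be verified directly in matrix form; the sketch below (Python with NumPy, with randomly generated matrices used purely as illustrations) checks that |φ⟩ = A|ψ⟩ corresponds to ⟨φ| = ⟨ψ|A^\dagger, and that a self-adjoint operator has real expectation values:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary complex operator A and ket |psi> (random, for illustration only)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
psi = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))

# |phi> = A|psi>  corresponds to  <phi| = <psi| A^dagger
phi = A @ psi
assert np.allclose(phi.conj().T, psi.conj().T @ A.conj().T)

# A self-adjoint (Hermitian) operator has real expectation values
H = A + A.conj().T              # H = H^dagger by construction
expval = (psi.conj().T @ H @ psi).item()
assert abs(expval.imag) < 1e-12
```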

Properties

Bra–ket notation was designed to facilitate the formal manipulation of linear-algebraic expressions. Some of the properties that allow this manipulation are listed herein. In what follows, c1 and c2 denote arbitrary complex numbers, c* denotes the complex conjugate of c, A and B denote arbitrary linear operators, and these properties are to hold for any choice of bras and kets.

Linearity

  • Since bras are linear functionals,
\langle\phi| \; \bigg( c_1|\psi_1\rangle + c_2|\psi_2\rangle \bigg) = c_1\langle\phi|\psi_1\rangle + c_2\langle\phi|\psi_2\rangle.
  • By the definition of addition and scalar multiplication of linear functionals in the dual space,[9]
\bigg(c_1 \langle\phi_1| + c_2 \langle\phi_2|\bigg) \; |\psi\rangle = c_1 \langle\phi_1|\psi\rangle + c_2 \langle\phi_2|\psi\rangle.

Associativity

Given any expression involving complex numbers, bras, kets, inner products, outer products, and/or linear operators (but not addition), written in bra–ket notation, the parenthetical groupings do not matter (i.e., the associative property holds). For example:

 \lang \psi| (A |\phi\rang) = (\lang \psi|A)|\phi\rang \, \stackrel{\text{def}}{=} \, \lang \psi | A | \phi \rang
 (A|\psi\rang)\lang \phi| = A(|\psi\rang \lang \phi|) \, \stackrel{\text{def}}{=} \, A | \psi \rang \lang \phi |

and so forth. The expressions on the right (with no parentheses whatsoever) are allowed to be written unambiguously because of the equalities on the left. Note that the associative property does not hold for expressions that include non-linear operators, such as the antilinear time reversal operator in physics.

Hermitian conjugation

Bra–ket notation makes it particularly easy to compute the Hermitian conjugate (also called the dagger, and denoted †) of expressions. The formal rules are:

  • The Hermitian conjugate of a bra is the corresponding ket, and vice-versa.
  • The Hermitian conjugate of a complex number is its complex conjugate.
  • The Hermitian conjugate of the Hermitian conjugate of anything (linear operators, bras, kets, numbers) is itself—i.e.,
(x^\dagger)^\dagger = x.
  • Given any combination of complex numbers, bras, kets, inner products, outer products, and/or linear operators, written in bra–ket notation, its Hermitian conjugate can be computed by reversing the order of the components, and taking the Hermitian conjugate of each.

These rules are sufficient to formally write the Hermitian conjugate of any such expression; some examples are as follows:

  • Kets:

\left(c_1|\psi_1\rangle + c_2|\psi_2\rangle\right)^\dagger = c_1^* \langle\psi_1| + c_2^* \langle\psi_2| ~.
  • Inner products:
\langle \phi | \psi \rangle^* = \langle \psi|\phi\rangle ~.
  • Matrix elements:
\langle \phi| A | \psi \rangle^* = \langle \psi | A^\dagger |\phi \rangle
\langle \phi| A^\dagger B^\dagger | \psi \rangle^* = \langle \psi | BA |\phi \rangle ~.
  • Outer products:
\left((c_1|\phi_1\rangle\langle \psi_1|) + (c_2|\phi_2\rangle\langle\psi_2|)\right)^\dagger = (c_1^* |\psi_1\rangle\langle \phi_1|) + (c_2^*|\psi_2\rangle\langle\phi_2|)~.
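
The "reverse the order and conjugate each factor" rule can be checked numerically; the sketch below (Python with NumPy, random matrices and kets used purely as illustrations) verifies the matrix-element identity ⟨φ|A^\dagger B^\dagger|ψ⟩* = ⟨ψ|BA|φ⟩ from the list above:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 4

def rand_ket(d):
    """Random complex column vector, used purely for illustration."""
    return rng.normal(size=(d, 1)) + 1j * rng.normal(size=(d, 1))

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
phi, psi = rand_ket(dim), rand_ket(dim)

# <phi| A^dagger B^dagger |psi>* should equal <psi| B A |phi>
lhs = (phi.conj().T @ A.conj().T @ B.conj().T @ psi).item().conjugate()
rhs = (psi.conj().T @ B @ A @ phi).item()
assert np.allclose(lhs, rhs)
```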

Composite bras and kets

Two Hilbert spaces V and W may form a third space V ⊗ W by a tensor product. In quantum mechanics, this is used for describing composite systems. If a system is composed of two subsystems described in V and W respectively, then the Hilbert space of the entire system is the tensor product of the two spaces. (The exception to this is if the subsystems are actually identical particles. In that case, the situation is a little more complicated.)

If |ψ⟩ is a ket in V and |φ⟩ is a ket in W, the direct product of the two kets is a ket in V ⊗ W. This is written in various notations:

|\psi\rangle|\phi\rangle \,,\quad |\psi\rangle \otimes |\phi\rangle\,,\quad|\psi \phi\rangle\,,\quad|\psi ,\phi\rangle\,.

See quantum entanglement and the EPR paradox for applications of this product.
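
In coordinates, the product ket is the Kronecker product of the component vectors; a minimal sketch (Python with NumPy, illustrative components):

```python
import numpy as np

# |psi> in V (dimension 2) and |phi> in W (dimension 3), illustrative components
psi = np.array([1.0, 1j]) / np.sqrt(2)
phi = np.array([0.5, 0.5, 1j / np.sqrt(2)])

# The product ket |psi>|phi> lives in the tensor product space V (x) W;
# in coordinates it is the Kronecker product of the two component vectors
psi_phi = np.kron(psi, phi)

print(psi_phi.shape)                        # (6,) = 2 * 3
# The norm of the product state is the product of the norms
print(np.vdot(psi_phi, psi_phi).real)
```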

The unit operator

Consider a complete orthonormal system (basis), \{ e_i \ | \ i \in \mathbb{N} \}, for a Hilbert space H, with respect to the norm from an inner product \langle\cdot,\cdot\rangle. From basic functional analysis we know that any ket |ψ⟩ can also be written as

|\psi\rangle = \sum_{i \in \mathbb{N}} \langle e_i | \psi \rangle | e_i \rangle,

with \langle\cdot|\cdot\rangle the inner product on the Hilbert space.

From the commutativity of kets with (complex) scalars, it now follows that

\sum_{i \in \mathbb{N}} | e_i \rangle \langle e_i | = \hat{1}

must be the identity operator, which sends each vector to itself. This can be inserted in any expression without affecting its value, for example

 \langle v | w \rangle = \langle v | \sum_{i \in \mathbb{N}} | e_i \rangle \langle e_i | w \rangle = \langle v | \sum_{i \in \mathbb{N}} | e_i \rangle \langle e_i | \sum_{j \in \mathbb{N}} | e_j \rangle \langle e_j | w \rangle = \langle v | e_i \rangle \langle e_i | e_j \rangle \langle e_j | w \rangle ,

where, in the last identity, the Einstein summation convention has been used.

In quantum mechanics, it often occurs that little or no information about the inner product \langle\psi|\phi\rangle of two arbitrary (state) kets is present, while it is still possible to say something about the expansion coefficients \langle\psi|e_i\rangle = \langle e_i|\psi\rangle^* and \langle e_i|\phi\rangle of those vectors with respect to a specific (orthonormalized) basis. In this case, it is particularly useful to insert the unit operator into the bracket one time or more.
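
A short numerical sketch of the resolution of the identity (Python with NumPy, using the standard basis of ℂ³ and arbitrary illustrative kets): the sum Σᵢ |eᵢ⟩⟨eᵢ| reproduces the identity matrix, so inserting it into a bracket leaves the value unchanged.

```python
import numpy as np

dim = 3
# Orthonormal basis kets |e_i> as column vectors (the standard basis of C^3)
basis = [np.eye(dim)[:, i].reshape(-1, 1) for i in range(dim)]

# Resolution of the identity: sum_i |e_i><e_i| = 1
identity = sum(e @ e.conj().T for e in basis)
assert np.allclose(identity, np.eye(dim))

# Inserting the identity into a bracket does not change its value
v = np.array([[1.0], [2j], [0.5]])
w = np.array([[-1j], [1.0], [3.0]])
direct = (v.conj().T @ w).item()
inserted = sum((v.conj().T @ e).item() * (e.conj().T @ w).item() for e in basis)
assert np.isclose(direct, inserted)
```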

For more information, see Resolution of the identity, 1 = ∫dx |x⟩⟨x| = ∫dp |p⟩⟨p|, where |p⟩ = ∫dx e^{ixp/ħ}|x⟩/√(2πħ); since ⟨x′|x⟩ = δ(x − x′), plane waves follow: ⟨x|p⟩ = e^{ixp/ħ}/√(2πħ).

Notation used by mathematicians

The object physicists are considering when using the "bra–ket" notation is a Hilbert space (a complete inner product space).

Let  \mathcal{H} be a Hilbert space and let  h\in\mathcal{H} be a vector in  \mathcal{H} . What physicists would denote as |h⟩ is the vector itself. That is

 (|h\rangle)\in \mathcal{H} .

Let  \mathcal{H}^* be the dual space of  \mathcal{H} . This is the space of linear functionals on \mathcal{H}. The isomorphism  \Phi:\mathcal{H}\to\mathcal{H}^* is defined by  \Phi(h) = \phi_h where for all  g\in\mathcal{H} we have

 \phi_h(g) = \mbox{IP}(h,g) = (h,g) = \langle h,g \rangle = \langle h|g \rangle ,

where  \mbox{IP}(\cdot,\cdot), (\cdot,\cdot),\langle \cdot,\cdot \rangle and \langle \cdot | \cdot \rangle are just different notations for expressing an inner product between two elements in a Hilbert space (or for the first three, in any inner product space). Notational confusion arises when identifying  \phi_h and  g with  \langle h | and |g \rangle respectively. This is because of literal symbolic substitutions. Let  \phi_h = H = \langle h| and let  g=G=|g\rangle . This gives

 \phi_h(g) = H(g) = H(G) = \langle h|(G) = \langle h|(|g\rangle).

One ignores the parentheses and removes the double bars. Some properties of this notation are convenient since we are dealing with linear operators and composition acts like a ring multiplication.

Moreover, mathematicians usually write the dual entity in the second slot rather than in the first, as physicists do, and they typically use an overline (which physicists reserve for averages and Dirac conjugation) rather than the *-symbol to denote complex conjugation; i.e. for scalar products mathematicians usually write

(\phi ,\psi )=\int \phi (x)\cdot \overline{\psi(x)}\, {\rm d}x \,,

whereas physicists would write for the same quantity

 \langle\psi |\phi \rangle=\int {\rm d}x\,\psi^*(x)\cdot\phi(x)\,.

References and notes

  1. PAM Dirac (1939). "A new notation for quantum mechanics". Mathematical Proceedings of the Cambridge Philosophical Society 35 (3). pp. 416–418. doi:10.1017/S0305004100021162. 
  2. H. Grassmann (1862). Extension Theory. History of Mathematics Sources. American Mathematical Society, London Mathematical Society, 2000 translation by Lloyd C. Kannenberg. 
  3. Cajori, Florian (1929). A History of Mathematical Notations, Volume II. Open Court Publishing. p. 134. ISBN 978-0-486-67766-8.
  4. McMahon, D. (2006). Quantum Mechanics Demystified. McGraw-Hill (USA). ISBN 0-07-145546-9.
  5. Carfì, David (April 2003). "Dirac-orthogonality in the space of tempered distributions". Journal of Computational and Applied Mathematics 153 (1–2): 99–107. Bibcode:2003JCoAM.153...99C. doi:10.1016/S0377-0427(02)00634-9. 
  6. Carfì, David (April 2003). "Some properties of a new product in the space of tempered distributions". Journal of Computational and Applied Mathematics 153 (1–2): 109–118. Bibcode:2003JCoAM.153..109C. doi:10.1016/S0377-0427(02)00635-0. 
  7. Carfì, David (2007). "Topological characterizations of S-linearity". AAPP – Physical, Mathematical and Natural Sciences 85 (2): 1–16. doi:10.1478/C1A0702005.
  8. Carfì, David (2005). "S-diagonalizable operators in quantum mechanics". Glasnik Matematički 40 (2): 261–301. doi:10.3336/gm.40.2.08.
  9. Lecture notes by Robert Littlejohn, eqns 12 and 13

Further reading

  • Feynman, Leighton and Sands (1965). The Feynman Lectures on Physics Vol. III. Addison-Wesley. ISBN 0-201-02115-3. 
