Classifying Bulk-Edge Anomalies in the Dirac Hamiltonian

Chern number

Recall that \(k = (k_x^2+k_y^2)^{1/2}\). We rewrite the Hamiltonian in terms of Pauli matrices as

$$H = \vec{d} \cdot \vec{\sigma}, \quad \vec{d} = \begin{pmatrix} -k_x \\ -k_y \\ m-\epsilon k^2 \end{pmatrix},$$

(A.1)

where \(\vec{\sigma} = (\sigma_1, \sigma_2, \sigma_3)\) is a vector of Pauli matrices

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$

(A.2)

The eigenprojections of \(H\) are shared with those of the flat Hamiltonian \(H' = \vec{n} \cdot \vec{\sigma}\), where \(\vec{n} = \vec{d}/\left|\vec{d}\right|\). They are

$$P_\pm = \frac{1}{2}\left(1 \pm \vec{n} \cdot \vec{\sigma}\right),$$

(A.3)

(recall \(\left(\vec{a} \cdot \vec{\sigma}\right)\left(\vec{b} \cdot \vec{\sigma}\right) = \left(\vec{a} \cdot \vec{b}\right)\mathbb{1} + i\left(\vec{a} \times \vec{b}\right) \cdot \vec{\sigma}\)). Note that \(\vec{n} = \vec{n}(\vec{k})\) is convergent for \(k \rightarrow \infty\):

$$\vec{n} \rightarrow \begin{pmatrix} 0 \\ 0 \\ \mp 1 \end{pmatrix} \quad (k \rightarrow \infty), \quad \textrm{for}\,\, \epsilon \gtrless 0.$$

(A.4)

Therefore the eigenprojections also converge, and the Chern number is a well-defined topological invariant:

$$C(P) = \frac{1}{2\pi i}\int_{\mathbb{R}^2} \textrm{d}k_x\,\textrm{d}k_y\,\textrm{Tr}\left(P\left[\partial_{k_x}P, \partial_{k_y}P\right]\right).$$

(A.5)

If the regulator \(\epsilon \ne 0\), we can compactify the momentum plane to the 2-sphere \(\mathbb{S}^2\) and we can compute the r.h.s. on a closed manifold with the map \(\vec{n}: \mathbb{S}^2 \rightarrow \mathbb{S}^2\). According to [20, Proposition 1], we get

$$C_\pm = \pm\dfrac{1}{4\pi} \int_{\mathbb{S}^2} \vec{n} \cdot \left(\partial_1 \vec{n} \wedge \partial_2 \vec{n}\right) \textrm{d}x_1\,\textrm{d}x_2,$$

which, in our case, leads to

$$C_\pm = \pm\frac{\textrm{sgn}(m) + \textrm{sgn}(\epsilon)}{2}.$$

(A.6)
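This degree formula lends itself to a quick numerical sanity check. The following sketch (not part of the original derivation; the function name `chern_lower_band` and all parameter values are our own choices) discretizes the degree integrand above for \(\vec{d} = (-k_x, -k_y, m - \epsilon k^2)\) on a large momentum grid and compares it with (A.6):

```python
import numpy as np

def chern_lower_band(m, eps, K=40.0, N=400):
    """Approximate deg(n) for d(k) = (-kx, -ky, m - eps*k^2) on [-K, K]^2."""
    ks = np.linspace(-K, K, N)
    dk = ks[1] - ks[0]
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    d = np.stack([-kx, -ky, m - eps * (kx**2 + ky**2)])
    n = d / np.linalg.norm(d, axis=0)      # unit vector n(k)
    dn1 = np.gradient(n, dk, axis=1)       # partial n / partial kx
    dn2 = np.gradient(n, dk, axis=2)       # partial n / partial ky
    integrand = np.einsum("iab,iab->ab", n, np.cross(dn1, dn2, axis=0))
    return integrand.sum() * dk**2 / (4 * np.pi)

for m, eps in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print(m, eps, round(chern_lower_band(m, eps), 2), (np.sign(m) + np.sign(eps)) / 2)
```

Up to discretization error, the computed degree reproduces \((\textrm{sgn}(m)+\textrm{sgn}(\epsilon))/2\); the overall sign attached to \(C_\pm\) depends on the orientation conventions above.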

Self-adjoint Boundary Condition Classes

We start back from \(A = A_0 + k_x A_1 \in M_{2\times 4}(\mathbb{C})\) with

$$A_0 = \begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \end{pmatrix} = [B_0 \quad B_2], \qquad A_1 = \begin{pmatrix} b_{11} & b_{12} & 0 & 0 \\ b_{21} & b_{22} & 0 & 0 \end{pmatrix} = [B_1 \quad 0],$$

(B.1)

with \(B_0, B_1\) and \(B_2 \in \textrm{M}_2(\mathbb{C})\). According to Proposition 20, \(A\) has to be a rank-2 matrix. Depending on the ranks of \(A_0\) and \(A_1\), such matrices can be simplified further using the \(\textrm{GL}_2(\mathbb{C})\)-invariance.

Moreover, \(A\) satisfies

$$A\Omega^{-1}A^* = 0$$

with

$$\Omega^{-1} = \epsilon^{-1}\begin{pmatrix} 0 & 0 & -\epsilon & 0 \\ 0 & 0 & 0 & \epsilon \\ \epsilon & 0 & 0 & -1 \\ 0 & -\epsilon & 1 & 0 \end{pmatrix} =: \epsilon^{-1}\begin{pmatrix} 0 & -\Omega_1 \\ \Omega_1 & \Omega_2 \end{pmatrix},$$

(B.2)

where \(\Omega_1, \Omega_2 \in \textrm{M}_2(\mathbb{C})\), with \(\Omega_1^* = \Omega_1\) and \(\Omega_2^* = -\Omega_2\). The condition \(A\Omega^{-1}A^* = 0\) becomes

$$-B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* - k_x\left(B_1\Omega_1B_2^* + B_2\Omega_1B_1^*\right) = 0.$$

(B.3)

This relation has to be valid for every \(k_x \in \mathbb R\), so we infer

$$-B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* = 0, \qquad B_1\Omega_1B_2^* + B_2\Omega_1B_1^* = 0.$$

(B.4)
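The two conditions (B.4) are easy to test numerically. The following helper (our own construction, not part of the original text, with \(\Omega_1\) and \(\Omega_2\) read off from (B.2)) is reused in the sanity checks throughout this appendix; it reports whether a triple \((B_0, B_1, B_2)\) satisfies (B.4):

```python
import numpy as np

def omega_blocks(eps):
    # Omega_1 and Omega_2 as read off from (B.2).
    O1 = np.diag([eps, -eps]).astype(complex)
    O2 = np.array([[0, -1], [1, 0]], dtype=complex)
    return O1, O2

def selfadjoint_conditions(B0, B1, B2, eps=1.0, tol=1e-12):
    """Check both conditions (B.4) for the blocks of A = A0 + kx*A1."""
    O1, O2 = omega_blocks(eps)
    c0 = -B0 @ O1 @ B2.conj().T + B2 @ O1 @ B0.conj().T + B2 @ O2 @ B2.conj().T
    c1 = B1 @ O1 @ B2.conj().T + B2 @ O1 @ B1.conj().T
    return max(np.abs(c0).max(), np.abs(c1).max()) < tol
```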

1.1 Class \(\mathfrak{A}\): \(\textrm{rank}(A_0) = 2\)

In this section, we assume \(\textrm{rank}(A_0) = 2\) and \(A_1\) arbitrary:

$$A_1 = \begin{pmatrix} b_{11} & b_{12} & 0 & 0 \\ b_{21} & b_{22} & 0 & 0 \end{pmatrix} = [B_1 \quad 0].$$

(B.5)

The \(\textrm{GL}_2(\mathbb{C})\)-invariance allows us to reduce \(A_0\) to one of the six Schubert cells from (3.11). For each of them, we investigate (B.4) and possibly restrict some parameters.
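As an illustration of this reduction (a sketch under our reading of the cell decomposition, with pivots to the right and free parameters to the left of them), the cell of a given rank-2 matrix can be identified by row-reducing with the column order reversed:

```python
import sympy as sp

def schubert_cell(A0):
    # Reverse the column order, row-reduce, and map the pivot indices
    # back: this recovers the pivot columns of the normal forms below.
    A = sp.Matrix(A0)
    cols = list(range(A.cols - 1, -1, -1))
    _, pivots = A[:, cols].rref()
    return tuple(sorted(A.cols - p for p in pivots))  # 1-based labels

a11, a21, a23 = sp.symbols("a11 a21 a23", nonzero=True)
print(schubert_cell([[a11, 1, 0, 0], [a21, 0, a23, 1]]))  # -> (2, 4)
```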

Class \(\mathfrak{A}_{(1,2)}\) In that case \(A_0 \in \mathcal{S}_{(1,2)}\), namely

$$A_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.$$

(B.6)

One can check that (B.4) is satisfied for any \(B_1 \in \textrm{M}_2(\mathbb{C})\).

Class \(\mathfrak{A}_{(1,3)}\) In that case \(A_0 \in \mathcal{S}_{(1,3)}\), namely

$$A_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & a_{22} & 1 & 0 \end{pmatrix}.$$

(B.7)

We have

$$-B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* = \begin{pmatrix} 0 & -\epsilon \\ \epsilon & 0 \end{pmatrix},$$

(B.8)

which never vanishes for \(\epsilon > 0\), so that (B.4) is never satisfied. Thus, there is no self-adjoint boundary condition in this class, which is why it does not appear in Table 1.

Class \(\mathfrak{A}_{(1,4)}\) In that case, \(A_0 \in \mathcal{S}_{(1,4)}\), namely

$$A_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & a_{22} & a_{23} & 1 \end{pmatrix}.$$

(B.9)

We have

$$-B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* = \begin{pmatrix} 0 & -\epsilon\left(a_{23}\right)^* \\ a_{23}\epsilon & -\epsilon\left(a_{22}\right)^* + \left(a_{23}\right)^* + a_{22}\epsilon - a_{23} \end{pmatrix},$$

(B.10)

from which we infer \(a_{23} = 0\) and then \(a_{22} \in \mathbb{R}\). Moreover, with that knowledge, we compute

$$B_1\Omega_1B_2^* + B_2\Omega_1B_1^* = \begin{pmatrix} 0 & -\epsilon b_{12} \\ -\epsilon\left(b_{12}\right)^* & -\epsilon\left(b_{22} + \left(b_{22}\right)^*\right) \end{pmatrix},$$

(B.11)

from which we infer \(b_{12} = 0\) and \(b_{22} \in i\mathbb{R}\). Denoting \(a_{22} = \alpha\) and \(b_{22} = i\beta\), we deduce the self-adjoint matrices in that case

$$A_0 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \alpha & 0 & 1 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} b_{11} & 0 & 0 & 0 \\ b_{21} & i\beta & 0 & 0 \end{pmatrix},$$

(B.12)

with \(\alpha, \beta \in \mathbb{R}\) and \(b_{11}, b_{21} \in \mathbb{C}\).
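As a sanity check (our own, with arbitrary illustrative values, using the helper `selfadjoint_conditions` defined after (B.4)), one can verify numerically that this family satisfies (B.4):

```python
import numpy as np

# Class with pivots (1,4), cf. (B.12): alpha, beta real; b11, b21 complex.
alpha, beta, b11, b21, eps = 0.7, -1.3, 0.2 + 0.5j, 1.0 - 2.0j, 1.0
B0 = np.array([[1, 0], [0, alpha]], dtype=complex)
B2 = np.array([[0, 0], [0, 1]], dtype=complex)
B1 = np.array([[b11, 0], [b21, 1j * beta]])
print(selfadjoint_conditions(B0, B1, B2, eps))  # True
```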

Class \(\mathfrak{A}_{(2,3)}\) In that case, \(A_0 \in \mathcal{S}_{(2,3)}\), namely

$$A_0 = \begin{pmatrix} a_{11} & 1 & 0 & 0 \\ a_{21} & 0 & 1 & 0 \end{pmatrix}.$$

(B.13)

We have

$$-B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* = \begin{pmatrix} 0 & -\epsilon a_{11} \\ \epsilon\left(a_{11}\right)^* & \epsilon\left(a_{21}\right)^* - a_{21}\epsilon \end{pmatrix},$$

(B.14)

from which we infer \(a_{11} = 0\) and \(a_{21} = \alpha \in \mathbb{R}\). Moreover, one has

$$B_1\Omega_1B_2^* + B_2\Omega_1B_1^* = \begin{pmatrix} 0 & b_{11}\epsilon \\ \epsilon\left(b_{11}\right)^* & \epsilon\left(b_{21}\right)^* + b_{21}\epsilon \end{pmatrix},$$

(B.15)

from which we infer \(b_{11} = 0\) and \(b_{21} = i\beta\), \(\beta \in \mathbb{R}\). The self-adjoint matrices in that case are

$$A_0 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ \alpha & 0 & 1 & 0 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} 0 & b_{12} & 0 & 0 \\ i\beta & b_{22} & 0 & 0 \end{pmatrix},$$

(B.16)

with \(\alpha, \beta \in \mathbb{R}\) and \(b_{12}, b_{22} \in \mathbb{C}\).
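The same numerical check (again with arbitrary illustrative values of our choosing) confirms this family:

```python
import numpy as np

# Class with pivots (2,3), cf. (B.16).
alpha, beta, b12, b22, eps = 1.4, 0.6, -0.8 + 0.1j, 0.3 - 0.9j, 1.0
B0 = np.array([[0, 1], [alpha, 0]], dtype=complex)
B2 = np.array([[0, 0], [1, 0]], dtype=complex)
B1 = np.array([[0, b12], [1j * beta, b22]])
print(selfadjoint_conditions(B0, B1, B2, eps))  # True
```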

Class \(\mathfrak{A}_{(2,4)}\) In that case \(A_0 \in \mathcal{S}_{(2,4)}\), namely

$$A_0 = \begin{pmatrix} a_{11} & 1 & 0 & 0 \\ a_{21} & 0 & a_{23} & 1 \end{pmatrix}.$$

(B.17)

We have

$$\begin{aligned} & -B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* \\ &\quad = \begin{pmatrix} 0 & \epsilon\left(1 - a_{11}\left(a_{23}\right)^*\right) \\ -\epsilon\left(1 - a_{23}\left(a_{11}\right)^*\right) & \epsilon\left(a_{23}\left(a_{21}\right)^* - a_{21}\left(a_{23}\right)^*\right) + \left(a_{23}\right)^* - a_{23} \end{pmatrix}, \end{aligned}$$

(B.18)

from which we infer \(1 - a_{11}\left(a_{23}\right)^* = 0\), which can be rewritten as \(a_{23} \ne 0\) and \(a_{11} = (a_{23}^{-1})^*\). The lower right coefficient of the matrix from the previous equation then implies

$$\epsilon\left(\left(\dfrac{a_{21}}{a_{23}}\right)^* - \dfrac{a_{21}}{a_{23}}\right) + \dfrac{1}{a_{23}} - \dfrac{1}{(a_{23})^*} = 0,$$

(B.19)

which can be rephrased as

$$\Im\left(\dfrac{a_{21} - \epsilon^{-1}}{a_{23}}\right) = 0.$$

(B.20)

Thus, we have \(a_{21} = \alpha a_{23} + \epsilon^{-1}\) with \(\alpha \in \mathbb{R}\).

The second part of (B.4) reads

$$\begin{aligned} & B_1\Omega_1B_2^* + B_2\Omega_1B_1^* \\ &\quad = \begin{pmatrix} 0 & \epsilon\left(b_{11}a_{11}^{-1} - b_{12}\right) \\ \epsilon\left(b_{11}a_{11}^{-1} - b_{12}\right)^* & \epsilon\left(\left(b_{21}a_{11}^{-1}\right)^* + b_{21}a_{11}^{-1} - \left(b_{22}\right)^* - b_{22}\right) \end{pmatrix}, \end{aligned}$$

(B.21)

which implies \(b_{12} = b_{11}a_{11}^{-1}\) and

$$\Re\left(b_{21}a_{11}^{-1} - b_{22}\right) = 0.$$

(B.22)

Thus, we have \(b_{21}a_{11}^{-1} - b_{22} = i\beta\) with \(\beta \in \mathbb{R}\).

The self-adjoint matrices in that case are

$$A_0 = \begin{pmatrix} a_{11} & 1 & 0 & 0 \\ a_{21} & 0 & (a_{11}^{-1})^* & 1 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} b_{11} & b_{11}a_{11}^{-1} & 0 & 0 \\ b_{21} & b_{22} & 0 & 0 \end{pmatrix},$$

(B.23)

with \(a_{11} \in \mathbb{C}\setminus\{0\}\), \(a_{21}, b_{11}, b_{21}, b_{22} \in \mathbb{C}\) and \(a_{21} = \alpha(a_{11}^{-1})^* + \epsilon^{-1}\), \(\alpha \in \mathbb{R}\), as well as \(b_{21}a_{11}^{-1} - b_{22} = i\beta\), \(\beta \in \mathbb{R}\).
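Again a numerical sanity check with illustrative values of our own (here \(a_{23}\) denotes \((a_{11}^{-1})^*\)):

```python
import numpy as np

# Class with pivots (2,4), cf. (B.23).
a11, alpha, beta, eps = 0.8 + 0.6j, 0.4, 1.1, 1.0
b11, b21 = -0.3 + 0.9j, 0.5 + 0.2j
a23 = np.conj(1 / a11)
a21 = alpha * a23 + 1 / eps
b12 = b11 / a11
b22 = b21 / a11 - 1j * beta
B0 = np.array([[a11, 1], [a21, 0]])
B2 = np.array([[0, 0], [a23, 1]])
B1 = np.array([[b11, b12], [b21, b22]])
print(selfadjoint_conditions(B0, B1, B2, eps))  # True
```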

Class \(\mathfrak{A}_{(3,4)}\) In that case \(A_0 \in \mathcal{S}_{(3,4)}\), namely

$$A_0 = \begin{pmatrix} a_{11} & a_{12} & 1 & 0 \\ a_{21} & a_{22} & 0 & 1 \end{pmatrix}.$$

(B.24)

We have

$$\begin{aligned} 0 &= -B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* \\ &= \begin{pmatrix} \epsilon\left(a_{11}\right)^* - a_{11}\epsilon & \epsilon\left(a_{21}\right)^* + a_{12}\epsilon - 1 \\ -\epsilon\left(a_{12}\right)^* - a_{21}\epsilon + 1 & a_{22}\epsilon - \epsilon\left(a_{22}\right)^* \end{pmatrix}, \end{aligned}$$

(B.25)

from which we infer \(a_{11} = \alpha_1 \in \mathbb{R}\), \(a_{22} = \alpha_2 \in \mathbb{R}\) and \(a_{12} = \epsilon^{-1} - (a_{21})^*\). Moreover, we have

$$0 = B_1\Omega_1B_2^* + B_2\Omega_1B_1^* = \begin{pmatrix} \epsilon\left(b_{11}\right)^* + b_{11}\epsilon & \epsilon\left(b_{21}\right)^* - b_{12}\epsilon \\ b_{21}\epsilon - \epsilon\left(b_{12}\right)^* & -\epsilon\left(b_{22}\right)^* - b_{22}\epsilon \end{pmatrix},$$

(B.26)

from which we infer \(b_{11} = i\beta_1\) and \(b_{22} = i\beta_2\) with \(\beta_1, \beta_2 \in \mathbb{R}\), as well as \(b_{12} = (b_{21})^*\).

The self-adjoint matrices in that case are

$$A_0 = \begin{pmatrix} \alpha_1 & a_{12} & 1 & 0 \\ \epsilon^{-1} - (a_{12})^* & \alpha_2 & 0 & 1 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} i\beta_1 & b_{12} & 0 & 0 \\ (b_{12})^* & i\beta_2 & 0 & 0 \end{pmatrix},$$

(B.27)

with \(\alpha_1, \alpha_2, \beta_1, \beta_2 \in \mathbb{R}\) and \(a_{12}, b_{12} \in \mathbb{C}\).
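And for this last cell of class \(\mathfrak{A}\) (illustrative values of our own again):

```python
import numpy as np

# Class with pivots (3,4), cf. (B.27).
alpha1, alpha2, beta1, beta2 = 0.3, -1.2, 0.9, 2.1
a12, b12, eps = 0.4 - 0.7j, -1.1 + 0.6j, 1.0
B0 = np.array([[alpha1, a12], [1 / eps - np.conj(a12), alpha2]])
B2 = np.eye(2, dtype=complex)
B1 = np.array([[1j * beta1, b12], [np.conj(b12), 1j * beta2]])
print(selfadjoint_conditions(B0, B1, B2, eps))  # True
```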

1.2 Class \(\mathfrak{B}\): \(\textrm{rank}(A_1) = 2\)

In this section, we assume \(\textrm{rank}(A_1) = 2\). Since \(A_1 = [B_1\,|\,0]\), this means that \(\textrm{rank}(B_1) = 2\) and thus, by the \(\textrm{GL}_2(\mathbb{C})\)-invariance, we can reduce the study to

$$A_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.$$

(B.28)

To avoid any overlap with class \(\mathfrak{A}\), it is sufficient to consider the case where \(\textrm{rank}(A_0) \le 1\). Thus, we write

$$A_0 = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 \\ \mu a_1 & \mu a_2 & \mu a_3 & \mu a_4 \end{pmatrix},$$

(B.29)

with \(a_1, a_2, a_3, a_4, \mu \in \mathbb C\).

We have

$$0 = B_1\Omega_1B_2^* + B_2\Omega_1B_1^* = \begin{pmatrix} \epsilon\left(a_3\right)^* + a_3\epsilon & \epsilon\left(\mu a_3\right)^* - a_4\epsilon \\ a_3\mu\epsilon - \epsilon\left(a_4\right)^* & -\epsilon\left(\mu a_4\right)^* - a_4\mu\epsilon \end{pmatrix},$$

(B.30)

from which we infer \(a_3 = i\alpha\) with \(\alpha \in \mathbb{R}\), and \(a_4 = (\mu a_3)^* = -i\alpha\mu^*\) (in particular \(\mu a_4 \in i\mathbb{R}\)). Moreover, we have

$$\begin{aligned} 0 &= -B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* \\ &= \alpha\left(\alpha(\mu - \mu^*) + i\epsilon\left(a_1^* + a_1 + a_2\mu + a_2^*\mu^*\right)\right)\begin{pmatrix} 1 & \mu^* \\ \mu & |\mu|^2 \end{pmatrix}, \end{aligned}$$

from which we infer

$$\alpha\Big(\alpha\,\Im(\mu) + \epsilon\,\Re(a_1 + a_2\mu)\Big) = 0.$$

(B.31)

The self-adjoint matrices in that case are

$$A_0 = \begin{pmatrix} a_1 & a_2 & i\alpha & -i\alpha\mu^* \\ \mu a_1 & \mu a_2 & i\alpha\mu & -i\alpha|\mu|^2 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix},$$

(B.32)

with \(a_1, a_2, \mu \in \mathbb{C}\), \(\alpha \in \mathbb{R}\) and \(\alpha\big(\alpha\Im(\mu) + \epsilon\Re(a_1 + a_2\mu)\big) = 0\). This last condition is equivalent to one of the three cases:

1.

\(\alpha =0\),

2.

\(\alpha \ne 0,\, \mu \in \mathbb{R},\, a_1 = -a_2\mu + i\beta,\, \beta \in \mathbb{R}\),

3.

\(\mu \in \mathbb{C}\setminus\mathbb{R}\) and \(\alpha = -\tfrac{\epsilon}{\Im(\mu)}\,\Re(a_1 + a_2\mu) \in \mathbb{R}\),

but we shall not use them explicitly, so we keep the general constraint instead.
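A numerical check of this family (our own, using the third case with illustrative values, and the helper `selfadjoint_conditions` defined after (B.4)) again confirms (B.4):

```python
import numpy as np

# Class B family (B.32), case 3: mu not real, alpha fixed by (B.31).
a1, a2, mu, eps = 0.5 + 0.3j, -0.2 + 0.8j, 1.0 + 2.0j, 1.0
alpha = -eps * (a1 + a2 * mu).real / mu.imag
a3, a4 = 1j * alpha, -1j * alpha * np.conj(mu)
B0 = np.array([[a1, a2], [mu * a1, mu * a2]])
B2 = np.array([[a3, a4], [mu * a3, mu * a4]])
B1 = np.eye(2, dtype=complex)
print(selfadjoint_conditions(B0, B1, B2, eps))  # True
```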

1.3 Class \(\mathfrak{C}\): \(\textrm{rank}(A_0) = \textrm{rank}(A_1) = 1\)

In this section, we consider the case where \(A_0\) and \(A_1\) are exactly of rank 1, with \(\textrm{rank}(A_0 + k_x A_1) = 2\). Since \(A_1 = [B_1\,|\,0]\), this means that \(\textrm{rank}(B_1) = 1\) and thus, by the \(\textrm{GL}_2(\mathbb{C})\)-invariance, we can reduce the study to

$$A_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

(B.33)

Then, since \(\textrm{rank}(A_0) = 1\), we write

$$A_0 = \begin{pmatrix} a_1 & a_2 & a_3 & a_4 \\ \mu a_1 & \mu a_2 & \mu a_3 & \mu a_4 \end{pmatrix},$$

(B.34)

with \(a_1, a_2, a_3, a_4, \mu \in \mathbb{C}\). Moreover, one has \(\mu \ne 0\) and \((a_2, a_3, a_4) \ne 0\), since otherwise \(\textrm{rank}(A_0 + k_x A_1) = 1 < 2\).

We have

$$0 = B_1\Omega_1B_2^* + B_2\Omega_1B_1^* = \begin{pmatrix} \epsilon\left(a_3\right)^* + a_3\epsilon & \epsilon\left(\mu a_3\right)^* \\ a_3\mu\epsilon & 0 \end{pmatrix},$$

(B.35)

from which we infer \(a_3=0\). Moreover, we have

$$0 = -B_0\Omega_1B_2^* + B_2\Omega_1B_0^* + B_2\Omega_2B_2^* = \epsilon\left(a_2\left(a_4\right)^* - a_4\left(a_2\right)^*\right)\begin{pmatrix} 1 & \mu^* \\ \mu & |\mu|^2 \end{pmatrix},$$

from which we infer \(\Im (a_2 a_4^*) = 0\).

The self-adjoint matrices in that case are

$$A_0 = \begin{pmatrix} a_1 & a_2 & 0 & a_4 \\ \mu a_1 & \mu a_2 & 0 & \mu a_4 \end{pmatrix}, \qquad A_1 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

(B.36)

with \(a_1, a_2, a_4, \mu \in \mathbb C\) such that \(\mu \ne 0\), \((a_2,a_4)\ne 0\) and \(\Im (a_2 a_4^*) = 0\).
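Finally, the same numerical check applies to class \(\mathfrak{C}\) (illustrative values of our own; taking \(a_4\) a real multiple of \(a_2\) enforces \(\Im(a_2a_4^*) = 0\)):

```python
import numpy as np

# Class C family (B.36): mu nonzero and Im(a2 * conj(a4)) = 0.
a1, a2, mu, eps = 0.3 - 0.7j, 1.2 + 0.4j, 2.0 - 1.5j, 1.0
a4 = 0.9 * a2          # any real multiple of a2 works
B0 = np.array([[a1, a2], [mu * a1, mu * a2]])
B2 = np.array([[0, a4], [0, mu * a4]])
B1 = np.array([[1, 0], [0, 0]], dtype=complex)
print(selfadjoint_conditions(B0, B1, B2, eps))  # True
```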
