Whenever a student asks a question that is of general interest, it is put here, with an answer. The answers are hidden behind a drop-down menu. This allows you to first formulate the answer for yourself, and then check it immediately afterwards. You can use this as a self-test to assess your understanding.
Let’s first examine what happens for the derivative of an ordinary function. These are the ingredients:
It’s analogous for a functional derivative. These are the ingredients:
No. In principle, you find the exchange-correlation energy by plugging the ground state density into the universal exchange-correlation functional. For simply plugging in, no derivative is needed. Read on to see where the derivative does play a role.
The functional derivative of the exchange-correlation functional with respect to the density is the exchange-correlation potential. This object is itself a functional of the density, but by plugging in the density it returns not an energy but a potential. Multiplying that potential with the density and integrating (i.e. applying the inverse of a differentiation) returns the exchange-correlation energy.
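In symbols, with $E_{xc}[\rho]$ the exchange-correlation functional, this relationship is the standard definition of the exchange-correlation potential:

$$ v_{xc}(\vec{r}) = \frac{\delta E_{xc}[\rho]}{\delta \rho(\vec{r})} $$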
You see that most of the technical terms used in this explanation do appear in the initial question. Yet one has to be careful to formulate the correct relationships between all the terms. The way the question was formulated does not represent a correct relationship.
Have a look at the ground state situation. Take any pair of particles of the system: two electrons, two nuclei, or a nucleus and an electron. These two particles are interacting with each other, and that represents an amount of energy. The ground state total energy represents the sum of all interaction energies in the system. What is the limiting situation in which all interaction energy is absent? The situation in which the distance between any pair of particles has become infinitely large. Only then does the Coulomb energy between two charges vanish.
Of course, in practice ‘infinity’ means just ‘very far away’. And ‘very far away’ for electrons is still pretty close for humans.
To get a feeling for this, take two electrons that are 1 Angstrom apart. Calculate their Coulomb interaction energy. How far apart should you put them to reduce the Coulomb energy to 1% of the original value? And to 0.1%? To 0.001%? …
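If you want to check your answer, here is a back-of-the-envelope sketch in Python (the constants are standard CODATA values; since the energy scales as 1/r, the distances follow immediately):

```python
import math

# Coulomb interaction energy of two electrons: E(r) = e^2 / (4*pi*eps0*r).
e = 1.602176634e-19       # elementary charge (C)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)
r0 = 1e-10                # 1 Angstrom, in meters

E0 = e**2 / (4 * math.pi * eps0 * r0)
print(f"E at 1 Angstrom: {E0 / e:.2f} eV")  # roughly 14.4 eV

# E scales as 1/r, so reducing it to a fraction f requires r = r0 / f:
for f in (0.01, 0.001, 0.00001):
    print(f"{f*100:g}% of E0 at r = {r0 / f * 1e10:g} Angstrom")
```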
An external potential basically tells you the positions of the nuclei. For a given external potential (read: for a set of nuclei at these particular positions), there is one unique electron density that represents the minimal energy situation (ground state) of the electron system in the presence of these nuclei. That is the unique correspondence the question hints at: a unique correspondence between the ground state of the electron system in the presence of this particular set of nuclei, and this set of nuclei.
The interaction between the nuclei and the electron system in its ground state represents an amount of potential energy. If the electron system were excited, i.e. no longer in its ground state, then the value of this potential energy would change. But as the nuclei are still at the same positions, the external potential itself is unchanged. This does not violate the unique correspondence, because the density is now not the ground state density but an excited-state density.
Imagine you knew, for every infinitesimally small volume element of the unit cell, how much that volume element contributes to the exchange-correlation energy. Call that contribution epsilon_E (the subscript refers to ‘Exact’). By integrating epsilon_E(r) over the entire unit cell, you get the exact exchange-correlation energy for that unit cell.
Of course, you don’t know epsilon_E. But you do know, for every infinitesimally small volume element, the value of the density at that place. And there is one special kind of system, the homogeneous electron gas, for which you do know the exact value of the exchange-correlation energy for any possible density the homogeneous electron gas can have. With this knowledge, we will approximate epsilon_E(r). For every point r (vectorial), look at what the density rho is at that point. Then look up the exchange-correlation energy of a homogeneous electron gas with that density. Use that value of the exchange-correlation energy as your approximation.
In short: epsilon_E(r) is not known, but the expression epsilon_LDA(rho(r)) is fully known by the reasoning given above. The leap of faith that follows is that epsilon_LDA(rho(r)) is a good approximation of epsilon_E(r), which turns out to work rather well.
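To make the recipe concrete, here is a minimal numerical sketch. It uses only the exchange part of the LDA, for which the homogeneous-electron-gas result is known in closed form, and a made-up Gaussian model density; both choices and all names are mine, for illustration only (Hartree atomic units):

```python
import numpy as np

def eps_x_heg(rho):
    """Exchange energy per electron of the homogeneous electron gas
    at density rho: -(3/4) * (3/pi)^(1/3) * rho^(1/3) (Hartree units)."""
    return -0.75 * (3.0 / np.pi) ** (1.0 / 3.0) * rho ** (1.0 / 3.0)

# A made-up model density: a Gaussian blob on a 3D grid in a cubic box.
L, n = 10.0, 64                       # box length (bohr), grid points per side
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
rho = np.exp(-(X**2 + Y**2 + Z**2))   # rho(r); illustration only

# The LDA step: at every grid point, replace the unknown eps_E(r) by the
# homogeneous-electron-gas value at the local density rho(r), then integrate
# (energy per electron, times density, times volume element).
dV = (L / n) ** 3
E_x_lda = np.sum(eps_x_heg(rho) * rho) * dV
print(f"LDA exchange energy of the model density: {E_x_lda:.4f} Ha")
```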
The crucial word in this question is the verb: “what is a wave function?”
Well, a wave function is not. A wave function does not exist. In this sense: there is no experiment by which you can measure a wave function. And there will never be one. A wave function exists only in a mathematical way, as the solution of the Schrödinger equation for the quantum system.
That does not mean the wave function is not useful. Knowing the wave function of a system is extremely useful, because it contains all the information about the system that any experiment will ever be able to reveal. And quantum physics even provides procedures to extract those experimental predictions from the wave function (expectation values, bra-operator-ket). In a way, you can consider the wave function as a kind of mathematical DNA for your quantum system: it’s the only thing you need to know in order to know everything about your quantum system at once.
As an aside: DFT tells you that all this information is contained in the density as well. The density is therefore the DNA of your quantum system too. That’s interesting, because the density is a mathematically much simpler function, and because it is experimentally measurable. The density ‘exists’, it is. On the downside, we do not know procedures to extract experimental information from the density right away.
We compared a multi-electron system to the solar system. In the solar system, the independent particle approximation corresponds to summing all interactions between an individual planet and the sun, neglecting planet-planet interactions. What corresponds to this level of approximation in a multi-electron system: the Hartree approximation or the Hartree-Fock approximation? That’s the question.
To mimic the independent particle approximation of the solar system, we should have ‘electrons’ (quasi-particles) that interact with the nuclei, yet not with other ‘electrons’. Doing this in the same way as for the solar system, i.e. in a classical way, leads to the Hartree approximation.
However, a quantum system has properties that are fundamentally different from those of a classical system, even before considering the details of the interaction. For fermions (and electrons are fermions), the many-body wave function must change sign when swapping two particles, whatever the interaction is. That is not the case for the Hartree many-body wave function. The Hartree-Fock many-body wave function is the simplest wave function that does have this property (by construction: swapping two columns in a determinant changes its sign).
That is why the Hartree-Fock approximation is the most meaningful equivalent of the independent particle approximation in the solar system.
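A small numerical illustration of this sign change, with arbitrarily chosen one-particle orbitals (any set of functions would do):

```python
import numpy as np

# Three arbitrary one-particle orbitals, chosen only for illustration.
orbitals = [np.sin, np.cos, lambda x: np.exp(-x**2)]

def slater(positions):
    """Value of the (unnormalized) Slater determinant:
    the determinant of the matrix M[i, j] = phi_i(x_j)."""
    M = np.array([[phi(x) for x in positions] for phi in orbitals])
    return np.linalg.det(M)

x = [0.3, 1.1, 2.5]          # positions of the three particles
x_swapped = [1.1, 0.3, 2.5]  # particles 1 and 2 exchanged

# Swapping two particles swaps two columns of M:
# the determinant keeps its magnitude but flips its sign.
print(slater(x), slater(x_swapped))
```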
This question reveals a misconception. The electron-electron interaction term W appears in a hamiltonian. Therefore, it is an operator, not a number. It is the operator that calculates the Coulomb energy between every possible pair among the N electrons, wherever these electrons are, and independent of the number and type of nuclei that are present (Vext). It has nothing to do with wave functions being mapped to the same density.
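Written out in Hartree atomic units (a standard convention, not course-specific), the operator is

$$ \hat{W} = \sum_{i<j}^{N} \frac{1}{|\vec{r}_i - \vec{r}_j|} , $$

a sum over every pair of electrons, with no reference to $V_{ext}$ or to any density.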
This question might reflect a misunderstanding. The first Hohenberg-Kohn theorem is valid for any system, whether it has a degenerate ground state or not. However, our proof was limited to non-degenerate ground states. The more general proof is more involved. You find it, for instance, in the book by Dreizler and Gross.
DFT itself is equally well applicable to crystals with and without a degenerate ground state.
Full question: Since the derivation of the first Hohenberg-Kohn theorem was done for the ground state, does this mean that all DFT is carried out at 0 K?
No. Neither DFT nor the first Hohenberg-Kohn theorem is restricted to zero Kelvin, even though both deal with the ground state of a system. This thought experiment shows why:
“The first Hohenberg-Kohn theorem states that there is a one-to-one relationship between the ground state density of a system and its hamiltonian (because V_ext is the only part of the hamiltonian that is system-specific). If we know the hamiltonian, then we know in principle all states of the system — the ground state as well as all excited states. Indeed, we can in principle solve the hamiltonian and find all these states (even though in practice we might not be able to do that). This means that the ground state density implies full knowledge about every state of the system, ground state as well as excited states.”
The practical reason why plain DFT calculations are, strictly speaking, ground state calculations is the Born-Oppenheimer approximation: the nuclei are at frozen positions, and this prevents the most important effect of temperature, the motion of the nuclei, from manifesting itself. This motion can be reintroduced via phonon calculations in the quasi-harmonic approximation. There is nothing within DFT that holds us back from studying excited states.
The full question:
Is this interpretation correct – Since $|\psi(\vec{r})|^2$ is the probability of finding a particle at $\vec{r}$, if both $\psi$ and $\psi’$ map to the same $\rho$, it means that both show the same probability of finding a particle at $\vec{r}$. Therefore $\psi = \psi’$. I guess this is not a good argument, since $\psi$ can be a complex number?
Answer: indeed, your suspicion at the end is correct. If the moduli of two complex numbers are identical, this does not imply that the complex numbers themselves are identical: a complex number is determined not only by its modulus, but also by its phase.
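A standard illustration: multiplying a wave function by a constant phase factor $e^{i\alpha}$ gives a different complex function with exactly the same modulus everywhere:

$$ \psi’ = e^{i\alpha}\psi \quad\Rightarrow\quad |\psi’(\vec{r})|^2 = |e^{i\alpha}|^2\,|\psi(\vec{r})|^2 = |\psi(\vec{r})|^2 . $$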
Full question: I’m confused about the plane-wave basis used to determine the wave functions. If we are determining something we don’t know yet, how can we express that unknown something in a plane-wave basis (or in any other basis)?
Answer: after expressing the wave function in a basis, ‘not knowing the wave function’ becomes ‘not knowing the coefficients in front of all the (plane wave) basis functions’. The rest of the procedure is meant to determine those coefficients. Once you know them, you know the wave function.
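A minimal sketch of this idea in Python (the ‘unknown’ function and the size of the basis are arbitrary choices, made only for illustration): expand a periodic function in a plane-wave basis and determine the coefficients by projection.

```python
import numpy as np

n = 256                                # grid points in one period
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
f = np.exp(np.cos(x))                  # pretend this function is 'unknown'

# Plane-wave basis: exp(i*G*x) for G = -Gmax..Gmax.
Gmax = 8
Gs = np.arange(-Gmax, Gmax + 1)

# 'Determining the wave function' = determining these coefficients,
# here obtained by projecting f onto each basis function.
coeff = {G: np.mean(f * np.exp(-1j * G * x)) for G in Gs}

# Rebuild f from the coefficients: knowing them is knowing the function.
f_rebuilt = sum(c * np.exp(1j * G * x) for G, c in coeff.items())
print(np.max(np.abs(f - f_rebuilt.real)))  # tiny: the expansion has converged
```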
Examples that are easy to imagine are readily taken from Fourier analysis. The time domain plays the role of direct space, and the frequency domain plays the role of reciprocal space.
Take a sine function (in the time domain) and another, different sine function. Add them. That takes some work: you have to make the addition for every point in time. But in the frequency domain, this same operation is as trivial as adding two delta functions.
Another example: multiplying a function in the time domain by a complex exponential exp(i * omega_0 * t) requires a multiplication at every time t. In the frequency domain, it boils down to shifting the Fourier transform of the function over a distance omega_0.
Similar things hold for the mathematical operations on densities in real space and their reciprocal space representations.
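The second example above is easy to verify numerically; here is a sketch assuming numpy’s FFT conventions, with an arbitrary test signal:

```python
import numpy as np

n = 1024
t = np.arange(n)                              # discrete 'time' axis
f = np.random.default_rng(0).normal(size=n)  # arbitrary test signal

k0 = 5                                        # shift by 5 frequency bins
# Multiplying by a complex exponential in the time domain...
F1 = np.fft.fft(f * np.exp(2j * np.pi * k0 * t / n))
# ...equals shifting the spectrum in the frequency domain.
F2 = np.roll(np.fft.fft(f), k0)

print(np.allclose(F1, F2))                    # True
```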
This was the recipe for a cubic crystal:
Let us now consider a tetragonal crystal. Instead of the cubic lattice parameters a, a and a, it has the lattice parameters a, a and N*a (if N>1, the unit cell is longer in the c-direction; if N<1, it is shorter). Let’s assume N>1, and take N=3 as an example (the reasoning will be similar for other values of N and for N<1). If the unit cell is 3 times longer in the c-direction, then the Brillouin zone will be 3 times smaller in the c*-direction. That means that 3 times fewer k-points are needed in the c*-direction for the same sampling quality. A uniform sampling mesh will therefore be of the form n*n*(n/3) (or in general: n*n*(n/N)). You can now go through the same procedure as for the cubic crystal, increasing the value of n in this n*n*(n/3) mesh. If n/N is not an integer, round to the nearest integer, never going below 1.
The generalization to crystals with lower than tetragonal symmetry is straightforward: if the lattice parameters are a, M*a and N*a (M and N are real numbers, not necessarily integers, and can be smaller or larger than 1), then a uniform k-mesh has the form n*(n/M)*(n/N).
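A minimal sketch of this rule in Python (the function name `kmesh` is mine; the rounding convention ‘nearest integer, never below 1’ is the one stated above):

```python
def kmesh(n, M=1.0, N=1.0):
    """Uniform k-mesh n x (n/M) x (n/N) for lattice parameters a, M*a, N*a.
    Non-integer divisions are rounded to the nearest integer, never below 1."""
    return tuple(max(1, round(n / ratio)) for ratio in (1.0, M, N))

# The tetragonal example from the text: lattice parameters a, a, 3a.
for n in (2, 4, 6, 8):
    print(n, kmesh(n, N=3.0))
# e.g. n=6 gives (6, 6, 2): three times fewer k-points along c*, as expected
```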
The 21 independent elastic constants Cij of a crystal are components of the elastic tensor of that crystal. Therefore, they do depend on the axis system in which you write these components. How do we know which axis system has been chosen?
Look at the pdf with the stress tensor procedure to determine elastic constants. At the very beginning, we form the standard root tensor of a crystal by listing the components of the lattice vectors in a 3×3 matrix. In order to express these components, you had to choose an orthonormal axis system. That’s the one. Everything else in that procedure is expressed in that same axis system, including the components of the elastic tensor.
If you emphasize the word ‘cohesive‘, then the answer is ‘no’: cohesive energies involve free atoms, and those are not involved when considering the adhesive energy in layered materials. But the concept of formation energy can be used for this purpose, in particular in the way we used it to calculate a surface energy: calculate the total energy of the layered material when it forms a 3D infinite crystal, and calculate the energy of a supercell in which you have put vacuum between two layers. The energy difference between the two is the energy it has cost to separate the layers. This procedure is identical to the one for calculating the surface energy of the surface formed by such a layer.
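Schematically, with symbols that are mine just to summarize the recipe:

$$ E_{\text{separation}} = E_{\text{supercell with vacuum gap}} - E_{\text{same supercell, no gap}} $$

Dividing by the area of the surfaces created by the gap converts this into an energy per unit area, exactly as for a surface energy.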
Full version of the question: “When calculating the formation energy of an alloy (say FeAl), we have to subtract the energies of the elemental crystals (bcc-Fe and fcc-Al) from the energy of the alloy. But in a thermochemistry course, I learned a law that states that the formation energies of pure substances are zero. Therefore, why do we even need to calculate the energy of these elemental phases? Why isn’t the energy of the alloy immediately the formation energy?“
Answer: It’s a matter of convention (and therefore it can be debated whether the word ‘law’ in your thermochemistry course is appropriate). The convention in thermochemistry is to define the energy of pure substances to be zero. The formation energy of all other substances can then be expressed with respect to these zero levels: how much energy do we gain if we create this alloy out of these pure substances? This is a practical convention for experimental formation energy measurements. In quantum physics, however, we do not need this convention, which is best suited for experiments. We have chosen one unique zero level for all atoms, molecules and crystals: the energy when all nuclei and electrons that make up the crystal are at rest and at infinite distances from each other. The total energy of a pure substance (bcc-Fe, fcc-Al) is the energy difference between this zero level and the ground state geometry of that crystal. That is the total energy given by a DFT code. The formation energy of the alloy is then the energy gain with respect to these ground state energies of the pure substances, which are not zero now, but actual numbers calculated by a DFT code.
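In formula form, per formula unit of FeAl and with every term being a total energy from the DFT code:

$$ E_f(\text{FeAl}) = E_{tot}(\text{FeAl}) - E_{tot}(\text{Fe, bcc}) - E_{tot}(\text{Al, fcc}) $$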
Ideally, experiment and DFT will yield the same value for the formation energy of the alloy. And even for the formation energy of a pure substance they will agree: how much energy do you gain if you create bcc-Fe out of bcc-Fe? Nothing, obviously. That answer of ‘zero’ can be found by subtracting zero from zero (thermochemistry), or by subtracting a large negative number (the total energy of bcc-Fe according to DFT) from the same large negative number.