Changing the state of matter - Physical Chemistry: A Very Short Introduction (2014)

Chapter 5. Changing the state of matter

Physical chemists are interested not only in the states of matter themselves but also in the transformations they undergo from one form to another, as in the familiar processes of freezing and boiling, which chemists regard as a part of their territory. I shall include dissolving, for observations on the dissolution of gases in liquids were made in the very early days of physical chemistry and are still relevant today, in environmental studies, anaesthesia, respiration, and recreation as well as in the formal structure of physical chemistry.

It should have become apparent from earlier chapters, particularly Chapters 2 and 3, that physical chemistry pays a great deal of attention to processes at equilibrium. In chemistry we are concerned with dynamic equilibria in which the forward and reverse processes continue but at matching rates and there is no net change. Chemical equilibria are living equilibria in the sense that because the underlying processes are still active, they are able to respond to changes in the conditions. This aspect of equilibrium is hugely important for both chemical equilibrium (of reactions) and physical equilibrium, the changes of state that I consider here.

Boiling and freezing

Why, a physical chemist might ask, does one form of matter change into another when the conditions are changed? The answer, at one level, must lie in thermodynamics, which identifies the direction of spontaneous change. As we saw in Chapter 2, the signpost of the direction of spontaneous change is the Gibbs energy. In discussions of equilibria, however, physical chemists find it appropriate to use a property introduced by Josiah Gibbs in his original formulation of chemical thermodynamics: the chemical potential, μ (mu). As far as we are concerned, the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.) The name is very evocative, for the chemical potential can be thought of as the chemical pushing power of a substance. Thus, if the chemical potential of a liquid is greater than that of a vapour, then the liquid has a tendency to form the vapour. If the opposite is true, then the push is in the opposite direction and the vapour has a tendency to condense into the liquid. If the chemical potentials, the pushing powers, of vapour and liquid are equal, then the two states of matter are in balance: they are in equilibrium and there is no net tendency for either vaporization or condensation. Think tug-of-war, with pushing in place of pulling.

All kinds of changes of state and the corresponding equilibria that are reached in transitions between the states can be expressed in terms of the chemical potential and the pushing power it represents. Physical chemists conjure with expressions for the chemical potential of a substance to identify the conditions in which any two (or more) states of matter are in equilibrium. For instance, they might fix the pressure at 1 atm (normal atmospheric pressure at sea level) and then vary the value of the temperature in their equations until the chemical potentials of the liquid and vapour states coincide: that temperature is the ‘boiling point’ of the substance. Alternatively, they might vary the temperature in the expressions for the chemical potentials of the liquid and solid forms of the substance and look for the temperature at which the two chemical potentials are equal. That temperature, at which the solid and liquid forms of the substance are in equilibrium, is called the ‘freezing point’ of the substance.

Of course, a well-rounded physical chemist will always have in mind what is going on at a molecular level to cause the chemical potentials to change. Chemical potentials, like the Gibbs energy itself, are disguised forms of the total entropy of the system and its surroundings. If the chemical potential of the vapour is lower than that of the liquid (signalling a tendency to vaporize), it really means that the entropy of the universe is greater after vaporization than before it. There are two contributions to that increase. One is the increase in the entropy of the system that accompanies the dispersal of a compact liquid as a gas. The other is the reduction in entropy of the surroundings as heat flows from them into the system to enable the molecules to break free from their neighbours. That reduction in entropy works against the increase in entropy of the system and might well overwhelm it. In that case the total entropy is lower after vaporization and the spontaneous direction of change is in the opposite direction, condensation. However, you should recall from Chapter 2 that a change in entropy is calculated by dividing the heat supplied or lost by the temperature at which the transfer takes place. It follows that if the temperature is raised, then the decrease in entropy of the surroundings will be smaller (the same heat transfer is being divided by a larger number). There will come a point when the temperature is so high that the net change switches from a decrease to an increase (Figure 17). Vaporization then becomes spontaneous.

Physical chemistry has exposed why substances vaporize when the temperature is raised, and the answer is really quite extraordinary: the increase in temperature lowers the entropy decrease in the surroundings to the point where it no longer dominates the entropy increase of the system. In effect, raising the temperature soothes changes in the surroundings so that the entropy change in the system becomes dominant. The boiling point, and the freezing point by a similar argument, are manifestations of our ability to manipulate the entropy change in the surroundings by modifying their temperature.


17. Three stages in the events resulting in boiling. At low temperatures the decrease in entropy of the surroundings is so large that it dominates the increase in entropy of the system and condensation is spontaneous. At high temperatures the opposite is true and vaporization is spontaneous. Equilibrium occurs when the temperature is intermediate and the two entropy changes balance. This temperature is the boiling point. The arrows denote the entropy changes

Now consider the role of pressure. One of the earliest successes of chemical thermodynamics, one that suggested to the Victorians that they were on the right track with the emerging science of energy, was the prediction made by Benoît (Émile) Clapeyron (1799–1864) of the effect of pressure on the freezing point of liquids. The intuitive picture is quite clear. For most liquids the sample contracts when it freezes, so (according to Le Chatelier’s principle and in practice) increasing the pressure favours the solid and the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes (icebergs float; the Titanic was sunk by this anomaly). In this case, the application of pressure favours the liquid and so freezing is achieved only by lowering the temperature even further. That is, for water the application of pressure lowers the freezing point. That response contributes to the advance of glaciers, where the ice melts when pressed against the sharp edges of underlying rocks.

Clapeyron’s calculation made predictions about the effect of pressure on the melting point of water in terms of the difference in density of ice and liquid water, and his predictions were not only in the correct direction (pressure lowers the freezing point) but also close to the numerical value observed. Physical chemists now know how to set up the general expression: they can calculate the effect of pressure on the chemical potentials of ice and water and then decide how to adjust the temperature so that the two chemical potentials remain equal when the pressure is changed. They can carry out similar calculations on the effect of pressure on boiling points (increasing the pressure invariably increases the boiling point).
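The flavour of Clapeyron’s calculation can be captured in a short sketch. His equation gives the slope of the melting line as dp/dT = ΔH/(TΔV); the property values below are rounded textbook figures for water, used purely for illustration rather than drawn from this chapter:

```python
# Clapeyron equation, dp/dT = dH / (T * dV), applied to the ice/water
# melting line. All property values are rounded textbook figures.
M = 0.01802                       # molar mass of water, kg/mol
rho_ice, rho_liq = 917.0, 1000.0  # densities near 0 degrees C, kg/m^3
dH_fus = 6010.0                   # enthalpy of fusion, J/mol
T_m = 273.15                      # normal melting point, K

dV = M / rho_liq - M / rho_ice    # molar volume change on melting (negative)
slope = dH_fus / (T_m * dV)       # slope of the melting line, Pa/K

# The slope comes out large and negative (about -135 bar per kelvin with
# these inputs): raising the pressure lowers the melting point, slightly.
print(f"dp/dT = {slope / 1e5:.0f} bar/K")
```

The negative sign is the signature of water’s anomaly: for a liquid that contracts on freezing, dV would be positive and the slope positive.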

The phase rule

One very special aspect of boiling and freezing is a conclusion from thermodynamics concerning the equilibria between various forms of matter. This is the phase rule of Josiah Gibbs, one of the most elegant of all conclusions in chemical thermodynamics. He formulated it in the late 1870s, the golden age of chemical thermodynamics.

A ‘phase’ is a form of matter, such as a liquid, solid, or gas. It is more specific than ‘physical state’ because a solid might exist in several different phases. For instance, graphite is one solid phase of carbon, diamond is another. Only helium is known to have two liquid phases, one a conventional liquid and the other a superfluid, which flows without viscosity. No substance has more than one gaseous phase. Each phase of a substance is the most stable under a range of pressures and temperatures. For instance, the common solid phase of water (ordinary ice; there are about a dozen different forms of ice in fact) is its most stable phase at 1 atm and temperatures below 0°C and the gas phase (‘water vapour’) is the most stable phase at 1 atm and temperatures above 100°C. By ‘most stable’ we mean that the phase has the lowest chemical potential and all other phases, with higher chemical potentials, have a spontaneous tendency to change into it. A map can therefore be drawn using pressure and temperature as coordinates that shows the regions of pressure and temperature where each phase is the most stable. An analogy is a map of a continent, where each country or state represents the ranges of pressures and temperatures for which the corresponding phase is the most stable (Figure 18). Such a diagram is called a phase diagram and is of the greatest importance in materials science, especially when the system under consideration consists of more than one component (iron and several varieties of steel, for instance).


18. A phase diagram, in this case of water, showing the regions of pressure and temperature where each phase is the most stable. There are many forms of ice. Ice-I is the ordinary variety

The lines marking the frontiers in a phase diagram are special places, just as they are in actual continents. In a phase diagram they represent the conditions under which the two neighbouring phases are in equilibrium. For example, the line demarking the liquid from the vapour shows the conditions of pressure and temperature at which those two phases are in equilibrium and so it can be regarded as a plot of the boiling temperature against pressure.

There are commonly places in a phase diagram, just as there are in maps of continents, where three phases meet. At this so-called ‘triple point’ the three phases are in mutual equilibrium, and there are ceaseless interchanges of molecules between all three phases at matching rates. The triple point of a single substance, such as water, is fixed by Nature, and everywhere in the universe (we can suppose) it has exactly the same value. In fact, the triple point of water has been used to define the Kelvin temperature scale, with it (in the current form of the definition) set at 273.16 K exactly; in turn, that definition is used to define the everyday Celsius scale as the temperature on the Kelvin scale minus 273.15 (yes, 273.15, not 273.16).

Gibbs’s contribution to understanding phase diagrams was to derive an exceptionally simple rule, the phase rule, for accounting for the structure of any phase diagram, not just the simple one-component phase diagram that I have described. The phase rule helps a chemist interpret the diagrams and draw conclusions about the compositions of mixtures, including the changing composition of liquids such as petroleum, in the course of their distillation and purification. Phase diagrams of more elaborate kinds are essential in mineralogy and metallurgy, where they summarize the composition of minerals and alloys.

Comment

For completeness, the phase rule states that F = C − P + 2, where F is the number of external variables (such as the pressure and temperature) that can be changed, C the number of components, and P the number of phases.
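Put to work, the rule is a simple piece of counting; a minimal sketch (the function name is my own):

```python
def degrees_of_freedom(components: int, phases: int) -> int:
    """Gibbs phase rule: F = C - P + 2."""
    return components - phases + 2

# Pure water (C = 1) with liquid and vapour in equilibrium (P = 2):
# one degree of freedom, so fixing the pressure fixes the boiling point.
assert degrees_of_freedom(1, 2) == 1
# At the triple point (P = 3) no variable can be changed at all, which
# is why the triple point of water could serve to define a temperature scale:
assert degrees_of_freedom(1, 3) == 0
```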

Dissolving and mixing

The important processes of dissolving and that similar phenomenon, mixing, come within the scope of physical chemistry. Interest in them goes back right to the beginning of physical chemistry and to William Henry (1774–1836), who formulated a law of gas solution in 1803.

The mixing of gases is easy to formulate in terms of thermodynamics, and is the starting point for the discussion of mixing and dissolving in general. Physical chemists focus initially on the mixing of perfect gases in which there are no interactions between the molecules: when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. A physical chemist thinks about this mixing as follows.

Because the mixing of two perfect gases is spontaneous, we can infer that the Gibbs energy of any mixture they form must be lower than the total Gibbs energy of the separate gases at the same pressure and temperature, for then mixing is spontaneous in all proportions. There is no change in the entropy of the surroundings: no energy is shipped into or out of the container as heat when the molecules mingle because there are no interactions between them and they are blind to each other’s presence. The reason for spontaneous mixing must therefore lie in the increasing entropy of the gases in the container itself. That is perfectly plausible, because the system is less disordered before mixing takes place than after it, when molecules of the two gases are mingled.
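For two perfect gases mixed in mole fractions x_A and x_B, that entropy increase has a standard closed form, ΔS = −nR(x_A ln x_A + x_B ln x_B), which is positive for every composition; a short illustrative sketch:

```python
import math

R = 8.314  # gas constant, J K^-1 mol^-1

def mixing_entropy(n_total: float, x_a: float) -> float:
    """Entropy of mixing of two perfect gases,
    Delta S = -n R (x_A ln x_A + x_B ln x_B)."""
    x_b = 1.0 - x_a
    return -n_total * R * (x_a * math.log(x_a) + x_b * math.log(x_b))

# The entropy of mixing is positive for every composition, which is why
# perfect gases mix spontaneously in all proportions:
assert mixing_entropy(1.0, 0.5) > 0
# For equal amounts it is R ln 2 per mole, about 5.76 J/K:
assert abs(mixing_entropy(1.0, 0.5) - R * math.log(2)) < 1e-9
```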

Physical chemists import the same reasoning to the class of matter they call an ideal solution, the starting point for all discussions of the thermodynamics of mixing of liquids. An ideal solution is very much like a mixture of perfect gases in the sense that it is supposed that mixing occurs without change of energy. Thus, in liquid A the molecules interact with one another and have a certain energy by virtue of these interactions; likewise in liquid B all the molecules interact with one another and also have a certain energy. When the ideal solution forms, A and B molecules are surrounded by one another—around any A there will be A and B molecules in the proportions in which the mixture was prepared, and likewise around any B there will be A and B molecules in those proportions—and have a certain energy. In an ideal solution that energy is just the same as in the pure liquids when each molecule is surrounded by its own kind. This approximation, for it is an approximation, implies that when the mixing occurs the molecules mingle together without any change of energy and therefore do not affect the entropy of the surroundings. As in perfect gases, the spontaneity of the mixing is due solely to the increase in entropy of the system as the molecules mingle and the disorder of the system increases.

There aren’t many liquid mixtures that behave in this ideal way: the molecules must be very similar for it to be plausible that their interactions are unaffected by whether their environment is mixed or unmixed. Benzene and methylbenzene (toluene) are often cited as plausible examples, but even they are not perfectly ideal. The ideal solution is another example of an idealization in physical chemistry that, although a sensible starting point, requires elaboration.

The elaboration that has been explored fruitfully in physical chemistry is the regular solution. In a regular solution it is supposed that the two types of molecule are distributed through the mixture perfectly randomly, just as in an ideal solution. However, unlike in an ideal solution, the energy of an A molecule does depend on the proportion of A and B molecules that surround it, and likewise for B. This model captures quite a lot of the properties of real solutions. For instance, if the strengths of the A–A and B–B interactions outweigh the A–B interactions, then the liquids will not mix fully and the system consists of two phases, one of A dissolved in an excess of B and the other of B dissolved in an excess of A.
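The regular-solution model is commonly written by adding a single interaction parameter (conventionally β) to the ideal entropy of mixing, giving a molar Gibbs energy of mixing ΔG = RT(x_A ln x_A + x_B ln x_B + β x_A x_B); a sketch of that standard form:

```python
import math

R = 8.314  # gas constant, J K^-1 mol^-1

def gibbs_of_mixing(x_a: float, temperature: float, beta: float) -> float:
    """Molar Gibbs energy of mixing in the regular-solution model,
    Delta G = R T (x_A ln x_A + x_B ln x_B + beta * x_A * x_B).
    beta measures how A-B contacts compare with A-A and B-B contacts."""
    x_b = 1.0 - x_a
    entropy_part = x_a * math.log(x_a) + x_b * math.log(x_b)
    return R * temperature * (entropy_part + beta * x_a * x_b)

# beta = 0 recovers the ideal solution: mixing always lowers G.
assert gibbs_of_mixing(0.5, 298.0, 0.0) < 0
# When beta exceeds 2, unfavourable A-B contacts can make mixing in
# equal proportions raise G, and the liquids separate into two phases:
assert gibbs_of_mixing(0.5, 298.0, 3.0) > 0
```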

I have spoken of gases dissolved in (mixed with) one another and of liquids dissolved in (mixed with) one another. What about gases dissolved in liquids? This is where Henry made his contribution more than two centuries ago and established a principle that is still used today. Henry found that the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. That might, to us, seem a rather trivial conclusion from a lifetime’s work, but it is worth reflecting on how a physical chemist might view the conclusion in terms of the molecular processes occurring at the interface between the liquid and the gas.

As we saw in Chapter 4, according to the kinetic model, a gas is a maelstrom of molecules in ceaseless motion, colliding with each other billions of times a second. The molecules of a gas above the surface of a liquid are ceaselessly pounding on the surface and, if splashing is appropriate on this scale, splashing down into the liquid. At the same time, molecules of gas already embedded in the liquid are rising to the surface by the ceaseless molecular jiggling in the liquid, and once there can fly off and join their colleagues above the surface (Figure 19). At equilibrium, the rate at which the gas molecules escape from the liquid matches the rate at which molecules splash down into it. At this equilibrium we can report a certain solubility of the gas (such as how many dissolved molecules are present in each volume of liquid). If the gas pressure is increased, the rain of molecules from the gas increases in proportion, but the rate of escape of dissolved molecules remains unchanged. When equilibrium is renewed, more molecules will be found in the liquid, just as Henry’s law describes.
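Henry’s law itself is a one-line proportionality; in the sketch below the constant is a rounded literature value for oxygen in water at 25°C, quoted for illustration only:

```python
# Henry's law: dissolved concentration proportional to the gas pressure.
# The constant is a rounded literature value for O2 in water at 25 deg C,
# used here purely for illustration.
K_H_O2 = 1.3e-3  # mol L^-1 bar^-1

def dissolved_concentration(k_h: float, p_gas: float) -> float:
    """Concentration of dissolved gas (mol/L) at partial pressure p_gas (bar)."""
    return k_h * p_gas

# Oxygen's partial pressure in air at sea level is about 0.21 bar:
c_air = dissolved_concentration(K_H_O2, 0.21)
# Doubling the pressure doubles the dissolved amount, as Henry found:
assert abs(dissolved_concentration(K_H_O2, 0.42) - 2 * c_air) < 1e-12
```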


19. A simple explanation of Henry’s law is that at equilibrium the rate at which gas molecules strike the surface of a liquid and penetrate into its bulk matches the rate at which those already present leave. The rate at which gas molecules strike the surface of a liquid and penetrate into its bulk, but not the rate at which those already present leave, is increased by raising the pressure of the gas

When the temperature of the liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. When the eyes of physical chemists waking up in the morning see bubbles in the water on their bedside tables as it has warmed overnight, they are, or more realistically should be, reminded of William Henry and his discussion of the dissolving of gases.

Henry’s law contributes to our understanding of the network of processes that underlie life. The ability of aquatic life to thrive depends on the presence of dissolved oxygen: the pressure of oxygen in air is sufficient to maintain its concentration at a viable level. However, if the water is warmed by industrial or natural causes, the oxygen level might fall to fatal values. Henry’s law also plays a role in recreation, as in scuba diving, and its commercial counterpart, deep-sea diving, where the dissolution of oxygen and nitrogen in blood, and their possible formation of dangerous bubbles giving rise to ‘the bends’, can be expressed in its terms.

Transitions of solutions

Physical chemists have worked out how the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. There is also another hugely important property of solutions that pure liquids do not exhibit, namely osmosis. Osmosis (the word comes from the Greek word for ‘push’) is the tendency of a solvent to push its way into a solution through a membrane. It is responsible for a variety of biological phenomena, such as the rise of sap in trees and the maintenance of the shape of our red blood cells.

When physical chemists consider the thermodynamics of solutions they have in mind the roles of energy and entropy. I have explained that for vaporization to become spontaneous, the temperature of the surroundings must be increased to reduce the change in their entropy when energy flows out of them as heat and into the liquid. When a substance is dissolved in the liquid, the entropy is greater than when the solvent is pure because it is no longer possible to be sure whether a blind selection of a molecule will pick a solute or a solvent molecule: there is more disorder and therefore higher entropy. Because the entropy of the liquid is already higher than before, and the vapour (of the solvent alone, because the solute does not vaporize) has the same entropy, the increase in the entropy of the system when the liquid vaporizes is not as great as for the pure liquid. As usual, the entropy of the surroundings falls because heat flows out of them into the liquid, and at low temperatures that fall is so big that it dominates the increase in entropy of the system. To reduce the entropy change of the surroundings to match the now smaller entropy increase of the system, the temperature must be raised, but it must be raised further than before. That is, the boiling point is raised by the presence of a dissolved substance (Figure 20).

Similar reasoning applies to the freezing point, which is lowered by the presence of a solute. It is often said that the addition of antifreeze to a car engine is an example of this lowering of freezing point. Although there are similarities, the function of the antifreeze at the high concentrations used is quite different: the antifreeze molecules simply mingle with the water molecules and prevent their forming bonds and freezing to a solid.

When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it, as in Figure 19). This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid for reasons related to entropy, more or less as I explained above. The extent of lowering is summarized by yet another limiting law of physical chemistry, this one having been formulated by François-Marie Raoult (1830–1901), who spent much of the later decades of his life measuring vapour pressures. In essence, ‘Raoult’s law’ states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. Mixtures of liquids that obey this law strictly are the ‘ideal solutions’ that I have already mentioned. Actual solutions, ‘real solutions’, obey it only in a limiting sense when the concentration of the solute or of the second liquid component goes to zero. As for most limiting laws, Raoult’s law is used as the starting point for the discussion of the thermodynamic properties of mixtures and solutions and is the foundation for more elaborate treatments, such as treating the solution as regular rather than ideal.
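Raoult’s law is equally simple to state in code; the vapour pressure of pure water used below is a rounded literature value, not a figure from this chapter:

```python
def raoult_vapour_pressure(x_solvent: float, p_pure: float) -> float:
    """Raoult's law: the solvent's partial vapour pressure is its mole
    fraction times the vapour pressure of the pure liquid."""
    return x_solvent * p_pure

# Pure water at 25 deg C has a vapour pressure of about 3.17 kPa
# (rounded literature value). If 1 molecule in 20 is a non-volatile
# solute, the solvent mole fraction is 0.95:
p_solution = raoult_vapour_pressure(0.95, 3.17)
assert p_solution < 3.17  # the solute lowers the vapour pressure
```

Real solutions obey this only in the dilute limit, which is exactly the sense in which it is a limiting law.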


20. The effect of a solute on the boiling point of a solvent. The temperature must be raised for the entropy decrease of the surroundings to match the now smaller increase in entropy of the solvent when it vaporizes from the solution

Osmosis, the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a permeable membrane (technically, a ‘semipermeable membrane’, one that allows the transit of solvent molecules but not solute molecules or ions), is another manifestation of the effects of entropy caused by the presence of a solute. The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’.

If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’. On the other hand, if a greater pressure than the osmotic pressure is applied to the solution in contact through a semipermeable membrane with the pure solvent, then the solvent will have a tendency to flow in the opposite direction, flowing from solution to pure solvent. This effect is the ‘reverse osmosis’ that is used to purify sea water and render it potable.

If the solution is ideal, then there is a very simple relation between the osmotic pressure and the concentration of solute, which is summarized by an equation proposed by Jacobus van ’t Hoff (1852–1911), the winner of the first Nobel Prize for chemistry (in 1901). His equation, which sets the osmotic pressure proportional to the concentration of solute and the temperature, is yet another of physical chemistry’s limiting laws, for it is strictly valid only in the limit of zero concentration of solute. Nevertheless, it is a very useful starting point for the discussion of actual solutions.

The principal application of van ’t Hoff’s equation and its elaborations is to the determination of the molecular weights of polymers. The problem with such molecules is that they are so huge that even a substantial mass of them does not amount to a concentrated solution (in terms of numbers present). Osmotic pressure, however, is very sensitive to concentration, and their molecular weights can be inferred from its value. One remaining problem, however, is that because they are huge, they form far from ideal solutions, so elaborations of van ’t Hoff’s equation are essential for the analysis of the results.
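The van ’t Hoff limiting law, Π = cRT, and the way it is inverted to estimate a polymer’s molar mass, can be sketched as follows (the numbers are invented for illustration; a real analysis would need the non-ideal elaborations just mentioned):

```python
R = 8.314  # gas constant, J K^-1 mol^-1

def osmotic_pressure(concentration: float, temperature: float) -> float:
    """van 't Hoff limiting law: Pi = c R T (c in mol/m^3, Pi in Pa)."""
    return concentration * R * temperature

def molar_mass_from_osmosis(mass_conc: float, pressure: float,
                            temperature: float) -> float:
    """Invert Pi = (m/M) R T to estimate a molar mass M (kg/mol) from a
    mass concentration (kg/m^3) and a measured osmotic pressure (Pa).
    Ideal-solution sketch only; real polymer solutions are far from
    ideal and need the elaborations of the law."""
    return mass_conc * R * temperature / pressure

# Invented illustrative numbers: 1.0 g/L (= 1.0 kg/m^3) of polymer
# exerting 250 Pa at 298 K implies a molar mass of roughly 10 kg/mol.
M_est = molar_mass_from_osmosis(1.0, 250.0, 298.0)
```

A pressure of 250 Pa corresponds to a column of solution only a few centimetres high, which is why osmometry is sensitive enough for such dilute solutions.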

Other transitions

More subtle than either boiling or freezing are the transitions between varieties of solid phases, all of which are open to examination by physical chemists, although in many cases they lie more naturally in physics’ grasp. Transitions between the various phases of solids include those between different kinds of magnetic behaviour and from metallic to superconducting states. The latter have become highly important since the discovery of the ceramic materials with transition temperatures not very far below room temperature. Solid-to-solid phase transitions are also important in inorganic chemistry, geochemistry, metallurgy, and materials science in general and all these fields collaborate with physical chemists to elucidate the processes involved and to seek ways of expressing the transitions quantitatively.

The current challenge

There are two very subtle kinds of phase transition that are now becoming of great interest to physical chemists. One is the conversion of one kind of matter into another by the process of ‘self assembly’. In this process, the intrinsic structure of individual molecules encourages them to assemble into complex structures without external intervention. Even more subtle is the modification of structures locally, which underlies the storage of information. An extreme example is the storage of vast amounts of information, such as the works of Shakespeare, by the synthesis of strands of DNA with an appropriate sequence of bases.