
Maxwell: Thermodynamics meets the demon

Introduction

James Clerk Maxwell was born 175 years ago in a remote area of Scotland. He died just 48 years later, but his legacy remains today in Physics classrooms around the world. When someone at Cambridge remarked to Einstein that he “stood on Newton’s shoulders,” he replied “No, I stand on Maxwell’s shoulders”.

We owe Maxwell a great deal, but in order to understand a great puzzle he created we must first understand the nature and history of the laws which it seems to challenge.

A brief history of thermodynamics

Thermodynamics has its roots in the distant past, stretching back to the Ancient Greeks’ debates on the nature of the Universe. In around 485 BC Parmenides, a philosopher, wrote that “a void” was nothingness and that nothingness was defined by its non-existence. Taken together this meant that the void could not exist.

Around 25 years later Leucippus posited that everything in the Universe was made up of units called ‘atoms’ which existed in the void. Plato and later Aristotle rebutted these ideas, Aristotle proclaiming his horror vacui, “nature abhors a vacuum”.

Such was the reverence in which classical sources were held that Aristotle was not proved wrong until 1643, when Evangelista Torricelli constructed an experiment that created an artificial vacuum. It was known that suction pumps were unable to raise water more than about 10 m. Some, including Galileo, surmised from this merely that there was a limit to the abhorrence with which nature regarded a vacuum.

But Torricelli realised that it was not the horror vacui that raised water in these pumps but the surrounding air exerting pressure on it. He realised that the effect would be exaggerated by using mercury, nearly 14 times as dense as water. He filled a metre-long tube, sealed at one end, with mercury and inverted it into a bowl of mercury so that the sealed end was at the top. The level of the mercury in the tube fell to around three-quarters of its original height; Torricelli had created a vacuum in the space above. He had also invented the barometer, for as atmospheric pressure rose and fell it was reflected in the level of mercury in the tube.

With ideas of pressure understood, Otto von Guericke was able to make the first vacuum pump in 1650. He used it to evacuate the space inside two hollow metal hemispheres placed together. Once evacuated, these Magdeburg hemispheres could not be prised apart, even by 8 horses pulling on each side.

When news of this design reached Robert Boyle in England he constructed an air pump of his own and noted that vessels would warm up when the air in them was pressurised. This line of research led to the formulation of the ideal gas law connecting pressure, volume and temperature.

Denis Papin, a student of Boyle’s, used this idea to create a ‘bone digester’ which pressurised heated steam to allow very high temperatures to be reached inside. This allowed fat to be extracted from bones placed inside. It was the first pressure cooker. Initially there were problems with the digesters exploding as steam reached pressures too high for the metal to withstand. To avoid this, a pressure release valve was added which would open when the pressure reached a certain point. This level was set by hanging weights on a lever which came out from the valve. Papin saw that the weight was moved rhythmically up and down as the pressure rose up and was released. He realised that this power could be harnessed and was inspired to create a design for a piston and cylinder steam engine.

But it was Thomas Savery who ended up building the first engine, designed to raise water from mines. Other engines soon followed, including the Newcomen engine, based on Papin’s design, and the Watt engine. But all of these engines were very inefficient: some 98% of the energy input was wasted rather than converted into the work for which the engine was being used.

Thermodynamics Begins

Sadi Carnot wanted to be able to study these problems in a scientific and mathematical way. In 1824 he published his paper “Réflexions sur la puissance motrice du feu”, the “Reflections on the Motive Power of Fire”. This paper marked the start of thermodynamics as a science. Carnot realised that all of these previous engines were ‘heat engines’. They worked by exploiting the difference in temperature between the thermal energy created by burning fuel and the cool air outside. The fuel source and the cold air outside are both heat reservoirs: sources which are able to maintain their temperature.

Carnot imagined an engine using some particular cycle for doing work using a difference in temperature. He knew that this engine would have a certain efficiency for the difference in temperature between the two reservoirs. Carnot’s hypothetical engine (A) was reversible, meaning that it could be run forwards, taking heat from the hot reservoir and ejecting it into the cold reservoir and using this movement to do work (say spinning a wheel). But it could also be run backwards: if the same amount of work was done in reverse (spinning the wheel backwards) the engine would return to its initial conditions, the ejected heat would be put back into the hot reservoir.

Carnot defined the efficiency of this reversible engine, for a particular temperature difference, as X. He then pictured a non-reversible engine (B) with a more efficient cycle, Y>X. However he realised that this more efficient engine, were it to exist, would allow perpetual motion.

If you connected both of the engines to the same heat reservoirs you could use the more efficient one to do an amount of work Y. You could then use some of this work on the reversible engine, running it backwards, putting heat X back into the hot reservoir and restoring the initial conditions. But since Y is greater than X, some energy will be left over after restoring the initial conditions. This is ‘free’ energy; we could repeat the process over and over and extract an infinite amount of energy from a tiny piece of coal. This would allow for perpetual motion, but Carnot said that ideas of perpetual motion were inconsistent with our experience of the Universe and so concluded that any engine more efficient than a reversible engine was physically impossible.
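The bookkeeping of Carnot’s composite-engine argument can be sketched numerically. The efficiencies and heat quantities below are illustrative assumptions, not figures from Carnot:

```python
# Carnot's composite-engine argument, with illustrative numbers.
# A reversible engine A has efficiency X; a rival engine B claims a
# higher efficiency Y > X between the same two reservoirs.
Q_hot = 100.0   # heat drawn from the hot reservoir per cycle (arbitrary units)
X = 0.40        # efficiency of the reversible engine A (assumed)
Y = 0.50        # claimed efficiency of engine B (assumed, Y > X)

work_from_B = Y * Q_hot          # work produced by running B forwards
work_to_reverse_A = X * Q_hot    # work needed to run A backwards,
                                 # pumping Q_hot back into the hot reservoir
surplus = work_from_B - work_to_reverse_A

# Both reservoirs are restored each cycle, yet surplus > 0:
# 'free' energy every cycle, i.e. perpetual motion.
```

Since the surplus is positive whenever Y > X, the only escape is Carnot’s conclusion: no engine can beat a reversible one.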

This was useful because it made the reversible engine the ideal engine. As we decrease heat losses we can get closer and closer to its efficiency, but we can never make an engine more efficient than a reversible one. The paper also introduced the idea of ‘work’, which Carnot defined as “weight lifted through a height”. This is essentially the same as our current definition, force multiplied by distance, since a particular force is needed to counteract gravity.

Although Carnot had discovered great things he had done it all using an outmoded theoretical model of temperature called the Caloric Theory. ‘The caloric’ was a self-repelling substance which was found in hot bodies. It flowed from hotter bodies to cooler ones, travelling in and out through pores in the material. The theory was quite successful; it could explain why two bodies of different temperatures left in contact with each other reached the same temperature. The caloric spread out as much as it could due to its repulsion, so the high concentration in the hot body moved out into the cold body where there was less caloric.

The caloric theory, however, slowed progress, because under it the fact that work could be converted into heat, and vice versa, was not fully realised. This idea was proposed, all but simultaneously, by three scientists working independently during the 1840s. Julius von Mayer worked on the problem in Germany; Ludwig Colding did the same in Denmark. But it was James Prescott Joule who would eventually give his name to the unit of work. In 1843 he published “The Mechanical Equivalent of Heat”.

Joule used a number of methods to heat water, from spinning a paddle wheel powered by a falling weight to using an early electrical cell. He found that the work needed to achieve a particular temperature rise was very similar for all of these disparate methods and calculated a value for it, which we now know as the specific heat of water. All this research pointed to the idea that energy was neither created nor destroyed but converted into various forms. The theory did not initially gain acceptance, but in 1850 Rudolf Clausius restated it as the first law of thermodynamics: “There is a state function E, called ‘energy’, whose differential equals the work exchanged with the surroundings during an adiabatic process.” This is wordy, but it means that systems have a quantity called ‘energy’, and that it changes at a rate equal to the amount of work being done on the system, or being done by the system on something outside it. This only applies in an adiabatic process: one in which no heat is transferred to or from the system.

The major contribution that Clausius would make to thermodynamics, however, was the Second Law, which he stated as: “Heat cannot of itself pass from a colder to a hotter body”. This law is simple, and perhaps seems obvious. But it creates an entirely new concept, the idea of entropy.

Entropy is a measure of a system’s inability to do work, and is sometimes imagined as disorder. The second law states that, if unhindered, localised energy tends to spread out. A cup of tea will dissipate its heat to the atmosphere and slowly cool. Air will rush out of a tyre when it is punctured, but not while the tyre rubber prevents it from doing so.

The formula for the increase in entropy of a thermodynamic system when energy q enters it at absolute temperature T is:

ΔS = q / T

or, in differential form for an infinitesimal transfer of heat:

dS = δq / T

From this we can see why entropy must increase. Let us take the simple example of a block of ice, weighing 2 kg, sitting in a bowl in a room at 298 K (room temperature). We’ll define two thermodynamic systems: the block of ice is one system and the room is another. We know that in these circumstances the ice will melt, but that doing so will require an energy input. We can calculate how much energy by multiplying the mass of ice by water’s enthalpy of fusion, 334 kJ kg⁻¹, giving q = 2 × 334 = 668 kJ.

We know that this is the energy q transferred to the block of ice from the air in the room. So we can calculate the entropy increase of the ice-system as it melts by dividing this energy gain by the ice’s temperature (273 K): ΔS(ice) = 668,000 J / 273 K ≈ +2447 J K⁻¹.

That’s it. We have calculated an entropy change. But now let’s see how the air is affected. We can work out an entropy change for the room’s thermodynamic system too, provided we remember that energy has been transferred away from it and put in a minus sign. We just divide the energy transferred by the temperature of the air (298 K): ΔS(room) = −668,000 J / 298 K ≈ −2242 J K⁻¹.

Now we can work out the total entropy change when the ice melts: ΔS(total) ≈ 2447 − 2242 = +205 J K⁻¹.
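The arithmetic of this example can be checked in a few lines, using the figures given above:

```python
# Entropy bookkeeping for 2 kg of ice melting in a room at 298 K.
mass = 2.0                    # kg of ice
L_fusion = 334e3              # enthalpy of fusion of water, J per kg
q = mass * L_fusion           # energy transferred from room to ice: 668 kJ

dS_ice = q / 273.0            # entropy gained by the ice at 273 K
dS_room = -q / 298.0          # entropy lost by the room at 298 K
dS_total = dS_ice + dS_room   # net entropy change, J per K (comes out positive)
```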

The important result here is that we have a positive number. Though one thermodynamic system has lost energy and another has gained it, the net result has been an increase in entropy. ‘Ah!’ you may say, ‘but that’s just because 298 K is higher than 273 K.’ And of course you’re quite right, but what would happen if we swapped the figures around and imagined the ice (somewhat improbably) at a temperature of 298 K and the air at 273 K? You know what would happen: heat only flows from hotter bodies to cooler bodies, so the energy transfer would be in the opposite direction and the ice would warm the air. We could do the calculation in the same way, but we’d have to reverse the sign of q, so the result would again be a positive total. An entropy increase occurs in any spontaneous process, and so nowadays the most common definition of the 2nd Law is: “The entropy of an isolated system not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium”.

Enter Maxwell

It is at this point that James Clerk Maxwell appears on the scene. But don’t look on him as an enemy of the laws of thermodynamics just yet.

In 1865, following his great discoveries in electromagnetism, Maxwell turned his attention to gases. In 1738 Daniel Bernoulli had suggested that the pressure exerted by a gas might be caused by the gas molecules continually crashing into each other, and into the walls of the container, at very high speed. But since the idea of conservation of energy had not yet been accepted, people struggled to see how gas molecules could collide without slowing down at every collision. By Maxwell’s time the idea was more accepted and John Herapath had even suggested that the temperature of a gas might be linked to the energy of the molecules inside it.

Maxwell’s main addition to the theory was the discovery of what we now call the Maxwell-Boltzmann distribution:

f(v) = 4π (m / 2πkT)^(3/2) v² e^(−mv²/2kT)

where k is the Boltzmann constant, T is the absolute temperature (in kelvin) and m is the molecular mass of the gas. This formula gives the probability of a molecule in a gas having a certain speed v.

Three graphs of this probability distribution (at different temperatures) are shown at the right. The probability is highest at a particular point, the most likely speed, which is associated with the temperature. But on either side of this point there are many molecules with lower and higher speeds: in a gas some molecules will have speeds in the thousands of m s⁻¹ and others speeds just above 0. The area under each of the curves is 1, because this is a distribution of probabilities and every possibility must be accounted for; this is why the curves for higher temperatures, being more spread out, are lower.
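The claim that the area under each curve is 1 can be checked numerically. The sketch below evaluates the distribution for an assumed example gas, nitrogen at 300 K (the molecular mass is not from the text):

```python
import math

def maxwell_boltzmann(v, T, m, k=1.380649e-23):
    """Probability density for a molecule of mass m to have speed v at temperature T."""
    a = m / (2 * k * T)
    return 4 * math.pi * (a / math.pi) ** 1.5 * v ** 2 * math.exp(-a * v ** 2)

# Approximate the area under the curve for nitrogen (m ~ 4.65e-26 kg) at 300 K.
m_n2 = 4.65e-26
dv = 1.0  # 1 m/s steps; speeds above 4000 m/s contribute negligibly here
area = sum(maxwell_boltzmann(v, 300.0, m_n2) * dv for v in range(0, 4000))
```

The area comes out as 1 to within the accuracy of the numerical sum, as a probability distribution requires.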

Very significantly, this distribution could also predict the second law of thermodynamics. Suppose you have a hot substance with many fast moving molecules and you bring it into contact with a cold substance containing slow moving molecules. The fast molecules will hit the slow molecules, accelerating them, and in doing so will themselves slow down. This is the same as in the second law: heat flows from a hot body to a colder body.

What was so revolutionary about this? After all, the two theories agreed with each other. The difference was that the Maxwell-Boltzmann distribution was, unashamedly, just a statistical distribution. It was what was most likely to happen. But just occasionally every single molecule in the hot substance might be hit from the side by a molecule in the cold substance. This would mean heat flowing from cold to hot.

When Maxwell published his “Theory of Heat” in 1871, which explained ideas of heat and molecular kinetics in a manner simple enough for a student to understand, he introduced at the end the idea of a demon to explain the nature of this paradox.

… if we conceive of a being whose faculties are so sharpened that he can follow every molecule in its course, such a being, whose attributes are as essentially finite as our own, would be able to do what is impossible to us. For we have seen that molecules in a vessel full of air at uniform temperature are moving with velocities by no means uniform, though the mean velocity of any great number of them, arbitrarily selected, is almost exactly uniform. Now let us suppose that such a vessel is divided into two portions, A and B, by a division in which there is a small hole, and that a being, who can see the individual molecules, opens and closes this hole, so as to allow only the swifter molecules to pass from A to B, and only the slower molecules to pass from B to A. He will thus, without expenditure of work, raise the temperature of B and lower that of A, in contradiction to the second law of thermodynamics.

[Figure: the demon opens a trapdoor in the partition between chambers A and B to sort fast and slow molecules]

The demon watches molecules bouncing around in a box and opens a trapdoor to allow fast molecules from A to B and slow molecules from B to A. Eventually all ‘cold’ particles will have been sorted into A and all of the ‘hot’ particles into B. All the demon has done is to watch the particles and open a trapdoor. If this trapdoor is frictionless we can perhaps imagine him doing no work at all.

There are a number of serious problems here. The second law of thermodynamics has been violated. All the way through the sorting process heat has been passing from a colder gas to a hotter one.

This has major implications. Once we have sorted the molecules like this we can put a turbine by the trapdoor and open it. Molecules in B will move to A, spinning the turbine and doing work. By cycling between the use of a demon and a turbine we could extract all of the thermal energy of the molecules in a single reservoir until they almost come to a standstill, bringing the temperature of the gas down to close to absolute zero. We would have done all this not merely without using energy, but while extracting it.

You may ask how the door can open without work being performed on it. In practice it couldn’t quite. But we model the movement of the door as a quasistatic process, meaning that it happens infinitely slowly. For the door to open infinitely slowly we only need to accelerate it by an infinitesimally small amount, so we need to use an infinitesimally small amount of energy. A quasistatic process is thermodynamically reversible, since the same slow movement can be done in reverse. In practice we wouldn’t open it infinitely slowly and we would use energy, but we could use as little energy as we wanted by slowing down the whole process, and so could get as close as we needed to using no energy at all. If someone wants to prove Maxwell’s demon impossible they must prove that it is impossible even in the best of conditions, so it is assumed that the quasistatic movement is possible. As Carlton Caves, one of those who tried to prove the possibility of Maxwell’s demon, put it: ‘A crack in the second law is a crack, and it doesn’t make any sense to talk about how small the crack is.’

Three years after Theory of Heat, Maxwell explained an insight the demon had given him into the second law: ‘Moral. The 2nd law of thermodynamics has the same degree of truth as the statement that if you throw a tumblerful of water into the sea, you cannot get the same tumblerful of water out again.’ The second law had always been an empirical law, proven by experiment. There was no direct mechanical explanation for it, as there had been with the self-repelling caloric. Instead it was a statistical law, bound to be true the vast majority of the time, provided you were looking at masses of molecules rather than controlling individual ones. Once one started controlling individual molecules, Maxwell could see no reason why the second law could not be systematically violated, a worrying prospect for science.

Having unleashed this most challenging of adversaries upon thermodynamics, Maxwell left the idea unresolved. It was William Thomson, who became Lord Kelvin, and after whom the SI unit of absolute temperature is named, who first called it “Maxwell’s Demon”. As time has progressed many physicists have attempted to find reasons that such a demon cannot function as described by Maxwell, and thus save the second law.

Enforcing the Law

Marian von Smoluchowski proposed a similar system, Smoluchowski’s valve, in 1912. This was a simpler idea in which pressure would be created in B using a central valve which would only allow molecules moving from A to B to pass through. He presented this valve as a miniature door with a weak spring behind it, which would only open in one direction, because the other was blocked. A molecule would hit it, opening the door and passing through it, provided it was travelling the right direction. The spring would then close the door ready for the next molecule. But Smoluchowski performed calculations to show that if the door was light enough to be opened by a single molecule, after a few collisions it would simply bounce around randomly due to bombardment by molecules, and be unable to control the flow of gas molecules. He realised that it was important to take into account the temperature or entropy of whatever system was controlling the opening and closing of the hole.

In Maxwell’s day and for some time after, the idea of vitalism was prevalent. This suggested that living things had an ‘energy’ completely different from anything else, one which was not, and perhaps could never be, understood by science. People thought that there was a fundamental difference between living material and other matter, rather than seeing that the same laws of mechanics apply to both. As long as Maxwell’s Demon was seen as a living entity, whose thermodynamic processes could not be scientifically understood, its viability could not be proved or disproved. But when scientists started to evaluate the possibility of a machine acting as a Maxwellian demon the problem became more tractable. Von Smoluchowski’s mechanism was not ‘intelligent’; it was simply pushed open by gas molecules. To have a hope of solving the paradox of Maxwell’s Demon, scientists would need to describe it as an ‘intelligent automaton’, able to use information about the positions of molecules but still obeying the basic laws of physics.

Leo Szilard was probably the first to do this. He wrote:

It appears that the ignorance of biological phenomena need not prevent us from understanding [...] Intelligent beings – insofar as we are dealing with their intervention in a thermodynamic system – can be replaced by non-living devices whose “biological phenomena” one could follow and determine whether in fact a compensation of the entropy decrease takes place as a result of the intervention by such a device in the system.

Maxwell’s Demon works by converting information (on the positions and velocities of particles) into work. In 1929 Leo Szilard sought to understand this process by inventing a simple hypothetical engine as Carnot had. The mechanism was like Maxwell’s with a single container divided in two by a partition. Two pistons could be extended to the partition, one from each side. The crucial difference was that the box contained just a single gas molecule.

The principles of how it worked are shown in the diagram below:

[Diagram: the five steps of the Szilard engine cycle]

1. Initially the demon has no knowledge of the position of the molecule. The hole in the partition is closed. The molecule is either in the left or the right side.

2. The demon ‘discovers’ that the molecule is on one side, say the left. This is the only information it has. (The path and the position of the molecule are shown merely for illustration.)

3. The demon moves the piston on the right hand side to the centre. Since there is no pressure opposing this movement, theoretically no work is required.

4. The demon opens the hole. The pressure on the left hand side of the piston is higher than the pressure on the right hand side, so the piston is pushed right, and work can be extracted from it.

5. The piston reaches the right hand side. Since the hole is open the demon does not know which side of the partition the molecule will be on at any moment. The hole is closed and the cycle is repeated again from step one. (If the molecule is found on the right hand side a mirrored process occurs.)

Szilard realised that the ‘discovery’ process entailed making a measurement and storing and remembering the result. In step 2 the demon discovers the molecule is ‘left of divider’ which is why work is extracted using the right piston. However in step 4 the molecule is sometimes ‘right of divider’. Despite this we must continue to extract work from the right piston. It is important that the Demon does not forget its initial measurement. This idea of the necessity for memory will prove important later.

Szilard also thought that the demon could not just ‘discover’ the position of the molecule. It would need to measure it, and he decided that the second law was saved because the measurement created a certain amount of entropy which cancelled out the entropy decrease. He calculated this to be k ln 2, where k is the Boltzmann constant as before. The advantage of Szilard’s model is that it made it easier to see the ideas without becoming confused in attempts to calculate what would happen to the vast numbers of molecules in a macroscopic gas. Szilard’s assertion that the energy dissipated by measurement was equal to, or greater than, the amount of work a Maxwellian demon could do with the information measured stood for a long time.
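Szilard’s figure of k ln 2 exactly balances the books: the isothermal expansion of the one-molecule gas from half the box to the whole box yields work kT ln 2 (a standard result for the Szilard engine, not derived in the text above), and dividing that work by T gives back precisely k ln 2. A sketch with numbers, assuming a reservoir at 300 K:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed temperature of the reservoir, K

# Work from doubling the volume of a one-molecule gas isothermally: kT ln 2
work_per_cycle = k * T * math.log(2)      # ~2.87e-21 J

# Entropy the demon removes from the gas each cycle: W / T = k ln 2
entropy_decrease = work_per_cycle / T     # ~9.57e-24 J/K

# Szilard's claim: measurement creates at least k ln 2 of entropy,
# cancelling this decrease exactly.
```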

Léon Brillouin, a French physicist, took Szilard’s ideas further in his paper Maxwell’s Demon Cannot Operate, published in 1951. Brillouin was one of the pioneers of the then-recent ideas of information theory. He defined a new term, “negentropy”, meaning negative entropy, and proposed that negentropy and information could be converted into one another. A box containing a gas, as Maxwell describes it, would be filled with black-body radiation, so the demon could not use that radiation to locate molecules: any signal would be lost in the background. Brillouin assumed that some form of light would nonetheless be needed to see them, and so suggested the demon use a torch with a hot filament. As the torch emits a pulse, energy is dissipated. This means that entropy increases and negentropy decreases. But Brillouin said this negentropy decrease occurred because the negentropy was being converted into information. When the demon uses this information to control the piston and remove entropy from the system, the information is turned back into the original amount of negentropy and nothing has changed. The second law survives.

It seemed that the puzzle of Maxwell’s Demon had been solved. Most physicists agreed that, as Szilard had put it ‘we must conclude that [...] the measurement [...] must be accompanied by a production of entropy.’

In 1961 Rolf Landauer published a paper on what limits the power of computers in thermodynamic terms. He realised that certain computing operations had to result in an increase in entropy because of their lack of reversibility.

In any logical operation carried out in computers there are inputs which are processed to create outputs. To take the simplest of examples, a NOT gate has a truth table like this:

Input A    Output B
   1          0
   0          1

An AND gate has a truth table like this:

Inputs       Output
A    B         C
1    1         1
1    0         0
0    1         0
0    0         0

Landauer created the idea of logical reversibility. A device performs a process which is logically irreversible if ‘the output of the device does not uniquely define the inputs’. This is intuitive: if it is possible to work backwards from the result of an operation and find out what you started with, then it is possible to ‘reverse’ the operation. So the NOT operation is logically reversible because if you end up with a 0, you know you started with a 1, and vice versa. On the other hand, when the AND gate gives a 0 as an output you could have inputted (1,0) or (0,1) or even (0,0), and so its operation is not reversible.
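Landauer’s definition translates directly into code: a gate is logically reversible exactly when distinct inputs always produce distinct outputs. A small illustrative checker (the helper name is mine, not Landauer’s):

```python
from itertools import product

def is_logically_reversible(gate, n_inputs):
    """True if the gate's output uniquely determines its inputs."""
    outputs = [gate(*bits) for bits in product((0, 1), repeat=n_inputs)]
    return len(set(outputs)) == len(outputs)

def NOT(a):
    return 1 - a        # 1 -> 0, 0 -> 1: invertible

def AND(a, b):
    return a & b        # (1,0), (0,1) and (0,0) all give 0: not invertible
```

Running the checker confirms the argument in the text: NOT passes, AND fails.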

Landauer goes on to say that a logically irreversible process must be a thermodynamically irreversible process. And in a thermodynamically irreversible process, entropy increases (as Carnot’s efficiency argument showed). Landauer went on to show that erasing a bit of memory increases entropy by k ln 2, where k is again the Boltzmann constant.

Charles H. Bennett applied these ideas to an automated Maxwellian demon. Computers store the information they measure in memory, and a hypothetical Maxwellian demon which worked using electronics would do the same. However big the demon’s memory space, it would eventually be filled with information about where the molecule was in all of the previous cycles. Many people run into this problem of running out of storage space on their own computers, but fortunately there is a simple solution: deleting some of the unnecessary data. A Maxwellian demon could do the same thing, deleting the now useless data on where molecules were thousands of cycles earlier. But there’s a catch. Memory erasure is by definition an irreversible process. Once you’ve deleted the data on a piece of memory, resetting all the bits to 0, it is impossible to reconstruct the original data from this string of 0s. This irreversible process increases entropy by k ln 2 per bit. Bennett realised that one bit of storage was needed for each Szilard cycle to store the states 0 (left) or 1 (right). The entropy increase when these bits are erased offsets the entropy decrease effected by the demon. Thus when we look at the system as a whole entropy has not decreased, and so the second law is saved.

Although Bennett had reached the same conclusion as Szilard’s 1929 paper, that a Maxwellian demon could not violate the second law because entropy would be created, he had reached it for different reasons, and in science the reasons are just as important as the results. Szilard had assumed that one could not acquire information without disturbing the system and increasing its entropy. He had shown that a few specific examples, such as the demon using a lamp with a hot filament to watch the molecules, would increase the entropy of the system more than the demon would have been able to reduce it. It was assumed to be a universal law that acquiring one bit of information required kT ln 2 of energy to be dissipated, partly because of a similar phenomenon in quantum mechanics whereby measuring the position of a photon disturbs the system you are trying to measure. However in one paper, Bennett provided a counterexample: a type of measurement in which the initial system was left unchanged, using the idea of a ‘billiard ball computer’.

Look at the diagram above. The black rectangles are ‘mirrors’ against which billiard balls will bounce (and be reflected). Imagine we place the white ball at F and accelerate it in direction X. In this idealised system we neglect air resistance and friction so that the ball will, if uninterrupted, forever bounce from F to A to C to E and back to F. We can store a bit of information in the device by calling the state before we introduce the ball ‘state 0’ and the state in which the ball is bouncing around continually ‘state 1’. Suppose we wish to then measure the state of the device. If we introduce a black ball at Y at the right time, firing it towards N, it will be affected by the presence of the white ball. If the white ball is not present (i.e. the device is in state 0) the black ball will continue uninterrupted to N where it could if wished trigger some action. If, on the other hand, the white ball is present (i.e. the device is in state 1) the black ball will be bounced along the path H to I to J to K to L and then will shoot out in direction M (where it could trigger a different action). The white ball too will be diverted, but only temporarily, it will bounce from the collision at B to G and then to D where (after the second collision) it will resume its usual course, bouncing to E and then to F, A, C and back to E again.

To summarise, the state of the system has been measured without affecting it at all: it is in the same thermodynamic state it was in beforehand, but we have now acquired information about its state which we can use. An exception to Szilard’s rule is possible. Though we cannot directly use this billiard-ball computing model in a demon design, similar mechanisms have been proposed to measure which side of the partition Szilard’s molecule is on without disturbing the environment. Having shown that measurement did not necessarily increase entropy, Bennett knew that it was the problem of memory erasure that made Maxwell’s Demon impossible. The correct solution to the paradox had been found, more than a century after its creation.

The Demon Returns

In 1990 the demon briefly reared its ugly head again. Carlton Caves had read Bennett’s refutation but believed that by employing a clever strategy a demon could evade the restraints placed upon it. He used an “Unruh demon”: like a Maxwellian demon, but one which watches not a single box containing a molecule, but a large number of boxes, say 10, with a molecule in each of them.

This is basically 10 Szilard engines in parallel and if each operated independently Landauer’s loss of energy would obviously still apply. 10 bits would be needed to store the information about the molecules in all the boxes and these would eventually need to be erased, cancelling out any entropy decrease effected by the demon.

However, Caves thought there was a loophole in the argument. He proposed that the demon monitor the locations of the molecules and wait for the very rare occasion when, by chance, they were all on a particular side of their partitions, say the left-hand side. It would store this information in a single bit: “1” would represent all molecules being on the left, “0” any other state. When the rare coincidence occurred, pistons would be placed on the right side of all of the engines and work would be extracted from them. Because there were 10 engines, this would be ten times the work that could be extracted from one. Yet since the information was stored in a single bit, Caves suggested, only 1 bit of memory would need erasing, so very little energy would need to be used up.
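To put rough numbers on Caves’ proposal, here is a back-of-envelope sketch (assuming a temperature of 300 K; the specific values are illustrative, not from Caves’ paper):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K
n = 10              # number of Szilard engines

# Work extractable from one Szilard engine whose molecule's side is known.
w_one = k_B * T * math.log(2)

# Caves' demon waits for the 1-in-2^n coincidence that all molecules
# are on the left, then extracts work from all n engines at once.
p_all_left = 0.5 ** n       # probability of the coincidence: 1/1024
w_extracted = n * w_one     # work from the rare winning cycle

# Apparent cost: erasing the single stored bit afterwards.
w_erase_one_bit = k_B * T * math.log(2)

print(f"P(all on left)         = {p_all_left}")
print(f"work per winning cycle = {w_extracted:.3e} J")
print(f"one-bit erasure cost   = {w_erase_one_bit:.3e} J")
```

On these numbers the demon appears to gain ten bits’ worth of work while paying for only one bit of erasure — which is exactly why the argument seemed compelling.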

It seems a compelling argument, but a flaw was soon found. To work out whether all of the molecules were on the left-hand side, the demon would first need to measure the location of every one of them. It would then carry out a function like this to decide whether they were all on the left:

Inputs A–J (1 = molecule on the left, 0 = on the right) → output ‘All on left?’:

| A | B | C | D | E | F | G | H | I | J | All on left? |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |

(… and so on, for all 1,024 possible combinations.)

You may be able to guess the flaw. This intermediate processing operation is an additional logically irreversible process. We do not merely have to dissipate energy to destroy the 1 or 0 we get at the end; we also have to account for the loss of the 10 bits of information about the location of the molecule in each individual box. If we get a 0 output from the above function, the inputs could have been any one of 1023 (2¹⁰ − 1) different combinations, and we have no way to work backwards and find out which. 10 bits of data have to be destroyed, more energy must be dissipated, and the second law is again saved.
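The irreversibility is easy to demonstrate by brute force: enumerating every configuration and counting how many map to each output shows that a 0 has 1023 indistinguishable preimages.

```python
from itertools import product

def all_on_left(bits):
    """The demon's function: 1 only if every molecule is on the left."""
    return int(all(bits))

# Enumerate every configuration of 10 molecules (1 = left, 0 = right)
# and count how many configurations produce each output.
preimages = {0: 0, 1: 0}
for bits in product([0, 1], repeat=10):
    preimages[all_on_left(bits)] += 1

print(preimages)  # {0: 1023, 1: 1}
```

A function is logically reversible only if every output has exactly one preimage; here a 0 collapses 1023 distinct input states into one, so computing it necessarily destroys information.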

Some ‘real’ demons

Over time a number of devices with some of the properties of Maxwell’s demon have been created. For one reason or another, none of these machines violate the Second Law of Thermodynamics but they are nevertheless interesting.

The vortex tube

When you pass compressed gas of ‘medium’ temperature into a Ranque–Hilsch vortex tube, it produces two output streams, one hot and one cold. At first sight this device seems to do the same thing as Maxwell’s demon, moving heat from a cold to a hot body without expenditure of work.

The (orange) original gas is injected in a spiralling motion towards the right. As it moves further right, the gas at the outer edge of the spiral increases in temperature. The mechanism for this is still disputed, but it is believed that the outer gas is pressurised by centrifugal force and that when the inner gas loses radial momentum it is transferred as kinetic energy to gas molecules further out. When the gas reaches the right-hand side only the outer (hot) gas is allowed out; the inner (cold) gas is forced back and out of the left side.

Pressurised gas at 20°C can be split into streams at 80°C and −30°C. The catch is that 70% of the gas comes out through the hot stream. This means that the entropy increase of the hot stream, created by all of the interactions and spontaneous thermodynamics going on inside the tube, is much larger than the entropy decrease of the cold stream. While you could run a heat engine on the difference in temperature between the hot and cold streams, it would be far more efficient simply to pass the initial gas through a turbine. The tube, while an interesting curiosity still not fully understood, does not break the second law.
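You can check the entropy bookkeeping with the figures above. The sketch below considers only the heat-capacity (temperature) terms for air treated as an ideal gas — the pressure drop through the tube would only add further entropy — and the specific-heat value is an assumed textbook figure:

```python
import math

c_p = 1005.0                    # specific heat of air, J/(kg K) (assumed)
T0 = 293.15                     # inlet temperature: 20 C, in kelvin
T_hot, T_cold = 353.15, 243.15  # outlet temperatures: 80 C and -30 C
f_hot, f_cold = 0.7, 0.3        # mass fractions of the two streams

# Net entropy change per kg of inlet gas from the temperature split alone:
# dS = sum over streams of (mass fraction) * c_p * ln(T_out / T_in)
ds = f_hot * c_p * math.log(T_hot / T0) + f_cold * c_p * math.log(T_cold / T0)
print(f"dS = {ds:+.1f} J/(kg K)")  # positive: the second law holds
```

The hot stream’s entropy gain (~131 J/(kg·K)) outweighs the cold stream’s loss (~56 J/(kg·K)) precisely because 70% of the mass goes out hot, so the net change is positive and no law is broken.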

Nanotechnology – the demon realised

In Maxwell’s day the idea that we would ever be able to control matter on a molecular scale seemed very unlikely. That has completely changed. The introduction of the scanning tunnelling and atomic force microscopes has given us a view at the atomic scale, and scientists now routinely trap single atoms in order to examine their quantum properties.

The IBM logo written with atoms of xenon

Nanotechnologists working at the University of Edinburgh have constructed an actual molecular machine which they call a Maxwellian demon. In David Leigh’s paper, published in Nature in February 2007, he acknowledges that the device does not contravene the second law, but it is nevertheless interesting. Chemical reactions tend to proceed towards an equilibrium position, just as gas molecules usually tend towards equilibrium, as stated in the second law.

In biology there are millions of tiny ‘molecular machines’ which drive reactions efficiently by manipulating individual molecules at the atomic scale. They are ever-present, from the Krebs cycle in mitochondria to the glowing abdomen of a firefly. But when we perform chemical reactions in the laboratory or in industry we generally cannot achieve the same thing. And so the only way to carry out many reactions is to clumsily change the macroscopic conditions in which a reaction takes place, so that we get the product we want when the reaction proceeds towards equilibrium. We currently have very little artificial control over what happens on a molecular scale, but nanotechnologists are trying to change this by constructing molecular machines.

Idealised image of the rotaxane/macrocycle; actual structure of two of these molecules

David Leigh’s machine was designed to force a chemical system away from equilibrium using information about molecular positions. His team first had to create and link two molecules: a rotaxane and a macrocycle.

A rotaxane is a dumbbell-shaped molecule, with a thin section in the middle and thicker areas at each end. A macrocycle is a ring that fits onto this rotaxane, as shown, and is free to slip up and down between the ends. Making the molecular machine is a complex process which involves mixing a large quantity of ‘axles’ with the macrocycles and hoping that a proportion of the macrocycles end up around an axle. Then ‘caps’ are added which bind to the axle, forming the dumbbell ends. Yields are very low, and so it is an expensive process.

The exact chemical composition and mechanism of action of the molecular machine, as shown in the paper, are below. On the left-hand side the chemical structures of the molecule are shown; on the right, Leigh’s paper showed the analogous process being carried out by a Maxwellian demon.

a) The macrocycle (red) on the rotaxane. It cannot move past the methyl group labelled L below it, so this point is a ‘gate’ blocking it. A photon (wavy line) hits the macrocycle, which contains a photosensitiser chemical. The photosensitiser causes the double bond at K to flip to a different orientation, provided the macrocycle is close enough to K.

b) The bond has now moved to the other orientation. This change makes the molecule much straighter and so ‘opens’ the gate: the macrocycle can now move freely over the entire rotaxane. Despite this freedom, there are two binding sites where it is most often found. One of these is on the left, where the macrocycle is shown in b.

c) There is another binding site on the right, the location of the macrocycle shown in c. But the two binding sites are not mirror images of each other: the right binding site is further from the gate than the one on the left.

d) Another photosensitiser causes the molecule to revert to its original ‘closed’ form. This is when the difference in binding sites becomes relevant. If the macrocycle is on the left of the gate it is likely to be bound near to K. This means it will be able to open the gate when a photon hits it. On the other hand, when the macrocycle is on the right it is likely to be bound far away from K, so when a photon hits it, it will be unable to change K and open the gate.

This imbalance means that when light is shone upon the molecular machines, they shift from having the macrocycle distributed evenly between the left and right sides of the molecule to a 30:70 ratio, as the macrocycles become trapped on the right side.
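The ratchet logic above can be captured in a toy Monte Carlo model. This is not Leigh’s actual chemistry: the gate-opening probabilities below are invented illustrative values, chosen so the steady state lands near the 30:70 ratio reported in the text.

```python
import random

# Assumed gate-opening probabilities: high when the macrocycle sits on
# the left (near K), low when it sits on the right (far from K).
P_OPEN_LEFT, P_OPEN_RIGHT = 0.7, 0.3

def simulate(n_molecules=10_000, n_photons=100, seed=42):
    """Return the fraction of macrocycles on the right after illumination."""
    rng = random.Random(seed)
    # Start with macrocycles distributed evenly between the two sides.
    sides = ["left" if rng.random() < 0.5 else "right"
             for _ in range(n_molecules)]
    for _ in range(n_photons):
        for i, side in enumerate(sides):
            p_open = P_OPEN_LEFT if side == "left" else P_OPEN_RIGHT
            if rng.random() < p_open:
                # Gate open: the macrocycle rethermalises over both sites.
                sides[i] = "left" if rng.random() < 0.5 else "right"
    return sides.count("right") / n_molecules

print(f"fraction on right: {simulate():.2f}")
```

In the steady state the left-to-right and right-to-left fluxes balance, giving a right-side fraction of P_OPEN_LEFT / (P_OPEN_LEFT + P_OPEN_RIGHT) = 0.7 — a population shifted away from the 50:50 equilibrium purely by position-dependent gating.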

This is a very exciting scientific development, and it shows that Maxwell’s idea is as relevant today as it ever was. It is proposed that in the future we might be able to propel objects by trapping air molecules that hit them from particular directions and harnessing their momentum. Leigh envisages that using this technique a solid object could be moved simply by shining a laser pen on one side of it. But he credits Maxwell with the initial idea, 150 years ago, and with discovering much of the science that made this molecule possible. ‘It is fitting’, he writes, ‘that advances in science mean that we can finally create a machine like the hypothetical one he pondered over so long ago’.

Conclusion

It has finally been proven that Maxwell’s Demon cannot violate the second law of thermodynamics. Maxwell originally wrote that:

‘He will thus, without expenditure of work, raise the temperature of B and lower that of A, in contradiction to the second law of thermodynamics.’

But Landauer’s Principle demonstrates that the demon will have to ‘expend work’ when it erases its memory of the positions of the molecules. And it will inevitably have to do that eventually. With this expenditure, which is equal to or greater than the energy made available by the separation of the molecules, the contradiction with the second law of thermodynamics is removed.
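For a sense of scale, Landauer’s bound is easy to evaluate (assuming room temperature, 300 K):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # assumed room temperature, K

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln 2.
e_bit = k_B * T * math.log(2)
print(f"minimum cost of erasing one bit at {T:.0f} K: {e_bit:.2e} J")
# ~2.87e-21 J -- tiny, but never zero, which is what defeats the demon.
```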

However, despite ultimately being shown to be impossible, Maxwell’s idea has contributed greatly to our knowledge of the universe. As physicists have wrestled with the challenge presented by the demon they have made discoveries about statistical mechanics, information theory and the fundamental thermodynamic limits to the speed of computers. It influences fields Maxwell could not even have imagined, from nanotechnology to biochemistry. The demon is Maxwell’s extraordinary legacy which will forever fascinate those who study it.

Bibliography

H. S. Leff & A. F. Rex (eds), 1990, Maxwell’s Demon: Entropy, Information, Computing (Adam Hilger)

This contained 25 individual scientific papers on the Demon:

W Thomson (1874) Kinetic theory of the dissipation of energy

E E Daub (1970) Maxwell’s demon

P M Heimann (1970) Molecular forces, statistical representation and Maxwell’s demon

M J Klein (1970) Maxwell, his demon, and the second law of thermodynamics

L Brillouin (1949) Life, thermodynamics, and cybernetics

J Rothstein (1951) Information, measurement, and quantum mechanics

K Denbigh (1981) How subjective is entropy?

A M Weinberg (1982) On the relation between information and energy systems: A family of Maxwell’s demons

L Szilard (1929) On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings

L Brillouin (1951) Maxwell’s demon cannot operate: Information and entropy. I

R C Raymond (1951) The well-informed heat engine

C Finfgeld and S Machlup (1960) Well-informed heat engine: efficiency and maximum power

P Rodd (1964) Some comments on entropy and information

D Gabor (1951; published in 1964) Light and information

J M Jauch and J G Baron (1972) Entropy, information and Szilard’s paradox

O Costa de Beauregard and M Tribus (1974) Information theory and thermodynamics

A F Rex (1987) The operation of Maxwell’s demon in a low entropy system

R Landauer (1961) Irreversibility and heat generation in the computing process

C H Bennett (1973) Logical reversibility of computation

R Laing (1973) Maxwell’s demon and computation

C H Bennett (1982) The thermodynamics of computation – a review

W H Zurek (1984) Maxwell’s demon, Szilard’s engine and quantum measurements

R Landauer (1987) Computation: A fundamental physical view

E Lubkin (1987) Keeping the entropy of measurement: Szilard revisited

CH Bennett (1988) Notes on the history of reversible computation

Various articles, Wikipedia, http://en.wikipedia.org

Adams, No Way Back! The Second Law of Thermodynamics, New Scientist, 22 October 1994

Brown, A demon blow to the second law of thermodynamics? New Scientist, 14 July 1990

Brown, Sacred law of physics is safe after all, New Scientist, 15 September 1990

S Carnot (1824) Réflexions sur la puissance motrice du feu

from http://www.thermohistory.com/carnot.pdf


Lynde Phelps Wheeler, 1998, Josiah Willard Gibbs: The History of a Great Mind (Oxbow Books)

relevant chapter at http://www.thermohistory.com/historyoverview.pdf


Maxwell’s Demon Becomes Reality- University of Edinburgh Press Release http://www.ed.ac.uk/news/070130maxwell.html


Biography of Maxwell on VictorianWeb

http://www.victorianweb.org/science/maxwell/banerjee.html


Leigh’s site with his paper (‘Exercising Demons: A Molecular Information Ratchet’) and other content

http://www.s119716185.websitehome.co.uk/home/

Image Credits

Magdeburg spheres – originally from A. Ganot, Natural Philosophy (c. 1896), p. 124, but accessed at http://archives.scu.edu/exhibits/sci_inst/12.html

Savery Engine – from http://www.humanthermodynamics.com/HT-history.html

Maxwell Boltzmann distribution – from http://www.steve.gb.com/science/kinetics.html

Demon diagrams – original illustrations by me (clipart used for Devil)

Billiard ball diagram – from Maxwell’s Demon: Entropy, Information, Computing.

IBM logo – from http://molaire1.club.fr/e_electron.html

Images of vortex tube, and green/blue images of macrocycle and rotaxane from Wikipedia.

Image of rotaxane chemistry, with demon analogy – from David Leigh’s paper (adapted by me to remove some unnecessary complexities.)
