Dr Karl’s strange science

image

Australian author, radio and TV presenter Dr Karl Kruszelnicki came back to discuss some very strange science facts.

In this talk, we discovered how Big Fossil Fuel successfully covered up global warming, how we’ve been making coffee the wrong way for centuries, and the animal that grows a new anus every time it needs a poo.

Dr Karl Kruszelnicki is an Australian science populariser with insatiable curiosity. He has degrees in Physics and Maths, Biomedical Engineering, Medicine and Surgery and has held a wide range of jobs, from doctor to film-maker, radio personality to labourer, car mechanic to physicist. He has written 45 books to date and plans to write a few more.

https://drkarl.com/

https://en.wikipedia.org/wiki/Karl_Kruszelnicki

https://twitter.com/DoctorKarl?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor

https://www.sydney.edu.au/science/industry-and-community/community-engagement/dr-karl.html

The following are notes from the online lecture. Even though I could stop the video and go back over things, there are likely to be mistakes where I haven’t heard things correctly or haven’t understood them. I hope Dr Kruszelnicki and my readers will forgive any mistakes and let me know what I got wrong.

Part one: Carbon and climate change

Why did people burn carbon in the first place?

The answer is that it is “loaded” with energy.

https://people.wou.edu/~courtna/GS361/Energy_From_Fossil_Fuels.htm

image

image

image

Every mole of methane (16 g) releases 810 kJ of energy on burning.

Energy content of other fuels

image

Of course, another way to look at it is to ask how much energy a typical oil worker or miner had to use to obtain the oil, gas or coal in the first place, if he (and it was usually a he) worked 40 hours a week, 46 weeks a year, for 50 years. The answer is a great deal (typically 15,000 kJ per day, giving about 1.73 × 10⁸ kJ over a typical working life).
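
As a quick check on that arithmetic (a sketch only; it assumes 5 working days per week and simply reuses the figures quoted above):

```python
# Rough check of the "lifetime of human labour" figure quoted above.
# Assumptions: 15,000 kJ of food energy per working day, 5 working days
# per week, 46 weeks a year, 50 years.
KJ_PER_DAY = 15_000
working_days = 5 * 46 * 50                   # 11,500 working days in a career
lifetime_kj = KJ_PER_DAY * working_days
print(f"Lifetime energy: {lifetime_kj:.3g} kJ")       # about 1.7 x 10^8 kJ

# For comparison, burning one mole (16 g) of methane releases ~810 kJ,
# so one kilogram of methane holds roughly:
methane_kj_per_kg = 810 / 16 * 1000                   # ~5.1e4 kJ per kg
print(f"A working lifetime is worth about "
      f"{lifetime_kj / methane_kj_per_kg:.0f} kg of methane")  # roughly 3,400 kg
```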

Anyway, we burn carbon in air because it releases a great deal of energy, which is necessary for our modern lives.

So, what is climate change, how is it appearing and where did it come from?

It isn’t the heat of combustion itself that is warming the planet. Global warming is ultimately caused by heat from the Sun being trapped by the Earth’s atmosphere. The amount of extra heat trapped is equivalent to 400,000 Hiroshima atomic bombs exploding per day.
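
That comparison can be roughly reproduced from two commonly quoted figures that don’t appear in the talk itself, so treat this as a back-of-envelope sketch: a planetary energy imbalance of about 0.6 W per square metre and a Hiroshima yield of about 15 kilotons of TNT.

```python
import math

# Back-of-envelope version of the "400,000 Hiroshima bombs a day" figure.
# Assumed inputs (not from the talk): ~0.6 W/m^2 planetary energy imbalance
# and a ~15 kiloton yield for the Hiroshima bomb.
IMBALANCE_W_PER_M2 = 0.6
EARTH_RADIUS_M = 6.371e6
BOMB_YIELD_J = 15e3 * 4.184e9        # 15 kt TNT, at 4.184e9 J per tonne of TNT

earth_area = 4 * math.pi * EARTH_RADIUS_M**2          # ~5.1e14 m^2
joules_per_day = IMBALANCE_W_PER_M2 * earth_area * 86_400
print(f"Extra heat per day: {joules_per_day:.2e} J")
print(f"Hiroshima-bomb equivalents per day: {joules_per_day / BOMB_YIELD_J:,.0f}")
# ~4e5 bombs per day, the same order of magnitude as the figure quoted above
```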

https://en.wikipedia.org/wiki/Hiroshima

Hiroshima is the capital of Hiroshima Prefecture in Japan.

On Monday, August 6, 1945, at 8:15 a.m. (Hiroshima time), the nuclear weapon “Little Boy” was dropped on Hiroshima from an American Boeing B-29 Superfortress, the Enola Gay, flown by Colonel Paul Tibbets, directly killing at least 70,000 people.

You could get away with storing that amount of heat for a day, a week, a month or a year but not for decades.

https://www.livescience.com/11350-top-10-surprising-results-global-warming.html

The effects of global warming have been well documented but there are some odd ones you might not be aware of. These include the Earth being tipped off its axis, mountain range growth spurts, satellite orbits changing and increased allergies.

Can we fix global warming? Yes, we can. We can actually stop it and reverse it, and it will be better and cheaper in the long run.

However, there has been a cover-up.

The history of climate change and the cover-up began way back in the 1820s, when scientists began to think about the temperature of the Earth’s atmosphere. Baron Joseph Fourier wrote an article that later appeared in an American journal.

https://en.wikipedia.org/wiki/Joseph_Fourier

Jean-Baptiste Joseph Fourier (21 March 1768 – 16 May 1830) was a French mathematician and physicist born in Auxerre and best known for initiating the investigation of Fourier series, which eventually developed into Fourier analysis and harmonic analysis, and their applications to problems of heat transfer and vibrations. The Fourier transform and Fourier’s law of conduction are also named in his honour. Fourier is also generally credited with the discovery of the greenhouse effect.

image

In 1822 Fourier published his work on heat flow in Théorie analytique de la chaleur (The Analytical Theory of Heat), in which he based his reasoning on Newton’s law of cooling, namely, that the flow of heat between two adjacent molecules is proportional to the extremely small difference of their temperatures.

In the 1820s he calculated that an object the size of the Earth, and at its distance from the Sun, should be considerably colder than the planet actually is if warmed by only the effects of incoming solar radiation. He examined various possible sources of the additional observed heat in articles published in 1824 and 1827. While he ultimately suggested that interstellar radiation might be responsible for a large portion of the additional warmth, Fourier’s consideration of the possibility that the Earth’s atmosphere might act as an insulator of some kind is widely recognized as the first proposal of what is now known as the greenhouse effect, although Fourier never called it that.

In his articles, Fourier referred to an experiment by de Saussure, who lined a vase with blackened cork. Into the cork, he inserted several panes of transparent glass, separated by intervals of air. Midday sunlight was allowed to enter at the top of the vase through the glass panes. The temperature became more elevated in the more interior compartments of this device. Fourier concluded that gases in the atmosphere could form a stable barrier like the glass panes. This conclusion may have contributed to the later use of the metaphor of the “greenhouse effect” to refer to the processes that determine atmospheric temperatures. Fourier noted that the actual mechanisms that determine the temperatures of the atmosphere included convection, which was not present in de Saussure’s experimental device.

https://en.wikipedia.org/wiki/Horace_B%C3%A9n%C3%A9dict_de_Saussure

image

Horace Bénédict de Saussure (17 February 1740 – 22 January 1799) was a Genevan geologist, meteorologist, physicist, mountaineer and Alpine explorer, often called the founder of alpinism and modern meteorology, and considered to be the first person to build a successful solar oven.

image

In 1824 French mathematician and physicist Jean Baptiste Joseph Fourier published “Remarques générales sur les températures du globe terrestre et des espaces planétaires,” Annales de Chimie et de Physique, 27 (1824) 136–67. In this paper Fourier showed how gases in the atmosphere might increase the surface temperature of the earth. This was later called the greenhouse effect.

Fourier’s paper was translated into English by Ebeneser Burgess, and published in the American Journal of Science 32 (1837) 1-20.

image

First pages of Fourier’s paper as it was originally published.

Fourier asked a question about terrestrial temperatures. He felt it was terribly confusing and odd, and he was right. He thought that the temperature that we have anywhere on Earth at any time is due to the heat coming from the Sun, space and the interior of the planet. He also postulated that the atmosphere was somehow trapping some sort of heat. Without an atmosphere the Earth would be a lot colder. However, Fourier didn’t really do any experiments.

Fourteen years later, in 1838, Claude Pouillet did do some experiments.

https://en.wikipedia.org/wiki/Claude_Pouillet

Claude Servais Mathias Pouillet (16 February 1790 – 14 June 1868) was a French physicist and a professor of physics at the Sorbonne and member of the French Academy of Sciences (elected 1837).

image

He developed a pyrheliometer and made, between 1837 and 1838, the first quantitative measurements of the solar constant. His estimate was 1228 W/m², within about 10% of the current estimate of 1367 W/m². Using the Dulong-Petit law inappropriately, he estimated the temperature of the Sun’s surface to be around 1800 °C. This value was corrected in 1879 to 5430 °C by Jožef Stefan (1835–1893).
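
Stefan’s correction can be sketched with his own law (flux = σT⁴), which was not available to Pouillet: scale the solar constant back from the Earth’s orbit to the Sun’s surface and invert. The constants below are standard textbook values, not figures from the talk.

```python
# Sketch of estimating the Sun's surface temperature from the solar constant
# using the Stefan-Boltzmann law (flux = sigma * T^4).
SOLAR_CONSTANT = 1367.0       # W/m^2 at Earth's distance
AU = 1.496e11                 # Earth-Sun distance, m
R_SUN = 6.96e8                # solar radius, m
SIGMA = 5.670e-8              # Stefan-Boltzmann constant, W m^-2 K^-4

# The same total power crosses a sphere of radius 1 AU and the Sun's surface,
# so the surface flux is the solar constant scaled up by (AU / R_sun)^2.
surface_flux = SOLAR_CONSTANT * (AU / R_SUN) ** 2
temperature_k = (surface_flux / SIGMA) ** 0.25
print(f"Estimated surface temperature: {temperature_k:.0f} K "
      f"(~{temperature_k - 273:.0f} °C)")   # ~5,800 K, close to Stefan's 1879 figure
```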

He also postulated that the Earth got extra warmth from various parts of the atmosphere such as atmospheric water vapour and carbon dioxide.

image

Pouillet pyrheliometer, 1863

https://en.wikipedia.org/wiki/Dulong%E2%80%93Petit_law

The Dulong–Petit law, a thermodynamic law proposed in 1819 by French physicists Pierre Louis Dulong and Alexis Thérèse Petit, states the classical expression for the molar specific heat capacity of certain chemical elements. Experimentally the two scientists had found that the heat capacity per weight (the mass-specific heat capacity) for a number of elements was close to a constant value, after it had been multiplied by a number representing the presumed relative atomic weight of the element.

https://en.wikipedia.org/wiki/Pierre_Louis_Dulong

https://en.wikipedia.org/wiki/Alexis_Th%C3%A9r%C3%A8se_Petit

https://en.wikipedia.org/wiki/Pyrheliometer

A pyrheliometer is an instrument for measurement of direct beam solar irradiance. Sunlight enters the instrument through a window and is directed onto a thermopile which converts heat to an electrical signal that can be recorded. The signal voltage is converted via a formula to measure watts per square metre.
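
The “formula” mentioned above usually amounts to a single calibration constant: the manufacturer quotes the thermopile’s sensitivity in microvolts per W/m², and irradiance is the measured voltage divided by that sensitivity. A minimal sketch, with a made-up sensitivity value:

```python
# Minimal sketch of converting a thermopile pyrheliometer reading to irradiance.
# The sensitivity below is hypothetical; a real instrument states its own
# calibration constant on its certificate.
SENSITIVITY_UV_PER_WM2 = 8.5     # microvolts of output per W/m^2 (made-up value)

def irradiance_w_per_m2(signal_microvolts: float) -> float:
    """Direct-beam irradiance = thermopile voltage / calibration sensitivity."""
    return signal_microvolts / SENSITIVITY_UV_PER_WM2

print(irradiance_w_per_m2(7650.0))   # 900.0 W/m^2, a typical clear-sky value
```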

https://www.researchgate.net/figure/A-schematic-illustration-of-a-pyranometer-a-and-a-pyrheliometer-b-based-on-9-and_fig5_332606005

image

https://en.wikipedia.org/wiki/Thermopile

A thermopile is an electronic device that converts thermal energy into electrical energy. It is composed of several thermocouples connected usually in series or, less commonly, in parallel. Such a device works on the principle of the thermoelectric effect, i.e., generating a voltage when its dissimilar metals (thermocouples) are exposed to a temperature difference.

https://en.wikipedia.org/wiki/Thermocouple

A thermocouple is an electrical device consisting of two dissimilar electrical conductors forming an electrical junction. A thermocouple produces a temperature-dependent voltage as a result of the thermoelectric effect, and this voltage can be interpreted to measure temperature.

https://en.wikipedia.org/wiki/Thermoelectric_effect

The thermoelectric effect is the direct conversion of temperature differences to electric voltage and vice versa via a thermocouple. A thermoelectric device creates a voltage when there is a different temperature on each side. Conversely, when a voltage is applied to it, heat is transferred from one side to the other, creating a temperature difference.

image

Thermopile made from iron and copper wires

image

Diagram of a differential temperature thermopile with two sets of thermocouple pairs connected in series. The two top thermocouple junctions are at temperature T1 while the two bottom thermocouple junctions are at temperature T2. The output voltage from the thermopile, ΔV, is directly proportional to the temperature differential, ΔT or T1 – T2, across the thermal resistance layer and number of thermocouple junction pairs. The thermopile voltage output is also directly proportional to the heat flux, q”, through the thermal resistance layer.
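
The proportionality in that caption can be written out directly: each thermocouple pair contributes roughly its Seebeck coefficient times the temperature difference, and pairs in series add. A small sketch, using an illustrative (assumed, not measured) Seebeck coefficient for an iron-copper pair:

```python
# Sketch of the thermopile relation in the caption: output voltage is
# proportional to the number of junction pairs and the temperature difference.
# The Seebeck coefficient below is an illustrative round number for an
# iron-copper pair, not a precise material constant.
SEEBECK_UV_PER_K = 13.0    # microvolts per kelvin per junction pair (assumed)

def thermopile_voltage_uv(n_pairs: int, delta_t_kelvin: float) -> float:
    """Delta-V = N * S * Delta-T for N thermocouple pairs in series."""
    return n_pairs * SEEBECK_UV_PER_K * delta_t_kelvin

# Two pairs in series (as in the diagram) with a 5 K temperature difference:
print(thermopile_voltage_uv(2, 5.0), "microvolts")   # 130.0 microvolts
```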

In 1856 it was proved that carbon dioxide absorbed and retained heat and that there were consequences.

Eunice Newton Foote did the first experiments involving climate change that quantified what was going on.

https://en.wikipedia.org/wiki/Eunice_Newton_Foote

http://rosslandtelegraph.com/news/column-woman-who-discovered-global-warming-%E2%80%94-1856#.X99Qadj7RaQ

image

In 1938 a rise in the amount of carbon dioxide and temperature was seen.

Eunice Newton Foote (July 17, 1819 – September 30, 1888) was an American scientist, inventor, and women’s rights campaigner from Seneca Falls, New York.

She was the first scientist known to have experimented on the warming effect of sunlight on different gases, and went on to theorize that changing the proportion of carbon dioxide in the atmosphere would change its temperature, in her paper Circumstances affecting the heat of the sun’s rays at the American Association for the Advancement of Science conference in 1856.

Foote conducted a series of experiments that demonstrated the interactions of the sun’s rays on different gases. She used an air pump, four mercury thermometers, and two glass cylinders. First, she placed two thermometers in each cylinder; then, using the air pump, she evacuated the air from one cylinder and compressed it in the other. Allowing both cylinders to reach the same temperature, she placed the cylinders in the sunlight to measure temperature variance once heated and under different moisture conditions. She performed this experiment on CO2, common air, and hydrogen. Of the gases she tested, Foote concluded that carbonic acid (CO2) trapped the most heat, reaching a temperature of 52 °C. From this experiment, she stated: “The receiver containing this gas became itself much heated—very sensibly more so than the other—and on being removed [from the Sun], it was many times as long in cooling.” Looking to the history of the Earth, Foote theorised that “An atmosphere of that gas would give to our earth a high temperature; and if, as some suppose, at one period of its history, the air had mixed with it a larger proportion than at present, an increased temperature from its own action, as well as from increased weight, must have necessarily resulted.” A rather chilling prediction.

https://publicdomainreview.org/collection/first-paper-to-link-co2-and-global-warming-by-eunice-foote-1856

image

Above left: The front cover of physicist Eunice Foote’s 1856 paper on global warming

image

Eunice Foote, “Circumstances Affecting the Heat of Sun’s Rays”, in American Journal of Art and Science, 2nd Series, v. XXII/no. LXVI, November 1856, p. 382-383.

“An atmosphere of that gas would give to our Earth a high temperature” August 23rd, 1856

The work of Foote was extended by John Tyndall (although he, apparently, wasn’t aware of her work) around 1860. He carried out some experiments and wrote about them.

https://en.wikipedia.org/wiki/John_Tyndall

image

John Tyndall FRS (2 August 1820 – 4 December 1893) was a prominent 19th-century Irish physicist. His initial scientific fame arose in the 1850s from his study of diamagnetism. Later he made discoveries in the realms of infrared radiation and the physical properties of air, proving the connection between atmospheric CO2 and what is now known as the greenhouse effect in 1859.

His main experiment is explained below. It basically involved placing one of the gases in a tube, directing heat in at one end and measuring the heat that emerged at the other end. But how much heat was there?

He used a standard calibrated source, which allowed him to compare the heat output with the input. The thermopile allowed him to work out how much heat emerged at the far end, and he found that carbon dioxide could trap 1,000 times as much heat as dry air. He summed up the greenhouse effect (although it wasn’t known as that, then) with this elegant sentence: “Thus the atmosphere admits of the entrance of solar heat; but checks its exit and the result is a tendency to accumulate heat at the surface of the planet”.

The heat entered the gas (carbon dioxide, water vapour, methane etc.) and bounced around, unable to escape. What was going on with carbon?

image

Tyndall’s sensitive ratio spectrophotometer (drawing published in 1861) measured the extent to which infrared radiation was absorbed and emitted by various gases filling its central tube.

Illustration of John Tyndall’s setup for measuring the radiant heat absorption of gases. This illustration dates from 1861 and it is taken from one of John Tyndall’s presentations where he describes his setup for measuring the relative radiant-heat absorption of gases and vapours. The galvanometer quantifies the difference in temperature between the left and right sides of the thermopile. The reading on the galvanometer is settable to zero by moving the Heat Screen a bit closer or farther from the left-hand heat source. That is the only role for the heat source on the left.

The heat source on the righthand side directs radiant heat into the long brass tube. The long brass tube is highly polished on the inside, which makes it a good reflector (and non-absorber) of the radiant heat inside the tube. Rock-salt (NaCl) is practically transparent to radiant heat, and so plugging the ends of the long brass tube with rock-salt plates allows radiant heat to move freely in and out at the tube endpoints, yet completely blocks the gas within from moving out.

To begin the measurements, both heat sources are turned on, the long brass tube is evacuated as much as possible with an air suction pump, the galvanometer is set to zero, and then the gas under study is released into the long brass tube. The galvanometer is looked at again. The extent to which the galvanometer has changed from zero indicates the extent to which the gas has absorbed the radiant heat from the righthand heat source and blocked this heat from radiating to the thermopile through the tube. If a highly polished metal disc is placed in the space between the thermopile and the brass tube it will completely block the radiant heat coming out of the tube from reaching the thermopile, thereby deflecting the galvanometer by the maximum extent possible with respect to blockage in the tube. Thus, the system has minimum and maximum readings available, and can express other readings in percentage terms. (The galvanometer’s responsiveness was physically nonlinear, but well understood, and mathematically linearizable.)

In one of his public lectures to non-professional audiences Tyndall gave the following indication of instrument sensitivity: “My assistant stands several feet off. I turn the thermopile towards him. The heat from his face, even at this distance, produces a deflection of 90 degrees [on the galvanometer dial]. I turn the instrument towards a distant wall, judged to be a little below the average temperature of the room. The needle descends and passes to the other side of zero, declaring by this negative deflection that the pile feels the chill of the wall.” (quote from Six Lectures On Light). To reduce interference from human bodies, the galvanometer was read through a telescope from across the room.

The thermopile & galvanometer system was invented by Leopoldo Nobili and Macedonio Melloni. Melloni measured radiant heat absorption in solids and liquids but didn’t have the sensitivity for gases. Tyndall greatly improved the sensitivity of the overall setup (including putting an offsetting heat source on the other side of the thermopile, and putting the gas in a brass tube), and as a result of his superior apparatus he was able to confidently reach conclusions that were quite different from Melloni’s concerning radiant heat in gases (book ref below, in chapter I).

Air from which water vapor and carbon dioxide had been removed deflected the galvanometer dial by less than 1 degree, in other words a detectable but very small amount (same ref, chapter II). Many other gases and vapours deflected the galvanometer by a large amount — thousands of times greater than air.

As a check on his system’s reliability, Tyndall painted the inside walls of the brass tube with a strong absorber of radiant heat (namely lampblack). This greatly reduced the radiant heat that reached the thermopile when the tube was empty. Nevertheless, the percentage absorptions by the different gases and vapours relative to the empty tube were largely and essentially unchanged by this change to the absorption property of the tube’s walls. That’s excluding a few gases and vapours such as chlorine that must be excluded because they tarnish brass, changing its heat reflectivity.

As another test of the reliability of the system, the long brass tube was cut to about a quarter of its original length, and the exact same quantity of gas was released into the shorter tube. Thus, the shorter tube will have about four times higher gas density. It was found that the percentage of radiant heat absorbed by or transmitted through the gas relative to the empty-tube state was entirely unchanged by this (even though the two tubes don’t have equal empty-tube states). Varying the absolute quantity of the gas in the tube causes corresponding changes in the absorption percentages, but varying the density doesn’t matter, nor does the absolute value of the empty-tube reference point.

The emission spectrum of the particular source of heat makes a difference — sometimes a big difference — in the amount of radiant heat a gas will absorb, and different gases can respond differently to a change in the source. Tyndall said in 1864, “a long series of experiments enables me to state that probably no two substances at a temperature of 100°C emit heat of the same quality [i.e., of the same spectral profile]. The heat emitted by isinglass, for example, is different from that emitted by lampblack, and the heat emitted by cloth, or paper, differs from both.” Looking at an electrically-heated platinum wire, it is obvious to the human eye that the heat’s spectral profile depends on whether the wire is heated to dull red, bright orange, or white hot. Some gases were relatively stronger absorbers of the dull-red platinum heat while other gases were relatively stronger absorbers of the white-hot platinum heat, he found.

For his original and primary benchmark in 1859, he used the heat from 100°C lampblack (akin to a theoretical “blackbody radiator”). Later he got some of his more interesting findings from using other heat sources. E.g., when the source of radiant heat was any one kind of gas, then this heat was strongly absorbed by another body of the same kind of gas, regardless of whether the gas was a weak absorber of broad-spectrum sources.

In the illustration above, the radiant heat that is going into the brass tube comes from a pot of simmering water; the heat radiates from the exterior surface of the pot, not from the water, and not from the gas flame that keeps the water at a simmer. There is an alternative illustration with a modified setup taken from the same book (page 112). The main difference is that the heat source is separated from the brass tube by open air, which eliminates the need for circulating cold water cooling at the interface between heat source and brass tube.

Tyndall explained the heat in the Earth’s atmosphere in terms of the capacities of the various gases in the air to absorb radiant heat, in the form of infrared radiation. His measuring device, which used thermopile technology, is an early landmark in the history of absorption spectroscopy of gases. He was the first to correctly measure the relative infrared absorptive powers of the gases nitrogen, oxygen, water vapour, carbon dioxide, ozone, methane, and other trace gases and vapours. He concluded that water vapour is the strongest absorber of radiant heat in the atmosphere and is the principal gas controlling air temperature. Absorption by the other gases is not negligible but relatively small. Prior to Tyndall it was widely surmised that the Earth’s atmosphere warms the surface in what was later called a greenhouse effect, but he was the first to prove it. The proof was that water vapour strongly absorbed infrared radiation. Relatedly, Tyndall in 1860 was first to demonstrate and quantify that visually transparent gases are infrared emitters.

https://www.youtube.com/watch?v=1thc8JDgfXo

https://www.rigb.org/our-history/people/t/john-tyndall

https://helenthehare.files.wordpress.com/2020/12/28f8f-6a00d83542d51e69e20240a4a5e672200c.jpg

https://www.jstor.org/stable/111604?seq=1#metadata_info_tab_contents

image

image

image

image

https://www.jstor.org/stable/108823?seq=1#metadata_info_tab_contents

image

https://en.wikipedia.org/wiki/Greenhouse_gas

A greenhouse gas (sometimes abbreviated GHG) is a gas that absorbs and emits radiant energy within the thermal infrared range.

image

The greenhouse effect of solar radiation on the Earth’s surface caused by greenhouse gases

In 1894 Arvid Högbom estimated the natural and human sources of carbon dioxide and described the carbon cycle.

https://en.wikipedia.org/wiki/Arvid_H%C3%B6gbom

image

Arvid Gustaf Högbom was a Swedish geologist. He was a professor of mineralogy and geology at Uppsala University.

https://en.wikipedia.org/wiki/Carbon_cycle

The carbon cycle is the biogeochemical cycle by which carbon is exchanged among the biosphere, pedosphere, geosphere, hydrosphere, and atmosphere of the Earth. Carbon is the main component of biological compounds as well as a major component of many minerals such as limestone.

image

Above left: Over time, the ocean and land have continued to absorb about half of all carbon dioxide emissions, even as those emissions have risen dramatically in recent decades. It remains unclear if carbon absorption will continue at this rate. (NASA; author Hurtt.)

Above right: Fast carbon cycle showing the movement of carbon between land, atmosphere, and ocean in billions of tons (gigatons) per year. Yellow numbers are natural fluxes, red are human contributions, white is stored carbon. The effects of the slow carbon cycle, such as volcanic and tectonic activity, are not included.

Högbom started to work out when and where carbon atoms were in the biosphere and how they moved about on the Earth. Carbon dioxide has carbon atoms going in and out of the ocean, land and plants and, of course, humans add some as well. It’s quite messy. He was the first person to start working out where carbon atoms were going. The work was carried on by one of his colleagues.

https://en.wikipedia.org/wiki/Svante_Arrhenius

image

Svante August Arrhenius (19 February 1859 – 2 October 1927) was a Swedish scientist. Originally a physicist, but often referred to as a chemist, Arrhenius was one of the founders of the science of physical chemistry. He received the Nobel Prize for Chemistry in 1903, becoming the first Swedish Nobel laureate. In 1905, he became director of the Nobel Institute, where he remained until his death.

Arrhenius was the first to use basic principles of physical chemistry to estimate the extent to which increases in atmospheric carbon dioxide are responsible for the Earth’s increasing surface temperature.

He used Högbom’s work to calculate how much warming or cooling you would get by increasing or decreasing the amount of carbon dioxide in the atmosphere.

He was a very clever man, a self-taught maths prodigy, and he began the calculations into the effect of carbon dioxide on the Earth’s surface temperature in 1895, finishing them in 1900. He said that “it is unbelievable that so trifling a matter would cost me a full year”. It wasn’t really trifling, as he did something like 50,000 calculations by hand (this was a time before calculators and computers) and then checked them all by himself. He came up with a prediction that was brilliantly accurate. He said any doubling of the percentage of carbon dioxide in the atmosphere would raise the temperature of the Earth’s surface by 4 °C. This was very close; however, he was very wrong in another prediction. He said it would take 2000 to 3000 years to raise the amount of carbon dioxide in the atmosphere by 50%. It actually took only 120 years to raise the amount of carbon dioxide by 40%.
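
A modern back-of-envelope version of the same calculation (not Arrhenius’s original method) uses the simplified logarithmic forcing formula ΔF ≈ 5.35 ln(C/C₀) W/m² together with a climate sensitivity parameter; the sensitivity value below is a commonly used central estimate, not a settled number.

```python
import math

# Modern back-of-envelope version of Arrhenius's question: how much warming
# do you get for a given change in CO2? Uses the simplified forcing formula
# dF = 5.35 * ln(C / C0) W/m^2 and an assumed climate-sensitivity parameter
# of ~0.8 K per (W/m^2) -- a commonly quoted central value, not a settled number.
SENSITIVITY_K_PER_WM2 = 0.8

def warming_for_co2_ratio(c_over_c0: float) -> float:
    forcing = 5.35 * math.log(c_over_c0)      # radiative forcing, W/m^2
    return SENSITIVITY_K_PER_WM2 * forcing    # temperature change, K

print(f"Doubling CO2: ~{warming_for_co2_ratio(2.0):.1f} K")   # ~3.0 K
print(f"+50% CO2:     ~{warming_for_co2_ratio(1.5):.1f} K")   # ~1.7 K
```

The roughly 3 °C this gives for a doubling is in the same ballpark as Arrhenius’s hand-calculated 4 °C.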

https://en.wikipedia.org/wiki/Guy_Stewart_Callendar

image

Guy Stewart Callendar (February 1898 – October 1964) was an English steam engineer and inventor. His main contribution to knowledge was developing the theory that linked rising carbon dioxide concentrations in the atmosphere to global temperature. This theory, earlier proposed by Svante Arrhenius, has been called the Callendar effect. Callendar thought this warming would be beneficial, delaying a “return of the deadly glaciers”.

In 1938, Callendar compiled measurements of temperatures from the 19th century on (over fifty years of temperature and carbon dioxide data from all over the world), and correlated these measurements with old measurements of atmospheric CO2 concentrations. He concluded that over the previous fifty years the global land temperatures had increased, and proposed that this increase could be explained as an effect of the increase in carbon dioxide. These estimates have now been shown to be remarkably accurate, especially as they were performed without the aid of a computer. Callendar assessed the climate sensitivity value at 2 °C, which is on the low end of the IPCC range. His findings were met with scepticism at the time. However, his ideas did influence the scientific discourse of the time, which had been generally sceptical about the influence of changes in CO2 levels on global temperatures in the previous decades after debate over the idea in the early 20th century. His papers throughout the 1940s and 50s slowly convinced some other scientists of the need to conduct an organised research programme on CO2 concentrations in the atmosphere, leading eventually to Charles Keeling’s Mauna Loa Observatory measurements from 1958, which proved pivotal to advancing the theory of anthropogenic global warming. He remained convinced of the accuracy of his theory until his death in 1964 despite continued mainstream scepticism.

Between 1938 and 1964 he wrote 25 papers on global warming and realised that human activity was increasing the amount of carbon dioxide in the air and therefore increasing the amount of infra-red radiation being trapped. There was a clear link between rising carbon dioxide levels and rising temperatures.

https://en.wikipedia.org/wiki/Infrared

Infrared (IR), sometimes called infrared light, is electromagnetic radiation (EMR) with wavelengths longer than those of visible light. It is therefore generally invisible to the human eye, although IR at wavelengths up to 1050 nanometres (nm) from specially pulsed lasers can be seen by humans under certain conditions. IR wavelengths extend from the nominal red edge of the visible spectrum at 700 nanometres (frequency 430 THz), to 1 millimetre (300 GHz). Most of the thermal radiation emitted by objects near room temperature is infrared.

Callendar felt that further research was necessary and proposed an investigation into carbon dioxide levels in the atmosphere. This was done twenty years later by Charles David Keeling.

https://en.wikipedia.org/wiki/Charles_David_Keeling

image

Charles David Keeling (April 20, 1928 – June 20, 2005) was an American scientist whose recording of carbon dioxide at the Mauna Loa Observatory confirmed Svante Arrhenius’s proposition (1896) of the possibility of anthropogenic contribution to the “greenhouse effect” and global warming, by documenting the steadily rising carbon dioxide levels. The Keeling Curve measures the progressive buildup of carbon dioxide, a greenhouse gas, in the atmosphere.

In 1958 Keeling received funding to establish a base on Mauna Loa in Hawaii, 3,000 m above sea level, where he started collecting carbon dioxide samples. By 1960, he had established that there are strong seasonal variations in carbon dioxide levels, with peak levels reached in the late northern hemisphere winter. A reduction in carbon dioxide followed during spring and early summer each year as plant growth increased in the land-rich northern hemisphere. In 1961, Keeling produced data showing that carbon dioxide levels were rising steadily in what later became known as the “Keeling Curve”.

In the early 1960s, the National Science Foundation stopped supporting his research, calling the outcome “routine”. Despite this lack of interest, the Foundation used Keeling’s research in its warning in 1963 of rapidly increasing amounts of heat-trapping gases. A 1965 report from President Johnson’s Science Advisory Committee similarly warned of the dangers of extra heat-trapping gases, which cause the temperature of the Earth to rise.

The data collection started by Keeling and continued at Mauna Loa is the longest continuous record of atmospheric carbon dioxide in the world and is considered a reliable indicator of the global trend in the mid-level troposphere. Keeling’s research showed that the atmospheric concentration of carbon dioxide grew from 315 parts per million (ppm) in 1958 to 380 ppm in 2005, with increases correlated to fossil fuel emissions. There has also been an increase in seasonal variation in samples from the late 20th century and early 21st century. On 14th December 2019 the concentration was 411.68 ppm and on 14th October 2020 it was 413.35 ppm. In 62 years, the concentration of carbon dioxide had risen by about 30%.

image

This figure shows the history of atmospheric carbon dioxide concentrations as directly measured at Mauna Loa, Hawaii since 1958. This curve is known as the Keeling curve, and is an essential piece of evidence of the man-made increases in greenhouse gases that are believed to be the cause of global warming. The longest such record exists at Mauna Loa, but these measurements have been independently confirmed at many other sites around the world. The annual fluctuation in carbon dioxide is caused by seasonal variations in carbon dioxide uptake by land plants. Since many more forests are concentrated in the Northern Hemisphere, more carbon dioxide is removed from the atmosphere during Northern Hemisphere summer than Southern Hemisphere summer. This annual cycle is shown in the inset figure by taking the average concentration for each month across all measured years. The red curve shows the average monthly concentrations, and blue curve is a smoothed trend.

Keeling could see there was a strong seasonal variation of about 3 parts per million in the carbon dioxide concentration, as CO2 got sucked in by plants while they were growing and released when they weren’t. By 1961 he could see a distinct rise in CO2.
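
The seasonal wiggle and the long-term rise can be separated with a few lines of analysis. A sketch, assuming a hypothetical CSV of Mauna Loa monthly means with columns year, month and co2 (the filename below is made up):

```python
import pandas as pd

# Sketch of separating the trend and the seasonal cycle in the Keeling curve.
# Assumes a hypothetical CSV of monthly means with columns "year", "month",
# "co2"; the filename is invented for this example.
df = pd.read_csv("mauna_loa_monthly.csv")

# Long-term trend: a 12-month centred rolling mean smooths out the seasons.
df["trend"] = df["co2"].rolling(window=12, center=True).mean()

# Seasonal cycle: average the detrended values month by month, as in the
# inset of the figure above.
df["detrended"] = df["co2"] - df["trend"]
seasonal_cycle = df.groupby("month")["detrended"].mean()

print(seasonal_cycle.round(2))                       # the few-ppm annual cycle
print(df[["year", "month", "trend"]].dropna().tail())  # the steadily rising trend
```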

image

The little red arrows in the image above are emphasising that each year was showing an increasing concentration of carbon dioxide.

In 1973 insurance companies recognised the effects of global warming and put up premiums.

https://en.wikipedia.org/wiki/Munich_Re

Munich Re Group or Munich Reinsurance Company (German: Münchener Rück; Münchener Rückversicherungs-Gesellschaft) is a reinsurance company based in Munich, Germany. It is one of the world’s leading reinsurers. ERGO, a Munich Re subsidiary, is the Group’s primary insurance arm. Munich Re’s shares are listed on all German stock exchanges and on the Xetra electronic trading system. Munich Re is included in the DAX index at the Frankfurt Stock Exchange, the Euro Stoxx 50, and other indices.

What is a reinsurance company? We adults insure our homes, cars, bikes etc. against burglary or damage. But what happens if large-scale damage occurs during a storm? The insurance companies that we deal with work with larger companies like Munich Re in order to spread the risk. Munich Re (one of the largest reinsurers in the world) was big and clever enough to remain solvent after paying out claims for the 1906 San Francisco earthquake.

https://en.wikipedia.org/wiki/1906_San_Francisco_earthquake

Insurers are very clever. They saw the risks of tobacco before the medical profession did, and put up the premiums for life cover. They also saw, in 1973, that man-made climate change, global warming and the greenhouse effect were real, and that this would lead to increases in weather-related natural catastrophes. The financial burden of these catastrophes would have to be borne by insurers and the public, so insurance companies put up the buildings and contents premiums. Insurers will bear the cost of one storm in one year, but in the next and following years we, the public, will bear it.

“It’s not personal, it’s strictly business”

Assessing the costs of historical inaction on climate change

https://www.nature.com/articles/s41598-020-66275-4

It has been proposed that inaction on climate change will cost us 15% of the global GDP by 2085.

https://en.wikipedia.org/wiki/Gross_domestic_product

Gross domestic product (GDP) is a monetary measure of the market value of all the final goods and services produced in a specific time period.

Between 1970 and 1990 things started to happen and climate change became accepted. Fossil fuel companies funded research into global warming.

Surprisingly in the 1970s, Exxon, one of the world’s largest fossil fuel companies, confirmed climate change.

https://www.wired.com/2015/09/exxons-scientists-confirmed-climate-changein-70s/

image

In 1977, James Black, a senior company scientist at Exxon, said “there is general scientific agreement that … mankind is influencing the global climate through carbon dioxide release from the burning of fossil fuels”.

There was a bit of a problem in that we knew we were releasing carbon dioxide, but where did it go? How much stayed in the air, and how much went into plants, the oceans and so on?

Exxon paid a million US dollars to modify one of their largest ships (a supertanker) to monitor carbon dioxide levels over three years. They were serious about it.

image

military ship next to tanker for scale

image

Researchers conducted Exxon’s first climate-related project aboard the Esso Atlantic tanker, pictured here, between 1979 and 1982. It sailed around the world for three years measuring carbon dioxide levels.

There have only been seven ships built that are heavier than 0.5 million tonnes and Exxon had one of them.

The image below shows the difference between an empty tanker and a loaded tanker. The empty tanker has a bigger distance between the anchor and the waterline.

image

https://insideclimatenews.org/news/17092015/exxon-believed-deep-dive-into-climate-research-would-protect-its-business/

image

The route of Exxon’s Esso Atlantic tanker.

In 1980 Exxon recognised this carbon dioxide thing was serious and started paying for climate change research. They assembled teams of scientists and climate modellers to work out what was happening and, realising there was a huge problem, teamed up with other fossil fuel companies including Shell, Mobil, Amoco, Texaco and Standard Oil to set up a consortium called the “Carbon Dioxide and Climate Task Force”. By 1982 they were producing scientific papers.

https://corporate.exxonmobil.com/Energy-and-environment/Environmental-protection/Climate-change/ExxonMobil-four-decades-of-climate-science-research#Mediareporteddocuments

https://corporate.exxonmobil.com/-/media/Global/Files/climate-change/media-reported-documents/03_1982-Exxon-Primer-on-CO2-Greenhouse-Effect.pdf

image

image

Exxon’s 1982 primer (above) predicted that by 2020 the atmospheric carbon dioxide concentration would be about 415 ppm, which is very close to the actual value. It also predicted the temperature rise by 2020 would be about 1 °C.

This work was done 38 years ago, so the fossil fuel companies knew what was going on, and Exxon, as a good corporate citizen, decided not to develop a natural gas field (expected to be the largest in the world) in the South China Sea.

https://insideclimatenews.org/infographics/exxons-natuna-gas-field-major-source-co2/

image

The reason was that the gas was about 70% carbon dioxide, which they would have had to vent into the atmosphere. This would have accounted for 1% of all carbon dioxide emissions by the human race at that time. It would have been the largest point emitter of carbon dioxide anywhere on the planet (and that was before any of the gas was burnt).

The point when global warming was covered up.

In 1990 fossil fuel companies stated that the science behind global warming was a fact, but a little later they reversed their opinion and denied global warming. A devil’s bargain lasting thirty years.

Something happened in 1990 to cause fossil fuel companies to stop what they were doing. They changed the “Carbon Dioxide and Climate Task Force”, which did climatology research, into something different: “The Global Climate Coalition”, basically a lobby group geared to keeping the world burning fossil fuels.

The reason for this change isn’t known. There is no access to any communications about it. There is access to lots of emails, but not the important ones.

Although they were fossil fuel companies, what they were really delivering was energy. They had two options for doing this.

Option one involved the companies moving out of providing fossil fuels and into selling energy. This was the right thing to do but it was new “territory”. It was risky. A company might do well, but it might fail.

Option two was business as usual PLUS fund denialist disinformation campaigns in the same way that tobacco and alcohol companies worked. This was safer.

The companies continued with option two for 15 or 16 years, until the Royal Society said enough is enough.

https://www.nature.com/news/2006/061002/full/061002-12.html

image

https://royalsociety.org/

https://en.wikipedia.org/wiki/Royal_Society

The Royal Society, formally The Royal Society of London for Improving Natural Knowledge, is a learned society and the United Kingdom’s national academy of sciences. Founded on 28 November 1660, it was granted a royal charter by King Charles II as “The Royal Society”. It is the oldest national scientific institution in the world. The society fulfils a number of roles: promoting science and its benefits, recognising excellence in science, supporting outstanding science, providing scientific advice for policy, fostering international and global co-operation, education and public engagement.

Past members have included Isaac Newton, Albert Einstein and Marie Curie.

https://insideclimatenews.org/news/16092015/exxons-own-research-confirmed-fossil-fuels-role-in-global-warming/

Exxon: The road not taken

https://insideclimatenews.org/infographics/exxon-science-vs-misinformation/

image

The Royal Society wrote a letter to Exxon telling them to stop spreading inaccurate and misleading claims about climate science. An example: over twenty years their position had shifted from “there’s general scientific agreement about global warming” to “the scientific evidence for global warming is just not conclusive”. Another example: “this is serious, we’ve got five to ten years before we have to make the hard decisions” became “it’s not going to make any difference if we do nothing or something, now or in twenty years”. Exxon had been funding dozens of organisations that told lies about climate change. The Royal Society worked out that Exxon had paid out around 115 million dollars a year. However, The Guardian newspaper in 2013 said that conservative groups were spending up to a billion dollars a year to fight action on climate change.

https://www.theguardian.com/environment/2013/dec/20/conservative-groups-1bn-against-climate-change

image

https://en.wikipedia.org/wiki/Suzanne_Goldenberg

Suzanne Goldenberg is a Canadian-born author and journalist currently employed by The Guardian as their United States environmental correspondent.

image

In 2010 Exxon had the highest Market Capitalisation on the Dow Jones Index.

https://en.wikipedia.org/wiki/Market_capitalization

Market capitalisation, commonly called market cap, is the market value of a publicly traded company’s outstanding shares.

Market capitalisation is equal to the share price multiplied by the number of shares outstanding. Since outstanding stock is bought and sold in public markets, capitalisation could be used as an indicator of public opinion of a company’s net worth and is a determining factor in some forms of stock valuation.

https://en.wikipedia.org/wiki/Dow_Jones_Industrial_Average

The Dow Jones Industrial Average (DJIA), Dow Jones, or simply the Dow is a stock market index that measures the stock performance of 30 large companies listed on stock exchanges in the United States.

Exxon did well with “business as usual” for twenty years. It had the highest market capitalisation in the US in the year that the Tesla car company (electric cars and clean energy) opened the Tesla Factory in Fremont, California, which Tesla had bought for $42 million.

https://en.wikipedia.org/wiki/Tesla,_Inc.

Tesla, Inc. (formerly Tesla Motors, Inc.) is an American electric vehicle and clean energy company based in Palo Alto, California. Tesla’s current products include electric cars, battery energy storage from home to grid scale, solar panels and solar roof tiles, and related products and services.

But what was the cost of this fossil fuel business? How was money moving quietly in the background? Fast forward to 2013.

https://www.elibrary.imf.org/view/IMF071/20361-9781475558111/20361-9781475558111/20361-9781475558111.xml?redirect=true

https://greenfiscalpolicy.org/imf-report-energy-subsidy-reform-lessons-and-implications-full-version/

The International Monetary Fund (IMF) investigated subsidies (i.e., money given for free to companies) and stated that this process needed reform. They pointed out that in 2011 the free money given by governments to Big Fossil Fuel companies worldwide amounted to 1.9 trillion US dollars, out of a global GDP of about 80 trillion US dollars. This is roughly 2.5% of global GDP, or 8 cents of every dollar that every government earned that year.

image

Roll forward to 2019 and nothing much has changed.

https://www.elibrary.imf.org/view/IMF001/24483-9781484311493/24483-9781484311493/24483-9781484311493.xml

image

Business as usual. “Global fossil fuel subsidies remain large.” In 2017 government subsidies to fossil fuel companies were about 4.7 trillion US dollars (about 6.3% of global GDP and about 8% of all government revenues over the entire planet – money obtained from selling things, raising taxes etc.).

OK, so we give them 8 cents in every dollar; surely good things happen to the world in general as a result of giving free money to fossil fuel companies?

Not according to the IMF. Good things do not happen. Economic growth is depressed, bad things happen in the environment, bad things happen to societies around the world, bad things happen to people’s health by giving fossil fuel companies free money. Governments giving away 8% of their revenue makes the world a worse place not a better place.

Now to the cover-up.

So, back in 2010 Exxon had the highest market capitalisation on the Dow Jones on the day of the Tesla factory launch.

In 2020 things have rather changed. Tesla, by itself, is worth more than the next biggest car companies combined, and Exxon has been “removed” from the Dow Jones Industrial Average (a blue-chip stock market barometer). Exxon has also been overtaken in value by a wind and solar producer called NextEra.

If we had been able to see into the future in 2010, and had money to invest, then putting it into NextEra would have given us a 600% profit, but investing in ExxonMobil would have given us a loss of 25%.

https://en.wikipedia.org/wiki/Blue_chip_(stock_market)

A blue chip is stock in a corporation with a national reputation for quality, reliability, and the ability to operate profitably in good and bad times.

https://en.wikipedia.org/wiki/NextEra_Energy

NextEra Energy, Inc. is an American energy company with about 46 gigawatts of generating capacity, revenues of over $17 billion in 2017, and about 14,000 employees throughout the US and Canada. It is believed to be the largest electric utility holding company by market capitalisation. Its subsidiaries include Florida Power & Light (FPL), NextEra Energy Resources, NextEra Energy Partners, Gulf Power Company, and NextEra Energy Services.

https://reneweconomy.com.au/worlds-biggest-wind-and-solar-producer-now-worth-more-than-exxonmobil-26783/

image

In the first half of 2020 NextEra made a net profit of 1.7 billion US dollars, while Exxon made a loss of 1.7 billion US dollars.

How is Exxon responding? Well, they plan to increase drilling.

https://www.scientificamerican.com/article/exxons-internal-plans-reveal-rising-co2-emissions/

image

https://www.ft.com/content/c343b958-63f4-44a4-9485-130d7740a843

image

The Economist, the Financial Times and other publications have written about the slow death of fossil fuels. Fossil fuels are turning into a stranded asset.

What is a stranded asset?

https://en.wikipedia.org/wiki/Stranded_asset

Stranded assets are “assets that have suffered from unanticipated or premature write-downs, devaluations or conversion to liabilities”. Stranded assets can be caused by a variety of factors and are a phenomenon inherent in the ‘creative destruction’ of economic growth, transformation and innovation; as such they pose risks to individuals and firms and may have systemic implications.

The term is important to financial risk management in order to avoid economic loss after an asset has been converted to a liability. Accountants have measures to deal with the impairment of assets (e.g., IAS 16) which seek to ensure that an entity’s assets are not carried at more than their recoverable amount. In this context, stranded assets are also defined as an asset that has become obsolete or non-performing, but must be recorded on the balance sheet as a loss of profit.

An example of a stranded asset comes from shipping.

In the 16th, 17th and 18th centuries ships moved around using sails, and if your company made sails you were doing really well until the steam engine, powered by coal, came along. Sails became a stranded asset. Steam engines were great until diesel came along. Steam engines became the stranded asset. Diesel engines are gradually becoming a stranded asset as electric motors using renewable energy sources take over. Economists are saying fossil fuels are becoming a stranded asset. This is the economy talking: “Nothing personal, strictly business.”

So, from fossil fuels we’ve had 30 years of well-funded climate change denial. Fossil fuel companies have had many years of huge profits, but Exxon is now going broke (bargains with the devil are usually bad in the end). And, of course, we have had to suffer from climate change, and over the last thirty years it has been expensive for us and the planet.

How to fix climate change

1) The philosophical approach

A hopeful message is that, after all the funding given to organisations who say there is no such thing as climate change, only 10% of US citizens and 13% of UK citizens are climate deniers. You can’t really blame them, because there has been a well-funded disinformation campaign.

The motto of successful denialists is “Doubt is our product”

https://en.wikipedia.org/wiki/Merchants_of_Doubt

Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming is a 2010 non-fiction book by American historians of science, Naomi Oreskes and Erik M. Conway. It identifies parallels between the global warming controversy and earlier controversies over tobacco smoking, acid rain, DDT, and the hole in the ozone layer. Oreskes and Conway write that in each case “keeping the controversy alive” by spreading doubt and confusion after a scientific consensus had been reached was the basic strategy of those opposing action. In particular, they show that Fred Seitz, Fred Singer, and a few other contrarian scientists joined forces with conservative think tanks and private corporations to challenge the scientific consensus on many contemporary issues.

image

This is how it works: you say something is 21%, but someone else says it is 22%, so “obviously” you are both wrong. Any doubt, and you lose.

2) The physical approach

Zero emissions of carbon dioxide are possible, and we could reduce the atmospheric concentration from 411 ppm to 360 ppm (the 1996 value).

https://drawdown.org/

Their mission is to help the world reach “Drawdown”— the point in the future when levels of greenhouse gases in the atmosphere stop climbing and start to steadily decline, thereby stopping catastrophic climate change — as quickly, safely, and equitably as possible.

Remove carbon dioxide from the atmosphere and stop putting it there in the first place

image

https://drawdown.org/sites/default/files/pdfs/TheDrawdownReview%E2%80%932020%E2%80%93Download.pdf

Various ways of doing it:

https://en.wikipedia.org/wiki/Sustainable_energy

Reduce the sources –

It is possible to reduce each individual source of carbon dioxide and pretty quickly bring it down to zero. Then support or increase the things that can be used instead of carbon dioxide producing activities.

There are natural sinks on the land and in the ocean, or we can make them.

Methods of fixing each of the carbon dioxide emission sectors. The percentage of emissions varies depending on how you measure them, but roughly:

30% of emissions are a result of producing heat and electricity. This is very easy to fix technically and scientifically: Google renewable energy technologies and you will find a very long list. As time goes on there will be better inventions and easier ways of producing them, but the technology is present now to get the job done.

16% of emissions are due to transport. Transport can be broken down into two categories – short haul and long haul. Short haul (i.e., short range) can use electricity (generated from renewable sources) and batteries. Long haul, including ships and aeroplanes, can use hydrogen as the energy source (about 5% of the world’s greenhouse emissions are caused by aeroplanes).

Airbus has said if anybody is prepared to give them an order, they can build the planes now. Some zero emission planes will be flying by 2028 and commercially carrying passengers by 2035.

https://www.airbus.com/newsroom/press-releases/en/2020/09/airbus-reveals-new-zeroemission-concept-aircraft.html

image

A turbofan design (120-200 passengers) with a range of 2,000+ nautical miles, capable of operating transcontinentally and powered by a modified gas-turbine engine running on hydrogen, rather than jet fuel, through combustion. The liquid hydrogen will be stored and distributed via tanks located behind the rear pressure bulkhead.

image

A turboprop design (up to 100 passengers) using a turboprop engine instead of a turbofan and also powered by hydrogen combustion in modified gas-turbine engines, which would be capable of traveling more than 1,000 nautical miles, making it a perfect option for short-haul trips.

image

A “blended-wing body” design (up to 200 passengers) concept in which the wings merge with the main body of the aircraft with a range similar to that of the turbofan concept. The exceptionally wide fuselage opens up multiple options for hydrogen storage and distribution, and for cabin layout.

12% of emissions are caused by manufacture and construction. Steel causes 8% of the emissions. Carbon is added to iron oxide to pull the oxygen off and create carbon dioxide. Hydrogen could be used just as easily.
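
The chemistry being described can be summarised in two simplified overall reactions (a sketch only; real blast furnaces and direct-reduction plants involve several intermediate steps):

```latex
% Simplified overall reactions for reducing iron ore (a sketch, not the full
% process chemistry).
% Conventional route: carbon (from coke) strips off the oxygen and leaves as CO2.
\[
2\,\mathrm{Fe_2O_3} + 3\,\mathrm{C} \longrightarrow 4\,\mathrm{Fe} + 3\,\mathrm{CO_2}
\]
% Hydrogen route: the oxygen leaves as water vapour instead of carbon dioxide.
\[
\mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}
\]
```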

11% of emissions are caused by agriculture. This is complicated because it involves living things with DNA, and it can fight us. There are a lot of things we can do including eating less meat.

8% of emissions are caused by the inevitable “other stuff”. But “stuff” can be done here.

6% of emissions are caused by land use change and forestry. 30% of the land used for agriculture could be turned back into forest. We also waste 40% of our food. There is absolutely no reason why people should be starving in the 21st century.

5% of emissions are caused by industrial processes and fugitive emissions (stuff leaking out of pipes).

https://en.wikipedia.org/wiki/Fugitive_emission

Fugitive emissions are emissions of gases or vapours from pressurized equipment due to leaks and other unintended or irregular releases of gases, mostly from industrial activities. As well as the economic cost of lost commodities, fugitive emissions contribute to air pollution.

5% of emissions are caused by buildings (creating the buildings and the materials involved). Cement, by itself, accounts for 8% of emissions.

https://en.wikipedia.org/wiki/Cement

A cement is a binder, a substance used for construction that sets, hardens, and adheres to other materials to bind them together. Cement is seldom used on its own, but rather to bind sand and gravel (aggregate) together. Cement mixed with fine aggregate produces mortar for masonry, or with sand and gravel, produces concrete. Concrete is the most widely used material in existence and is only behind water as the planet’s most-consumed resource.

By far the most common type of cement is hydraulic cement, which hardens by hydration of the clinker minerals when water is added.

First, the limestone (calcium carbonate) is burned to remove its carbon, producing lime (calcium oxide) in what is known as a calcination reaction. This single chemical reaction is a major source of global carbon dioxide emissions, and it also requires a great deal of heat. You don’t actually need either of them: you can use a geopolymer instead.
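
In numbers, the calcination step described above looks like this (a simplified sketch using round molar masses):

```latex
% Calcination of limestone, the process-emission step described above.
% Approximate molar masses: CaCO3 = 100 g/mol, CaO = 56 g/mol, CO2 = 44 g/mol.
\[
\mathrm{CaCO_3} \xrightarrow{\;\approx 900\,^{\circ}\mathrm{C}\;} \mathrm{CaO} + \mathrm{CO_2}
\]
% So roughly 0.44 tonnes of process CO2 are released for every tonne of
% limestone calcined, before counting the fuel burnt to supply the heat.
```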

https://en.wikipedia.org/wiki/Geopolymer_cement

image

Geopolymer cement is a binding system that hardens at room temperature.

It is a more environmentally friendly alternative to conventional Portland cement. It relies on minimally processed natural materials or industrial by-products to significantly reduce the carbon footprint of cement production, while also being highly resistant to many common concrete durability issues.

Geopolymer cements exist which may cure more rapidly than Portland-based cements.

So, fossil fuels are rapidly becoming a stranded asset.

Part 2: Making coffee the wrong way

Business can’t happen without the essentials. You need people and involvement, but more importantly you need good coffee.

I should add here that the last part of the above is not my comment but Dr Karl’s. I prefer bog standard Nescafe Gold blend, which my husband calls “floor sweepings” (Nestle, if you would like to sue him, please do and I will back you up and we could share the compensation), although when I was a teacher the drain got more coffee than I did.

https://en.wikipedia.org/wiki/History_of_coffee

According to Leonhard Rauwolf’s 1583 account, coffee became available in England no later than the 16th century, largely through the efforts of the Levant Company.

https://en.wikipedia.org/wiki/Leonhard_Rauwolf

https://en.wikipedia.org/wiki/Levant_Company

The general method of making coffee is similar to how it was prepared by the Sufis in the 15th century. Baristas are trained to make good coffee.

https://en.wikipedia.org/wiki/Barista

A barista is a person, usually a coffeehouse employee, who prepares and serves espresso-based coffee drinks.

https://worldbaristachampionship.org/

https://en.wikipedia.org/wiki/World_Barista_Championship

The World Barista Championship (WBC) is an annual barista competition operated by World Coffee Events for the title of World Barista Champion. The competition is composed of the winners of the national barista championships, which are operated by the Specialty Coffee Association (SCA) chapters, or an approved, independent, non-profit national body.

The competition involves each competitor having to make four identical espressos for four judges who live and breathe coffee.

Amazingly, no matter how experienced they are, baristas can’t do it.

Through no fault of their own, they (and we) have been making coffee wrong for six centuries.

Chaos causes clumping. The “cure” is less-is-more

The Istituto Nazionale Espresso Italiano method of making espresso

http://www.espressoitaliano.org/

(a) 6.5-7.5g of freshly ground coffee

(b) Hot water at 86-90 °C

(c) Pressure of 9 atmospheres (3-4x the pressure inside a car tyre)

(d) Process for 20-30 seconds

This should give you 23-27ml of dark hot coffee liquid.
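Just for fun, here is a minimal sketch in Python that checks a shot against the ranges quoted above. The function name, the range table and the example numbers are my own invention, purely for illustration; this is not an official Istituto tool.

```python
# Toy check of an espresso shot against the ranges quoted above.
# Everything here (names, structure, example values) is invented for
# illustration only.

CERTIFIED_RANGES = {
    "dose_g":       (6.5, 7.5),   # freshly ground coffee, grams
    "water_temp_c": (86, 90),     # hot water temperature, degrees C
    "pressure_atm": (9, 9),       # extraction pressure, atmospheres
    "time_s":       (20, 30),     # extraction time, seconds
    "yield_ml":     (23, 27),     # liquid in the cup, millilitres
}

def check_shot(**measured):
    """Return a list of parameters that fall outside the quoted ranges."""
    problems = []
    for name, value in measured.items():
        low, high = CERTIFIED_RANGES[name]
        if not (low <= value <= high):
            problems.append(f"{name} = {value} (expected {low}-{high})")
    return problems

if __name__ == "__main__":
    # Example shot: slightly too cool and pulled a little too long.
    issues = check_shot(dose_g=7.0, water_temp_c=85, pressure_atm=9,
                        time_s=32, yield_ml=25)
    print("\n".join(issues) or "Within the quoted ranges")
```

Running it on the example shot flags the water temperature and the extraction time as out of range, which hints at the real problem: there are a lot of knobs to keep inside narrow windows at once.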

However, it is very difficult to get consistency.

Coffee contains more than 2000 different chemicals.

How old should coffee beans be at the time of grinding?

How finely ground should the beans be?

What quantity of coffee grounds should be used?

How firmly should you tamp the coffee grounds?

Is there a best temperature, pressure and time that should be used in processing the coffee?

The answers to these questions turned up in a scientific journal called “Matter”.

https://www.sciencedirect.com/science/article/pii/S2590238519304102

image

The article writes about “fines” and “boulders”.

The coffee beans are ground up, but there is a large range of particle sizes, roughly from 7 to 700 µm, giving a range of 100 to 1. That is two orders of magnitude. The scale of the entire Universe, from the very smallest things to the size of the observable Universe, spans only about 50 orders of magnitude, and we’re using two of them on just coffee. No wonder things are messy.
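As a quick sanity check of the arithmetic (my note, not from the lecture): 700 ÷ 7 = 100 and log10(100) = 2, so the particle sizes really do span two orders of magnitude.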

image

The volume percent particle size distribution at GS = 2.5, 2.0, 1.5, and 1.0. Grinding finer reduces the average boulder size and increases the number of fines. Intruders are boulders that are larger than the aperture of the burr set and hence are further fractured until they can exit the burrs (the part of the grinder that crushes the beans).

image

Surface area and number of coffee particulates produced with a grind setting GS = 2.5. Here, 99% of the particles are <100 µm in diameter and account for 80% of the surface area.

image

The coffee grounds are shown in grey (Ws), and the pore space, which is filled with water during extraction, is shown in blue (Wl). The macroscopic spatial coordinate measuring depth through the bed, z, the microscopic spatial coordinate measuring radial position within the spherical coffee particles, r, as well as the basket radius, R0, are also indicated.

Some of the particles are very big; they should be microscopic (about 100 times smaller).

You would hope that, as the hot water passes through the coffee, there is a fair, equal and democratic extraction of chemicals from each and every individual coffee grain, all finally arriving at the bottom of the espresso cup for you to drink.

Unfortunately, this is not the case. Clumping occurs: little blobs of coffee, about the size of a thumb, act like clay, so water can’t get in and extract the good stuff. Lots of chemicals are extracted from the channels, but very little from the lumpy clumps.

image

Red circle = lumpy clump

The channels are important, as it is the grains lining them that provide most of the goodness (and a bit of the bad) that passes down them to the coffee cup. It is a very chaotic process.

https://en.wikipedia.org/wiki/Randomness

In common parlance, randomness is the apparent lack of pattern or predictability in events. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable,

https://en.wikipedia.org/wiki/Chaos_theory

Chaos theory is a branch of mathematics focusing on the study of chaos — dynamical systems whose apparently random states of disorder and irregularities are actually governed by underlying patterns and deterministic laws that are highly sensitive to initial conditions.

https://en.wikipedia.org/wiki/Butterfly_effect

In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.

The people involved in the coffee study were computational chemists (based in Australia, USA, UK, Switzerland and Ireland). They didn’t need test tubes, just mathematics, computers and a barista.

So how do you produce the perfect cup of coffee (personally I think it comes out of a Nescafe Gold Blend jar – free samples gratefully received)?

Well, here we go:

You grind it less finely, which seems wrong. Surely the finer it is, the more goodness you get? Well yes, but you also get more badness and more clumping. Grind it less finely, use a lower pressure and process for less time, and you will, supposedly, get the perfect espresso.

Part 3: The amazing disappearing anus

Surprisingly there are some animals that have a body part that can disappear.

This is an area of hard science, but a delicate “area” full of daft puns

The scientists needed a lot of cheek to do the research

Not doing the research could lead to falling behind

Doing the research could lead the scientists being the butt of everyone’s jokes

(I would like to add that these are not my dreadful puns, but Dr Karl’s).

However, there are animals that can exist when their anus disappears.

Of course, in the most advanced animals the anus is a permanent part of their bodies (although you would think that evolution would have given a more sensible method of getting rid of solid waste than sitting on a toilet).

Gedanken experiment

Imagine putting your hands together like the image below (apologies for my little stubby hands, but at least you can admire my friend Jess’ work with nails. I should add this was done when we weren’t in tier 3 or 4).

In your hands there is a mixture of solid, liquid and gas (yes, I know that the gas will just escape, but pretend it won’t)

image

Move your fingers about so the imaginary gas can escape but the solid and liquid can’t.

Your anal sphincter does this many times a day (especially if you are a vegan like me).

https://en.wikipedia.org/wiki/Internal_anal_sphincter

Getting to the bottom of anal evolution (The title of an actual paper)

Scientists have looked at various different sorts of creatures

https://www.sciencedirect.com/science/article/pii/S004452311500011X

image

The anus is the high point of the last 540 million years of evolution and scientists don’t know where it came from.

image

Gut architecture and hindgut types across animal lineages.

Diversity of gut morphologies and types of hindguts observed in the different metazoan lineages, in relation to the most recent animal phylogeny (Dunn et al., 2014). The diagram is a simplification, and exceptions from the general coding are possible.

Hypotheses about the evolution of the platyhelminth connection between gonads and digestive tract.

(A) According to Karling (1940), the bursa evolved from the genital system following this sequence of events: separate gonadal system for male (♂) and female gonads (♀) (a) got connected with each other (b) and with the digestive tract (c) to resorb the sperm. The organ for the resorption of excessive sperm got elaborated (d) and after its digestive function disappeared, the ductus for the release of oocytes and connection with the gut remained (e).

(B) According to Remane (1951), the bursa evolved from the hindgut of a coelomate ancestor as follows: at the beginning the hindgut and the female genital duct were connected in a cloaca (a). Then, the gut lost its function as transfer of digesting material (b) and the cloaca area that corresponds to the hindgut (bursa) acquired the function to digest material that accidentally entered there, such as remnants of sperm and yolk. This material could still be in some cases given to the gut by vacuoles. The bursa became the receptaculum seminis (c) and ducts from this cavity to the female gonad developed (d), until the receptaculum seminis formed its own channel (e).

(C) According to Steinböck (1966), the bursa evolved from the endoderm: initially, the ancestor had a sack like gut through which oocytes were laid (a). After the female gonad gained complexity as an organ (b), a portion of the gut developed to a bulb-like structure (bursa, in c) and a genital tract and gonopore evolved (d). In some lineages, the endodermal bursa separates from the gut (e), resulting in the secondary evolution of a blind gut.

Platyhelminth examples for different architectures: A(a) acoels, A(d), B(a), C(c): Myozona, Promacrostomum (Macrostomida), Enterogonia (Polycladida), Coelogynopora (Proseriata) A(c), B(a, b): Kambanella, Pseudograffilla (Rhabdocoela), B(c), C(e): some triclads and polyclads, B(d) and B(e) some rhabdocoels (see references in original literature).

Every animal needs a gut or git (unless they are single celled or a certain type of parasite)

A little diversion in parasite land

https://en.wikipedia.org/wiki/Parasitism

Parasitism is a symbiotic relationship between species, where one organism, the parasite, lives on or inside another organism, the host, causing it some harm, and is adapted structurally to this way of life.

https://en.wikipedia.org/wiki/List_of_parasitic_organisms

This is an incomplete list of organisms that are true parasites upon other organisms.

1) The parasite grazes the skin

image

image

2) It uses its proboscis to penetrate the skin.

3) It buries itself deep inside the skin

4) The proboscis enters a blood vessel

5) The parasite leaves the surface entirely

6) It is no longer involved with the surface of the host

7) The parasite’s eggs get released into the outside world

image

8) Sometimes the parasite can go even further

9) Having no connection with the outside world

10) Lives in the host’s tissues

image

Parasites that don’t have a gut include the malaria parasite and hookworm

https://en.wikipedia.org/wiki/Plasmodium

Plasmodium is a genus of unicellular eukaryotes that are obligate parasites of vertebrates and insects. The life cycles of Plasmodium species involve development in a blood-feeding insect host which then injects parasites into a vertebrate host during a blood meal. Parasites grow within a vertebrate body tissue (often the liver) before entering the bloodstream to infect red blood cells. The ensuing destruction of host red blood cells can result in disease, called malaria. During this infection, some parasites are picked up by a blood-feeding insect (mosquitoes in majority cases), continuing the life cycle.

https://en.wikipedia.org/wiki/Hookworm

Hookworms are intestinal, blood-feeding, parasitic roundworms that cause types of infection known as helminthiases. Hookworm infection is common in countries with poor access to adequate water, sanitation, and hygiene. In humans, infections are caused by two main species of roundworm, belonging to the genera Ancylostoma and Necator. In other animals the main parasites are species of Ancylostoma.

Most animals have a mouth which leads to some form of stomach and intestine. In the process of digestion, the chemicals in food are broken down into smaller chemicals. These chemicals are absorbed into the bloodstream and used as nutrients or building blocks or both. The solid waste material is egested to the outside through the anus. (Some waste also leaves via the mouth, usually water and carbon dioxide, but these are products of respiration; liquid waste also leaves the body via the kidneys, bladder and urethra.)

So, it seems that, apart from the animal that has to grow a new anus, animals can have two types of gut/anus depending on how advanced they are.

They could have a “blind gut” (= sack gut). Very simple. The food goes in and the waste comes out of the same opening. This means that you would have to digest your breakfast and egest the waste before you could have lunch.

Coral is an organism that does this

https://en.wikipedia.org/wiki/Coral

Corals are marine invertebrates within the class Anthozoa of the phylum Cnidaria. They typically live in compact colonies of many identical individual polyps. Coral species include the important reef builders that inhabit tropical oceans and secrete calcium carbonate to form a hard skeleton.

A coral “group” is a colony of myriad genetically identical polyps. Each polyp is a sac-like animal typically only a few millimeters in diameter and a few centimeters in height. A set of tentacles surround a central mouth opening. Each polyp excretes an exoskeleton near the base. Over many generations, the colony thus creates a skeleton characteristic of the species which can measure up to several meters in size. Individual colonies grow by asexual reproduction of polyps. Corals also breed sexually by spawning: polyps of the same species release gametes simultaneously overnight, often around a full moon. Fertilized eggs form planulae, a mobile early form of the coral polyp which when mature settles to form a new colony.

Coral is a stationary animal that glues itself to rock.

Although some corals are able to catch plankton and small fish using stinging cells on their tentacles, most corals obtain the majority of their energy and nutrients from photosynthetic unicellular dinoflagellates of the genus Symbiodinium that live within their tissues.

The food they do catch is taken into the mouth and digested. The waste products leave through the mouth.

image

Anatomy of a stony coral polyp

This method of dealing with solid waste is OK if you are short and stubby. But what if you are a longer creature, like the ribbon worm?

https://en.wikipedia.org/wiki/Nemertea

Nemertea is a phylum of invertebrate animals also known as ribbon worms or proboscis worms.

The typical nemertean body is very slim in proportion to its length. The smallest are a few millimetres long, most are less than 20 centimetres, and several exceed 1 metre. The longest animal ever found, at 54 metres long, may be a specimen of Lineus longissimus, although L. longissimus is usually only a few millimeters wide. The bodies of most nemerteans can stretch a lot, up to 10 times their resting length in some species, but reduce their length to 50% and increase their width to 300% when disturbed. A few have relatively short but wide bodies, for example Malacobdella grossa is up to 3.5 centimetres long and 1 centimetre wide, and some of these are much less stretchy. Smaller nemerteans are approximately cylindrical, but larger species are flattened dorso-ventrally. Many have visible patterns in various combinations of yellow, orange, red and green

image

image

Adriaan Gittenberger & Cor Schipper – http://www.zoologischemededelingen.nl/z/zoomed/images/vol82/nr01/8201a07fig1-3.jpg (article http://www.zoologischemededelingen.nl/82/nr01/a07)

Lineus longissimus, picture taken in Grevelingen near Scharendijke (the Netherlands)

If you were as long as 54 m you couldn’t possibly take in food and egest the waste through the same mouth, which is why the “through gut” evolved. It should be noted, however, that evolution is an accidental process and a “through gut” didn’t evolve to sort out a problem. It simply happened and proved to be useful.

The “through gut” is more complex but is better at breaking down and absorbing the nutrients from food. This means you can eat lunch whilst your body is still digesting breakfast (unless you have both together = brunch).

There are all sorts of complications on how the gut is “made” and looks. In humans it is messy but it works.

Now to the animal that grows a new anus every time it “poos”, which is basically every ten minutes when it is a baby and every hour as an adult. It is basically a jellyfish, and its species has been around for about half a billion years. It’s called the warty comb jelly and it is a predator.

https://en.wikipedia.org/wiki/Mnemiopsis

image

Mnemiopsis leidyi, the warty comb jelly or sea walnut, is a species of tentaculate ctenophore (comb jelly). It is native to western Atlantic coastal waters, but has become established as an invasive species in European and western Asian regions. Three species have been named in the genus Mnemiopsis, but they are now believed to be different ecological forms of a single species M. leidyi by most zoologists.

The “comb” refers to the fact that the organism has cilia, which it uses for swimming.

https://en.wikipedia.org/wiki/Cilium

image

The cilium (from Latin ‘eyelash’; the plural is cilia) is an organelle found on eukaryotic cells in the shape of a slender protuberance that projects from the much larger cell body.

It can eat up to 10 times its body weight a day. Its food can include small fish and crustaceans.

Its size can range from 0.001 to 1,500 mm (typically 50-100 mm) and it can live to a very old age.

It has had a paper written about it

https://www.biorxiv.org/content/10.1101/511576v1.full

image

image

ao is the apical organ

Apical organs are sensory structures present in many marine invertebrate larvae where they are considered to be involved in their settlement, metamorphosis and locomotion.

image

The red arrow is pointing to the anus

image

You can’t actually see the right anal opening, because it’s not there yet. (“Right” here means the animal’s right: biological images are not mirror-flipped, so the animal’s right anal opening would appear on the left of the picture.)

image

The finger on the above image is indicating the width of it.

image

The finger on the above image is indicating a holding chamber (left fork ~ left rectum – holding chamber)

image

Over here on the “right hand side” there is another rectum.

So, this creature has two anuses. We humans have just the one. There is a flatworm, Thysanozoon nigropapillosum, that has dozens of them.

https://en.wikipedia.org/wiki/Thysanozoon_nigropapillosum

Thysanozoon nigropapillosum is a species of polyclad flatworms belonging to the family Pseudocerotidae. Some common names include gold-speckled flatworm, marine flatworm, yellow papillae flatworm, yellow-spotted flatworm, and yellow-spotted polyclad flatworm.

image

Anyway, back to the comb jelly

The image below shows the rectum is quite small

image

The image below shows it gets fatter

image

The image below shows it gets smaller again

image

image

The red arrow in the image above is indicating some “stuff” squirting out. The blue arrow is indicating how wide the rectum is.

In the image below the rectum then shrinks back down.

image

Now, what is weird is that, like humans, this jellyfish can be left- or right-handed. Although, more correctly, it is left- or right-anus. It always uses one.

Things get even messier because this creature grows a new anus all the time. The rate at which it does this depends on its age.

But there is something even worse. One poor animal can lose its anus forever.

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0116639

image

This South American scorpion deliberately loses a part of its body in order to save its life. Lizards can also lose their tails when chased by a predator in order to save themselves. It is a clean break at a “cleavage plane”. Cleavage planes separate different segments of an organism and are important in the development of the embryo, in that certain cells develop set functions.

Sometimes the part of the body left behind is full of food for a predator. It has its own nerve supply and will twitch in order to encourage the predator to go for it and leave the living animal alone.

The scorpion can lose its anus permanently. Evolution is real, but not perfect. Just good enough for the organism to have babies.

In the scorpion’s case evolution did make a bit of a mistake, because its anus is not between its hind legs but located near the end of its tail, by the stinger and beyond the cleavage planes.

image

image

Autotomy in Ananteris Thorell, 1891 scorpions.

A. Ananteris balzani Thorell, 1891, adult male from Serra das Araras Ecological Station, Mato Grosso State, Brazil. Dashed lines indicate autotomy cleavage planes between metasomal segments I-IV. B, C. Autotomy in Ananteris solimariae Botero-Trujillo & Flórez, 2011, adult male, video frames. B. Exact moment before autotomy, scorpion fighting to escape. Arrow indicates beginning of cleavage. C. Immediately after autotomy, detached tail twitching. doi:10.1371/journal.pone.0116639.g001

If you manage to grab one of these scorpions by the tail with some tweezers, it separates at a cleavage plane into two parts: the scorpion, and the tail with its anus. Thanks to the clean break at the cleavage plane, the scorpion does heal.

But what about the fact that the scorpion is still eating? The waste builds up inside and cannot get out (you can see the accumulated excrement as the white area in the last image below). These creatures can survive like this for a while (8 to 12 months). This happens to about 5-8% of Ananteris scorpions in the wild, males more than females. Females have to live longer as they have more to lose: they need to keep their stingers in order to provide food for their babies. So, evolution is happy.

image

Post-autotomy healing of severed stump of metasomal segment of adult male Ananteris solimariae Botero-Trujillo & Flórez, 2011. A. One hour after autotomy, with drop of hemolymph. B. One day after, hemolymph loss continues. C. Two days after, hemolymph loss reduced, brown scar beginning to develop. D. Three days after, hemolymph loss reduced, scar developing. E. Four days after, scar almost completely developed. F. Five days after, no hemolymph loss, scar fully formed. G. Ten days after, scar darkened. H. Twenty-five days after, scar fully defined. https://doi.org/10.1371/journal.pone.0116639.g002

image

Ananteris solimariae Botero-Trujillo & Flórez, 2011, adult males, twenty-five days after autotomy. A. Feeding on cricket nymph. B. Attempting to sting, showing swollen opisthosoma. C. Accumulated excrement evident as white area inside opisthosoma (arrow). https://doi.org/10.1371/journal.pone.0116639.g003

The scorpion is able to survive without its anus for 8 to 12 months and during this time is able to mate and have babies.

As an aside, the world record for constipation was set by a person in the UK, who went six months without passing any solid waste.

Climate change with 20/20 vision

An example where something positive can happen.

In the mid-1980s scientists agreed that CFCs were destroying the Earth’s ozone layer. Within a couple of years the Montreal Protocol had been signed (in 1987), CFCs were phased out, and the ozone layer is now healing.

https://en.wikipedia.org/wiki/Chlorofluorocarbon

Chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) are fully or partly halogenated paraffin hydrocarbons that contain only carbon (C), hydrogen (H), chlorine (Cl), and fluorine (F), produced as volatile derivatives of methane, ethane, and propane. They are also commonly known by the DuPont brand name Freon.

The most common representative is dichlorodifluoromethane (R-12 or Freon-12). Many CFCs have been widely used as refrigerants, propellants (in aerosol applications), and solvents. Because CFCs contribute to ozone depletion in the upper atmosphere, the manufacture of such compounds has been phased out under the Montreal Protocol, and they are being replaced with other products such as hydrofluorocarbons (HFCs) including R-410A and R-134a

https://en.wikipedia.org/wiki/Ozone_layer

The ozone layer or ozone shield is a region of Earth’s stratosphere that absorbs most of the Sun’s ultraviolet radiation. It contains a high concentration of ozone (O3) in relation to other parts of the atmosphere, although still small in relation to other gases in the stratosphere. The ozone layer contains less than 10 parts per million of ozone, while the average ozone concentration in Earth’s atmosphere as a whole is about 0.3 parts per million. The ozone layer is mainly found in the lower portion of the stratosphere, from approximately 15 to 35 kilometres above Earth, although its thickness varies seasonally and geographically.

https://en.wikipedia.org/wiki/Montreal_Protocol

The Montreal Protocol on Substances that Deplete the Ozone Layer, also known simply as the Montreal Protocol, is an international treaty designed to protect the ozone layer by phasing out the production of numerous substances that are responsible for ozone depletion. Signed 26 August 1987, it was made pursuant to the 1985 Vienna Convention for the Protection of the Ozone Layer, which established the framework for international cooperation in addressing ozone depletion. The Montreal Protocol entered into force on 26 August 1989, and has since undergone nine revisions, in 1990 (London), 1991 (Nairobi), 1992 (Copenhagen), 1993 (Bangkok), 1995 (Vienna), 1997 (Montreal), 1998 (Australia), 1999 (Beijing) and 2016 (Kigali).

As a result of the international agreement, the ozone hole in Antarctica is slowly recovering. Climate projections indicate that the ozone layer will return to 1980 levels between 2050 and 2070. The Montreal Protocol’s success is attributed to its effective burden sharing and solution proposals, which helped mitigate regional conflicts of interest.

If carbon dioxide and climate change had been taken seriously in the 1970s there might not have been the bushfires, wildfires, coral reef bleaching or large-scale flooding. Unfortunately, it seems that even though scientists recognised we had a problem in 1990, very little has been done.

https://en.wikipedia.org/wiki/Kyoto_Protocol

The Kyoto Protocol is an international treaty which extends the 1992 United Nations Framework Convention on Climate Change (UNFCCC) that commits state parties to reduce greenhouse gas emissions, based on the scientific consensus that (part one) global warming is occurring and (part two) it is extremely likely that human-made CO2 emissions have predominantly caused it. The Kyoto Protocol was adopted in Kyoto, Japan, on 11 December 1997 and entered into force on 16 February 2005. There are currently 192 parties (Canada withdrew from the protocol, effective December 2012) to the Protocol.

Regional conflicts of interest produced shortcomings of the global regulatory approach of the Kyoto Protocol.

https://en.wikipedia.org/wiki/Paris_Agreement

The Paris Agreement is an agreement within the United Nations Framework Convention on Climate Change (UNFCCC), dealing with greenhouse-gas-emissions mitigation, adaptation, and finance, signed in 2016. The agreement’s language was negotiated by representatives of 196 state parties at the 21st Conference of the Parties of the UNFCCC in Le Bourget, near Paris, France, and adopted by consensus on 12 December 2015. As of November 2020, all 196 members of the UNFCCC have signed the agreement and 188 remain party to it. Of the eight countries which are not party to the law, the only significant emitters are Iran, Turkey, and the United States.

A pair of studies in Nature have said that, as of 2017, none of the major industrialized nations were implementing the policies they had envisioned, and none had met their pledged emission reduction targets; even if they had, the sum of all member pledges (as of 2016) would not keep global temperature rise “well below 2 °C”.

However, it’s not too late. The technology is there.
