Electron Microscopy and X-ray spectroscopy

Exploring the Nano Cosmos with Subatomic Particles

Dr Pietro Maiello

Department of Mathematics, Physics and Electrical Engineering at Northumbria University

Dr Maiello is a scientific officer who looks after the department’s electron microscopes. He also builds things and solves problems.



The electron microscope is a remarkable piece of equipment used widely in materials science to characterise and explore the micro/nanocosmos through the use of accelerated electrons. The technique was developed nearly 90 years ago and today it is extensively used in many fields of science and technology as a tool to better understand, develop and explore the small world around us that is not accessible by our senses.

The following are notes from the online lecture. Even though I could stop the video and go back over things, there are likely to be mistakes because I may not have heard things correctly or may not have understood them. I hope Dr Maiello and my readers will forgive any mistakes and let me know what I got wrong.

An electron microscope is just like an optical microscope in that it is used to look at small things.


Above left: The sort of optical microscope found in schools. Above right: An electron microscope not usually found in schools

The basis of all microscopes is subatomic particles


Optical and electron microscopes rely on different interactions, but in both cases it is these interactions that allow small things to be seen


Above left is an optical image of nanofibres. Above right is a scanning electron microscope (SEM) image of the same nanofibres.

The above images are of the same object, but the electron microscope gives much more detail with a greater depth of field. Everything is kept in focus, which is not the case with the optical image. The electron microscope gives a better-quality image of the material: it looks more artificial, but it is full of detail.

Depth of field


For most optical equipment, depth of field (DOF) is the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image. The depth of field can be calculated based on focal length, distance to subject, the acceptable circle of confusion size, and aperture. A particular depth of field may be chosen for technical or artistic purposes. Limitations of depth of field can sometimes be overcome with various techniques/equipment.

The focal length of an optical system is a measure of how strongly the system converges or diverges light.


A strength of the SEM is its enhanced depth of field compared to optical microscopy, as shown above for the radiolarian Trochodicus longispinus. The optical image has a depth of field (= plane in focus) of only a few microns, whereas SEM images can be made to be in focus over hundreds of microns (e.g. by increasing the working distance). Goldstein et al 2003, Fig 1.3.



The Radiolaria, also called Radiozoa, are protozoa of diameter 0.1–0.2 mm that produce intricate mineral skeletons, typically with a central capsule dividing the cell into the inner and outer portions of endoplasm and ectoplasm. The elaborate mineral skeleton is usually made of silica. They are found as zooplankton throughout the global ocean. As zooplankton, radiolarians are primarily heterotrophic, but many have photosynthetic endosymbionts and are, therefore, considered mixotrophs. The skeletal remains of some types of radiolarians make up a large part of the cover of the ocean floor as siliceous ooze. Due to their rapid change as species and intricate skeletons, radiolarians represent an important diagnostic fossil found from the Cambrian onwards.


The micrometre (SI symbol: μm), also commonly known as a micron, is an SI derived unit of length equalling 1 × 10⁻⁶ metre (SI standard prefix “micro-” = 10⁻⁶); that is, one millionth of a metre (or one thousandth of a millimetre, 0.001 mm).

The history of electron microscopy

A tale that is nearly 100 years old



Above left: The first prototype electron microscope, capable of four-hundred-power magnification, developed in 1931 by the physicist Ernst Ruska and the electrical engineer Max Knoll. Above right: The first practical TEM, originally installed at IG Farben-Werke and now on display at the Deutsches Museum in Munich, Germany.

The first electron microscope was completely hand made.

https://en.wikipedia.org/wiki/Ernst_Ruska (Below left)


Ernst August Friedrich Ruska (25 December 1906 – 27 May 1988) was a German physicist who won the Nobel Prize in Physics in 1986 for his work in electron optics, including the design of the first electron microscope.

https://en.wikipedia.org/wiki/Max_Knoll (Above right)

Max Knoll (17 July 1897 – 6 November 1969) was a German electrical engineer.



Ernst Ruska won half of the 1986 Nobel Prize in Physics for inventing the electron microscope; Max Knoll had died in 1969 and so could not share it, as Nobel Prizes are not awarded posthumously.

The electron microscope design hasn’t changed much over the years. Modern electron microscopes have more sophisticated electronics, and information technology (IT) has improved the processing of images.

Resolution of modern microscopes


Optical resolution describes the ability of an imaging system to resolve detail in the object that is being imaged.

An imaging system may have many individual components including a lens and recording and display components. Each of these contributes to the optical resolution of the system, as will the environment in which the imaging is done.

The best resolution for an optical microscope is about 200 nm (2 × 10⁻⁷ metres), whereas the best resolution for an electron microscope (TEM) is about 50 pm (5 × 10⁻¹¹ metres).

An electron microscope can therefore have a resolution nearly four orders of magnitude better than an optical microscope.

As the diameter of an atom is of the order of 10⁻¹⁰ metres, you can see an atom with an electron microscope, but you can’t with an optical microscope.
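The comparison above can be checked with a couple of lines of arithmetic (the figures are the best-case values quoted above; the atom diameter is a rough order-of-magnitude value):

```python
# Rough comparison of the best-case resolutions quoted above.
from math import log10

optical_res = 200e-9   # ~200 nm, best optical microscope resolution
tem_res = 50e-12       # ~50 pm, best TEM resolution

ratio = optical_res / tem_res
orders = log10(ratio)
print(f"TEM is ~{ratio:.0f}x finer ({orders:.1f} orders of magnitude)")

# An atom is ~2 x 10^-10 m across: between the two limits, so it is
# resolvable by a TEM but not by any optical microscope.
atom = 2e-10
print(tem_res < atom < optical_res)  # True
```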

What is an atom?



An atom is the smallest unit of ordinary matter that forms a chemical element.

The basic idea that matter is made up of tiny indivisible particles is very old, appearing in many ancient cultures such as Greece and India. This ancient idea was based on philosophical reasoning rather than scientific reasoning, and modern atomic theory is not based on these old concepts. The word atom is derived from the Greek word atomos, which means “uncuttable”.

https://en.wikipedia.org/wiki/Leucippus (below left is an artist’s impression of Leucippus)

Leucippus (fl. 5th cent. BCE) is reported in some ancient sources to have been a philosopher who was the earliest Greek to develop the theory of atomism—the idea that everything is composed entirely of various imperishable, indivisible elements called atoms. Leucippus often appears as the master to his pupil Democritus, a philosopher also touted as the originator of the atomic theory.


https://en.wikipedia.org/wiki/Democritus (above right is an artist’s impression of Democritus)

Democritus (c. 460 – c. 370 BC) was an Ancient Greek pre-Socratic philosopher primarily remembered today for his formulation of an atomic theory of the universe.

The theory of Leucippus and Democritus held that everything is composed of “atoms,” which are physically, but not geometrically, indivisible; that between atoms, there lies empty space; that atoms are indestructible, and have always been and always will be in motion; that there is an infinite number of atoms and of kinds of atoms, which differ in shape and size.

Unfortunately for western science, Aristotle later came up with the idea of five elements and it wasn’t until the early 1800s that the idea of atoms really came back into fashion.


Aristotle (384–322 BC) was a Greek philosopher and polymath during the Classical period in Ancient Greece.


Roman copy in marble of a Greek bronze bust of Aristotle by Lysippos, c. 330 BC, with modern alabaster mantle.

Aristotle’s classical elements typically refer to earth, water, air, fire, and (later) aether, which were proposed to explain the nature and complexity of all matter in terms of simpler substances.

The reason why the idea of classical elements lasted so long was because the Roman Catholic church adopted Aristotle and accepted his work as fact.

In the early 1800s, the scientist John Dalton noticed that chemical substances seemed to combine and break down into other substances by weight in proportions that suggested that each chemical element is ultimately made up of tiny indivisible particles of consistent weight.



John Dalton FRS (6 September 1766 – 27 July 1844) was an English chemist, physicist, and meteorologist. He is best known for introducing the atomic theory into chemistry.


Atoms and molecules as depicted in John Dalton’s A New System of Chemical Philosophy vol. 1 (1808)

Shortly after 1850, certain physicists developed the kinetic theory of gases and of heat, which mathematically modelled the behaviour of gases by assuming that they were made of particles.

In the early 20th century, Albert Einstein and Jean Perrin proved that Brownian motion (the erratic motion of pollen grains in water) is caused by the action of water molecules; this third line of evidence silenced remaining doubts among scientists as to whether atoms and molecules were real. Throughout the nineteenth century, some scientists had cautioned that the evidence for atoms was indirect, and therefore atoms might not actually be real, but only seem to be real.

https://en.wikipedia.org/wiki/Albert_Einstein (below left)

Albert Einstein (14 March 1879 – 18 April 1955) was a German-born theoretical physicist.

https://en.wikipedia.org/wiki/Jean_Baptiste_Perrin (below right)

Jean Baptiste Perrin ForMemRS (30 September 1870 – 17 April 1942) was a French physicist who, in his studies of the Brownian motion of minute particles suspended in liquids, verified Albert Einstein’s explanation of this phenomenon and thereby confirmed the atomic nature of matter (sedimentation equilibrium). For this achievement he was honoured with the Nobel Prize for Physics in 1926.



Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas).


This is a simulation of the Brownian motion of 5 particles (yellow) that collide with a large set of 800 particles. The yellow particles leave 5 blue trails of random motion and one of them has a red velocity vector.

Atoms were thought to be the smallest possible division of matter until 1897 when J. J. Thomson discovered the electron through his work on cathode rays.



Sir Joseph John Thomson OM PRS (18 December 1856 – 30 August 1940) was a British physicist and Nobel Laureate in Physics, credited with the discovery of the electron, the first subatomic particle to be discovered.



The cathode rays (blue) were emitted from the cathode, sharpened to a beam by the slits, then deflected as they passed between the two electrified plates.

Thomson discovered that the rays could be deflected by an electric field (in addition to magnetic fields, which was already known). He concluded that these rays, rather than being a form of light, were composed of very light negatively charged particles he called “corpuscles” (they would later be renamed electrons by other scientists). He measured the mass-to-charge ratio and discovered it was 1800 times smaller than that of hydrogen, the smallest atom. These corpuscles were a particle unlike any other previously known.

Thomson suggested that atoms were divisible, and that the corpuscles were their building blocks. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge; this was the plum pudding model as the electrons were embedded in the positive charge like raisins in a plum pudding (although in Thomson’s model they were not stationary).


Thomson’s plum pudding model was disproved in 1909 by one of his former students, Ernest Rutherford, who discovered that most of the mass and positive charge of an atom is concentrated in a very small fraction of its volume, which he assumed to be at the very centre.


Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn’t think he’d run into this same problem because alpha particles are much heavier than electrons. According to Thomson’s model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle, and the electrons are so lightweight they should be pushed aside effortlessly by the much heavier alpha particles. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully.

Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with alpha particles. They spotted alpha particles being deflected by angles greater than 90°. To explain this, Rutherford proposed that the positive charge of the atom is not distributed throughout the atom’s volume as Thomson believed, but is concentrated in a tiny nucleus at the centre. Only such an intense concentration of charge could produce an electric field strong enough to deflect the alpha particles as observed.


The Geiger–Marsden experiment

Left: Expected results: alpha particles passing through the plum pudding model of the atom with negligible deflection.

Right: Observed results: a small portion of the particles were deflected by the concentrated positive charge of the nucleus. Rutherford described this outcome as being as surprising as if a bullet had been fired at a piece of tissue paper and it had bounced back.

https://en.wikipedia.org/wiki/Ernest_Rutherford (below left)


Ernest Rutherford, 1st Baron Rutherford of Nelson, OM, FRS, HonFRSE (30 August 1871 – 19 October 1937) was a New Zealand–born British physicist who came to be known as the father of nuclear physics.

https://en.wikipedia.org/wiki/Hans_Geiger (above centre)

Johannes Wilhelm “Hans” Geiger (30 September 1882 – 24 September 1945) was a German physicist.

https://en.wikipedia.org/wiki/Ernest_Marsden (above right)

Sir Ernest Marsden CMG CBE MC FRS (19 February 1889 – 15 December 1970) was an English-New Zealand physicist.

Rutherford wasn’t completely happy with his model. He knew that, as atoms were neutral, the quantity of positive charge in the nucleus (carried by particles given the name protons) should equal the quantity of negative charge, but the nucleus was heavier than its protons alone could account for.



He proposed the nucleus contained a hypothetical neutral particle with a mass similar to the mass of a proton. He named it a neutron and he and one of his colleagues, James Chadwick, set about looking for it. They knew they were looking for a neutral particle so they had to devise an experiment which would cause these particles to release particles that could be detected.



Sir James Chadwick, CH, FRS (20 October 1891 – 24 July 1974) was a British physicist who was awarded the 1935 Nobel Prize in Physics for his discovery of the neutron in 1932.


A schematic diagram of the experiment used to discover the neutron in 1932. At left, a polonium source was used to irradiate beryllium with alpha particles, which induced an uncharged radiation. When this radiation struck paraffin wax, protons were ejected. The protons were observed using a small ionization chamber. Adapted from Chadwick (1932).


Alpha particles, also called alpha rays or alpha radiation, consist of two protons and two neutrons bound together into a particle identical to a helium-4 nucleus.



Rutherford envisioned the atom as a miniature solar system, with electrons orbiting around a massive nucleus, and as mostly empty space, with the nucleus occupying only a very small part of the atom. The neutron had not yet been discovered when he proposed his model, which had a nucleus consisting only of protons.

The planetary model of the atom had two significant shortcomings. The first is that, unlike planets orbiting a sun, electrons are charged particles. An accelerating electric charge is known to emit electromagnetic waves according to the Larmor formula in classical electromagnetism. An orbiting charge should steadily lose energy and spiral toward the nucleus, colliding with it in a small fraction of a second. The second problem was that the planetary model could not explain the highly peaked emission and absorption spectra of atoms that were observed.



Quantum theory revolutionized physics at the beginning of the 20th century, when Max Planck and Albert Einstein postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). In 1913, Niels Bohr incorporated this idea into his Bohr model of the atom, in which an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., its radius) being related to its energy. Under this model an electron could not spiral into the nucleus because it could not lose energy in a continuous manner; instead, it could only make instantaneous “quantum leaps” between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra).

https://en.wikipedia.org/wiki/Max_Planck (below left)


Max Karl Ernst Ludwig Planck, ForMemRS (23 April 1858 – 4 October 1947) was a German theoretical physicist whose discovery of energy quanta won him the Nobel Prize in Physics in 1918.

https://en.wikipedia.org/wiki/Niels_Bohr (above centre)

Niels Henrik David Bohr (7 October 1885 – 18 November 1962) was a Danish physicist who made foundational contributions to understanding atomic structure and quantum theory, for which he received the Nobel Prize in Physics in 1922.

Bohr’s model was not perfect. It could only predict the spectral lines of hydrogen; it couldn’t predict those of multielectron atoms.

For reasons that will become clearer later, it is impossible to simultaneously determine both the position and the momentum of an electron. This became known as the Heisenberg uncertainty principle after the theoretical physicist Werner Heisenberg, who first described and published it in 1927. This invalidated Bohr’s model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus but, depending on its energy level, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes (sphere, dumbbell, torus, etc.) with the nucleus in the middle.

https://en.wikipedia.org/wiki/Werner_Heisenberg (above right)

Werner Karl Heisenberg (5 December 1901 – 1 February 1976) was a German theoretical physicist and one of the key pioneers of quantum mechanics.

The modern model of an atom consists of a dense nucleus surrounded by a probabilistic “cloud” of electrons.


An illustration of the helium atom, depicting the nucleus (pink) and the electron cloud distribution (black). The nucleus (upper right) in helium-4 is in reality spherically symmetric and closely resembles the electron cloud, although for more complicated nuclei this is not always the case. The black bar is one angstrom (10−10 m or 100 pm).

The modern model of the atom is based on quantum mechanics



In atomic theory and quantum mechanics, an atomic orbital is a mathematical function describing the location and wave-like behaviour of an electron in an atom. This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom’s nucleus. The term atomic orbital may also refer to the physical region or space where the electron can be calculated to be present, as predicted by the particular mathematical form of the orbital.

Each orbital in an atom is characterized by a unique set of values of the three quantum numbers which respectively correspond to the electron’s energy, angular momentum, and an angular momentum vector component (the magnetic quantum number). Each such orbital can be occupied by a maximum of two electrons, each with its own spin quantum number s. The simple names s orbital, p orbital, d orbital, and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2, and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms.

Atomic orbitals are the basic building blocks of the atomic orbital model (alternatively known as the electron cloud or wave mechanics model), a modern framework for visualising the sub-microscopic behaviour of electrons in matter. In this model the electron cloud of a multi-electron atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of the blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons that occupy a complete set of s, p, d, and f atomic orbitals, respectively, although for higher values of the quantum number n, particularly when the atom in question bears a positive charge, the energies of certain sub-shells become very similar and so the order in which they are said to be populated by electrons can only be rationalised somewhat arbitrarily.

Questions and answers part 1

1) How intricate must the electronics be to detect tunnelling currents?

They can’t. The smallest thing that can be detected is the atom.

So, after more than one hundred years of theory, atoms can finally be seen (although we can’t see electrons yet!)


The above images show electron micrographs of diamond and silicon atom arrays (the numbers in [ ] brackets refer to the face of the crystal) (1 Å = 10⁻¹⁰ metres).

Very recently a single atom has been examined (well, actually the cloud of electrons around the nucleus has been examined).


Gold atoms on an FeO (iron oxide) substrate observed through a transmission electron microscope (TEM).



The YouTube link and images above show two clusters of gold atoms on an FeO substrate joining together due to quantum mechanical behaviour: the small clump gets absorbed by the big clump.



Quantum mechanics is the science of very small things. It explains the behaviour of matter and its interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behaviour of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to two major revolutions in physics that created a shift in the original scientific paradigm: the theory of relativity and the development of quantum mechanics.

The Wikipedia article describes how physicists discovered the limitations of classical physics and developed the main concepts of the quantum theory that replaced it in the early decades of the 20th century. It describes these concepts in roughly the order in which they were first discovered. For a more complete history of the subject, see History of quantum mechanics.


Quantum mechanics is the foundation of all quantum physics including quantum chemistry, quantum field theory, quantum technology, and quantum information science.

Questions and answers part 2

1) What is the smallest thing we can see?

A single atom. The one that has been investigated is cobalt.

2) Can you see the subatomic structure?


3) How difficult are electron microscopes to operate? Do you need special training?

They are not as easy as using an optical microscope but all the interfaces help with the operation.

Twelve years ago, models were more difficult to use. It would take about an hour to produce an image after the sample had been put in place. Now it takes a few minutes and you get lots of images.

Back to the talk

How does an electron microscope work?

An electron microscope is a fusion of technologies and theories. What are they?


Many of the physicists around when the electron microscope was created didn’t believe it was possible.


The above image is a silicon chip. The CPU processing power and software of the computer controlling the electron microscope makes the whole process easy.

Building a modern electron microscope requires many different disciplines, such as materials science and electrical engineering.

However, the fundamental reason why an electron microscope works is because electrons can behave as a wave at the quantum mechanical level and because of this they interact with matter in a certain way.

For any type of microscope, it is the wavelength that is very important.


In physics, the wavelength is the distance over which the wave’s shape repeats. It is the distance between consecutive corresponding points of the same phase on the wave, such as two adjacent crests, troughs, or zero crossings, and is a characteristic of both traveling waves and standing waves, as well as other spatial wave patterns.



The resolution is proportional to the wavelength. So, the smaller the wavelength the smaller the object that can be detected.
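One standard way to express this relationship (not stated in the lecture, but common in optics) is the Abbe diffraction limit, d ≈ λ/(2·NA), where NA is the numerical aperture of the objective. A quick sketch with assumed typical values shows where the ~200 nm optical limit quoted earlier comes from:

```python
# Abbe diffraction limit sketch: d ≈ λ / (2·NA). The 550 nm green light
# and NA of 1.4 (a high-end oil-immersion objective) are assumed values.
def abbe_limit(wavelength_m, numerical_aperture):
    """Smallest resolvable separation for a diffraction-limited system, in metres."""
    return wavelength_m / (2 * numerical_aperture)

d = abbe_limit(550e-9, 1.4)
print(f"{d * 1e9:.0f} nm")  # ~196 nm, close to the 200 nm optical limit quoted above
```

Shrink the wavelength and the resolvable detail shrinks with it, which is exactly the advantage electrons offer.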

Up until the start of the 19th century scientists considered light to be made up of particles. In 1800 Thomas Young presented a paper to the Royal Society (written in 1799) where he argued that light could act as a wave. His idea was greeted with a certain amount of scepticism because it contradicted Newton’s corpuscular theory. Nonetheless, he continued to develop his ideas. He believed that a wave model could much better explain many aspects of light propagation.

In 1801, he presented a famous paper to the Royal Society entitled “On the Theory of Light and Colours” which described various interference phenomena. In 1803, he described his famous interference experiment. Unlike the modern double-slit experiment, his experiment reflected sunlight (using a steering mirror) through a small hole, and split the thin beam in half using a paper card. He also mentioned the possibility of passing light through two slits in his description of the experiment.





If light consisted strictly of ordinary or classical particles, and these particles were fired in a straight line through a slit and allowed to strike a screen on the other side, we would expect to see a pattern corresponding to the size and shape of the slit. However, when this “single-slit experiment” is actually performed, the pattern on the screen is a diffraction pattern in which the light is spread out. The smaller the slit, the greater the angle of spread. The top portion of the image below shows the central portion of the pattern formed when a red laser illuminates a slit and, if one looks carefully, two faint side bands. More bands can be seen with a more highly refined apparatus. Diffraction explains the pattern as being the result of the interference of light waves from the slit.

If you illuminate two parallel slits, the light from the two slits again interferes. Here the interference is a more pronounced pattern with a series of alternating light and dark bands. The width of the bands is a property of the frequency of the illuminating light. When Thomas Young first demonstrated this phenomenon, it indicated that light consists of waves, as the distribution of brightness can be explained by the alternately additive and subtractive interference of wavefronts. Young’s experiment played a vital part in the acceptance of the wave theory of light, vanquishing the corpuscular theory of light proposed by Isaac Newton, which had been the accepted model of light propagation in the 17th and 18th centuries.



The Royal Society, formally The Royal Society of London for Improving Natural Knowledge, is a learned society and the United Kingdom’s national academy of sciences.



Thomas Young FRS (13 June 1773 – 10 May 1829) was a British polymath who made notable contributions to the fields of vision, light, solid mechanics, energy, physiology, language, musical harmony, and Egyptology.

So, during the 19th century physicists accepted that light was a wave.

In 1894 Max Planck pondered the problem of black-body radiation: how does the intensity of the electromagnetic radiation emitted by a black body (a perfect absorber, also known as a cavity radiator) depend on the frequency of the radiation (i.e., the colour of the light) and the temperature of the body? The question had been explored experimentally, but no theoretical treatment agreed with experimental values. One approach (Wien’s approximation) correctly predicted the behaviour at high frequencies but failed at low frequencies. Another approach (the Rayleigh–Jeans law) agreed with experimental results at low frequencies, but created what was later known as the “ultraviolet catastrophe” at high frequencies. However, contrary to many textbooks, this was not a motivation for Planck.


The ultraviolet catastrophe was the prediction of late 19th century/early 20th century classical physics that an ideal black body at thermal equilibrium will emit radiation in all frequency ranges, emitting more energy as the frequency increases. By calculating the total amount of radiated energy (i.e., the sum of emissions in all frequency ranges), it can be shown that a black body would release an arbitrarily high amount of energy. This would cause all matter to instantaneously radiate all of its energy until it is near absolute zero – indicating that a new model for the behaviour of black bodies was needed.

In 1900, Max Planck derived the correct form for the intensity spectral distribution function by making some strange (for the time) mathematical assumptions. In particular, Planck assumed that electromagnetic radiation can be emitted or absorbed only in discrete packets, called quanta, of energy E = hν = hc/λ, where h is Planck’s constant, ν is the frequency, c is the speed of light and λ is the wavelength.
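As a quick illustrative check of Planck’s relation, the energy of a single quantum of visible light can be computed (the 500 nm green wavelength is an arbitrary example value, not from the lecture):

```python
# Planck's relation E = hν = hc/λ for a single quantum of light.
# Constants are CODATA values rounded for illustration.
h = 6.626e-34    # Planck's constant, J·s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

wavelength = 500e-9              # green light, 500 nm
energy_j = h * c / wavelength    # energy of one quantum, in joules
energy_ev = energy_j / eV

print(f"{energy_j:.2e} J = {energy_ev:.2f} eV")  # ~3.97e-19 J ≈ 2.48 eV
```

A single visible-light quantum thus carries a few electron-volts, which is why (as discussed below) it takes light above a certain frequency to eject electrons from a metal.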

Einstein later proposed that electromagnetic radiation itself is quantized.






The photoelectric effect is the emission of electrons when electromagnetic radiation, such as light, hits a material. Electrons emitted in this manner are called photoelectrons.

The experimental results disagree with classical electromagnetism, which predicts that continuous light waves transfer energy to electrons, which would then be emitted when they accumulate enough energy. An alteration in the intensity of light would theoretically change the kinetic energy of the emitted electrons, with sufficiently dim light resulting in a delayed emission. The experimental results instead show that electrons are dislodged only when the light exceeds a certain frequency—regardless of the light’s intensity or duration of exposure. Because a low-frequency beam at a high intensity could not build up the energy required to produce photoelectrons, as it would have if light’s energy came from a continuous wave, Albert Einstein proposed that a beam of light is not a wave propagating through space but a collection of discrete wave packets, known as photons, using a concept first put forward by Max Planck.


The emission of electrons from a metal plate caused by light quanta – photons.

Emission of conduction electrons from typical metals requires a few electron-volt (eV) light quanta, corresponding to short-wavelength visible or ultraviolet light. Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons and influenced the formation of the concept of wave–particle duality.

Einstein theorised that the energy in each quantum of light was equal to the frequency of light multiplied by a constant, later called Planck’s constant. A photon above a threshold frequency has the required energy to eject a single electron, creating the observed effect. This was a key step in the development of quantum mechanics.
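Einstein's picture can be illustrated with a short calculation: the fastest photoelectron carries away the photon energy minus the work function φ of the metal. The caesium work function used below (~2.1 eV) is a typical textbook value I have assumed, not a figure from the lecture:

```python
h = 6.626e-34   # Planck's constant, J s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt
phi = 2.1       # assumed work function of caesium, eV

def ke_max_ev(wavelength_m):
    """Kinetic energy of the fastest photoelectron in eV (negative => no emission)."""
    photon_ev = h * c / wavelength_m / eV
    return photon_ev - phi

print(ke_max_ev(400e-9))  # violet light: above threshold, electrons are emitted
print(ke_max_ev(700e-9))  # red light: below threshold, no emission however intense
```

The red-light result is negative however bright the beam, which is exactly the intensity-independence that classical wave theory could not explain.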

Einstein was awarded the 1921 Nobel Prize in Physics for “his discovery of the law of the photoelectric effect”.

So electromagnetic radiation (light) could behave as if it was made up of particles as well as a wave, and that got Louis de Broglie thinking: if light, usually considered to be a wave, could act as if it was made up of particles, could particles have wave-like behaviour?



Louis Victor Pierre Raymond de Broglie, 7th duc de Broglie (15 August 1892 – 19 March 1987) was a French physicist and aristocrat who made ground-breaking contributions to quantum theory. In his 1924 PhD thesis, he postulated the wave nature of electrons and suggested that all matter has wave properties. This concept is known as the de Broglie hypothesis, an example of wave–particle duality, and forms a central part of the theory of quantum mechanics.

de Broglie won the Nobel Prize for Physics in 1929, after the wave-like behaviour of matter was first experimentally demonstrated in 1927.

So, photons and electrons can be a “wave” – – – –


– – – – and a particle at the same time

The mass of an electron at rest is 9.1 × 10⁻³¹ kg. Using Einstein’s famous equation, E = mc², the energy equivalent is 8.19 × 10⁻¹⁴ J = 511875 eV ≈ 0.512 MeV

The mass of a visible photon is zero kg but it does have an energy in the region of a few eV.

To compare the wavelengths of an electron and a photon you need to use de Broglie’s equation, λ = h/p, where h is Planck’s constant and p is momentum (= mv for a massive particle)

This is a generalization of Einstein’s equation since the momentum of a photon is given by p = E/c.

1 eV = 1.6 × 10⁻¹⁹ J, so the momentum of the photon = 1.6 × 10⁻¹⁹/3 × 10⁸ = 5.33 × 10⁻²⁸ kg m/s

λ = 6.63 × 10⁻³⁴/5.33 × 10⁻²⁸ = 1.24 × 10⁻⁶ m = 1240 nm = wavelength of the photon

To find the momentum of an electron use E = 1 eV = ½mv² = 1.6 × 10⁻¹⁹ J

v = √(2 × 1.6 × 10⁻¹⁹/9.1 × 10⁻³¹) = 592999 m/s ≈ 5.93 × 10⁵ m/s

λ = h/p = 6.63 × 10⁻³⁴/(9.1 × 10⁻³¹ × 592999) = 1.23 × 10⁻⁹ m = 1.23 nm

The electron has the much smaller de Broglie wavelength, so it can resolve much finer detail.
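The comparison above can be reproduced in a few lines of Python; this is just my own check of the arithmetic, using the same rounded constants as the notes:

```python
import math

h = 6.63e-34    # Planck's constant, J s (rounded as in the notes)
c = 3.0e8       # speed of light, m/s
m_e = 9.1e-31   # electron rest mass, kg
E = 1.6e-19     # 1 eV in joules

lam_photon = h / (E / c)                    # photon: p = E/c
lam_electron = h / math.sqrt(2 * m_e * E)   # electron (non-relativistic): p = sqrt(2mE)

print(f"photon wavelength:   {lam_photon*1e9:.0f} nm")
print(f"electron wavelength: {lam_electron*1e9:.2f} nm")
```

At the same 1 eV of energy the electron's wavelength is about a thousand times shorter than the photon's, which is the whole case for using electrons to image small things.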

The best resolution for an optical microscope is about 200 nm (2 × 10⁻⁷ m)

The best resolution for an electron microscope (TEM) is about 50 pm (5 × 10⁻¹¹ m)

Questions and answers part 3

1) Can you see the shapes of individual electron orbits?

About 4 years ago physicists were able to look at a single atom and they were trying to see the electron configuration.

At very high magnification electron orbits are very unstable but the physicists think they managed to see an s configuration.

They also tried to look for bonds between atoms where the electron configurations were different but they have really reached the limit for this field.

2) Can the electron microscope show images in colour?

Normally images are black and white but using various properties of the image you can do a virtual simulation of colour. It is possible to code a colour to something specific that you want to see.

Back to the talk

There are two categories of electron microscopes: Scanning electron microscope (SEM below left) and Transmission electron microscope (TEM below right)


The SEM is a simpler and more user-friendly electron microscope.


Electromagnetic lenses are used in both types of electron microscopes to focus the electron beam.

There are a lot of interactions at the surface of the sample. When the electron beam hits the sample lots of particles are scattered off the surface. These particles are collected by detectors and an image is constructed.



The TEM is a much more difficult machine to use. The sample has to be a very tiny slice of the material (20–100 nm thick) as the electrons have to pass through it. This machine does give a much higher resolution/higher magnification but a major drawback is that the sample needs to be treated in a very specific way.

All the versions of electron microscopes need a source of electrons. The TEM emits electrons either by thermionic or field electron emission and in a typical SEM, an electron beam is thermionically emitted.


Thermionic emission is the liberation of electrons from an electrode by virtue of its temperature (releasing of energy supplied by heat).

Because the electron was not identified as a separate physical particle until the work of J. J. Thomson in 1897, the word “electron” was not used when discussing experiments that took place before this date.

The phenomenon was initially reported in 1853 by Edmond Becquerel. It was rediscovered in 1873 by Frederick Guthrie in Britain. While doing work on charged objects, Guthrie discovered that a red-hot iron sphere with a negative charge would lose its charge (by somehow discharging it into air). He also found that this did not happen if the sphere had a positive charge.



The effect was rediscovered again by Thomas Edison on February 13, 1880, while he was trying to discover the reason for breakage of lamp filaments and uneven blackening (darkest near the positive terminal of the filament) of the bulbs in his incandescent lamps.


We now know that the filament was emitting electrons, which were attracted to a positively charged foil, but not a negatively charged one.


Field electron emission, also known as field emission (FE) and electron field emission, is emission of electrons induced by an electrostatic field. It took until about 1928 for there to be a basic physical understanding of its origin.

The electrons interact with the material and the researcher looks at what happens.

The “Tear drop”: The volume of interaction


Illustration of the phenomena that occur when highly energetic electrons interact with the sample (without destroying it), also depicting the tear-shaped interaction volume typically observed in this type of interaction. The grey part is the sample, and detectors are placed all around it in order to record the results of all the interactions.

“Tear drop”: volume of interactions using a Monte Carlo simulation.


Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle.


Monte Carlo simulations are used to model the probability of different outcomes in a process that cannot easily be predicted due to the intervention of random variables. It is a technique used to understand the impact of risk and uncertainty in prediction and forecasting models.


In the above case the Monte Carlo simulation is predicting what happens during the production of the “Tear drop”.
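To give a feel for how such a simulation works, here is a deliberately simplified Monte Carlo sketch of my own: each electron takes random scattering steps into the sample until its energy is spent. The step length and energy loss per step are made-up illustrative numbers; real simulators (such as the widely used CASINO program) use physical scattering cross-sections:

```python
import math
import random

random.seed(1)  # deterministic run for illustration

def trajectory_depth(energy_kev, step_um=0.05, loss_per_step_kev=1.0):
    """Final depth (um) reached by one electron in a toy scattering walk."""
    z = 0.0
    theta = 0.0                            # travelling straight down initially
    while energy_kev > 0:
        theta += random.gauss(0, 0.5)      # random deflection at each step
        z += step_um * math.cos(theta)     # advance into (or back out of) the sample
        energy_kev -= loss_per_step_kev    # crude constant energy loss per step
    return z

depths = [trajectory_depth(20) for _ in range(1000)]
print(f"mean penetration depth at 20 keV: {sum(depths) / len(depths):.2f} um")
```

Averaging many such random trajectories is what builds up the tear-drop shape: most electrons travel straight at first, then spread sideways as scattering accumulates.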

The term kV is an indication of the energy used to accelerate the electrons (energy = charge of the particle × accelerating voltage).

The higher the energy the greater the interaction.
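The link between accelerating voltage and electron wavelength can be sketched as below. At SEM/TEM voltages the electrons are fast enough that the relativistic momentum should be used; this is a standard textbook formula rather than something covered explicitly in the talk:

```python
import math

h = 6.626e-34    # Planck's constant, J s
c = 2.998e8      # speed of light, m/s
m_e = 9.109e-31  # electron rest mass, kg
q = 1.602e-19    # electron charge, C

def wavelength_pm(volts):
    """Relativistic de Broglie wavelength (pm) of an electron accelerated through `volts`."""
    E = q * volts                                          # kinetic energy, J
    p = math.sqrt(2 * m_e * E * (1 + E / (2 * m_e * c**2)))
    return h / p * 1e12

for kv in (5, 30, 200):   # typical SEM (5-30 kV) and TEM (200 kV) voltages
    print(f"{kv:3d} kV -> {wavelength_pm(kv * 1e3):.2f} pm")
```

At 200 kV the wavelength is a few picometres, which is why TEM resolution can reach the tens-of-picometres figures quoted earlier.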

Different types of electrons sources

Tungsten (below left) was the initial choice for the electron source. It consists of an inverted V-shaped wire filament and is about 100mm long. Electrons are “boiled” off and electromagnetic lenses focus them.


Lanthanum hexaboride (LaB6) or Cerium hexaboride (CeB6) (above centre) is a thermionic emission gun. Thermionic emission from these crystals is a more reliable source of electrons.

Field emission gun (FEG) (above right) is a wire of tungsten with a very sharp tip, less than 100nm! It is the method used in modern electron microscopes. The electrons are released from the tip and give a very high resolution.

Differences between TEM and SEM


TEM:

Much better resolution (5 × 10⁻¹¹ metres)

More expensive

More difficult sample preparation

Less easy to use

SEM:

Lower resolution (10⁻⁹ metres)

Less expensive

Relatively easy sample preparation

Quite easy to use in many applications

More flexible (e.g. ESEM, where a conductive coating can be avoided)


The environmental scanning electron microscope (ESEM) is a scanning electron microscope (SEM) that allows for the option of collecting electron micrographs of specimens that are wet, uncoated, or both by allowing for a gaseous environment in the specimen chamber.

Disadvantages of an electron microscope

In all types of electron microscope the sample needs to be placed in a vacuum, because otherwise the electron beam would scatter off air molecules before reaching it.

If the object isn’t conductive you usually need to coat it with a metal (usually gold or platinum)



The above image is a gold coated spider

Electron microscopes are expensive compared to optical based devices

It often requires highly trained users. This typically involves six hours of basic training followed by fifty hours of practice to become proficient

It only generates black and white images

Questions and answers part 4

1) Can you see enzymes under an electron microscope?

Only recently has it been possible to look at biological samples as they have to be in a vacuum. Biological samples are full of water and exposing them to a vacuum causes them to dry out and they are destroyed.

Now a cryotechnique freezes the sample, preserving the structure, so it is possible to look at the structure of enzymes. This is very useful for the pharmaceutical industry and allows large molecules to be examined.

2) How do you get electrons from the electron source?

Heat the source to a high temperature and the electrons leave it. They are then attracted by an electric field and accelerated by a positive potential difference (voltage).

The path of the electrons can be shaped this way. This can’t be done with photons as they do not have an electric charge. You need expensive glass lenses to resolve the smallest details with light.

3) How can you prepare samples besides freezing and using a vacuum (and coating the samples in metal)?

If the sample is conductive and not living you can just stick it in the electron microscope.

If it is something like an insect you use a sputtering machine to put 5 to 10nm of gold or platinum on it. Not much is needed so it doesn’t cost much. The process only takes about 2 minutes. The electrons are able to pass through the thin layer of gold or platinum.

Back to the talk

What can you do with an electron microscope?

It has outstanding multi-analysis capabilities!

You can do many more things with an electron microscope than with an optical one. There are about 20 to 30 different techniques.

You don’t just bombard the sample with electrons to get an image. You can even generate X-rays.


A cathodoluminescence (CL) microscope combines methods from electron and regular (light optical) microscopes. It is designed to study the luminescence characteristics of polished thin sections of solids irradiated by an electron beam.


The most common imaging mode collects low-energy (<50 eV) secondary electrons (SE) that are ejected from conduction or valence bands of the specimen atoms by inelastic scattering interactions with beam electrons. Due to their low energy, these electrons originate from within a few nanometres below the sample surface.


Backscattered electrons (BSE) consist of high-energy electrons originating in the electron beam, that are reflected or back-scattered out of the specimen interaction volume by elastic scattering interactions with specimen atoms. Since heavy elements (high atomic number) backscatter electrons more strongly than light elements (low atomic number), and thus appear brighter in the image, BSEs are used to detect contrast between areas with different chemical compositions.


Correlative Raman-SEM imaging (RISE microscopy) combines an SEM and a confocal Raman microscope. The confocal Raman microscope is integrated into the vacuum chamber of the electron microscope. Confocal Raman imaging is a spectroscopic technique for the analysis of molecular compounds within a sample.


Scanning probe microscopy (SPM) is a branch of microscopy that forms images of surfaces using a physical probe that scans the specimen. SPM was founded in 1981, with the invention of the scanning tunnelling microscope, an instrument for imaging surfaces at the atomic level. Many scanning probe microscopes can image several interactions simultaneously. The manner of using these interactions to obtain an image is generally called a mode.


Structured illumination microscopy (SIM) enhances spatial resolution by collecting information from frequency space outside the observable region. This process is done in reciprocal space: the Fourier transform (FT) of an SI image contains superimposed additional information from different areas of reciprocal space; with several frames where the illumination is shifted by some phase, it is possible to computationally separate and reconstruct the FT image, which has much more resolution information. The reverse FT returns the reconstructed image to a super-resolution image.

SIM microscopy could potentially replace electron microscopy as a tool for some medical diagnoses. These include diagnosis of kidney disorders, kidney cancer, and blood diseases.


In mathematics, a Fourier transform (FT) is a mathematical transform that decomposes a function (often a function of time, or a signal) into its constituent frequencies, such as the expression of a musical chord in terms of the volumes and frequencies of its constituent notes. The term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time.
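The “musical chord” picture can be demonstrated with a fast Fourier transform: a signal built from two tones is decomposed into peaks at its constituent frequencies. The 50 Hz and 120 Hz tones below are arbitrary illustrative choices of mine:

```python
import numpy as np

rate = 1000                                   # samples per second
t = np.arange(0, 1, 1 / rate)                 # one second of time samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))        # magnitude of each frequency component
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

peaks = sorted(freqs[np.argsort(spectrum)[-2:]])  # two strongest components
print(peaks)                                  # recovers the 50 Hz and 120 Hz tones
```

SIM reconstruction does this in two dimensions, combining several phase-shifted frames in the frequency domain to recover detail beyond the normal resolution limit.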


Secondary-ion mass spectrometry (SIMS) is a technique used to analyse the composition of solid surfaces and thin films by sputtering the surface of the specimen with a focused primary ion beam and collecting and analysing ejected secondary ions. The mass/charge ratios of these secondary ions are measured with a mass spectrometer to determine the elemental, isotopic, or molecular composition of the surface to a depth of 1 to 2 nm. Due to the large variation in ionization probabilities among elements sputtered from different materials, comparison against well-calibrated standards is necessary to achieve accurate quantitative results. SIMS is the most sensitive surface analysis technique, with elemental detection limits ranging from parts per million to parts per billion.


In physics, sputtering is a phenomenon in which microscopic particles of a solid material are ejected from its surface, after the material is itself bombarded by energetic particles of a plasma or gas.


Time-of-flight mass spectrometry (TOFMS) is a method of mass spectrometry in which an ion’s mass-to-charge ratio is determined via a time of flight measurement. Ions are accelerated by an electric field of known strength. This acceleration results in an ion having the same kinetic energy as any other ion that has the same charge. The velocity of the ion depends on the mass-to-charge ratio (heavier ions of the same charge reach lower speeds, although ions with higher charge will also increase in velocity). The time that it subsequently takes for the ion to reach a detector at a known distance is measured. This time will depend on the velocity of the ion, and therefore is a measure of its mass-to-charge ratio. From this ratio and known experimental parameters, one can identify the ion.
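Since every ion of charge z receives the same kinetic energy zeU, the flight time over a drift length d is t = d·√(m/2zeU), so heavier ions arrive later. A small sketch; the drift length and accelerating voltage are assumed illustrative values, not from any particular instrument:

```python
import math

e = 1.602e-19    # elementary charge, C
amu = 1.661e-27  # atomic mass unit, kg
U = 20_000       # accelerating voltage, V (assumed)
d = 1.0          # drift-tube length, m (assumed)

def flight_time_us(mass_amu, charge=1):
    """Flight time in microseconds: t = d * sqrt(m / (2*z*e*U))."""
    return d * math.sqrt(mass_amu * amu / (2 * charge * e * U)) * 1e6

for name, m in [("H+", 1), ("N2+", 28), ("Au+", 197)]:
    print(f"{name:4s} {flight_time_us(m):6.2f} us")
```

The arrival order alone separates light ions from heavy ones, which is how the spectrometer converts a timing measurement into a mass spectrum.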


Energy-dispersive X-ray spectroscopy (EDS, EDX, EDXS or XEDS), sometimes called energy dispersive X-ray analysis (EDXA) or energy dispersive X-ray microanalysis (EDXMA), is an analytical technique used for the elemental analysis or chemical characterization of a sample. It relies on an interaction of some source of X-ray excitation and a sample.


Wavelength-dispersive X-ray spectroscopy (WDXS or WDS) is a non-destructive analysis technique used to obtain elemental information about a range of materials by measuring characteristic x-rays within a small wavelength range. The technique generates a spectrum in which the peaks correspond to specific x-ray lines and elements can be easily identified. WDS is primarily used in chemical analysis, wavelength dispersive X-ray fluorescence (WDXRF) spectrometry, electron microprobes, scanning electron microscopes, and high precision experiments for testing atomic and plasma physics.


Electron-beam-induced current (EBIC) is a semiconductor analysis technique performed in a scanning electron microscope (SEM) or scanning transmission electron microscope (STEM). It is used to identify buried junctions or defects in semiconductors, or to examine minority carrier properties. EBIC is similar to cathodoluminescence in that it depends on the creation of electron–hole pairs in the semiconductor sample by the microscope’s electron beam. This technique is used in semiconductor failure analysis and solid-state physics.


Electron backscatter diffraction (EBSD) is a scanning electron microscope–based microstructural-crystallographic characterization technique commonly used in the study of crystalline or polycrystalline materials. The technique can provide information about the structure, crystal orientation, phase, or strain in the material. Traditionally these types of studies have been carried out using X-ray diffraction (XRD), neutron diffraction and/or electron diffraction in a transmission electron microscope.


A scanning transmission electron microscope (STEM) is a type of transmission electron microscope (TEM). As with a conventional transmission electron microscope (CTEM), images are formed by electrons passing through a sufficiently thin specimen. However, unlike CTEM, in STEM the electron beam is focused to a fine spot (with the typical spot size 0.05 – 0.2 nm) which is then scanned over the sample in a raster illumination system constructed so that the sample is illuminated at each point with the beam parallel to the optical axis. The rastering of the beam across the sample makes STEM suitable for analytical techniques such as Z-contrast annular dark-field imaging, and spectroscopic mapping by energy dispersive X-ray (EDX) spectroscopy, or electron energy loss spectroscopy (EELS). These signals can be obtained simultaneously, allowing direct correlation of images and spectroscopic data.

A typical STEM is a conventional transmission electron microscope equipped with additional scanning coils, detectors and necessary circuitry, which allows it to switch between operating as a STEM, or a CTEM; however, dedicated STEMs are also manufactured.

High resolution scanning transmission electron microscopes require exceptionally stable room environments. In order to obtain atomic resolution images in STEM, the level of vibration, temperature fluctuations, electromagnetic waves, and acoustic waves must be limited in the room housing the microscope.


Raster: a scanning pattern of parallel lines that forms the display of an image

Art conservationists rely on electron microscopy to see how old paints were created.

Microelectronics relies on electron microscopy to tweak chips and improve their performance


The image above shows a modern field emission electron microscope capable of sub-nanometre resolution (TESCAN MIRA3 at Northumbria University)

Typical images from an SEM


The images are colourised to show up specific things, according to what the researcher wants to see. This makes differences in the image easier to spot, as humans distinguish things better in colour.

Artificially colourised SEM image of the surface of a cannabis plant


SEM image of a deep-sea hydrothermal worm


SEM images of a cross sectional view of thin solar cells (NPAG Northumbria photovoltaics centre)


The electron microscope images show how the tiny technology is put together. The section of the solar cells is only 3–4 µm thick. The electron microscope helps to get the structure right.








Photovoltaics (PV) is the conversion of light into electricity using semiconducting materials that exhibit the photovoltaic effect, a phenomenon studied in physics, photochemistry, and electrochemistry.

A photovoltaic system employs solar modules, each comprising a number of solar cells, which generate electrical power. PV installations may be ground-mounted, rooftop mounted, wall mounted or floating. The mount may be fixed or use a solar tracker to follow the sun across the sky.

Solar PV has specific advantages as an energy source: once installed, its operation generates no pollution and no greenhouse gas emissions, it shows simple scalability in respect of power needs and silicon has large availability in the Earth’s crust.

X-ray analysis for elemental identification

The X-rays produced by the electron beam tell you a lot about the nature of the sample, such as its composition. Essentially, when you want to examine a surface made up of different materials, you simply point the electron beam at the area you are interested in and, thanks to the spectrum produced, the materials in that area can be identified.

X-ray spectroscopy is one of the most important types of analysis that can be performed with an electron microscope. There are two different approaches:

Energy dispersive spectroscopy


Energy-dispersive X-ray spectroscopy (EDS, EDX, EDXS or XEDS), sometimes called energy dispersive X-ray analysis (EDXA) or energy dispersive X-ray microanalysis (EDXMA), is an analytical technique used for the elemental analysis or chemical characterization of a sample. It relies on an interaction of some source of X-ray excitation and a sample. Its characterization capabilities are due in large part to the fundamental principle that each element has a unique atomic structure allowing a unique set of peaks on its electromagnetic emission spectrum (which is the main principle of spectroscopy). The peak positions are predicted by the Moseley’s law with accuracy much better than experimental resolution of a typical EDX instrument.

To stimulate the emission of characteristic X-rays from a specimen, a high-energy beam of electrons (or, in X-ray fluorescence, a beam of X-rays) is focused onto the sample being studied. At rest, an atom within the sample contains ground state (or unexcited) electrons in discrete energy levels or electron shells bound to the nucleus. The incident beam may excite an electron in an inner shell, ejecting it from the shell while creating an electron hole where the electron was. An electron from an outer, higher-energy shell then fills the hole, and the difference in energy between the higher-energy shell and the lower energy shell may be released in the form of an X-ray. The number and energy of the X-rays emitted from a specimen can be measured by an energy-dispersive spectrometer. As the energies of the X-rays are characteristic of the difference in energy between the two shells and of the atomic structure of the emitting element, EDS allows the elemental composition of the specimen to be measured.

Four primary components of the EDS setup are

the excitation source (electron beam or x-ray beam)

the X-ray detector

the pulse processor

the analyser

Electron beam excitation is used in electron microscopes, scanning electron microscopes (SEM) and scanning transmission electron microscopes (STEM). X-ray beam excitation is used in X-ray fluorescence (XRF) spectrometers. A detector is used to convert X-ray energy into voltage signals; this information is sent to a pulse processor, which measures the signals and passes them onto an analyser for data display and analysis. The most common detector used to be Si(Li) detector cooled to cryogenic temperatures with liquid nitrogen. Now, newer systems are often equipped with silicon drift detectors (SDD) with Peltier cooling systems.
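The reason each element gives a unique set of peaks is captured by Moseley's law, mentioned above: the Kα line energy grows roughly as (Z − 1)². A rough sketch of my own using the standard approximation E ≈ 10.2 eV × (Z − 1)², which becomes less accurate for heavy elements:

```python
def kalpha_kev(Z):
    """Approximate K-alpha X-ray line energy in keV via Moseley's law."""
    return 10.2 * (Z - 1) ** 2 / 1000

# A few elements that commonly turn up in EDS spectra
for element, Z in [("Al", 13), ("Ti", 22), ("Cu", 29)]:
    print(f"{element}: ~{kalpha_kev(Z):.2f} keV")
```

The estimates land close to the tabulated line energies (e.g. Cu Kα is measured at about 8.05 keV), which is why peak positions in an EDS spectrum identify elements so directly.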

Wavelength dispersive spectroscopy


Wavelength-dispersive X-ray spectroscopy (WDXS or WDS) is a non-destructive analysis technique used to obtain elemental information about a range of materials by measuring characteristic x-rays within a small wavelength range. The technique generates a spectrum in which the peaks correspond to specific x-ray lines and elements can be easily identified. WDS is primarily used in chemical analysis, wavelength dispersive X-ray fluorescence (WDXRF) spectrometry, electron microprobes, scanning electron microscopes, and high precision experiments for testing atomic and plasma physics.


Electron beam interactions with a sample, X-rays are one of the possible products.

The image below is a typical characteristic X-ray spectrum combined with an electron image


X-ray elemental mapping


An electron microprobe (EMP), also known as an electron probe microanalyzer (EPMA) or electron micro probe analyser (EMPA), is an analytical tool used to non-destructively determine the chemical composition of small volumes of solid materials. It works similarly to a scanning electron microscope: the sample is bombarded with an electron beam, emitting x-rays at wavelengths characteristic to the elements being analysed. This enables the abundances of elements present within small sample volumes (typically 10-30 cubic micrometres or less) to be determined, when a conventional accelerating voltage of 15-20 kV is used. The concentrations of elements from lithium to plutonium may be measured at levels as low as 100 parts per million (ppm), material dependent, although with care, levels below 10 ppm are possible. The ability to quantify lithium by EPMA became a reality in 2008.

The image below shows the results of X-ray elemental mapping of a broken electronic chip from a mobile phone. Bottom left of the image is a little scale that allows the various elements to be identified by colour within the area. The different colours show the different elements. These are tungsten (yellow), titanium (pale blue), aluminium (dark blue), nickel (green), gold (orange), tin (red) and silicon (lilac). (CL PennState Materials Research Institute)





There is a tungsten filament on a silicon base. The method allows physicists to see where all the elements are distributed.

The images below show multi-colour microscopy by element-guided identification of cells, organelles and molecules (Nature, Scientific Reports volume 7, Article number: 45970 (2017)).


Cd Solar cell with nanorods and the importance of EDS analysis


The image above shows two solar cells. It is important to make them as efficient as possible. This is quite a struggle.

Thanks to the electron microscope and X-ray mapping, the performance of the cell was improved by adding sulphur (the pink layer) in certain places in the cell. This tiny layer is made up of nanorods.


In nanotechnology, nanorods are one morphology of nanoscale objects. Each of their dimensions range from 1–100 nm. They may be synthesized from metals or semiconducting materials. Standard aspect ratios (length divided by width) are 3-5. Nanorods are produced by direct chemical synthesis. A combination of ligands act as shape control agents and bond to different facets of the nanorod with different strengths. This allows different faces of the nanorod to grow at different rates, producing an elongated object.


Electron microscopy and X-ray mapping can prove things in ways that the eye never could




The above link allows you to have a go at using an electron microscope




Questions and answers part 5

1) What is the weirdest thing you’ve ever seen?

The interesting things are biological e.g. insects. Also highly magnified crystals. Everything natural is amazing.

2) What lies ahead in the future of electron microscopy?

Reaching the maximum concentration. Hopefully looking in more detail at biological items like DNA. Looking at protein sequences and viruses.

In the next 10 to 20 years the electron microscope will be able to examine viruses in greater detail. This will allow us to understand how they work so that better anti-viral drugs can be created.

Answers to questions not answered by the speaker

1) https://en.wikipedia.org/wiki/Electron_microscope

In the reflection electron microscope (REM) as in the TEM, an electron beam is incident on a surface but instead of using the transmission (TEM) or secondary electrons (SEM), the reflected beam of elastically scattered electrons is detected. This technique is typically coupled with reflection high energy electron diffraction (RHEED) and reflection high-energy loss spectroscopy (RHELS). Another variation is spin-polarized low-energy electron microscopy (SPLEEM), which is used for looking at the microstructure of magnetic domains.

2) https://www.sciencelearn.org.nz/resources/502-types-of-electron-microscope

The TEM lets us look in very high resolution at a thin section of a sample (and is therefore analogous to the compound light microscope). This makes it particularly good for learning about how components inside a cell, such as organelles, are structured.

The scanning electron microscope (SEM) lets us see the surface of three-dimensional objects in high resolution. It works by scanning the surface of an object with a focused beam of electrons and detecting electrons that are reflected from and knocked off the sample surface. At low magnifications, entire objects (such as insects) viewed on the SEM can be in focus at the same time. That’s why the SEM is so good at generating three-dimensional images of lice, flies, snowflakes and so on.

Electron backscatter diffraction (EBSD) is used to look in detail at the structure of minerals (such as those in rocks). Rather than being microscopes in their own right, EBSD detectors are add-ons to SEMs. After the electron beam is fired at the rock, the EBSD detects electrons that have entered the rock and been scattered in all directions. The pattern of scattering can tell scientists a lot about the structure of the mineral and the orientation of crystals within it.
