Professor Katherine Blundell OBE
Well-trained eyes can be remarkably useful for capturing light curves of evolving objects in the cosmos, even contributing to modern research programmes.
This lecture considered how stargazing with imperfect, non-linear human eyes can accomplish such a feat, and the important contributions that this makes to elucidating the phenomena of nova detonations in our galaxy.
Professor Blundell was appointed Gresham Professor of Astronomy in 2019. She is a Professor of Astrophysics at Oxford University and a Research Fellow at St John’s College. Before this she was one of the Royal Society’s University Research Fellows, a Research Fellow of the 1851 Royal Commission, and a Junior Research Fellow at Balliol College, Oxford.
Her research interests include the evolution of active galaxies and their life cycles, the accretion of material near black holes and the launch and propagation of relativistic jets (jets of plasma emitted by some black holes). In her research she uses electromagnetic imaging and spectroscopy, as well as computational techniques.
She is also a renowned science communicator and set up a worldwide network of five schools-based Global Jet Watch observatories, which collect data on evolving black hole systems and nova explosions in our Galaxy, helping to inspire the next generation of scientists in South Africa, Chile, Australia and India.
Her awards include a Philip Leverhulme Prize in Astrophysics, the Royal Society’s Rosalind Franklin Medal in 2010, the Institute of Physics Bragg Medal in 2012, the Royal Astronomical Society’s Darwin Lectureship in 2015 and an OBE for services to astronomy and the education of young people in 2017.
Professor Blundell’s first lecture series for Gresham College is called Cosmic Concepts, starting 2 October 2019, and she will be looking at how concepts developed in physics and cosmology have led to some of our most surprising discoveries about the Universe.
The following are notes from the online lecture. Even though I could stop the video and go back over things, there are likely to be mistakes where I haven’t heard things correctly or haven’t understood them. I hope Professor Blundell and my readers will forgive any mistakes and let me know what I got wrong.
What we see in the night sky and what some of it means
A photo of the Moon and Jupiter with Mercury hiding in the Eucalyptus trees. The Moon is waxing. The thin crescent is the part of the Moon reflecting the Sun’s rays to Earth. When you stand looking at a waxing crescent moon, you’re seeing a thin fraction of the moon’s day side, or illuminated side, and a larger fraction of the moon’s night side, the side of the moon submerged in the moon’s own shadow. The pale glow on that night portion of the moon, when the moon is a crescent is called earthshine.
The picture shows a beautiful conjunction.
In astronomy, a conjunction occurs when two astronomical objects or spacecraft have either the same right ascension or the same ecliptic longitude, usually as observed from Earth.
When two objects always appear close to the ecliptic—such as two planets, the Moon and a planet, or the Sun and a planet—this fact implies an apparent close approach between the objects as seen on the sky.
Conjunctions involve either two objects in the Solar System or one object in the Solar System and a more distant object, such as a star. A conjunction is an apparent phenomenon caused by the observer’s perspective: the two objects involved are not actually close to one another in space. Conjunctions between two bright objects close to the ecliptic, such as two bright planets, can be seen with the naked eye.
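Under the toy assumption that each body moves along the ecliptic at a constant angular rate, the next conjunction in ecliptic longitude can be sketched in a few lines of Python (the rates and starting longitudes below are rough, illustrative values, not ephemeris data):

```python
def conjunction_time(lon1_deg, rate1_deg_per_day, lon2_deg, rate2_deg_per_day):
    """Days until two bodies on circular, coplanar toy orbits next share
    the same ecliptic longitude (a conjunction). Returns None if the
    bodies move at identical rates and never meet."""
    gap = (lon2_deg - lon1_deg) % 360.0            # longitude body 1 must make up
    relative_rate = rate1_deg_per_day - rate2_deg_per_day
    if relative_rate == 0:
        return None
    t = gap / relative_rate
    if t < 0:                                      # we want the next crossing, not the last
        t += 360.0 / abs(relative_rate)
    return t

# Illustrative values only: a faster body 5 degrees behind a slower one,
# closing at 0.05 degrees per day, catches up in ~100 days.
print(round(conjunction_time(0.0, 0.083, 5.0, 0.033), 1))
```

In practice astronomers use full ephemerides rather than constant rates, but the idea is the same: a conjunction is simply a zero crossing of the longitude difference as seen from Earth.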
The ecliptic is the plane of Earth’s orbit around the Sun. From the perspective of an observer on Earth, the Sun’s movement around the celestial sphere over the course of a year traces out a path along the ecliptic against the background of stars. The ecliptic is an important reference plane and is the basis of the ecliptic coordinate system.
As seen from the orbiting Earth, the Sun appears to move with respect to the fixed stars, and the ecliptic is the yearly path the Sun follows on the celestial sphere. This process repeats itself in a cycle lasting a little over 365 days.
The Moon is a globe, just as Earth is, and the globe of the Moon is always half-illuminated by sunlight. When we see a crescent Moon, we’re seeing just a sliver of the Moon’s lighted half.
During a full Moon bright moonlight can illuminate the Earth’s sky. Similarly, whenever we see a crescent moon, a nearly full Earth appears in the moon’s night sky. The full Earth illuminates the lunar landscape. And that is earthshine. It’s light from the nearly full Earth shining on the moon.
Just as a full moon can illuminate an earthly landscape, so a full or nearly full Earth can illuminate the lunar landscape.
The Moon is a gravity-rounded astronomical body orbiting Earth and is the planet’s only natural satellite. It is the fifth-largest satellite in the Solar System, and by far the largest among planetary satellites relative to the size of the planet that it orbits. The Moon is, after Jupiter’s satellite Io, the second-densest satellite in the Solar System among those whose densities are known.
Full Moon in the darkness of the night sky. It is patterned with a mix of light-tone regions and darker, irregular blotches, and scattered with varying sizes of impact craters, circles surrounded by out-thrown rays of bright ejecta. Gregory H. Revera
Full Moon photograph taken 10-22-2010 from Madison, Alabama, USA. Photographed with a Celestron 9.25 Schmidt-Cassegrain telescope. Acquired with a Canon EOS Rebel T1i (EOS 500D), 20 images stacked to reduce noise. 200 ISO 1/640 sec.
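The caption above mentions stacking 20 images to reduce noise. Averaging N frames of the same scene shrinks random noise by roughly the square root of N, which a minimal NumPy sketch (with a synthetic image standing in for a real exposure) can demonstrate:

```python
import numpy as np

rng = np.random.default_rng(42)

# A synthetic "true" image: a bright disc on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
truth = np.where((xx - 32) ** 2 + (yy - 32) ** 2 < 100, 200.0, 10.0)

# Simulate 20 noisy exposures of the same scene, then average (stack) them.
frames = [truth + rng.normal(0.0, 25.0, truth.shape) for _ in range(20)]
stacked = np.mean(frames, axis=0)

# Residual noise shrinks roughly by sqrt(20), about 4.5x, after stacking.
single_noise = np.std(frames[0] - truth)
stacked_noise = np.std(stacked - truth)
print(f"single frame noise: {single_noise:.1f}, stacked: {stacked_noise:.1f}")
```

The same principle holds whether the noise comes from the camera sensor or from photon statistics, as long as it is random from frame to frame.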
Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter. When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows, and is on average the third-brightest natural object in the night sky after the Moon and Venus.
An image of Jupiter taken by NASA’s Hubble Space Telescope
NASA, ESA, and A. Simon (Goddard Space Flight Center) – http://www.spacetelescope.org/images/heic1410a/ or http://hubblesite.org/newscenter/archive/releases/2014/24/image/b/
This full-disc image of Jupiter was taken on 21 April 2014 with Hubble’s Wide Field Camera 3 (WFC3).
Enhanced-colour image of Mercury from first MESSENGER flyby.
NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington – NASA/JPL. Edited version of Mercury in color – Prockter07.jpg by jjron (cropped to square).
Mercury is the smallest and innermost planet in the Solar System. Its orbit around the Sun takes 87.97 Earth days, the shortest of all the planets in the Solar System. It is named after the Roman god Mercurius (Mercury), god of commerce, messenger of the gods and mediator between gods and mortals, the counterpart of the Greek god Hermes.
Mars is the fourth planet from the Sun and the second-smallest planet in the Solar System, being larger than only Mercury. In English, Mars carries the name of the Roman god of war and is often referred to as the “Red Planet”. The latter refers to the effect of the iron oxide prevalent on Mars’s surface, which gives it a reddish appearance distinctive among the astronomical bodies visible to the naked eye. Mars is a terrestrial planet with a thin atmosphere, with surface features reminiscent of the impact craters of the Moon and the valleys, deserts and polar ice caps of Earth.
Mars appears as a red-orange globe with darker blotches and white icecaps visible on both of its poles.
ESA & MPS for OSIRIS Team MPS/UPD/LAM/IAA/RSSD/INTA/UPM/DASP/IDA, CC BY-SA IGO 3.0
True colour image of Mars taken by the OSIRIS instrument on the ESA Rosetta spacecraft during its February 2007 flyby of the planet, the first true-colour image generated using the OSIRIS orange (red), green and blue colour filters. The image was acquired on 24 February 2007 at 19:28 CET from a distance of about 240,000 km; image resolution is about 5 km/pixel.
Mars has a very thin atmosphere consisting of about 96% carbon dioxide, 1.93% argon and 1.89% nitrogen along with traces of oxygen and water. The atmosphere is quite dusty, containing particulates about 1.5 µm in diameter which give the Martian sky a tawny colour when seen from the surface. It may take on a pink hue due to iron oxide particles suspended in it.
Atmospheric pressure on the surface today ranges from a low of 30 Pa on Olympus Mons to over 1,155 Pa in Hellas Planitia, with a mean pressure at the surface level of 600 Pa.
Mars’ days and seasons are comparable to Earth’s, as are its rotational tilt and the orientation of its axis. A year on Mars is about twice as long as an Earth year, but by astrophysical standards the two planets are comparable.
How do we look at Mars and what do we actually see when we look through a telescope at it?
Below shows a series of photos taken by Professor Blundell of Mars on the evening of the 20th October 2020.
The images were seen through a lot of atmosphere, as Mars was towards the horizon. It had been raining, and nearby houses were emitting heat, causing turbulence in the surrounding air.
Professor Blundell took a large number of continuous images (like a video) and picked out the best. Doing this gave her the chance of getting the best picture without sitting about waiting for it to happen (one of the benefits of having a digital camera).
The colours show a greyish colour towards the bottom left changing to apricot. The white spot, bottom left of two of the above pictures, is the polar ice cap.
Looking at all the pictures, Professor Blundell could see Mars rotating.
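This pick-the-best-frames approach (often called "lucky imaging") can be sketched in Python. The variance-of-Laplacian sharpness score is a standard focus measure; the frames here are synthetic stand-ins for real video frames:

```python
import numpy as np

def sharpness(frame):
    """Variance of a simple Laplacian: a common focus/sharpness measure.
    Blurry (seeing-degraded) frames score low, crisp ones score high."""
    lap = (-4 * frame
           + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
    return lap.var()

def best_frames(frames, keep_fraction=0.1):
    """Return the sharpest fraction of frames, ready for stacking."""
    scored = sorted(frames, key=sharpness, reverse=True)
    n = max(1, int(len(scored) * keep_fraction))
    return scored[:n]

# Synthetic demo: a sharp-edged disc versus smoothed (blurred) copies of it.
yy, xx = np.mgrid[0:64, 0:64]
sharp = ((xx - 32) ** 2 + (yy - 32) ** 2 < 100).astype(float)
blurred = sharp.copy()
for _ in range(5):   # crude repeated box blur, mimicking bad seeing
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
               + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1)) / 5
picked = best_frames([blurred, sharp, blurred], keep_fraction=0.4)
print(np.array_equal(picked[0], sharp))  # the sharp frame is selected
```

Real lucky-imaging pipelines also align the selected frames before stacking them, but frame selection by a sharpness score is the heart of the technique.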
Astronomer and astrophotographer Rob Tilsley has turned the process into an art form. He produced 77 separate videos, pulling out 1,300,000 of the best pictures. This procedure enabled him to record an entire rotation period.
In his images you can make out features on Mars.
The surface structure of Mars has long been rife with controversy and confusion.
Ancient Sumerians thought that Mars was a god but the ancient Egyptians, Babylonians and Greeks recognised it as a planet.
Astronomers were able to work out things about Mars, but Galileo Galilei was the first person to see Mars through a telescope, in 1610.
Galileo di Vincenzo Bonaiuti de’ Galilei (15 February 1564 – 8 January 1642) was an Italian astronomer, physicist and engineer, sometimes described as a polymath, from Pisa. Galileo has been called the “father of observational astronomy”, the “father of modern physics”, the “father of the scientific method” and the “father of modern science”.
The first person to draw a map of Mars that displayed any terrain features was the Dutch astronomer Christiaan Huygens.
Christiaan Huygens FRS (14 April 1629 – 8 July 1695), also spelled Huyghens, was a Dutch physicist, mathematician, astronomer and inventor, who is widely regarded as one of the greatest scientists of all time and a major figure in the scientific revolution. In physics, Huygens made groundbreaking contributions in optics and mechanics, while as an astronomer he is chiefly known for his studies of the rings of Saturn and the discovery of its moon Titan. As an inventor, he improved the design of the telescope with the invention of the Huygenian eyepiece.
By the 19th century, the resolution of telescopes had reached a level sufficient for surface features to be identified. In 1877 the Italian astronomer Giovanni Schiaparelli used a 22 cm telescope in Milan to help produce the first detailed map of Mars. These maps notably contained features he called canali, which were later shown to be an optical illusion. These canali were supposedly long, straight lines on the surface of Mars, to which he gave the names of famous rivers on Earth. His term, which means “channels” or “grooves”, was popularly mistranslated in English as “canals”.
Giovanni Virginio Schiaparelli ForMemRS HFRSE (14 March 1835 – 4 July 1910) was an Italian astronomer and science historian.
Schiaparelli’s 1877 surface map of Mars.
The idea of man-made canals on Mars was popular, as it implied that there was intelligent life on Mars, and authors made use of this. Probably the most famous example is H. G. Wells’s “The War of the Worlds”.
Herbert George Wells (21 September 1866 – 13 August 1946) was an English writer.
The War of the Worlds is a science fiction novel first serialised in 1897 by Pearson’s Magazine in the UK and by Cosmopolitan magazine in the US. The novel’s first appearance in hardcover was in 1898 from publisher William Heinemann of London. Written between 1895 and 1897, it is one of the earliest stories to detail a conflict between mankind and an extraterrestrial race. The novel is the first-person narrative of both an unnamed protagonist in Surrey and of his younger brother in London as southern England is invaded by Martians.
Influenced by Mars observations, the orientalist Percival Lowell founded an observatory with 30 cm and 45 cm telescopes. The observatory was used for the exploration of Mars during the last good opportunity in 1894 and the following less favourable oppositions. He published several books on Mars and life on the planet, which had a great influence on the public.
Percival Lawrence Lowell (1855 – November 12, 1916) was an American businessman, author, mathematician, and astronomer who fuelled speculation that there were canals on Mars. He founded the Lowell Observatory in Flagstaff, Arizona, and formed the beginning of the effort that led to the discovery of Pluto 14 years after his death.
Martian canals depicted by Percival Lowell.
He studied Mars extensively, and made intricate drawings of the surface markings as he perceived them. Lowell published his views in three books: Mars (1895), Mars and Its Canals (1906), and Mars As the Abode of Life (1908). With these writings, Lowell more than anyone else popularized the long-held belief that these markings showed that Mars sustained intelligent life forms.
He spent a great deal of time searching for intelligent life on Mars.
Eugène Michel Antoniadi (1 March 1870 – 10 February 1944) was a Greek-French astronomer.
In 1892, Antoniadi joined the Mars Section of the British Astronomical Association (BAA) and became that section’s Director in 1896.
He was a highly reputed observer of Mars, and at first supported the notion of Martian canals, but after using the 83-centimetre telescope at Meudon Observatory during the 1909 opposition of Mars, he came to the conclusion that canals were an optical illusion.
Optical illusions and our eyes
We know that our eyes can play tricks on us so it was easy to see why people thought there was life on Mars.
What you see and what you think you see are different things. Your senses gather information and send it to your brain. But your brain does not simply receive this information—it creates your perception of the world. This means that sometimes your brain fills in gaps when there is incomplete information, or creates an image that isn’t even there!
The Moon can look bigger when it is low on the horizon than when it is high in the sky (the famous “Moon illusion”).
In the image below you can see the Moon high up in the sky by St. Paul’s Cathedral
In the image below the Moon has been moved down by the side of St. Paul’s Cathedral in order to compare sizes. It looks slightly smaller when it is next to St Paul’s.
In the image above the parallel lines show the Moon doesn’t get bigger or smaller if its position in the sky changes.
So, the problem isn’t our eyes but our brain. Sometimes it interprets the information incorrectly.
A satellite is a better way of seeing what the surface of Mars is like.
Above right: Artist’s impression of the Viking Orbiter spacecraft. Artist’s description: “The Viking Orbiter spacecraft releases the aeroshell clad lander near the ‘high point’ of its orbit around Mars. The planet is shown based on Mariner 9 photography, oriented as it should appear during separation. Oil on canvas panel for NASA Headquarters.”
The Viking program consisted of a pair of American space probes sent to Mars, Viking 1 and Viking 2. Each spacecraft was composed of two main parts: an orbiter designed to photograph the surface of Mars from orbit, and a lander designed to study the planet from the surface. The orbiters also served as communication relays for the landers once they touched down.
The project cost roughly US$1 billion in 1970s dollars, equivalent to about 5 billion USD in 2019 dollars. The mission was considered successful and is credited with helping to form most of the body of knowledge about Mars through the late 1990s and early 2000s.
By discovering many geological forms that are typically formed from large amounts of water, the images from the orbiters caused a revolution in astronomers’ ideas about water on Mars. Huge river valleys were found in many areas. They showed that floods of water broke through dams, carved deep valleys, eroded grooves into bedrock, and travelled thousands of kilometres. Large areas in the southern hemisphere contained branched stream networks, suggesting that rain once fell. The flanks of some volcanoes are believed to have been exposed to rainfall because they resemble the rain-eroded flanks of Hawaiian volcanoes. Many craters look as if the impactor fell into mud. When they were formed, ice in the soil may have melted, turned the ground into mud, then flowed across the surface. Normally, material from an impact goes up, then down. It does not flow across the surface, going around obstacles, as it does around some Martian craters.

Regions called “Chaotic Terrain” seemed to have quickly lost great volumes of water, causing large channels to be formed. The amount of water involved was estimated at ten thousand times the flow of the Mississippi River. Underground volcanism may have melted frozen ice; the water then flowed away and the ground collapsed to leave chaotic terrain.
Mars image mosaic from the Viking 1 orbiter
NASA / USGS (see PIA04304 catalog page) – http://nssdc.gsfc.nasa.gov/photo_gallery/photogallery-mars.html http://nssdc.gsfc.nasa.gov/image/planetary/mars/marsglobe1.jpg
Global mosaic of 102 Viking 1 Orbiter images of Mars taken on orbit 1,334, 22 February 1980. The images are projected into point perspective, representing what a viewer would see from a spacecraft at an altitude of 2,500 km. At centre is Valles Marineris, over 3000 km long and up to 8 km deep. Note the channels running up (north) from the central and eastern portions of Valles Marineris to the area at upper right, Chryse Planitia. At left are the three Tharsis Montes and to the south is ancient, heavily impacted terrain. (Viking 1 Orbiter, MG07S078-334SP) Some of the features in this mosaic are annotated in Wikimedia Commons.
So, Viking revealed that Mars had craters, like our Moon, and that it had some atmosphere. There was evidence of geological processes such as volcanic activity, and polar caps containing ice.
Dozens of crewless spacecraft, including orbiters, landers, and rovers, have been sent to Mars by the Soviet Union, the United States, Europe, and India to study the planet’s surface, climate, and geology.
As of 2018, Mars is host to eight functioning spacecraft: six in orbit — 2001 Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, Mars Orbiter Mission and ExoMars Trace Gas Orbiter — and two on the surface — Mars Science Laboratory Curiosity (rover) and InSight (lander).
The image above was taken by Steve Lee at his home. Mars is quite a bright object relative to other things in the night sky.
Get dark adapted
How can we see fainter things in the night sky? How can we see in the dark?
We need our eyes to become dark adapted.
When going out at night it is possible to see things in the night sky after a while. It helps if there is no bright Moon (unless you want to study the Moon) or light pollution so you can see more.
The Moon reflects only about 7% of the sunlight that falls on it, making its surface about as reflective as a lump of coal. 7% might not seem much, but the Sun is very bright.
It takes a while to see things in the night sky even if it is a clear dark night because dark adaption takes time. There are several reasons.
The pupils in your eyes need a few minutes to fully dilate, allowing as much light as possible to enter through the pupil.
Light enters the eye through the pupil. The iris controls the size of the pupil and automatically adjusts it according to the intensity of the light. So, the iris controls the amount of light entering the eye by adjusting the size of the pupil.
A large pupil allows as much light as possible to enter the eye when it is dark.
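The light-gathering gain from dilation scales with pupil area, i.e. with diameter squared. Assuming typical pupil diameters of about 2 mm in bright light and about 7 mm when fully dark-adapted, a small calculation shows the gain:

```python
def light_gain(dilated_mm, constricted_mm):
    """Ratio of light gathered by a dilated vs. a constricted pupil.
    Light admitted scales with pupil *area*, i.e. diameter squared."""
    return (dilated_mm / constricted_mm) ** 2

# Assumed typical values: ~2 mm in bright light, ~7 mm fully dark-adapted.
print(round(light_gain(7, 2), 1))  # ~12x more light
```

Dilation alone therefore gives roughly a tenfold gain; the much larger gains of dark adaptation come from the chemistry of the rods, described next.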
Rhodopsin (also known as visual purple) is a light-sensitive receptor protein involved in visual phototransduction. It is a biological pigment found in the rods of the retina and is a G-protein-coupled receptor (GPCR). It is extremely sensitive to light, and thus enables vision in low-light conditions. When rhodopsin is exposed to light, it immediately photobleaches. In humans, it is regenerated fully in about 30 minutes, after which the rods’ sensitivity has increased by a factor of more than 1,000. So, to see better in the dark you need to spend about 30 to 40 minutes in the dark.
Rhodopsin of the rods most strongly absorbs green-blue light and, therefore, appears reddish-purple, which is why it is also called “visual purple”. It is responsible for monochromatic vision in the dark.
Use averted vision
The eye has two types of detector cell in the retina.
The human eye is a sense organ that reacts to light and allows vision. Rod and cone cells in the retina allow conscious light perception and vision including colour differentiation and the perception of depth. The human eye can differentiate between about 10 million colours and is possibly capable of detecting a single photon. The eye is part of the sensory nervous system.
The human eye’s non-image-forming photosensitive ganglion cells in the retina receive light signals which affect adjustment of the size of the pupil, regulation and suppression of the hormone melatonin and entrainment of the body clock.
1. vitreous body 2. ora serrata 3. ciliary muscle 4. ciliary zonules 5. Schlemm’s canal 6. pupil 7. anterior chamber 8. cornea 9. iris 10. lens cortex 11. lens nucleus 12. ciliary process 13. conjunctiva 14. inferior oblique muscle 15. inferior rectus muscle 16. medial rectus muscle 17. retinal arteries and veins 18. optic disc 19. dura mater 20. central retinal artery 21. central retinal vein 22. optic nerve 23. vorticose vein 24. bulbar sheath 25. macula 26. fovea 27. sclera 28. choroid 29. superior rectus muscle 30. Retina
The retina (from Latin: rēte) is the innermost, light-sensitive layer of tissue of the eye. The optics of the eye create a focused two-dimensional image of the visual world on the retina, which translates that image into electrical neural impulses to the brain to create visual perception. The retina serves a function analogous to that of the film or image sensor in a camera.
The light-sensing cells are at the back of the retina, so light has to pass through layers of neurons and capillaries before it reaches the cones and rods.
Cone cells, or cones, are photoreceptor cells in the retinas of vertebrate eyes including the human eye. They respond differently to light of different wavelengths, and are thus responsible for colour vision, and function best in relatively bright light, as opposed to rod cells, which work better in dim light. Cone cells are densely packed in the fovea centralis, a 0.3 mm diameter rod-free area with very thin, densely packed cones which quickly reduce in number towards the periphery of the retina. Conversely, they are absent from the optic disc, contributing to the blind spot. There are about six to seven million cones in a human eye, and they are most concentrated towards the macula.
Cones are less sensitive to light than the rod cells in the retina (which support vision at low light levels), but allow the perception of colour. They are also able to perceive finer detail and more rapid changes in images because their response times to stimuli are faster than those of rods. Cones are normally one of three types, each with a different pigment, namely: S-cones, M-cones and L-cones. Each cone is therefore sensitive to visible wavelengths of light that correspond to short-wavelength, medium-wavelength and longer-wavelength light. Because humans usually have three kinds of cones with different photopsins, which have different response curves and thus respond to variation in colour in different ways, humans have trichromatic vision (generally blue, green and red, but it is the brain that assembles the final colour of the image from the electrical signals produced by the retina). The three pigments responsible for detecting light have been shown to vary in their exact chemical composition due to genetic mutation; different individuals will have cones with different colour sensitivity.
Normalized responsivity spectra of human cone cells, S, M, and L types
Rod cells are photoreceptor cells in the retina of the eye that can function in lower light better than the other type of visual photoreceptor, cone cells. Rods are usually found concentrated at the outer edges of the retina and are used in peripheral vision (cones are generally in the centre of the retina and the rods are on the outside). On average, there are approximately 92 million rod cells in the human retina (more of them than cones). Rod cells are more sensitive than cone cells and are almost entirely responsible for night vision. However, rods have little role in colour vision, which is the main reason why colours are much less apparent in dim light.
Cross section of the retina. Rods are visible at far right.
Averted vision is a technique for viewing faint objects which uses peripheral vision. It involves not looking directly at the object, but looking a little off to the side, while continuing to concentrate on the object.
There is some evidence that the technique has been known since ancient times, as it seems to have been reported by Aristotle while observing the star cluster now known as M41.
Averted vision works because there are virtually no rods in the fovea (the central 1° of the retina), a small area in the centre of the eye. The fovea contains primarily cone cells, which serve as bright-light and colour detectors and are not as useful during the night. This results in a decrease in visual sensitivity in central vision at night. The density of the rod cells usually reaches a maximum around 20 degrees off the centre of vision.
The resolution of the eye, its ability to resolve fine detail, falls off rapidly beyond 0.6 degrees from the line of sight. It is four times poorer at a 10-degree radius than it is within the 0.6-degree radius from the line of sight.
Averted vision is the technique of looking out of the corner of your eye to see faint objects more clearly.
The eye is a remarkable detector but, unlike a CCD camera used for imaging, long exposures are not an option.
So, to see really faint objects you need to understand its limitations and the tricks that can be employed to coax the maximum out of its ‘short exposure’ capability.
If you can’t see the faint objects that others can it will be down to a number of factors: local light pollution, impatience or inadequate dark adaptation.
Electrochemical signals from the retina’s detectors travel via cells known as ganglion cells on their way to the brain.
In the high-resolution, full colour, centre of the retina, one ganglion cell connects to one cone.
But, as you go further out and low-light rods dominate, there may be 100 rod detectors passing their electrochemical signal into just one ganglion cell; it’s a case of grouping them to improve the signal-to-noise ratio.
Not surprisingly, with so many rod detectors teamed up, resolution suffers badly.
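The signal-to-noise benefit of pooling detectors can be demonstrated with a minimal simulation: averaging 100 noisy readings of the same faint signal cuts the noise by roughly the square root of 100, i.e. tenfold (the numbers below are illustrative, not physiological):

```python
import numpy as np

rng = np.random.default_rng(0)

signal = 1.0          # a faint, steady light level
noise_sigma = 5.0     # per-detector noise, swamping the signal
n_detectors = 100     # "rods" pooled into one "ganglion cell"

# Each rod reports signal + its own random noise; pooling averages them.
readings = signal + rng.normal(0.0, noise_sigma, size=(10_000, n_detectors))
single = readings[:, 0]          # one rod alone
pooled = readings.mean(axis=1)   # 100 rods feeding one ganglion cell

# Noise drops by ~sqrt(100) = 10, so the pooled signal-to-noise is ~10x better.
print(f"single-rod noise: {single.std():.2f}, pooled: {pooled.std():.2f}")
```

The trade-off is exactly the one the text describes: one pooled output covers the area of 100 detectors, so spatial resolution drops as sensitivity rises.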
While the foveal cones can resolve a 60th of a degree (one arcminute), the teamed-up rod system, well away from the centre, may only resolve 20 arcminutes.
That’s not much finer than the size of the Moon, as seen with the naked eye.
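That claim is easy to check: from the Moon’s diameter (about 3,474 km) and mean distance (about 384,400 km), its apparent size works out at roughly half a degree, or about 31 arcminutes:

```python
import math

def angular_size_arcmin(diameter_km, distance_km):
    """Apparent angular diameter of a body, in arcminutes."""
    radians = 2 * math.atan((diameter_km / 2) / distance_km)
    return math.degrees(radians) * 60

# Moon: ~3,474 km across at a mean distance of ~384,400 km.
print(round(angular_size_arcmin(3474, 384400)))  # ~31 arcminutes
```

So a feature resolved at 20 arcminutes by the pooled rods really is only a little finer than a full Moon's width.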
The good news is that there is an optimum, ultra-sensitive, rod-packed region of the retina that you can bring into play.
The eye is about four astronomical magnitudes (40 times) more sensitive at this crucial point than at its centre.
So, if you can hold, say, a 10th magnitude star steady in the visual centre of a 12-inch reflecting telescope’s field, you’ll be able to hold a 14th magnitude star steady on the rods 12° off centre.
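The magnitude arithmetic here can be made explicit: a difference of 5 magnitudes is defined as a brightness factor of exactly 100, so 4 magnitudes correspond to a factor of about 40:

```python
def flux_ratio(delta_mag):
    """Brightness ratio corresponding to a magnitude difference.
    Each 5 magnitudes is defined as a factor of exactly 100."""
    return 10 ** (0.4 * delta_mag)

print(round(flux_ratio(4), 1))        # 4 magnitudes -> ~39.8x, "about 40 times"
print(round(flux_ratio(14 - 10), 1))  # a mag 14 vs. a mag 10 star: the same ~40x
```

Remember that larger magnitudes mean fainter objects, so the 14th-magnitude star is the one 40 times dimmer.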
To get to this sensitive area, you have to look to one side of the faint astronomical object you’re trying to see: place the object you’re looking at roughly 8° to 16° away from the eye’s centre – 12° is a good average value for the best part.
At first this will seem incredibly difficult, but it will improve with practice.
This 12° offset should be arranged so that you appear to place the object nearer to your nose than the side of your face in the field of view.
The reason for this strange requirement is that the eye has a blind spot where the optic nerve leaves the retina, and this blind spot is on the side of the eye, away from the nose.
Note: It takes about 30 to 40 minutes for the eyes to be ready to carry out this technique.
In July 2020 Professor Blundell took the above picture of Comet NEOWISE with her digital camera. When looking at it with averted vision she found the tail appeared to be about four times longer.
So, don’t look at astronomical objects directly along the line of sight. With averted vision the light ends up on the rods and you “see more”.
C/2020 F3 (NEOWISE) or Comet NEOWISE is a long period comet with a near-parabolic orbit discovered on March 27, 2020, by astronomers during the NEOWISE mission of the Wide-field Infrared Survey Explorer (WISE) space telescope. At that time, it was an 18th-magnitude object, located 2 AU (300 million km) away from the Sun and 1.7 AU (250 million km) away from Earth.
NEOWISE is known for being the brightest comet in the northern hemisphere since Comet Hale–Bopp in 1997.
Zooming in, she was lucky to capture a meteor (at the 10 o’clock position).
What can we see in the little bit of 2020 left?
This is the time of year when Orion, Aldebaran, Taurus and the Pleiades can be seen (November/December)
Orion is a prominent constellation located on the celestial equator and visible throughout the world. It is one of the most conspicuous and recognizable constellations in the night sky. It is named after Orion, a hunter in Greek mythology. Its brightest stars are blue-white Rigel (Beta Orionis) and red Betelgeuse (Alpha Orionis).
Aldebaran is an orange giant star measured to be about 65 light-years from the Sun in the zodiac constellation Taurus. It is the brightest star in Taurus and generally the fourteenth-brightest star in the night sky, though it varies slowly in brightness between magnitude 0.75 and 0.95. Aldebaran is believed to host a planet several times the mass of Jupiter, named Aldebaran b.
Aldebaran is a red giant, cooler than the Sun with a surface temperature of 3,900 K, but its radius is about 44 times the Sun’s, so it is over 400 times as luminous. It spins slowly and takes 520 days to complete a rotation.
Taurus (Latin for “the Bull”) is one of the constellations of the zodiac and is located in the Northern celestial hemisphere. Taurus is a large and prominent constellation in the northern hemisphere’s winter sky. It is one of the oldest constellations, dating back to at least the Early Bronze Age when it marked the location of the Sun during the spring equinox. Its importance to the agricultural calendar influenced various bull figures in the mythologies of Ancient Sumer, Akkad, Assyria, Babylon, Egypt, Greece, and Rome.
A number of features exist that are of interest to astronomers. Taurus hosts two of the nearest open clusters to Earth, the Pleiades and the Hyades, both of which are visible to the naked eye. At first magnitude, the red giant Aldebaran is the brightest star in the constellation. In the northwest part of Taurus is the supernova remnant Messier 1, more commonly known as the Crab Nebula. One of the closest regions of active star formation, the Taurus-Auriga complex, crosses into the northern part of the constellation. The variable star T Tauri is the prototype of a class of pre-main-sequence stars.
The Pleiades also known as the Seven Sisters and Messier 45, are an open star cluster containing middle-aged, hot B-type stars in the north-west of the constellation Taurus. It is among the star clusters nearest to Earth, it is the nearest Messier object to Earth, and is the cluster most obvious to the naked eye in the night sky.
The cluster is dominated by hot blue and luminous stars that have formed within the last 100 million years. Reflection nebulae around the brightest stars were once thought to be left over material from the formation of the cluster, but are now considered likely to be an unrelated dust cloud in the interstellar medium through which the stars are currently passing.
Computer simulations have shown that the Pleiades were probably formed from a compact configuration that resembled the Orion Nebula. Astronomers estimate that the cluster will survive for about another 250 million years, after which it will disperse due to gravitational interactions with its galactic neighbourhood.
Together with the open star cluster of the Hyades, the Pleiades form the Golden Gate of the Ecliptic.
Viewing this arrangement of stars in the southern hemisphere was a sign for ancient peoples in South America to start growing crops.
The three bluish stars in the above images are Orion’s Belt. The top-left image was taken in January 2019, the top-centre image in January 2020 and the top-right image in February 2020. The orange star in the top-right corner is Betelgeuse.
Images focusing on Betelgeuse in January 2019, January 2020 and February 2020. It appeared to be getting fainter throughout this period even though the background brightness didn’t change. (The February 2020 image did suffer a bit from light pollution, the great curse of modern astronomy.)
Betelgeuse is usually the tenth-brightest star in the night sky and, after Rigel, the second-brightest in the constellation of Orion. It is a distinctly reddish semiregular variable star whose apparent magnitude, varying between +0.0 and +1.6, has the widest range displayed by any first-magnitude star. At near-infrared wavelengths, Betelgeuse is the brightest star in the night sky. Its Bayer designation is α Orionis, Latinised to Alpha Orionis and abbreviated Alpha Ori or α Ori.
Classified as a red supergiant of spectral type M1-2, Betelgeuse is one of the largest stars visible to the naked eye. If it were at the centre of our Solar System, its surface would lie beyond the asteroid belt and it would engulf the orbits of Mercury, Venus, Earth, Mars, and possibly Jupiter. Nevertheless, there are several larger red supergiants in the Milky Way, including Mu Cephei and the peculiar supergiant, VY Canis Majoris. Calculations of Betelgeuse’s mass range from slightly under ten to a little over twenty times that of the Sun. It is calculated to be about 548 light-years from the Sun, with an absolute magnitude of about −6. Less than 10 million years old, Betelgeuse has evolved rapidly because of its large mass and is expected to end its evolution with a supernova explosion, most likely within 100,000 years. Having been ejected from its birthplace in the Orion OB1 Association—which includes the stars in Orion’s Belt—this runaway star has been observed moving through the interstellar medium at a speed of 30 km/s, creating a bow shock over four light-years wide.
Starting in October 2019, Betelgeuse began to dim noticeably, and by mid-February 2020 its brightness had dropped by a factor of approximately 3, from magnitude 0.5 to 1.7. By 22 February 2020, Betelgeuse stopped dimming and started to brighten again.
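That factor of approximately 3 follows directly from the magnitude scale; a one-line check using Pogson’s relation:

```python
# Pogson's relation: a magnitude difference of dm corresponds to a
# brightness (flux) ratio of 10**(0.4 * dm).
m_before, m_after = 0.5, 1.7        # Betelgeuse's magnitudes (from the text)
dimming_factor = 10 ** (0.4 * (m_after - m_before))
print(f"Betelgeuse dimmed by a factor of {dimming_factor:.2f}")  # ~3.02
```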
What caused Betelgeuse to dim? One suggestion was that it was in the process of going supernova
However, researchers have estimated that this will not happen for 100,000 years. When it does the explosion will create a burst capable of briefly outshining an entire galaxy.
Another theory for the faintness is that the star spat off a lot of hot gas/bits of plasma (something our Sun does). This gas cooled as it travelled into space, coagulating into “dust”/soot. This dust would be dark, so it would absorb light. If the soot happened to lie along the line of sight from Earth, the light would be attenuated, which is probably why the star appeared dim.
Infrared observations found no significant change in brightness over the last 50 years, suggesting that the best explanation for the dimming is due to a change in extinction rather than an underlying change in the luminosity of the star. Further studies suggested that occluding “large-grain circumstellar dust” may be the most likely explanation for the dimming of the star. In other words, the soot was at fault.
The brightness did oscillate and Betelgeuse is being thoroughly examined as astronomers are not sure what is going to happen next.
How can we see the night sky?
It is important to find somewhere where it is dark.
The night sky is of special scientific interest (outstanding natural beauty) and light pollution isn’t just a problem for astronomers, it is also a problem for animals. In a previous blog I wrote about how light pollution was causing a problem for fireflies. It’s even a problem for us humans. We sleep better when it is completely dark
The Commission for Dark Skies (CFDS) has many reasons why we should keep the skies dark.
Dead birds collected in one year after colliding with lit windows in Toronto. Photograph courtesy of FLAP.
FLAP (the Fatal Light Awareness Program) is a Canadian organisation that has put together information about the negative effects of light pollution on birds.
FLAP’s mission is to inform and educate people to take actions that keep birds safe from daytime and night-time bird–building collision threats, whether at homes, workplaces or other built structures.
Many species of bird migrate at night, some using the Moon and stars to navigate. Bright artificial light confuses them. Millions die per annum when migrating.
Back to Orion
The above photos of Orion were taken using a digital camera and a small telescope in the Australian outback. Importantly the camera was fixed to a stationary support.
The blue arrow is pointing to a nebula, sometimes called the Orion Nebula or Messier 42.
The Orion Nebula (also known as Messier 42, M42, or NGC 1976) is a diffuse nebula situated in the Milky Way, being south of Orion’s Belt in the constellation of Orion. It is one of the brightest nebulae, and is visible to the naked eye in the night sky. M42 is located at a distance of 1,344 ± 20 light years and is the closest region of massive star formation to Earth. The M42 nebula is estimated to be 24 light years across. It has a mass of about 2,000 times that of the Sun.
The Orion Nebula is one of the most scrutinised and photographed objects in the night sky, and is among the most intensely studied celestial features. The nebula has revealed much about the process of how stars and planetary systems are formed from collapsing clouds of gas and dust. Astronomers have directly observed protoplanetary disks, brown dwarfs, intense and turbulent motions of the gas, and the photo-ionizing effects of massive nearby stars in the nebula.
It has been studied for many years.
John Herschel studied it in great detail. He used an eyepiece to produce the image below. It is an inverted image, as he would have used a pencil to draw on white paper. It isn’t terribly clear, but Herschel identified an area where four stars sat in a little dark halo.
John Herschel inverted image
Sir John Frederick William Herschel, 1st Baronet KH FRS (7 March 1792 – 11 May 1871) was an English polymath, mathematician, astronomer, chemist, inventor, experimental photographer who invented the blueprint, and did botanical work.
Herschel originated the use of the Julian day system in astronomy. He named seven moons of Saturn and four moons of Uranus – the seventh planet, discovered by his father Sir William Herschel. He made many contributions to the science of photography, and investigated colour blindness and the chemical power of ultraviolet rays. His Preliminary Discourse (1831), which advocated an inductive approach to scientific experiment and theory-building, was an important contribution to the philosophy of science.
Howard Banich’s drawing is of the same region of space as John Herschel’s, but the two were made roughly 200 years apart. They look remarkably similar despite the fact that Howard Banich had better optics.
Below is a multicolour version of Howard Banich’s drawing
The “square” in the above image contains four bright stars forming a trapezium shape within a dark halo (just as Herschel had drawn). This dark halo is the signature of a faithful drawing by a human observer: the stars stand out against the dark background.
Below is an image taken with a telescope and a camera by the Global Jet Watch observatory. The bright region is where the trapezium of stars can be found
Zooming in on the trapezium of stars (image below), there is no sign of the dark halo that appeared in the two drawings made many years apart.
Did the artists make a mistake? No, they drew exactly what they thought they saw.
When the eyes (and the brain) look at something very bright they are overwhelmed. So, the brain filters out some of the brightness.
The drawings faithfully showed what the human observer sees.
In reality there is no dark halo, just radiating light, reflecting/re-radiating the light from the new young hot stars.
The Orion nebula is an ongoing region of star formation.
The two sets of drawings, made many years apart, show how human eyes work, being faithful to the data as they observe it. The brain tries to understand and reconstruct the astrophysical reality.
It is fascinating how the eyes work: their imperfections, their non-linear responses, and the brain’s ability to misinterpret the information the eyes send it.
But the eyes make an incredible contribution to astronomical endeavours
The Crab Nebula (catalogue designations M1, NGC 1952, Taurus A) is a supernova remnant (the material that was ejected as a result of the supernova explosion) and pulsar wind nebula in the constellation of Taurus. The common name comes from William Parsons, 3rd Earl of Rosse, who observed the object in 1840 using a 36-inch telescope and produced a drawing that looked somewhat like a crab. Corresponding to a bright supernova recorded by Chinese astronomers in 1054, the nebula was discovered earlier by English astronomer John Bevis in 1731. The nebula was the first astronomical object identified corresponding to a historical supernova explosion.
At an apparent magnitude of 8.4, comparable to that of Saturn’s moon Titan, it is not visible to the naked eye but can be made out using binoculars under favourable conditions or a telescope if not. The nebula lies in the Perseus Arm of the Milky Way galaxy, at a distance of about 2.0 kiloparsecs (6,500 ly) from Earth. It has a diameter of 3.4 parsecs (11 ly), corresponding to an apparent diameter of some 7 arcminutes, and is expanding at a rate of about 1,500 kilometres per second, or 0.5% of the speed of light.
At the centre of the nebula lies the Crab Pulsar, a neutron star 28–30 kilometres across with a spin rate of 30.2 times per second, which emits pulses of radiation from gamma rays to radio waves. At X-ray and gamma ray energies above 30 keV, the Crab Nebula is generally the brightest persistent gamma-ray source in the sky, with measured flux extending to above 10 TeV. The nebula’s radiation allows detailed study of celestial bodies that occult it. In the 1950s and 1960s, the Sun’s corona was mapped from observations of the Crab Nebula’s radio waves passing through it, and in 2003, the thickness of the atmosphere of Saturn’s moon Titan was measured as it blocked out X-rays from the nebula.
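The quoted physical and apparent sizes can be related by the small-angle formula. A quick sketch using the figures above; the result, about 6 arcminutes, is of the same order as the quoted “some 7 arcminutes” (the exact value depends on where the faint edge of the nebula is drawn):

```python
import math

# Small-angle estimate of the nebula's apparent size:
# theta (radians) = physical diameter / distance
diameter_pc = 3.4        # parsecs (from the text)
distance_pc = 2000.0     # parsecs (from the text)

theta_rad = diameter_pc / distance_pc
theta_arcmin = math.degrees(theta_rad) * 60
print(f"Apparent diameter: about {theta_arcmin:.1f} arcminutes")
```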
The Crab Nebula is a good object to look at if you have a reasonably sized telescope.
Dame Susan Jocelyn Bell Burnell DBE FRS FRSE FRAS FInstP (born 15 July 1943) is an astrophysicist from Northern Ireland who, as a postgraduate student, discovered the first radio pulsars in 1967. She was credited with “one of the most significant scientific achievements of the 20th century”. The discovery was recognised by the award of the 1974 Nobel Prize in Physics, but although she was the first to observe the pulsars, Bell was not one of the recipients of the prize.
The Crab Pulsar (PSR B0531+21) is a relatively young neutron star. The star is the central star in the Crab Nebula, a remnant of the supernova SN 1054, which was widely observed on Earth in the year 1054. The pulsar was the first to be connected with a supernova remnant
Jocelyn Bell Burnell, who co-discovered it in 1967, relates that in the late 1950s a woman viewed the Crab Nebula source at the University of Chicago’s telescope, then open to the public, and noted that it appeared to be flashing. The astronomer she spoke to, Elliot Moore, disregarded the effect as scintillation (twinkling), despite the woman’s protestation that as a qualified pilot she understood scintillation and this was something else. Bell Burnell notes that the 30 Hz frequency of the Crab Nebula optical pulsar is difficult for many people to see.
Twinkling, or scintillation, is a generic term for variations in apparent brightness or position of a distant luminous object viewed through a medium. If the object lies outside the Earth’s atmosphere, as in the case of stars and planets, the phenomenon is termed astronomical scintillation; within the atmosphere, the phenomenon is termed terrestrial scintillation. As one of the three principal factors governing astronomical seeing (the others being light pollution and cloud cover), atmospheric twinkling is defined as variations in illuminance only.
In simple terms, twinkling of stars is caused by the passing of light through different layers of a turbulent atmosphere.
So, the flashing star that the woman saw was the Crab Pulsar: human eyes can see things that artificial observing methods miss.
If the observer is honest, what they see can provide remarkable insight into what is going on in the night sky. There was a pulsar at the centre of the Crab Nebula (Messier 1), and Jocelyn Bell Burnell officially discovered pulsars some years after the woman saw the flashing light.
If our eyes are well trained and well adjusted to seeing at night, they can see some amazing things.
Returning to John Herschel’s drawing (bottom left), it is important to realise that his optical equipment was quite poor by modern standards. The image bottom right is a version using a better telescope with a camera (rather than a drawing).
The image below is of Messier 8
The Lagoon Nebula (catalogued as Messier 8 or M8, NGC 6523, Sharpless 25, RCW 146, and Gum 72) is a giant interstellar cloud in the constellation Sagittarius. It is classified as an emission nebula and as an H II region.
The Lagoon Nebula was discovered by Giovanni Hodierna before 1654 and is one of only two star-forming nebulae faintly visible to the eye from mid-northern latitudes. Seen with binoculars, it appears as a distinct oval cloudlike patch with a definite core. Within the nebula is the open cluster NGC 6530.
The Lagoon Nebula is estimated to be between 4,000 and 6,000 light-years away from the Earth.
An emission nebula is a nebula formed of ionized gases that emit light of various wavelengths.
It appears pink in time-exposure colour photos but grey to the eye peering through binoculars or a telescope, human vision having poor colour sensitivity at low light levels.
The Lagoon Nebula also contains at its centre a structure known as the Hourglass Nebula (so named by John Herschel). In 2006 the first four Herbig–Haro objects were detected within the Hourglass, including HH 870. This provides the first direct evidence of active star formation by accretion within it.
Eyes and cameras
Telescopes and eyes together can give us a good picture of what is going on in the sky
What are the analogies to how the eye sees and how the camera sees? How does everything fit together?
The above image compares and contrasts the eye and the camera
Light enters the telescope and reflects off the primary and secondary mirrors. The light is then focused onto the detector, i.e. the camera. The image is stored and controlled by a computer.
The quantity of light entering the eye is controlled by the size of the pupil, just as the amount of light entering a camera is controlled by the size of the aperture.
The larger the pupil, the more light enters the eye, which is why dark adaptation is so important at night.
You can’t change the aperture size or collecting area of a telescope; they are fixed. Despite this, the camera/telescope combination has one big advantage over the eye: a camera can integrate light over a long exposure.
Light hitting the retina is collected and interpreted over an average of 0.1 to 0.2 seconds. With a camera you can set the exposure time across quite a range.
The image above, taken by Professor Blundell, showed the meteor (very faint, at the 10 o’clock position) and required a 4-second exposure.
You can assist your eyes in different ways including using an eyepiece, telescope or binoculars. Binoculars are particularly useful because your brain works best when it receives information from two eyes. Binoculars also collect more light so that you can see fainter targets.
It is important to note that it is still the brain that interprets camera images and it is the brain that operates the computer (even the process of AI ultimately starts with a brain).
The collecting area of the eye and telescope are very important.
The image below shows Professor Blundell using a digital camera to take a picture of her own eye as she looks into the primary mirror of her telescope.
The collecting aperture of her eye (the pupil) was a few mm across
The diameter of her camera lens was a few cm
The diameter of the primary mirror was about 0.5 m
The telescope was necessary to obtain the image of Messier 8. The collecting area really does matter in astronomy. Size matters when talking about the size of the “light bucket” which is collecting light signals from different parts of the galaxy and beyond.
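The “light bucket” point can be put in rough numbers. Below is a minimal sketch; the fully dark-adapted pupil of about 7 mm is an assumption, while the 0.5 m mirror, the ~0.1 s eye integration time and the 4-second exposure come from the lecture:

```python
import math

def collecting_area(diameter_m):
    """Area of a circular aperture, in square metres."""
    return math.pi * (diameter_m / 2) ** 2

pupil_d, mirror_d = 0.007, 0.5      # metres (7 mm pupil is an assumption)
eye_time, camera_time = 0.1, 4.0    # seconds (from the lecture)

# Light gathered scales as collecting area times integration time.
gain = (collecting_area(mirror_d) / collecting_area(pupil_d)) * (camera_time / eye_time)
print(f"The mirror plus a long exposure gathers ~{gain:,.0f} times more light")
```

On these assumptions the telescope and camera together collect roughly 200,000 times more light than the unaided eye, which is why faint objects like Messier 8 need a telescope at all.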
Things to look at between November 2020 and January 2021 include Jupiter, Mercury and our Moon.
A conjunction occurs when objects come close together in the angular sense: we perceive them as close, even though they probably aren’t.
A great conjunction is due on the 21st December 2020, and it will be the closest Jupiter–Saturn conjunction since the year 1623! At their closest, Jupiter and Saturn will be only 0.1 degrees apart, just 1/5 of a full Moon diameter. It is interesting to note that in 1623 Gresham College was only on its third Professor of Astronomy.
A great conjunction is a conjunction of the planets Jupiter and Saturn. Great conjunctions occur regularly (approximately every 20 years on average) due to the combined effect of Jupiter’s approximately 11.86-year orbital period and Saturn’s 29.5-year orbital period.
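This interval is just the Jupiter–Saturn synodic period, which can be checked from the two orbital periods quoted above:

```python
# Mean interval between great conjunctions: the Jupiter-Saturn
# synodic period, 1 / (1/P_jupiter - 1/P_saturn)
p_jupiter, p_saturn = 11.86, 29.5   # orbital periods in years (from the text)
synodic = 1 / (1 / p_jupiter - 1 / p_saturn)
print(f"Great conjunctions recur roughly every {synodic:.1f} years")
```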
(Left) View the ‘great conjunction’ of Jupiter and Saturn on 21 December at 17:00 UT and (right) use a telescope to pick out each of the planet’s moons before they get too low in the sky (south-up view). Credit: Pete Lawrence (who I would like to thank because he shows things in the night sky in the UK, so I don’t have to go out in the cold)
See it with: Naked eye, binoculars and telescope
How to see it: Early evening bright twilight towards the southwest horizon
On 21 December, after months slowly approaching each other, Jupiter and Saturn meet up for a spectacular ‘great conjunction’. They will appear so close in the sky that for a naked-eye view they may look like a single, bright object. They will be low in the evening twilight and will set quickly, so a good uncluttered southwestern horizon is essential in order to view this conjunction.
Binoculars will separate them into two objects with Saturn, the fainter of the two, lying above the mighty Jupiter. However, if you can use a telescope then aim it at them before they get too low. You will not only see them as discs, but may even see Saturn’s rings and Jupiter’s belts in the same view, along with the four Galilean moons of Jupiter and Saturn’s largest moon, Titan. This will be a brilliant conjunction as the year draws to a close.
It will be sensible to start looking into the night sky well before the 21st. There are several reasons for doing this: firstly, you will be able to see Saturn and Jupiter getting closer together, and secondly it will allow your eyes to practise observing in the dark.
Saturn is the sixth planet from the Sun and the second-largest in the Solar System, after Jupiter. It is a gas giant with an average radius of about nine times that of Earth. It only has one-eighth the average density of Earth; however, with its larger volume, Saturn is over 95 times more massive than Earth.
What the eyes see in the night sky can contribute to professional astronomy.
Since its founding in 1911, the American Association of Variable Star Observers (AAVSO) has coordinated, collected, evaluated, analysed, published, and archived variable star observations made largely by amateur astronomers and makes the records available to professional astronomers, researchers, and educators. These records establish light curves depicting the variation in brightness of a star over time.
A new star popped up on the 10th June. How is it changing? How fast is it changing? Telescopes are not always available to look at the night sky and professional astronomers can’t stop what they are doing if something else interesting appears.
The general public are encouraged to watch what is happening and to compare new stars against neighbouring stars that are not changing and whose brightness is known and calibrated. They can answer questions like “Is the new star brighter or dimmer than, say, star 91?”
Looking at the above image, the new star seemed to be a lot brighter than star 91 on the 10th of June. Stars 64 and 61 were either side of the new star in terms of brightness.
The observations were repeated on the 15th and 28th of June.
This process can be done even on a cloudy night as the relative brightness stays the same.
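The bracketing described above can be sketched in a few lines of Python. The chart labels are the comparison stars’ magnitudes with the decimal point dropped (so “91” means magnitude 9.1, as noted in the Q&A); the midpoint estimate here is just an illustrative choice:

```python
# A minimal sketch of bracketing a nova's brightness between two
# comparison stars. Chart labels are magnitudes with the decimal
# point dropped: "91" means magnitude 9.1.
def label_to_mag(label):
    """Convert a chart label to a magnitude."""
    return int(label) / 10.0

# The nova lay between stars "61" and "64" in brightness
# (remember: a smaller magnitude means a brighter star).
brighter, fainter = label_to_mag("61"), label_to_mag("64")
estimate = (brighter + fainter) / 2
print(f"Nova between magnitude {brighter} and {fainter}; estimate ~{estimate:.2f}")
```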
In astronomy, a light curve is a graph of light intensity of a celestial object or region, as a function of time. The light is usually in a particular frequency interval or band. Light curves can be periodic, as in the case of eclipsing binaries, Cepheid variables, other periodic variables, and transiting extrasolar planets, or aperiodic, like the light curve of a nova, a cataclysmic variable star, a supernova or a microlensing event or binary as observed during occultation events. The study of the light curve, together with other observations, can yield considerable information about the physical process that produces it or constrain the physical theories about it.
This is why the process of light collection and comparison is so important.
The above image is a light curve showing the brightness of a newly exploded star, called a nova, which occurred in the constellation of Sagittarius in 2015.
Sagittarius is one of the constellations located in the Southern celestial hemisphere.
It is a bit scruffy and it is difficult to understand what is going on. It was the only light curve data that was available from telescopes in the first few weeks after the nova exploded.
The above image shows the light curve after data was overlaid from amateur observers. You can see more clearly the up and down zig-zags showing the changing brightness. This is what professional astronomers really want.
The above image also shows that things evolved dramatically by the late spring and early summer of 2015.
There was a sudden drop in brightness probably due to a whole lot of matter being thrown off the star. It was at its faintest after about 100 days.
People all over the world contributed to filling in the gaps.
There is uncertainty as human eyes aren’t perfect but they are scientifically useful.
Professor Blundell and her PhD student, Dominic McLoughlin, are investigating the nova. They are looking at changes in this particular star as it evolved. The up-and-down bits are very useful.
Eyes are non-linear but their observations are useful
In the UK, Galaxy Zoo is a project that asks the general public to help explore galaxies near and far, sampling a fraction of the roughly one hundred billion that are scattered throughout the observable Universe. Each one of the systems, containing billions of stars, has had a unique life, interacting with its surroundings and with other galaxies in many different ways; the aim of the Galaxy Zoo team is to try and understand these processes, and to work out what galaxies can tell us about the past, present and future of the Universe as a whole.
Questions and Answers
1) What size of telescope did Professor Blundell use for her “video” of Mars?
14” diameter but you would get good images with a digital camera held stationary with a tripod
2) How are machines able to take pictures of Mars whilst dealing with external factors such as dust and radiation?
The process isn’t easy. Any images taken must be done after the equipment has been calibrated
Astronomical photometry is concerned with measuring the brightness of celestial objects. CCD (Charge-Coupled Device) photometric calibration is concerned with converting the arbitrary units in which CCD images are recorded into standard, reproducible units. In principle the observed brightness could be calibrated into genuine physical units, such as W m−2, and indeed this is occasionally done. However, in optical astronomy it is much more common to calibrate the observed brightness into an arbitrary scale in which the brightness is expressed relative to the brightness of well-studied ‘standard’ stars. That is, the stars chosen as standards are being used as ‘standard candles’ to calibrate an arbitrary brightness scale.
The reasons for wanting to calibrate CCD frames are virtually self-evident. Once an image has been calibrated it can be compared directly with other photometrically calibrated images and with theoretical predictions. The calibration of CCD frames is simplified because CCD detectors have a response which is usually for all practical purposes linear; that is, the size of the recorded signal is simply proportional to the observed brightness of the object.
In other words, you can’t say for sure that a star is dimmer today than it was yesterday if you haven’t calibrated the measuring device for brightness.
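A minimal sketch of that calibration step, using the differential (standard-star) approach described above; the count values are hypothetical, chosen purely for illustration:

```python
import math

def calibrated_magnitude(target_counts, standard_counts, standard_mag):
    """Differential photometry: convert instrumental CCD counts into a
    calibrated magnitude using a standard star on the same frame."""
    return standard_mag - 2.5 * math.log10(target_counts / standard_counts)

# Hypothetical numbers: a standard star of magnitude 9.1 records
# 50,000 counts; the target records 125,000 on the same exposure.
mag = calibrated_magnitude(125_000, 50_000, 9.1)
print(f"Calibrated magnitude of the target: {mag:.2f}")
```

Because the CCD response is linear, the ratio of counts maps directly to a ratio of brightness, which the logarithm turns into a magnitude difference from the standard star.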
Calibration is taking information and turning it into something useful. This process is crucial with telescopes etc.
Dust can re-radiate infra-red radiation so this needs to be subtracted from signals/images to be studied. The process is informed by experience and curiosity.
A specially developed CCD in a wire-bonded package used for ultraviolet imaging
A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighbouring capacitor. CCD sensors are a major technology used in digital imaging.
3) Does the lack of cratering on Mars, compared to the Moon, suggest that the cratering primarily occurred back when Mars had an atmosphere, before it lost its magnetic field? Surely today’s atmosphere would not burn up the meteorites?
Craters on Mars depend on the history of Mars itself: its geological formation and the evolution of its atmosphere.
The processes have not been fully resolved.
4) Could you suggest an optimal size/strength of binoculars for sky watching? I much prefer using two eyes to one!
Eyes have evolved to work in pairs.
The type of binoculars depends on where you are using them.
Do research on the pros and cons of each type
5) It’s not only domestic lights – phone and computer screens destroy your dark adaptation as well.
6) Why aren’t my eyes dark adapted when I wake up in the middle of the night in a dark bedroom?
It takes time. Upon exposure to darkness, the rhodopsin is able to regenerate and reactivate, becoming sensitive again to light and improving our night vision. But this regeneration process takes time.
Adaptation takes time; when you first wake up, your eyes have been closed.
Are you sure you do not have lights, such as TV pilot lights or streetlights, providing background light that will stop dark adaptation?
For clarity, the rods are most sensitive to green-blue light. Experiments have shown that rods respond most strongly to wavelengths of light around 498 nm (green-blue) and are insensitive to wavelengths longer than about 640 nm (red). This is responsible for the Purkinje effect: as intensity dims at twilight, the rods take over, and before colour disappears completely, peak sensitivity of vision shifts towards the rods’ peak sensitivity (blue-green). If you can see colour you are using your cones, which are less sensitive in low light.
It is a very dark room and eyes take several minutes to adapt. Perhaps adaptation is shut down when eyes are closed or one is asleep.
One viewer wrote “I must admit my eyes also take a while to adapt to waking up in a dark room – although that may have something to do with my use of blue light before sleep”
Rhodopsin aids the function of night vision. If your eyes are closed the photons cannot activate the rhodopsin enabling low light vision. Just guessing.
Most muscles relax during sleep, perhaps iris muscles do too.
Dark adaption requires high blood oxygen levels which is why it does not work on high mountains until you take a sniff from an oxygen bottle. I wonder if Oxygen levels fall while we sleep?
7) I wonder if albino people have potentially better night vision?
8) The Crab Pulsar flashes about 30 times a second (its period is 33 milliseconds); not many people can see things flashing at that rate. There is more chance of seeing it with averted or peripheral vision. Peripheral vision is there to detect motion: tigers leaping out of the darkness!
We actually have a lot of lectures in our archive from Eye expert Will Ayliffe: https://www.gresham.ac.uk/series/vision-and-the-eye/
9) The story I heard is that some people can see faster flashes than others, and this lady was one of them.
Sometimes one can see a fluorescent tube scintillating or pulsing; they run at 50 Hz AC mains frequency.
An American Air Force Sergeant also made a pre-discovery of pulsars using radar sets in Alaska. He found several pulsating radio sources but his superiors were not interested and he was not allowed to publicise the information as the radar was secret.
Chris – fluorescent lights flash at 100 Hz as they turn on and off each half cycle. Flickering is normally at a lower frequency, quite often caused by a failing starter on old-style tubes.
10) The numbers are the brightness of the star in magnitudes without the decimal points
11) So pulsars were discovered by an amateur, who was ignored!
Professor Blundell’s lecture series are as follows:
2020/21 Cosmic Vision
2019/20 Cosmic Concepts
All lectures by the Gresham Professors of Astronomy can be accessed here.