Goldsmiths’ 2014 Mathematics

Thursday 24th July

Complex Networks

Dr G Bianconi

image

What does complex mean?

1) composed of many interconnected parts; compound; composite: a complex highway system.

2) characterised by a very complicated or involved arrangement of parts, units, etc.: complex machinery.

3) so complicated or intricate as to be hard to understand or deal with: a complex problem.

Source: Dictionary.com

4) Complexity, a scientific theory which asserts that some systems display behavioural phenomena that are completely inexplicable by any conventional analysis of the systems' constituent parts. These phenomena, commonly referred to as emergent behaviour, seem to occur in many complex systems involving living organisms, such as a stock market or the human brain.

Source: John L. Casti, Encyclopædia Britannica

Complex networks

These describe the underlying structure of interacting complex biological, social and technological systems.

image

Food webs are an example of a biological network, as is the Krebs cycle (the citric acid cycle).

http://www.sciencequiz.net/jcscience/jcbiology/ecology/mcq/ecology_mcq.htm

Every action — walking, talking, blinking, or so much as thinking — requires a certain muscle to carry out our will. But what fuels this important part of our body? Adenosine Triphosphate (ATP). Molecules of this essential substance are often referred to as “currency of energy.” Though we have several sources of energy in our body, ATP is the only one that is directly usable by any part, muscle, or cell!

http://www.buzzle.com/articles/why-is-atp-an-important-molecule-in-metabolism.html

image

Facebook is an example of a social network

www.Facebook.com

Technological networks include wi-fi networks, optical-fibre networks, the motorway network and the electricity network in the home.

image

http://www.ispexperts.com.np/?p=734

image

http://www.highways.gov.uk/our-road-network/our-network/

http://www.schoolphysics.co.uk/age11-14/Electricity%20and%20magnetism/Current%20electricity/text/House_electrical_circuits/index.html

Human disease can also be passed along a network

image

In the HDN, each node corresponds to a distinct disorder, coloured according to the disorder class to which it belongs; the names of the 22 disorder classes are shown on the right. A link between disorders in the same disorder class is coloured with the corresponding dimmer colour, and links connecting different disorder classes are grey. The size of each node is proportional to the number of genes participating in the corresponding disorder (see key), and the link thickness is proportional to the number of genes shared by the disorders it connects. Names are shown only for disorders with at least 10 associated genes.

http://www.nature.com/scitable/resource?action=showFullImageForTopic&imgSrc=/scitable/content/6413/10[1].1073_pnas0701361104-fig2a_full.jpg

Disease can also be passed on through air, water and human contact.

image

Original map by John Snow showing the clusters of cholera cases in the London epidemic of 1854, drawn and lithographed by Charles Cheffins. He was able to use it to work backwards to the origin of the cholera outbreak, the public water pump on Broad Street (now Broadwick Street).

image

John Snow (15 March 1813 – 16 June 1858) was an English physician and a leader in the adoption of anaesthesia and medical hygiene.

http://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak

http://en.wikipedia.org/wiki/John_Snow_(physician)

Why use networks?

Because networks encode for the organisation, function, robustness and dynamic behaviour of the entire complex system.

A simple story: predicting the H1N1 pandemic

The 2009 flu pandemic, or swine flu, was an influenza pandemic, and the second of two pandemics involving the H1N1 influenza virus (the first being the 1918 flu pandemic), albeit in a new version.

http://en.wikipedia.org/wiki/2009_flu_pandemic

http://en.wikipedia.org/wiki/Pandemic_H1N1/09_virus

http://en.wikipedia.org/wiki/Influenza_A_virus_subtype_H1N1

image

http://www.gleamviz.org/case-study/

Pandemics can jump from one place to another. The colours in the above image show the prevalence of the disease.

Epidemic forecast: predicting the H1N1 pandemic

image

The above left image shows the real outbreaks in the United States and the above right image shows the projected outbreaks.

A simple story: power outage

image

The problem in this case was due to the complexity of the electricity-generating network in New York.

The human genome contains 23,299 genes, while the worm C. elegans has 21,733.

http://web.ornl.gov/sci/techresources/Human_Genome/

image

http://myclass.theinspiredinstructor.com/science/health_diagrams/Cell_Label.htm

https://bspace.berkeley.edu/access/content/group/0f4d90d8-c107-467d-000e-28cb28b3815b/Lecture%20Supplements/develop/images/009genome.jpg

Caenorhabditis elegans is a free-living (non-parasitic), transparent nematode (roundworm), about 1 mm in length, that lives in temperate soil environments.

image

http://en.wikipedia.org/wiki/Caenorhabditis_elegans

The human brain contains a network of between 10 and 100 billion neurons.

The precursors of complex network theory and the history of network analysis

Graph theory: 1735, Euler

image

Map of Königsberg in Euler’s time showing the actual layout of the seven bridges, highlighting the river Pregel and the bridges.

The Seven Bridges of Königsberg is a historically notable problem in mathematics. Its negative resolution by Leonhard Euler in 1735 laid the foundations of graph theory and prefigured the idea of topology.

The problem was to find a walk through the city that would cross each bridge once and only once. The islands could not be reached by any route other than the bridges, and every bridge must have been crossed completely every time; one could not walk halfway onto the bridge and then turn around and later cross the other half from the other side. The walk should start and end at the same spot. Euler proved that the problem has no solution: no walk can cross every bridge exactly once without retracing. The difficulty was the development of a technique of analysis, and of subsequent tests, that established this assertion with mathematical rigour.

image

1735: Leonhard Euler’s theorem:

(a) If a graph has more than two nodes of odd degree, there is no Eulerian path.

(b) If a graph is connected and has no nodes of odd degree, it has at least one Eulerian path.

http://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg
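Euler’s criterion is easy to check in code: just count the nodes of odd degree. Here is a minimal Python sketch (the labels A–D for the four land masses and the edge-list encoding are our own):

```python
from collections import defaultdict

def eulerian_walk_type(edges):
    """Classify a connected multigraph by Euler's criterion:
    count how many nodes have odd degree."""
    degree = defaultdict(int)
    for u, v in edges:              # parallel edges (multiple bridges) allowed
        degree[u] += 1
        degree[v] += 1
    odd = [n for n, d in degree.items() if d % 2 == 1]
    if not odd:
        return "Eulerian circuit: start anywhere and end where you started"
    if len(odd) == 2:
        return "Eulerian path: start and end at the two odd-degree nodes"
    return "no walk crosses every bridge exactly once"

# The seven bridges of Königsberg: land masses A, B, C, D
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(eulerian_walk_type(bridges))  # all four nodes have odd degree -> no walk
```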

Social Network Research: 1930s, Moreno

A social network is a social structure made up of a set of social actors (such as individuals or organizations) and a set of the dyadic ties between these actors. The social network perspective provides a set of methods for analysing the structure of whole social entities as well as a variety of theories explaining the patterns observed in these structures. The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics.

Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalised in the 1950s, and theories and methods of social networks became pervasive in the social and behavioural sciences by the 1980s.

http://en.wikipedia.org/wiki/Social_network

Communication networks/internet: 1960s

http://en.wikipedia.org/wiki/History_of_the_Internet#Networks_that_led_to_the_Internet

The origins of the Internet date back to research commissioned by the United States government in the 1960s to build robust, fault-tolerant communication via computer networks. While this work, together with work in the United Kingdom and France, led to important precursor networks, they were not the Internet. There is no consensus on the exact date when the modern Internet came into being, but sometime in the early to mid-1980s is considered reasonable. From that point, the network experienced decades of sustained exponential growth as generations of institutional, personal, and mobile computers were connected to it.

http://en.wikipedia.org/wiki/Internet

The first ARPANET link was established between the University of California, Los Angeles (UCLA) and the Stanford Research Institute at 22:30 hours on October 29, 1969.

http://en.wikipedia.org/wiki/History_of_the_Internet

Ecological Networks: May, 1979.

http://www.brookes.ac.uk/lifesci/pdf/shreeve/2002EcologicalEntomology.pdf

http://ericlberlow.net/pubs/Memmott%20et%20al%202005%20Chapter%2014%20proofs.pdf

http://www.ecologyandsociety.org/vol11/iss1/art15/

Complex Networks: 1998

In the context of network theory, a complex network is a graph (network) with non-trivial topological features—features that do not occur in simple networks such as lattices or random graphs but often occur in real graphs. The study of complex networks is a young and active area of scientific research inspired largely by the empirical study of real-world networks such as computer networks and social networks.

In 1998, Duncan J. Watts and Steven Strogatz published the first small-world network model, which through a single parameter smoothly interpolates between a random graph and a lattice. Their model demonstrated that with the addition of only a small number of long-range links, a regular graph, in which the diameter is proportional to the size of the network, can be transformed into a “small world” in which the average number of edges between any two vertices is very small (mathematically, it should grow as the logarithm of the size of the network), while the clustering coefficient stays large. It is known that a wide variety of abstract graphs exhibit the small-world property, e.g., random graphs and scale-free networks. Furthermore, real-world networks such as the World Wide Web and the metabolic network also exhibit this property.

http://en.wikipedia.org/wiki/Complex_network

Network science and complex networks theory – The emergence of network science

Data Availability:

Movie Actor Network, 1998;

World Wide Web, 1999;

C. elegans neural wiring diagram, 1990;

Citation Network, 1998;

Metabolic Network, 2000;

PPI network, 2001

Universality:

The architectures of networks emerging in various domains of science, nature and technology are more similar to each other than one would have expected.

The (urgent) need to understand complexity:

Despite the challenges complex systems offer us, we cannot afford not to address their behaviour, a view increasingly shared by both scientists and policy makers. Networks are not only essential for this journey; during the past decade some of the most important advances towards understanding complexity have been made in the context of network theory.

The characteristics of network science

Interdisciplinary

Quantitative and Mathematical

Computational

Empirical

The life of networks

image

Fundamental results of complex networks theory

Types of networks

Simple: Each link either exists or does not; the links do not have directionality (Internet, …)

http://www.unc.edu/~unclng/Internet_History.htm

image

Directed: The links have directionality, i.e., arrows (World Wide Web, social networks…)

image

http://timeline.web.cern.ch/timelines/The-birth-of-the-World-Wide-Web

Signed: The links have a sign (friendship networks, biological networks…)

image

In this visualisation of a high school’s empirical friendship network from the scientists’ data, the different coloured (blue, green, purple, orange) nodes represent students in different grades. Links between nodes are drawn when a student nominates another student as a friend. In the recent study, physicists developed a novel model to describe this social network based on rules governing physical systems. Credit: Marta Gonzalez

http://phys.org/news11611.html

Types of networks

Weighted: The links are associated with a real number indicating their weight (airport networks, phone-call networks…)

With features of the nodes: The nodes might have a weight or colour (social networks, disease networks, etc.)

Total number of nodes N and links L

The total number of nodes N and the total number of links L are the most basic characteristics of a network

Adjacency matrix

Network: A set of labelled nodes and links between them

image

Adjacency matrix: The matrix of entries a(i,j) = 1 if there is a link between node i and j; a(i,j)=0 otherwise

image
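As a minimal sketch (the 4-node edge list is invented for illustration), the adjacency matrix of a simple undirected network can be built directly from the list of links:

```python
import numpy as np

# Hypothetical network: 4 labelled nodes and 4 undirected links
links = [(1, 2), (2, 3), (2, 4), (3, 4)]
N = 4

A = np.zeros((N, N), dtype=int)
for i, j in links:
    A[i - 1, j - 1] = 1  # a(i,j) = 1 if there is a link between nodes i and j
    A[j - 1, i - 1] = 1  # symmetric, because the network is simple (undirected)

print(A)
```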

Degree distribution

Degree of node i: the number of links of node i

image

Network:

image

Degree distribution P(k): how many nodes have degree k

image
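Continuing the sketch above, the degree of node i is the i-th row sum of the adjacency matrix, and P(k) follows by counting how many nodes share each degree:

```python
from collections import Counter
import numpy as np

# Adjacency matrix of the hypothetical 4-node network from the sketch above
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])

k = A.sum(axis=1)  # degree of node i = number of links of node i
N = len(k)
for degree, count in sorted(Counter(k).items()):
    print(f"P({degree}) = {count}/{N} = {count / N:.2f}")
# -> P(1) = 0.25, P(2) = 0.50, P(3) = 0.25
```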

Heterogeneities in complex networks – Random graphs

In probability theory and statistics, the binomial distribution is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p. A success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

In probability theory and statistics, the Poisson distribution, named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event. The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume.
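For a random graph on N nodes in which each of a node’s N − 1 possible links is present independently with probability p, the degree of a node follows exactly this binomial distribution, and in the sparse limit (N large, average degree ⟨k⟩ = p(N − 1) fixed) it converges to the Poisson distribution:

```latex
P(k) \;=\; \binom{N-1}{k}\, p^{k} (1-p)^{\,N-1-k}
\;\longrightarrow\;
e^{-\langle k\rangle}\, \frac{\langle k\rangle^{k}}{k!},
\qquad \langle k\rangle = p\,(N-1)\ \text{fixed},\ N\to\infty
```

This is why the degree distribution of a random graph is sharply peaked around its average degree, in contrast with the heterogeneous, power-law networks discussed below.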

image

http://en.wikipedia.org/wiki/Binomial_distribution

http://en.wikipedia.org/wiki/Poisson_distribution

Siméon Denis Poisson (21 June 1781 – 25 April 1840), was a French mathematician, geometer, and physicist.

image

http://en.wikipedia.org/wiki/Sim%C3%A9on_Denis_Poisson

Heterogeneities and power-laws in complex systems

1906: Pareto principle

20% of the people receive 80% of the income.

The Pareto principle (also known as the 80–20 rule, the law of the vital few, and the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes. Management consultant Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who observed in 1906 that 80% of the land in Italy was owned by 20% of the population; Pareto developed the principle by observing that 20% of the pea pods in his garden contained 80% of the peas.

http://en.wikipedia.org/wiki/Pareto_principle

1935: Zipf’s law (frequency of words)

image

Zipf’s law, an empirical law formulated using mathematical statistics, refers to the fact that many types of data studied in the physical and social sciences can be approximated with a Zipfian distribution, one of a family of related discrete power law probability distributions. The law is named after the American linguist George Kingsley Zipf (1902–1950), who first proposed it (Zipf 1935, 1949), though the French stenographer Jean-Baptiste Estoup (1868–1950) appears to have noticed the regularity before Zipf. It was also noted in 1913 by German physicist Felix Auerbach.

Zipf’s law states that, under certain circumstances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.
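A quick way to see Zipf’s law in practice is to rank word counts and compare them with the prediction f(r) ≈ f(1)/r. In this minimal sketch, corpus.txt is a placeholder for any long plain-text document:

```python
import re
from collections import Counter

# Placeholder corpus: substitute any long plain-text file
text = open("corpus.txt", encoding="utf-8").read().lower()
freq = Counter(re.findall(r"[a-z']+", text))

# Zipf's law predicts the rank-r word occurs roughly f(1)/r times
ranked = freq.most_common(10)
f1 = ranked[0][1]
for rank, (word, count) in enumerate(ranked, start=1):
    print(f"{rank:2d}  {word:<12}  observed {count:6d}   Zipf ~ {f1 / rank:8.1f}")
```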

Probability mass function

image

http://en.wikipedia.org/wiki/Zipf%27s_law

Universalities: Scale-free degree distribution

image

The Barabási–Albert (BA) model is an algorithm for generating random scale-free networks using a preferential attachment mechanism. Scale-free networks are widely observed in natural and human-made systems, including the Internet, the world wide web, citation networks, and some social networks. The algorithm is named for its inventors Albert-László Barabási and Réka Albert.

http://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model

http://www.cis.upenn.edu/~mkearns/teaching/NetworkedLife/power-internet.pdf

Scale-free networks

image

A scale-free network is a network whose degree distribution follows a power law, at least asymptotically. That is, the fraction P(k) of nodes in the network having k connections to other nodes goes for large values of k as

P(k) ~ k^(−γ)

where γ is a parameter whose value is typically in the range 2 < γ < 3, although occasionally it may lie outside these bounds.

http://en.wikipedia.org/wiki/Scale-free_network

Technological networks: Internet; world-wide web

image

A Map of Every Device in the World That’s Connected to the Internet

http://gizmodo.com/a-map-of-every-device-in-the-world-thats-connected-to-t-1628171291?utm_campaign=socialflow_gizmodo_twitter&utm_source=gizmodo_twitter&utm_medium=socialflow

Biological networks: Metabolic networks; protein-interaction networks; transcription networks

Transportation networks: Airport networks

Social networks: collaboration networks; citation networks

Economic networks: networks of shareholders in the financial market; the world trade web

Why this universality?

Growing networks

A preferential attachment process is any of a class of processes in which some quantity, typically some form of wealth or credit, is distributed among a number of individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not. “Preferential attachment” is only the most recent of many names that have been given to such processes. They are also referred to under the names “Yule process”, “cumulative advantage”, “the rich get richer”, and, less correctly, the “Matthew effect”. They are also related to Gibrat’s law. The principal reason for scientific interest in preferential attachment is that it can, under suitable circumstances, generate power law distributions.

http://en.wikipedia.org/wiki/Preferential_attachment

Barabási & Albert 1999, Dorogovtsev & Mendes 2000, Bianconi & Barabási 2001, etc.

Static Networks: Hidden variables mechanism

Chung & Lu, Caldarelli et al. 2002, Park & Newman 2003

http://people.math.sc.edu/lu/papers/ave_pnas.pdf

http://iopscience.iop.org/0295-5075/59/1/155/pdf/epl_59_1_155.pdf

http://journals.aps.org/pre/abstract/10.1103/PhysRevE.68.036122

Motivation for Barabasi-Albert model

1) The networks grow: networks continuously expand by the addition of new nodes. E.g. WWW: addition of new documents; citation networks: publication of new papers.

2) The attachment is not uniform (preferential attachment): a node is linked with higher probability to a node that already has a large number of links. E.g. WWW: new documents link to well-known sites (CNN, Wikipedia, the New York Times, etc.); citations: well-cited papers are more likely to be cited again.

BA model

(1) GROWTH: At every time step we add a new node with m edges (connected to the nodes already present in the system).

(2) PREFERENTIAL ATTACHMENT: The probability Π that a new node will be connected to node i depends on the connectivity ki of that node: Π(ki) = ki / Σj kj.

image

image

Barabási & Albert, Science (1999)

http://en.wikipedia.org/wiki/Barab%C3%A1si%E2%80%93Albert_model
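The BA mechanism takes only a few lines of code. A minimal sketch, using the standard trick of keeping a list in which node i appears ki times, so that uniform sampling from that list realises Π(ki) = ki / Σj kj (the choice of seed is one common convention):

```python
import random

def barabasi_albert(N, m):
    """Grow a BA network to N nodes; each new node attaches m links,
    preferentially to high-degree nodes. Returns the edge list."""
    edges = []
    targets = list(range(m))  # the first new node links to the m seed nodes
    repeated = []             # node i appears in this list k_i times
    for new in range(m, N):
        edges.extend((new, t) for t in targets)
        repeated.extend(targets)    # each chosen target gained one link
        repeated.extend([new] * m)  # the new node arrives with m links
        # choose m distinct targets for the next node, with prob. ~ degree
        targets = []
        while len(targets) < m:
            t = random.choice(repeated)
            if t not in targets:
                targets.append(t)
    return edges

edges = barabasi_albert(N=10_000, m=2)  # degree distribution ~ k^(-3)
```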

Fitness of the nodes

Not all nodes are the same! Let us assign to each node an energy ε and a fitness η = e^(−βε).

image

The Bianconi–Barabási model

Growth:

– At each time step, a new node and m links are added to the network.

– To each node i we assign an energy εi drawn from a p(ε) distribution.

Generalised preferential attachment: each new node connects to the rest of the network by m links, attached preferentially to well-connected, low-energy nodes.

image
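A minimal sketch of the model, assuming a uniform energy distribution p(ε) on [0, 1] and fitness ηi = e^(−βεi) with β = 1/T; the ring seed and the parameter values are our own illustrative choices:

```python
import math
import random

def bianconi_barabasi(N, m, beta):
    """Fitness-model sketch: attachment probability ~ eta_i * k_i,
    with eta_i = exp(-beta * eps_i) and eps_i ~ Uniform(0, 1)."""
    eps = [random.random() for _ in range(N)]
    eta = [math.exp(-beta * e) for e in eps]
    degree = [0] * N
    edges = []
    # seed: a small ring, so every early node starts with nonzero degree
    for i in range(m + 1):
        j = (i + 1) % (m + 1)
        edges.append((i, j))
        degree[i] += 1
        degree[j] += 1
    for new in range(m + 1, N):
        weights = [eta[i] * degree[i] for i in range(new)]
        chosen = set()
        while len(chosen) < m:   # m distinct, fitness-and-degree-biased picks
            chosen.add(random.choices(range(new), weights=weights)[0])
        for t in chosen:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return edges, eps, degree

edges, eps, degree = bianconi_barabasi(N=5_000, m=2, beta=1.0)
```

With β = 0 every node has the same fitness and the model reduces to the BA model; increasing β strengthens the advantage of low-energy nodes.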

Results of the model

Power-law degree distribution

image

Fit-get-rich mechanism

image

http://en.wikipedia.org/wiki/Scale-free_network

http://www.sciencedirect.com/science/article/pii/S0375960110006274

http://books.google.co.uk/books?id=mDI72_9-bw0C&pg=PA359&lpg=PA359&dq=Fit-get-rich+mechanism&source=bl&ots=yVkjtvawFy&sig=Gm3VU5DzeJUtDoEi2tIpoRk91_c&hl=en&sa=X&ei=xBcBVKr-N83daurIgIgI&ved=0CD8Q6AEwBQ#v=onepage&q=Fit-get-rich%20mechanism&f=false

Mapping to a Bose gas

An ideal Bose gas is a quantum-mechanical version of a classical ideal gas. It is composed of bosons, which have an integer value of spin, and obey Bose–Einstein statistics. The statistical mechanics of bosons were developed by Satyendra Nath Bose for photons, and extended to massive particles by Albert Einstein who realized that an ideal gas of bosons would form a condensate at a low enough temperature, unlike a classical ideal gas. This condensate is known as a Bose–Einstein condensate.

A boson is a sub-atomic force carrier particle.

http://en.wikipedia.org/wiki/Bose_gas

We can map the fitness model to a Bose gas with

– density of states p(ε);

– specific volume v = 1;

– temperature T = 1/β.

In this mapping,

– each node of energy ε corresponds to an energy level of the Bose gas;

– each link pointing to a node of energy ε corresponds to an occupation of that energy level.

image

G. Bianconi, A.-L. Barabási 2001
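Concretely, in this mapping the fitness of a node of energy ε is η = e^(−βε), and the degree of node i (arrived at time ti) grows as a power of time whose exponent is the model’s analogue of an occupation number; μ is the chemical potential fixed by self-consistency. A sketch of the key relations from Bianconi & Barabási (2001):

```latex
\eta_i = e^{-\beta \varepsilon_i}, \qquad
k_i(t) \sim \left(\frac{t}{t_i}\right)^{f(\varepsilon_i)}, \qquad
f(\varepsilon) = e^{-\beta(\varepsilon - \mu)}
```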

Bose–Einstein condensation in scale-free networks

A Bose–Einstein condensate (BEC) is a state of matter of a dilute gas of bosons cooled to temperatures very close to absolute zero (that is, very near 0 K or −273.15 °C). Under such conditions, a large fraction of the bosons occupy the lowest quantum state, at which point quantum effects become apparent on a macroscopic scale. These effects are called macroscopic quantum phenomena.

Although later experiments have revealed complex interactions, this state of matter was first predicted, generally, in 1924–25 by Satyendra Nath Bose and Albert Einstein.

http://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_condensate

Satyendra Nath Bose FRS (1 January 1894 – 4 February 1974) was a Bengali physicist specialising in mathematical physics.

Albert Einstein (14 March 1879 – 18 April 1955) was a German-born theoretical physicist and philosopher of science.

image

http://en.wikipedia.org/wiki/Satyendra_Nath_Bose

http://en.wikipedia.org/wiki/Albert_Einstein

In the network there is a critical temperature Tc such that:

• for T > Tc the network is in the fit-get-rich phase;

• for T < Tc the network is in the winner-takes-all or Bose-condensate phase.

image

Small world networks

Small world in social networks

1967: the Milgram experiment

The small-world experiment comprised several experiments conducted by Stanley Milgram and other researchers examining the average path length for social networks of people in the United States. The research was groundbreaking in that it suggested that human society is a small-world-type network characterized by short path-lengths. The experiments are often associated with the phrase “six degrees of separation”, although Milgram did not use this term himself.

296 people from Nebraska and Kansas were selected and, via a mailed packet of instructions, asked to contact a person in Boston through their network of acquaintances. The packet specified the destination person’s name, address, profession and city. Each recipient was asked to forward the packet to someone they knew personally; the aim was to see how many people it took for the packet to reach the target person.

Distribution of intermediaries

image

The “six degrees of separation” model

http://en.wikipedia.org/wiki/Small-world_experiment

The strength of weak ties (Granovetter 1973)

Weak ties help navigate social networks.

Granovetter’s idea

Two people who share a strong tie to a common acquaintance must themselves be connected by (at least a weak) tie.

image

http://www.immorlica.com/socNet/granstrengthweakties.pdf

Mark Granovetter (born October 20, 1943) is an American sociologist and professor at Stanford University who has created theories in modern sociology since the 1970s. He is best known for his work in social network theory and in economic sociology, particularly his theory on the spread of information in social networks known as “The Strength of Weak Ties”

http://en.wikipedia.org/wiki/Mark_Granovetter

The relevance of weak ties

According to Granovetter’s idea, bridges are weak ties and play a crucial role in connecting society.

image

Shortest distance

The shortest distance between two nodes is the minimal number of links that a path must traverse to go from the source to the destination.

image

The shortest distance between node 4 and node 1 is 3.

The shortest distance between node 3 and node 1 is 2.
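Shortest distances in an unweighted network are computed by breadth-first search, which visits nodes in order of increasing distance from the source. A minimal sketch, using a hypothetical chain 1–2–3–4 that reproduces the two distances quoted above:

```python
from collections import deque

def shortest_distance(adj, source, target):
    """Breadth-first search: the minimal number of links on any path
    from source to target (None if no path exists)."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            return dist[node]
        for nb in adj[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return None

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}  # hypothetical chain 1-2-3-4
print(shortest_distance(adj, 4, 1))  # -> 3
print(shortest_distance(adj, 3, 1))  # -> 2
```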

Clustering coefficient of nodes 2 and 3.

image

Clustering coefficient

Definition of local clustering coefficient

http://en.wikipedia.org/wiki/Clustering_coefficient

In graph theory, a clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties; this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971; Watts and Strogatz, 1998).

image
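The local clustering coefficient of node i is the number of links among its neighbours divided by the maximum possible number, ki(ki − 1)/2. A minimal sketch on a hypothetical 4-node graph:

```python
def local_clustering(adj, i):
    """C_i = (links among the neighbours of i) / (k_i * (k_i - 1) / 2)."""
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0  # clustering is taken as zero for degree 0 or 1
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return links / (k * (k - 1) / 2)

# Hypothetical graph: node 2 has neighbours {1, 3, 4}, of which only 3-4 link
adj = {1: {2}, 2: {1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(local_clustering(adj, 2))  # 1 of 3 possible links -> 0.333...
```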

Universalities: Small World

Small-world model (Watts & Strogatz 1998)

The model for coexistence of high clustering and small distances in complex networks

A small-world network is a type of mathematical graph in which most nodes are not neighbours of one another, but most nodes can be reached from every other by a small number of hops or steps. Specifically, a small-world network is defined to be a network where the typical distance L between two randomly chosen nodes (the number of steps required) grows proportionally to the logarithm of the number of nodes N in the network; that is, L ∝ log N.

image

Small-world network example: hubs are bigger than other nodes.

Average vertex degree = 1.917

Average shortest path length = 1.803

Clustering coefficient = 0.522

http://en.wikipedia.org/wiki/Small-world_network

The Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their joint 1998 Nature paper.

image

Random graph

Average vertex degree = 1.417

Average shortest path length = 2.109

Clustering coefficient = 0.167

http://en.wikipedia.org/wiki/Watts_and_Strogatz_model

image

Networks are clustered (large average clustering coefficient C) but have a small characteristic path length (small L).

image

Watts and Strogatz small world model

image

image

There is a wide range of values of p in which a high clustering coefficient coexists with a small average distance.

Watts & Strogatz (1998)
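A minimal sketch of the Watts–Strogatz construction: start from a ring lattice (high clustering, long paths) and rewire each link with probability p. Even a small p creates enough long-range shortcuts to shrink distances while the clustering stays high; the parameter names here are our own:

```python
import random

def watts_strogatz(N, k, p):
    """Ring of N nodes, each linked to its k nearest neighbours (k even);
    every link is then rewired to a random endpoint with probability p."""
    edges = {(i, (i + j) % N) for i in range(N) for j in range(1, k // 2 + 1)}
    result = set()
    for (u, v) in edges:
        if random.random() < p:
            w = random.randrange(N)  # rewire: new endpoint, avoiding
            while w == u or (u, w) in result or (w, u) in result:
                w = random.randrange(N)  # self-loops and duplicate links
            result.add((u, w))
        else:
            result.add((u, v))
    return result

net = watts_strogatz(N=1000, k=6, p=0.05)  # already a small world
```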

Variations and characterizations

Amaral & Barthélemy (1999)

Newman & Watts, (1999)

Barrat & Weigt, (2000)

Systemic risk and fault tolerance of complex networks

Centrality of nodes

In a network some nodes are more essential than others

A way to assess the relevance of nodes is by studying the vulnerability of the network when one node is removed

image

Failures and attacks

The different possible dysfunctions have traditionally been divided into:

Failures – Random elimination of nodes

Attack – Elimination of carefully chosen nodes

Giant Component

A connected component of a network is a subgraph such that for each pair of nodes in the subgraph there is at least one path linking them

The giant component is the connected component of the network which contains a number of nodes of the same order of magnitude as the total number of nodes in the network.

image

Robustness

Complex systems maintain their basic functions even under errors and failures

image

Robustness of scale-free networks

Failures – Topological error tolerance

image

http://www.ncbi.nlm.nih.gov/pubmed/10935628

Cohen et al. 2000/2001
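The failure/attack distinction is easy to simulate: remove a fraction of nodes either uniformly at random or in decreasing order of degree, then measure the giant component that survives. A minimal sketch (the network is a plain dict of neighbour lists; for a scale-free network, the attack shrinks the giant component far faster than random failures do):

```python
import random
from collections import deque

def giant_component(adj):
    """Size of the largest connected component, found by BFS."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        size, queue = 0, deque([start])
        while queue:
            node = queue.popleft()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best

def remove_nodes(adj, victims):
    victims = set(victims)
    return {u: [v for v in nbrs if v not in victims]
            for u, nbrs in adj.items() if u not in victims}

def failure_vs_attack(adj, fraction):
    """Giant component after random failures vs a hub-targeted attack
    that removes the same fraction of nodes."""
    n = int(fraction * len(adj))
    random_victims = random.sample(list(adj), n)
    hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:n]
    return (giant_component(remove_nodes(adj, random_victims)),
            giant_component(remove_nodes(adj, hubs)))
```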

Impact – The tools of modern network theory

Graph theory

Social network theory

Statistical physics

Computer science

Statistics

The impact of network science

Quantum mechanics (QM; also known as quantum physics, or quantum theory) is a branch of physics which deals with physical phenomena at nanoscopic scales where the action is on the order of the Planck constant. It departs from classical mechanics primarily at the quantum realm of atomic and subatomic length scales. Quantum mechanics provides a mathematical description of much of the dual particle-like and wave-like behaviour and interactions of energy and matter. Quantum mechanics provides a substantially useful framework for many features of the modern periodic table of elements including the behaviour of atoms during chemical bonding and has played a significant role in the development of many modern technologies.

http://en.wikipedia.org/wiki/Quantum_mechanics

Quantum Mechanics: 1900

The 1900 quantum hypothesis by Max Planck was that any energy-radiating atomic system can theoretically be divided into a number of discrete “energy elements” ε (epsilon) such that each of these energy elements is proportional to the frequency f with which each of them individually radiates energy, as defined by the following formula:

ε = hf, where h is a numerical value called Planck’s constant.

http://en.wikipedia.org/wiki/History_of_quantum_mechanics

image

Max Karl Ernst Ludwig Planck, FRS (April 23, 1858 – October 4, 1947) was a German theoretical physicist who originated quantum theory, which won him the Nobel Prize in Physics in 1918.

http://en.wikipedia.org/wiki/Max_Planck

http://www.thequantumnetwork.org/quantum-frontiers-netsci2014/

Electron microscope 1931

An electron microscope is a microscope that uses accelerated electrons as a source of illumination. Because the wavelength of an electron can be up to 100,000 times shorter than that of visible light photons, the electron microscope has a higher resolving power than a light microscope and can reveal the structure of smaller objects. A transmission electron microscope can achieve better than 50 pm resolution and magnifications of up to about 10,000,000x whereas most light microscopes are limited by diffraction to about 200 nm resolution and useful magnifications below 2000x.

German physicist Ernst Ruska and the electrical engineer Max Knoll constructed the prototype electron microscope in 1931, capable of four-hundred-power magnification; the apparatus was the first demonstration of the principles of electron microscopy.

image

Electron microscope constructed by Ernst Ruska in 1933

http://en.wikipedia.org/wiki/Electron_microscope

http://en.wikipedia.org/wiki/Ernst_Ruska

http://en.wikipedia.org/wiki/Max_Knoll

Transistor 1947

A transistor is a semiconductor device used to amplify and switch electronic signals and electrical power. It is composed of semiconductor material with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor’s terminals changes the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. Today, some transistors are packaged individually, but many more are found embedded in integrated circuits.

The transistor is the fundamental building block of modern electronic devices, and is ubiquitous in modern electronic systems. Following its development in 1947 by American physicists John Bardeen, Walter Brattain, and William Shockley, the transistor revolutionized the field of electronics, and paved the way for smaller and cheaper radios, calculators, and computers, among other things. The transistor is on the list of IEEE milestones in electronics, and the inventors were jointly awarded the 1956 Nobel Prize in Physics for their achievement.

http://en.wikipedia.org/wiki/Transistor

http://en.wikipedia.org/wiki/John_Bardeen

http://en.wikipedia.org/wiki/Walter_Houser_Brattain

http://en.wikipedia.org/wiki/William_Shockley

Laser 1957

A laser is a device that emits light through a process of optical amplification based on the stimulated emission of electromagnetic radiation. The term “laser” originated as an acronym for “light amplification by stimulated emission of radiation”.

In 1953, Charles Hard Townes and graduate students James P. Gordon and Herbert J. Zeiger produced the first microwave amplifier, a device operating on similar principles to the laser, but amplifying microwave radiation rather than infrared or visible radiation.

In 1957, Charles Hard Townes and Arthur Leonard Schawlow, then at Bell Labs, began a serious study of the infrared laser. As ideas developed, they abandoned infrared radiation to instead concentrate upon visible light. The concept originally was called an “optical maser”. In 1958, Bell Labs filed a patent application for their proposed optical maser; and Schawlow and Townes submitted a manuscript of their theoretical calculations to the Physical Review, published that year in Volume 112, Issue No. 6.

http://en.wikipedia.org/wiki/Laser

http://en.wikipedia.org/wiki/Charles_Hard_Townes

http://en.wikipedia.org/wiki/Arthur_Leonard_Schawlow

Magnetic resonance imaging 1973

Magnetic resonance imaging (MRI), nuclear magnetic resonance imaging (NMRI), or magnetic resonance tomography (MRT) is a medical imaging technique used in radiology to investigate the anatomy and physiology of the body in both health and disease. MRI scanners use strong magnetic fields and radio waves to form images of the body. The technique is widely used in hospitals for medical diagnosis, staging of disease and for follow-up without exposure to ionising radiation.

In 1952, Herman Carr produced a one-dimensional MRI image as reported in his Harvard PhD thesis. In the Soviet Union, Vladislav Ivanov filed (in 1960) a document with the USSR State Committee for Inventions and Discovery at Leningrad for a Magnetic Resonance Imaging device, although this was not approved until the 1970s.

In a 1971 paper in the journal Science, Raymond Damadian, an Armenian-American physician, scientist, and professor at the Downstate Medical Center State University of New York (SUNY), reported that tumors and normal tissue can be distinguished in vivo by nuclear magnetic resonance (“NMR”).

While researching the analytical properties of magnetic resonance, Damadian created the world’s first magnetic resonance imaging machine in 1972. He filed the first patent for an MRI machine, U.S. patent #3,789,832 on March 17, 1972, which was later issued to him on February 5, 1974.

http://en.wikipedia.org/wiki/Magnetic_resonance_imaging

http://en.wikipedia.org/wiki/Raymond_Vahan_Damadian

Much of modern technology operates at a scale where quantum effects are significant. However, there is at least a 30-year gap between the science and the technology.

Drug design, metabolic engineering

Aspirin:

Reduces inflammation, fever and pain;

May reduce the risk of Alzheimer’s disease

http://en.wikipedia.org/wiki/Prevention_of_dementia

http://scholar.google.co.uk/scholar?q=aspirin+and+alzheimer%27s+prevention&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=eO4BVPbOFsjWaM73gfAO&ved=0CDcQgQMwAA

Reduces the risk of breast cancer, ovarian cancers and colorectal cancer

http://scholar.google.co.uk/scholar?q=Reduces+the+risk+of+breast+cancer+ovarian+cancers+colorectal+cancer+amd+aspirin&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=v_ABVNfKE9HuaOrKgoAI&ved=0CCEQgQMwAA

Prevents heart attack and stroke

http://www.nlm.nih.gov/medlineplus/ency/patientinstructions/000092.htm

Unfortunately aspirin can cause bleeding ulcers

http://drugs.webmd.boots.com/drugs/drug-36-ASPIRIN.aspx?drugid=36&drugname=ASPIRIN&source=2&isTicTac=false

http://health.howstuffworks.com/medicine/medication/aspirin4.htm

Aspirin, also known as acetylsalicylic acid, is a salicylate drug, often used as an analgesic to relieve minor aches and pains, as an antipyretic to reduce fever, and as an anti-inflammatory medication.

image

http://en.wikipedia.org/wiki/Aspirin

http://www.significancemagazine.org/details/webexclusive/986723/Aspirin–a-wonder-drug.html

http://www.chm.bris.ac.uk/motm/aspirin/aspirin.htm

http://www.uni-saarland.de/fak8/heinzle/de/teaching/Systems_Synthetic_Biology/Literature/metabolic_engineering_bailey.pdf

Brain research

In September 2010 the National Institutes of Health awarded $40 million to researchers at Harvard, Washington University in St. Louis, the University of Minnesota and UCLA, to develop the technologies that could systematically map out brain circuits.

The Human Connectome Project (HCP) has the ambitious goal of constructing a map of the complete structural and functional neural connections, in vivo, within and across individuals.

http://www.humanconnectomeproject.org/overview/

MOST IMPORTANT: Networks really matter

If you were to understand the spread of diseases, could you do it without networks?

If you were to understand the WWW structure, searchability, etc., could you do it without invoking the Web’s topology?

If you want to understand human diseases, could you do it without considering the wiring diagram of the cell?

If you want to understand how any electronic device works, can you do it without a circuit diagram?
