The yuga between 1800 CE and 1900 CE saw a remarkable change in our understanding of the world at many levels. It is not that some of these ideas did not exist long before that time, but they came together in a world-system of science and philosophy in that period. Part of this change can be traced to multiple disparate events that interestingly happened in the year 1859 CE — a time when our nation had sunk to what was perhaps the lowest point of its existence. Due to that, our nation could not fully participate in the cataclysmic consequences of those events until some time later, and the Hindu elite have still not fully internalized the significance of those events.
The mathematical foundations of mechanics laid by Newton, with deep roots in Euclidean geometry, had met with near-infallible success. The physicists, drunk with confidence from the triumph of Newton, believed that they could account for all physical phenomena based on that mathematical formulation. But it was ironically this very mathematical formulation that was to point out that something was amiss. The great Carl Gauss had shown how, with just three observations, one could calculate the orbit of a celestial body, and used this to recover the asteroid Ceres, which had been lost after going behind the sun. Gauss’s student was Johann Encke, who a few years after his graduation developed methods to calculate the perturbational effects of various celestial bodies on the orbits of other such bodies. It was these methods that led to his discovering the period of the comet that later came to be known as the famed Encke’s Comet, and to his other great works on cometary orbits.
Following on the path of Encke’s methods, the great French mathematician Urbain Le Verrier came to be “the man who discovered a planet with the point of his pen”, as his colleague called him. By performing complex and painstaking calculations of Newtonian mechanics he showed that the orbit of Uranus could not be explained by the known data and proposed a new planet to explain it. Not just that, he predicted where that planet would be in the sky, pointing to a location at the boundary of Capricornus and Aquarius. With no Frenchmen showing interest in testing his prediction, he sent his predictions over to Encke at Berlin. But the day Encke received Le Verrier’s letter was his birthday and he had organized a big party for the evening rather than an observation session. Moreover, he was much more a man of mathematics than an observer. By some coincidence, he had recently had his former doctoral student Carl Bremiker make excellent new star maps of the Aquarius-Capricornus region for the observatory. Further, Encke’s assistant and former student Johann Galle had sent his doctoral dissertation to Le Verrier for comments, and this letter from Le Verrier contained the comments on that in addition to his note on the prediction of a new planet. Thus, with Encke not observing, Johann Galle and his student Heinrich d’Arrest got to use the telescope on the night they received Le Verrier’s letter. They discovered Neptune within a degree of the position he had predicted. When we saw Neptune for the first time after quite some difficulty with our small refractor, we were able to appreciate the triumph of Galle. This was the ultimate triumph of Newtonian mechanics and Encke fittingly wrote to Le Verrier: “Your name will be forever linked with the most outstanding conceivable proof of the validity of universal gravitation.” By this Encke meant the Newtonian theory of gravitation.
But there was something ironic about Le Verrier’s life’s work. Before his study of Uranus leading to the prediction of Neptune, he had worked on the other end of the solar system, where the elusive Mercury orbits, which these days the ordinary urban man rarely catches a glimpse of (I remember all the times I’ve seen it). Going through calculations involving hundreds of terms, he calculated an excess of precession for the orbit of Mercury which could not be accounted for by Newtonian mechanics (September 1859). While the same trick of postulating an additional planet, or other anomalies, was tried, none of them really worked. Ironically, the universality of Newtonian gravitation, which Encke thought Le Verrier had proven, now stood to be surpassed. This of course took a while to happen, but the seeds had been sown by the new geometry of Bernhard Riemann. This laid the foundation for Einstein’s theory, which came in the next century. At the same time as solving the excess precession problem of Mercury, Einstein also predicted the existence of gravitational waves. Observations of binary pulsars in 1974 indirectly suggested he was right. With the direct detection of gravitational waves in the current century of the common era, this prediction of Einstein has finally received its direct confirmation. Thus, one of the pillars of physics in the realm of the “big” was established.
[As an aside, talking of Riemann, 1859 was also the year he published his famous paper establishing the relationship between the zeroes of the $\zeta$ function in the complex plane and the distribution of the prime numbers.]
The 21-year-old Gustav Kirchhoff, while still a student, discovered his famous laws of electrical circuits, which any student who has studied elementary high-school physics would have encountered. This was just the beginning of what was to be a remarkable career spanning multiple branches of science and mathematics. Continuing with his electrical work, he showed that in a resistance-less wire the electricity would flow at the speed of light. This important result formed the bedrock of electromagnetism that was taken to its conclusion by JC Maxwell. He then worked with Robert Bunsen to develop spectroscopy, and it was as part of this work, in October of 1859 CE, that he reported his observations on how the D-lines in the solar spectrum are further darkened when the light is passed through a Bunsen burner flame with sodium. These observations culminated in Kirchhoff’s famous spectroscopic laws:
1) An incandescent solid, or a liquid or a gas under high pressure produces a continuous spectrum, i.e. like a rainbow of colors.
2) A gas under low pressure produces an emission spectrum, i.e. one with bright-lines of specific frequency.
3) A continuous spectrum when viewed through a cooler low-density gas produces an absorption spectrum, i.e. dark lines are seen superimposed on the continuous spectrum. These correspond to the bright lines produced when the same substance is heated to produce an emission spectrum.
The frequency at which the absorption or emission lines are seen depends on the substances emitting or absorbing light and the temperature to which they are heated. Further, by the end of 1859, using the rather elementary device of the below thought experiment, Kirchhoff arrived at a basic theorem for the continuous spectrum:
Let $P_1$ and $P_2$ be 2 isolated, infinite, opaque (they do not allow transmission of energy through them) plates (as caricatured in Figure 1) in thermodynamic equilibrium, i.e. they are at the same absolute temperature $T$ and the inflow of energy and outflow of energy into either plate is in balance. Let us even assume they are made of different materials as indicated by the different colors in Figure 1. Let us consider the following for a particular frequency of radiation $\nu$. Let $a_1, a_2$ be the absorptivities of the 2 plates, i.e. the fraction of the incident radiant energy absorbed per unit time per unit area. Let $e_1, e_2$ be the emissivities of the 2 plates, i.e. the amount of energy they radiate per unit time per unit area. Now, some of the energy incident on them is absorbed while the rest is reflected. This defines the respective reflectivities as $r_1 = 1 - a_1$ and $r_2 = 1 - a_2$. Now, for $P_1$ the outflow of energy per unit time per unit area is $e_1$. Being in thermodynamic equilibrium, it is in balance with its inflow $i_1$.
We can analyze $i_1$ thus:

1) As the first chain of inflow, $P_1$ receives $e_2$ from $P_2$. Of this it absorbs $a_1 e_2$ (the first order absorption) and reflects $r_1 e_2$. This is incident on $P_2$, which reflects back $r_2 r_1 e_2$. Of this $P_1$ absorbs $a_1 r_1 r_2 e_2$ (second order absorption) and reflects back $r_1^2 r_2 e_2$. In turn $P_2$ reflects back $r_1^2 r_2^2 e_2$, of which $P_1$ absorbs $a_1 r_1^2 r_2^2 e_2$. $P_1$ reflects back $r_1^3 r_2^2 e_2$ and the chain continues ad infinitum. Thus, we can write the first component of $i_1$ as:

$i_{1,1} = a_1 e_2 \left(1 + r_1 r_2 + r_1^2 r_2^2 + \cdots \right) = \dfrac{a_1 e_2}{1 - r_1 r_2}$

The last step is obtained via the limit of the infinite sum of a geometric series given that $0 \le r_1 r_2 < 1$.
2) The second chain of inflow goes thus: $P_1$ emits $e_1$, of which $P_2$ reflects back $r_2 e_1$. Of this $P_1$ absorbs $a_1 r_2 e_1$ and reflects back $r_1 r_2 e_1$. Of this $P_2$ reflects back $r_1 r_2^2 e_1$. Of this $P_1$ absorbs $a_1 r_1 r_2^2 e_1$ and reflects back $r_1^2 r_2^2 e_1$. Of this $P_2$ reflects back $r_1^2 r_2^3 e_1$. Thus, we can write the second component of $i_1$:

$i_{1,2} = a_1 r_2 e_1 \left(1 + r_1 r_2 + r_1^2 r_2^2 + \cdots \right) = \dfrac{a_1 r_2 e_1}{1 - r_1 r_2}$
Given the thermodynamic equilibrium, $e_1 = i_1 = i_{1,1} + i_{1,2}$; hence,

$e_1 = \dfrac{a_1 e_2}{1 - r_1 r_2} + \dfrac{a_1 r_2 e_1}{1 - r_1 r_2}$

We can rearrange the equation as:

$e_1 \left(1 - r_1 r_2 - a_1 r_2\right) = a_1 e_2$

Dividing both sides by $a_1$ we get:

$\dfrac{e_1}{a_1} \left(1 - r_1 r_2 - a_1 r_2\right) = e_2$

Given that $r_1 = 1 - a_1$ and $r_2 = 1 - a_2$ we can hence write the above as:

$\dfrac{e_1}{a_1} \left(1 - r_2\left(r_1 + a_1\right)\right) = \dfrac{e_1}{a_1} \left(1 - r_2\right) = \dfrac{e_1}{a_1} a_2 = e_2; \quad \therefore \dfrac{e_1}{a_1} = \dfrac{e_2}{a_2}$

Similarly, from the energy balance of $P_2$ we can write:

$\dfrac{e_2}{a_2} = \dfrac{e_1}{a_1}$

Thus, irrespective of the material composition of the plates, their ratios of emissivity to absorptivity are the same. Since this analysis was done for a given frequency of radiation $\nu = \dfrac{c}{\lambda}$ (where $\lambda$ is the wavelength of the radiation) at a certain equilibrium temperature $T$, we can say that the above ratios are a function of these:

$\dfrac{e_1}{a_1} = \dfrac{e_2}{a_2} = K(\nu, T)$
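The derivation above can be checked numerically; the following is a minimal Python sketch (the function name and the chosen values of $K$, $a_1$, $a_2$ are purely illustrative) that sums the two reflection chains and confirms that the balance holds whenever $e_1/a_1 = e_2/a_2$:

```python
# Numerical check of the two-plate argument: if e/a is the same for both
# plates, plate 1's emission e1 exactly balances the energy it absorbs
# from the two infinite reflection chains. All values are illustrative.
def absorbed_inflow(a1, e1, a2, e2, n_terms=200):
    r1, r2 = 1 - a1, 1 - a2
    # chain 1: plate 1 absorbs successive reflections of plate 2's emission
    i_11 = sum(a1 * e2 * (r1 * r2) ** n for n in range(n_terms))
    # chain 2: plate 1 absorbs reflections of its own emission off plate 2
    i_12 = sum(a1 * r2 * e1 * (r1 * r2) ** n for n in range(n_terms))
    return i_11 + i_12

K = 5.0                   # the common ratio e/a at this frequency and temperature
a1, a2 = 0.3, 0.8         # different materials, different absorptivities
e1, e2 = K * a1, K * a2   # emissivities chosen so that e/a = K for both plates

print(abs(absorbed_inflow(a1, e1, a2, e2) - e1) < 1e-9)  # True: balance holds
```

Changing $a_1$ or $a_2$ independently leaves the balance intact only as long as the emissivities are rescaled to keep $e/a$ common, which is the content of Kirchhoff's theorem.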
Now Kirchhoff postulated a theoretical body, termed the black body, that absorbed all energy incident on it, i.e. $a = 1$. Thus, for such a black body the emissivity would be $e = K(\nu, T)$, i.e. it would be purely a function of the frequency of the radiation and its temperature. This immediately presented the physicists of the age with two challenges: 1) an experimental one, i.e. to construct a radiating body that approximates the black body as closely as possible and to empirically measure the shape of $K(\nu, T)$; 2) a theoretical one, i.e. to derive from a theoretical model of radiation the shape of $K(\nu, T)$.
These challenges proved more revolutionary than the physicists of the time thought. In 1869 CE one of the greatest physical theorists of all time, Ludwig Boltzmann, was appointed full professor of mathematical physics at the age of 25. Starting that year, over the next two years he spent some time studying with Bunsen and Kirchhoff, whose findings we have just alluded to. Deeply inspired by the discussions with them on thermodynamics, he went on to provide a statistical framework to explain the second law of thermodynamics in 1872 CE. Most European physicists of that time, unlike the chemists, did not consider the atomic theory to be real. Boltzmann not only considered atoms to be real (spherical atoms formed the foundation of his work on the second law) but in this work he introduced the idea of discrete energy levels. Later, to the shock of the attending physicists at a conference in Halle in 1891 CE, Boltzmann emphatically stated: “I see no reason why energy shouldn’t also be regarded as divided atomically.” It was these ideas that were to provide the ultimate solution to Kirchhoff’s challenge.
On the experimental side it took about 20 years for the first glimmer of understanding to emerge with regard to $K(\nu, T)$, namely that it has one clear maximum when plotted against $\nu$, which moves to lower $\nu$ with decreasing $T$. Finally, in 1896 CE, Hermann von Helmholtz’s student Wilhelm Wien proposed the first reasonable function to account for this shape. It took the form $K(\nu, T) = a \nu^3 e^{-\frac{b \nu}{T}}$, where $a, b$ are constants. In terms of its basic shape it resembled what was empirically known for $K(\nu, T)$, and the experiments by Paschen around that time suggested that indeed Wien had found the right curve for the black body radiation. However, new and more precise experiments at lower frequencies soon poured cold water on this. These experiments were being done by the groups of Lummer and Pringsheim on one hand and Rubens and Kurlbaum on the other, at what were probably the best experimental physics labs in the world at the dawn of the 1900s. They indicated that the function of Wien failed at lower frequencies.
The final solution to the problem came from the dark horse among the physicists, Max Planck, who had fittingly taken the professorial chair of Kirchhoff upon his death. This chair at Berlin was first offered to Boltzmann, who declined it; with no one taking it, finally it was given to Planck. He had a solid background, having studied physics with Kirchhoff and von Helmholtz, and mathematics under Weierstrass, who was second in line of academic descent from Gauss. Till the age of 40 he had done competent work in thermodynamics but was for the most part ‘scooped’ by the great American mathematician and inventor Josiah Gibbs, whose work in turn paralleled that of Boltzmann to enter the textbooks. Despite all this, Planck had long set his mind on the bigger goal of deriving the correct shape of the black body radiation curve. Ironically, throughout most of this phase Planck was in the “wrong team”, opposing the atomic theory. As of 1897, Planck was still disputing the statistical framework of Boltzmann based on atomic principles. In response, Boltzmann published a paper showing that Planck’s objections were untenable and wrote: “It is certainly possible and would be gratifying to derive for radiation phenomena a theorem analogous to the entropy theorem from the general laws for these phenomena using the same principles as in gas theory. Thus, I would be pleased, if the work of Dr. Planck on the scattering of electrical plane waves by very small resonators would become useful in this respect, which by the way are very simple calculations whose correctness I have never put in doubt.”
This rebuttal of Boltzmann brought about the gradual conversion of Planck over the next 3 years. His long-standing wish came to fruition fatefully on a Sunday afternoon in the autumn of 1900 CE, when the family of Rubens, who had done the black body radiation experiments, visited the family of Planck. Rubens told his host about his latest experimental results with respect to the black body radiation measured for low frequencies and its departure from Wien’s proposal. That very evening, Planck, drawing on his deep study of Kirchhoff’s fundamental problem, arrived at the correct formula for the black body radiation function:
$K(\nu, T) = \dfrac{a \nu^3}{e^{\frac{b \nu}{T}} - 1}$, where $a, b$ are constants.
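A quick numerical sketch (with illustrative constants $a = b = 1$ and $T = 1$, not fitted values) shows how Planck’s formula repairs Wien’s at low frequencies while agreeing with it at high frequencies:

```python
from math import exp

# Wien's proposal and Planck's formula, with illustrative constants a = b = T = 1.
def wien(nu, a=1.0, b=1.0, T=1.0):
    return a * nu**3 * exp(-b * nu / T)

def planck(nu, a=1.0, b=1.0, T=1.0):
    return a * nu**3 / (exp(b * nu / T) - 1)

# At high frequency the two nearly coincide ...
print(wien(20.0) / planck(20.0))   # ratio very close to 1
# ... but at low frequency Wien grossly underestimates the radiation:
print(wien(0.01) / planck(0.01))   # ratio close to 0.01
```

The ratio of the two functions is $1 - e^{-b\nu/T}$, which is why Wien’s form only works in the high-frequency regime probed by Paschen, and fails in the low-frequency regime probed by Rubens and Kurlbaum.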
Over the next two months, backed by his profound knowledge of thermodynamics, his strong mathematical capacity, the “conversion” he had undergone due to Boltzmann and the inspiration from those very methods of Boltzmann, he arrived at the quantum theory, where energy is emitted and absorbed in “atomic” packets or quanta. The energy of these quanta is described as $E = h\nu$, where $h$ is Planck’s constant. In the following years its power was demonstrated by Einstein, who explained the photoelectric effect using the same theory. Thereafter, Niels Bohr used the same, in combination with inspiration from the studies of Darwin’s grandson, to arrive at the first quantum model of the atom. The rest, as they say, is history: thus, the second pillar of physics in the realm of the small was established.
When my father first read out a basic version of this story to me as a kid, I was profoundly inspired to study the quantum theory, to the extent my meager mathematics allowed, when I grew older.
Order and disorder
In the late 400s and the beginning of the 500s of the common era, the great Hindu scientist Āryabhaṭa-I devised several mechanical devices, which were powered by gravity and/or flowing water. One of these was the svayamvaha-yantra, in which the water flowing out of a water clock under gravity caused a sphere to rotate around its axis once in 24 hours. This was meant as a teaching device to illustrate the apparent rotation of the heavenly sphere. Over a hundred years later, in 628 CE, Āryabhaṭa’s successor and antagonist Brahmagupta, apparently in a bid to outdo him, claimed to have devised a svayamvaha-yantra which was a perpetual motion machine (ajasra-yantra). It was supposed to operate with gravity acting on mercury and buoyancy alternately working to keep a spoked wheel moving forever. Evidently, this device did not work as expected, prompting his successors, like Lalla and Bhāskara-II, to attempt various modifications and alternate designs to arrive at something which worked. While the real goal was obviously never attained, in India these failed attempts were part of a tradition of constructing genuinely working mechanical devices culminating in king Bhojadeva’s automata — a tradition which did not survive the ravages of Mohammedanism. However, the transmission of these ideas to West Asia via the Mohammedans appears to have seeded the quest for perpetual motion machines in Europe.
While from today’s vantage point the quest for these machines might look like a sign of lunacy, the ultimate realization that perpetual motion machines are untenable needed the recognition of the laws of thermodynamics. This had to wait a long time and arose from meditations ensuing from the eventual invention of the steam engine in Europe. First, the Englishman Joule’s recognition that work and heat were manifestations of an equivalent quantity, i.e. energy, led to the first law of thermodynamics. This law is essentially the law of conservation of energy: energy can neither be created nor destroyed but only converted from one form to another. This law negated the possibility of having a perpetual motion machine that did work without an equivalent input of energy.
As the English were taking full advantage of their engines driven by the heat energy from burning coal to run their business, their neighbors the French grew increasingly anxious. This prompted the brilliant military engineer Sadi Carnot, from a learned French clan, to carry out the first theoretical study of the principles behind the engines. His penetrating investigation, completed by the time he was 27, more or less laid the foundations of thermodynamics. Subsequently, he suddenly went mad at the age of 36 and died from cholera shortly thereafter. Due to the contagious nature of the disease many of his works were buried with him, but what survived was his famous work on the cycle of an engine. He recognized that for an engine to run it needed both a heat source from which it took heat to perform work and a heat sink at a lower temperature than it to dump some of that heat that was not converted to work. While the source was rather obvious in the practical steam engines, starting from those devised by James Watt, the sink was not — it was merely the ambient surroundings in which the engine operated. Carnot went on to show that the maximal efficiency of an engine depended on the temperatures of the source and the sink. Let $q$ be the heat the engine takes from a source at (absolute) temperature $T_{source}$. Let $T_{sink}$ be the temperature of the sink. Then the maximal work that can be done in a cycle of the engine is:

$w = q \left(1 - \dfrac{T_{sink}}{T_{source}}\right)$
This is the famous Carnot equation, which indicates that not all heat can be effectively converted to work unless the source were at infinite temperature or the sink at absolute zero, neither of which is a feasible option.
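As a worked example with illustrative numbers: an engine drawing 1000 J per cycle from a source at 500 K and dumping waste heat into a sink at 300 K can convert at most 40% of that heat into work:

```python
# Maximal work per cycle from the Carnot relation; the temperatures
# (in kelvin) and heat input are illustrative values.
def carnot_max_work(q, t_source, t_sink):
    return q * (1 - t_sink / t_source)

# 1000 J drawn from a 500 K source, with a 300 K sink:
print(carnot_max_work(1000.0, 500.0, 300.0))  # at most 400 J becomes work
```

The remaining 600 J must be dumped into the sink; only an infinitely hot source or a sink at absolute zero would push the efficiency to 1.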
It is this inability to get all the heat to perform work which poses an additional constraint that negates even a lower form of perpetual motion machine, namely one which conserves energy but at least keeps running forever by cyclically converting one form of energy into another. Two noted scientists of the age, Lord Kelvin and Rudolf Clausius (second in line from Gauss via Dirichlet, with whom he studied mathematics), formalized Carnot’s discovery by the middle of the 1800s as a law:
“No cycle in which heat is taken from a hot source and converted completely into work is possible” -Lord Kelvin
“Heat does not flow from a body at low temperature to one at high temperature without an accompanying change elsewhere.” -Rudolf Clausius
These became the basic statements of the second law of thermodynamics. Clausius furthered this to define an entity termed entropy. He defined this such that the change in entropy multiplied by temperature specified that portion of heat which could not be converted to work. This led him to state the second law in a rather different way but ultimately entirely equivalent to the above statements:
The entropy of the universe increases in the course of any spontaneous change.
Here a spontaneous change is one that occurs automatically, i.e. without needing any external work to be done for it to happen, e.g.: 1) when a compressed gas is released into an empty container of larger volume it spontaneously expands to occupy that volume. 2) A hot metal piece placed at room temperature cools to the same temperature.
This statement of the second law provided a deeper insight into the nature of entropy. For instance, let us consider the above example of the gas expansion: the gas occupying a smaller volume is more ordered. This can be expressed in a probabilistic sense: the probability of finding a gas molecule in a given unit of volume is higher in this state than when the gas spontaneously expands on being released into a larger empty container. Here the gas molecules move over a larger space and the probability of finding a molecule in the same unit volume decreases. Thus, the gas gets more disordered. Thus, entropy can be seen as a measure of disorder and the second law stated as: “matter and/or energy tend to get more disordered.” The formal description of this idea had its roots in a discovery of the great James Clerk Maxwell in a distinct investigation, namely the function describing the distribution of the velocities of molecules in an ideal gas, again interestingly published in 1859 CE. In the course of developing this abstraction further, Boltzmann formulated his celebrated formula for the absolute entropy $S$ of a substance in 1877 (in the modern form given by Max Planck):

$S = k \log(W)$
Here, $k$ is Boltzmann’s constant in energy and inverse temperature units and $W$ is the total number of ways in which atoms, molecules or energy elements can be arranged in the sample such that the total energy remains constant. Each such arrangement that fulfills this condition is termed a “microstate”. Thus, $W$ is the total number of microstates in the sample. This simple formulation has profound implications, for it allows one to connect the entropy of a sample to the probability of occurrence of each of the arrangements of the “atomic” entities in the sample:

$S = -k \displaystyle\sum_i p_i \log(p_i)$
Here $p_i$ is the probability with which microstate $i$ occurs in the sample.
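To make $W$ concrete, one can count microstates for a toy system (a standard textbook illustration, not from the original discussion): $q$ indistinguishable energy quanta shared among $N$ oscillators can be arranged in $\binom{q+N-1}{q}$ ways, and the entropy in units of $k$ is simply the logarithm of that count:

```python
from math import comb, log

# Microstate count W for q indistinguishable energy quanta distributed
# over n oscillators (a toy "Einstein solid"); S = k*log(W), here in units of k.
def entropy_in_units_of_k(q, n_oscillators):
    w = comb(q + n_oscillators - 1, q)  # number of microstates at fixed total energy
    return log(w)

# Adding energy (more quanta) at a fixed oscillator number raises W, hence S:
print(entropy_in_units_of_k(10, 5) < entropy_in_units_of_k(20, 5))  # True
```

For 10 quanta among 5 oscillators $W = \binom{14}{10} = 1001$; for 20 quanta $W = \binom{24}{20} = 10626$, so the entropy grows with the number of accessible arrangements.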
While this is a thermodynamic concept, one can now extend it to be the general measure of disorder of any system. This can be illustrated with an often-used example: say, on a national day we have a large assembly of people. First, consider state-1: when the national anthem is being recited they all stand up in an erect posture and recite it. Next, state-2: once the anthem is over, they might adopt a range of different postures, with various conversations between individuals or small groups. Thus, for a beholder of this system, in state-1 the population is ordered and the information coming out from it is clearly perceived and limited (just the national anthem). In state-2, the population is disordered and the information coming out from it is not easily perceived, as it has a high degree of complexity, being the sum of all the many individual conversations taking place. Thus, the abstract generalization of the thermodynamic entropy concept not only gives a general measure of disorder but also of the information content of a system. It was this key generalization that Claude Shannon arrived at almost 70 years after Boltzmann’s initial discovery. His historic formula to quantify information essentially took the same form as Boltzmann’s formulation of thermodynamic entropy, just that Shannon’s $H$ is a pure number:

$H(s) = -\displaystyle\sum_i p_i \log_2(p_i)$
Here $p_i$ is the probability of the $i$-th symbol in a certain symbol set occurring in the string $s$. Thus, the Shannon entropy $H$ specifies the minimal number of bits per symbol needed to encode the string in binary form. Hence, $H$ also measures the complexity of a string $s$. As an example let us consider the following strings in the ASCII symbol set:
rAma rAma hare hare;
ugram indraM juhomi;
One notes that the second string has a higher Shannon entropy than the first, and this provides quantitative evidence for the intuitive idea that the second string is more complex than the first. This remarkable link between the mathematical formulations of two rather disparate entities, one a description of very palpable quantities like matter and energy and the other an abstract quantity, information, can be summarized by quoting Shannon:
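This comparison is easy to reproduce; here is a minimal sketch that treats each ASCII character, spaces included, as a symbol:

```python
from collections import Counter
from math import log2

# Shannon entropy of a string, in bits per symbol, with each character
# (including spaces) treated as a symbol of the set.
def shannon_entropy(s):
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

s1 = "rAma rAma hare hare"
s2 = "ugram indraM juhomi"
print(shannon_entropy(s1))  # ~2.73 bits/symbol
print(shannon_entropy(s2))  # ~3.62 bits/symbol: the more complex string
```

The first string draws repeatedly on a small symbol pool, while the second spreads its probability over many distinct symbols, which is exactly what the entropy captures.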
“Quantities of the form $H = -K \sum p_i \log(p_i)$ (the constant $K$ merely amounts to a choice of a unit of measure) play a central role in information theory as measures of information, choice and uncertainty. The form of $H$ will be recognized as that of entropy as defined in certain formulations of statistical mechanics, where $p_i$ is the probability of a system being in cell $i$ of its phase space.”
Shannon’s formulation of entropy as a measure of information has profound implications for understanding the foundations of life. This aspect has been of great importance in our own investigations and will be touched upon in the final section. Before heading there, we may ask if there is a deeper link between the thermodynamic and informational conceptions of entropy. That is a question which remains rather mysterious. However, in that regard we will merely quote a noted scientist of our age, Murray Gell-Mann (compare with the above example of the people assembled for the national day):
“In fact, entropy can be regarded as a measure of ignorance. When it is known only that a system is in a given macrostate [gross state of matter and energy in the sample], the entropy of the macrostate measures the degree of ignorance about which microstate the system is in, counting the number of bits of additional information needed to specify it, with all the microstates in the macrostate treated as equally probable.”
We will conclude this section by merely mentioning some even more mysterious issues pertaining to the two laws of thermodynamics. Emmy Noether, probably the greatest female mathematician of all time, proved a remarkable theorem now known as Noether’s theorem. A basic version of this theorem can be relatively easily understood by someone with junior college mathematics. However, in its more complete forms it extends into the rarefied heights of mathematics. This basic version depends on the Lagrangian formulation of another great mathematician, Joseph-Louis Lagrange, to describe a physical system. Simply put, the Lagrangian $L$ of a system is the difference between its kinetic energy $T$ (not to be confused with the symbol for temperature) and its potential energy $V$:

$L = T - V$
The Lagrangian is typically expressed as a function of the position $x$ of a body in the system and its time derivative, i.e. velocity $\dot{x}$. As a simple example, consider the Newtonian system of a body of mass $m$ raised to some height $x$ and dropping under a uniform gravitational field with acceleration $g$. It would have kinetic energy $T = \dfrac{1}{2} m \dot{x}^2$ and potential energy $V = m g x$. Thus its Lagrangian can be written as:

$L = \dfrac{1}{2} m \dot{x}^2 - m g x$
Now, according to Noether’s theorem, if $L$ of a physical system remains unaffected upon a transformation of the coordinate system used to describe it, i.e. it is symmetric under the transformation, then there will be a corresponding conservation law. Now, if a physical system is translated linearly to a different position and there is no other physical influence acting on it, its $L$ remains unaffected. This implies that the translational coordinate system is uniform; thus, translational symmetry of the Lagrangian gives us the law of conservation of momentum. Similarly, that $L$ is unaffected if the system is translated in time gives us the law of conservation of energy, or the first law of thermodynamics. Thus, remarkably, conservation of energy is related to time symmetry, i.e. time being an orderly or uniform coordinate: time does not flow fast and slow at different points along its flow. If that were to happen then energy would not be conserved.
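For the falling body of the example above, this conservation can be verified directly: along the exact trajectory $x(t) = x_0 - \frac{1}{2} g t^2$, the total energy $T + V$ stays fixed at $m g x_0$ at every instant (the values of $m$, $g$, $x_0$ below are illustrative):

```python
# Energy conservation for the freely falling body of the example above.
# m, g, x0 are illustrative values; x(t) and v(t) are the exact kinematics.
m, g, x0 = 2.0, 9.81, 100.0

def total_energy(t):
    x = x0 - 0.5 * g * t**2            # position at time t
    v = -g * t                         # velocity at time t
    return 0.5 * m * v**2 + m * g * x  # kinetic + potential energy

for t in (0.0, 0.5, 1.0, 2.0):
    print(abs(total_energy(t) - m * g * x0) < 1e-6)  # True at every instant
```

As the body falls, kinetic energy grows at exactly the rate potential energy shrinks, which is the conserved quantity Noether’s theorem associates with the time-translation symmetry of this Lagrangian.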
Most physical laws are agnostic to flipping the time coordinate, i.e. time reversal. For example, if we flipped time on the flight of an arrow from release to fall, there would be no changes to the laws of motion describing it. Similarly, if the time axis were flipped there would be no difference to the laws describing the revolution of a planet around its star or the current in a circuit. As per Noether’s theorem, if the Lagrangian of a system were unaffected under time-reversal then entropy would be conserved. But this is not so, for that would violate the second law of thermodynamics. So, the second law is the one physical law that has an inbuilt time direction, and is thus the odd third pillar of physics. The measurement of entropy gives us the “arrow of time”, to use Eddington’s term. The young universe was very hot in a narrow energy range, with energy and matter uniformly distributed. This was a low entropy state. With time, the energy and matter became less uniformly distributed, with clumping to form galaxies and their constituent star systems. Thus, the increasing entropy resulted in an increase in the complexity (if viewed in terms of information) of the universe’s structure. Thus, the state of the universe as described by its increasing entropy, and hence information, might be seen as a reflection of its “unfolding” along the arrow of time.
Lessons from life
The final event pertaining to 1859 CE whose consequences we shall talk about happened on 24th November of that fateful year: Charles Darwin published “On the Origin of Species by Means of Natural Selection, or the Preservation of favoured Races in the Struggle for Life”. One may go as far as to say that modern biology was born with that event. Till then biology was a science without a theoretical scaffold unlike physics or chemistry. The other events described in this note, however profound in their implications, had lesser social and intellectual impact than this book of Darwin. On the social front it shook the foundations of the Abrahamistic religions like nothing else coming from science. On the intellectual front it unsettled more thinkers than any other scientific publication. The implications of this evolutionary theory were quite completely grasped by Darwin; however, even its junior co-discoverer Wallace did not fully grasp all its implications, leave alone several of the other intellectuals at and after that time. Indeed, the situation is rather peculiar: while much of biology done since then can be seen as a footnote to Darwin, a large fraction (at least ) of the biologists do not fully understand the evolutionary theory and how to use it.
It is not commonly understood that the evolutionary theory has a close relationship to Shannon’s generalization of entropy as a measure of information. Its tremendous predictive power stems from this aspect. The way to understand this can be briefly described thus: the sequence of a biopolymer (nucleic acid or protein) is replicated by a replicator or synthesized by a synthetase using another as a template. This in principle produces identical copies of the biopolymer, or a single polymer with repeats of the same sequence. If we thus align the sequences of the identical copies of these biopolymers, then the Shannon entropy of a column in the alignment will be zero — i.e. there is no disorder. However, this replication process is not always perfect; hence over time copies would emerge with changes along the sequence (mutations). Thus, the column-wise entropy will keep increasing. What natural selection does is: 1) prevent an increase in the entropy of certain columns; 2) be blind, to differing degrees, to the entropy increase in certain columns; or 3) favor an increasing entropy of certain columns. These three modes of action are commonly seen as: 1) purifying selection; 2) weak selection or neutrality; 3) positive selection.
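These three modes can be caricatured with a toy alignment (the sequences below are invented purely for illustration): a perfectly conserved column has zero Shannon entropy, as under purifying selection, while columns that have accumulated mutations show increasing entropy:

```python
from collections import Counter
from math import log2

# Shannon entropy of one column of a sequence alignment.
def column_entropy(column):
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A toy alignment of 4 copies of a short sequence after some rounds of
# imperfect replication (the sequences are invented, purely illustrative):
alignment = ["GATTACA",
             "GATTACA",
             "GCTTACA",
             "GCTAACA"]

for i in range(len(alignment[0])):
    col = [seq[i] for seq in alignment]
    print(i, round(column_entropy(col), 2))
# Column 0 is perfectly conserved (entropy 0, as under purifying selection),
# while columns 1 and 3 have accumulated variation (entropy > 0).
```

With many more sequences, the profile of column entropies across the alignment is what lets one read off which positions selection is protecting, ignoring, or diversifying.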
Now, the one thing in biology that lies beyond the entropy principle but still has some relation to it is the semantic principle: each of the three above types of action relates to a certain biological/biochemical meaning of the given residue in the biopolymer. This semantic aspect of the biopolymer sequence is the unique “domain” of biology, even as the semantic aspect of a linguistic sequence is the unique domain of a text. As an illustration let us consider the following peptides from Homo sapiens:
CYIQNCPLG;
CYFQNCPRG;
Both these peptides have the same length of 9 amino acids and the same Shannon entropy. However, the first one is primarily involved in signaling reproductive functions like birthing, bonding and lactation, whereas the second is involved in regulating water balance, acting primarily as an anti-diuretic hormone. Thus, even though they have the same information content, their biological semantics have diverged as they evolved from a common ancestor.
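The claim of equal entropies can be checked directly. The sketch below assumes the two peptides are human oxytocin (CYIQNCPLG) and vasopressin (CYFQNCPRG): both are 9-mers with one doubled residue (cysteine) and seven singletons, so their compositional entropies coincide:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits) of the residue composition of a sequence."""
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

oxytocin    = "CYIQNCPLG"  # birthing, bonding, lactation
vasopressin = "CYFQNCPRG"  # water balance, anti-diuretic action

# Same length, same composition profile (one residue doubled, seven
# singletons), hence identical Shannon entropies despite different meanings.
print(shannon_entropy(oxytocin), shannon_entropy(vasopressin))
```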
Yet, entropy does impinge on semantics in a general sense. To illustrate this let us consider a linguistic example first:
dhiyo yo naH prachodayAt;
buM buM buddhAya buM buM;
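The entropy comparison appealed to here can be computed directly. A minimal sketch, treating each transliterated character (including spaces) as a symbol:

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy (bits) of the character composition of a string."""
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in Counter(text).values())

s1 = "dhiyo yo naH prachodayAt"
s2 = "buM buM buddhAya buM buM"

# The first string draws fairly evenly on many distinct characters;
# the second is dominated by repeats of "buM", so its entropy is lower.
print(round(shannon_entropy(s1), 2))
print(round(shannon_entropy(s2), 2))
```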
The two linguistic strings above are of the same length, but without knowing anything else, by just comparing the entropies of the two we can state that the first is likely to have greater semantic complexity or richness of meaning than the second. By the same token, let us look at two equal-length parts of a protein, the Drosophila Antennapedia protein, which binds a specific DNA sequence and initiates the development of a specific aspect of the animal body plan along the antero-posterior axis:
Now, just comparing the entropies of the two parts of the protein, we can say that the first has lower complexity than the second. Thus, the second is more likely to be the functionally more important or involved part of the protein. If you were then asked to guess which part was more likely to perform the specific DNA-binding function, the second part would be the obvious choice. Of course, the power of this method increases with an alignment of multiple sequences, for there we are exploiting not just the entropy distribution but also the effects of natural selection on it. Thus, natural selection acts on the increasing entropy of the biopolymer alignment, retaining some of it and discarding the rest based on what its semantics are. Even with a single sequence, using the entropy measure across the sequence we can infer which part of it is likely to fold into a globular structure, which part might be unstructured or fibrous, or whether it might be embedded in the membrane. With an alignment of sequences we can additionally tell whether a protein is likely to be an enzyme and, if so, what its active sites are and some aspects of its catalysis, because the biological semantics are based on the ground of chemistry. Beyond this, however, one would require empirical approaches.
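The single-sequence version of this inference can be sketched as a sliding-window entropy scan. The sequence below is invented for illustration: a low-complexity, repeat-rich stretch (of the kind found outside homeodomains) followed by a compositionally diverse, homeodomain-like stretch:

```python
import math
from collections import Counter

def window_entropy(seq, w=8):
    """Shannon entropy (bits) of each sliding window of width w along seq."""
    def h(s):
        n = len(s)
        return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())
    return [h(seq[i:i + w]) for i in range(len(seq) - w + 1)]

# Hypothetical protein: a repeat-rich, low-complexity region followed by
# a diverse region; both segments are invented for this illustration.
low_complexity = "QQQQQAQQQQQAQQQQ"
diverse        = "RKRGRQTYTRYQTLEL"
profile = window_entropy(low_complexity + diverse, w=8)

# Windows over the repeat region score low; windows over the diverse
# region score high, flagging it as the likelier functional part.
print(round(min(profile), 2), round(max(profile), 2))
```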
Thus, the biological semantics place limits on the predictive capacity of biology’s foundational theory. In this sense the situation is quite different from the role of the underlying physics in astronomy (e.g. stellar evolution) or chemistry. Further, if we take DNA replication, the bacteria have one primary replicative DNA polymerase, while the archaea and eukaryotes have other ones with origins independent of the bacterial enzyme. Why a particular DNA polymerase became the primary enzyme of a particular superkingdom does not currently seem to be accountable purely from theory. It could simply boil down to microscopic events, i.e. the local dynamics of selection acting at the time of the fixation of the polymerases. One could then call it historical contingency, though we currently do not know that for sure. If this were the case, then prediction from the foundational theory in biology has certain limits in terms of what it can do by itself, and the rest is contingency. Is this comparable to questions in physics such as why the electron’s mass is what it is? That too is not clear to us.
Nevertheless, the idea of the action of natural selection on the products of entropic diversification leads us to the realization that this principle is more general. Indeed, the evolutionary process of life through natural selection and the emergence of structure in the universe can be compared. With regard to the universe, as noted above, the early phase was low in entropy, with a more uniform distribution of matter and energy. As the entropy increased, various alternative configurations of matter and energy emerged. The physical laws dominant in these new matter-energy regimes now “selected” for certain configurations: first the atoms of hydrogen and then those of heavier elements. Further, the laws in the regime of what had by then emerged as chemistry selected for certain atomic combinations, i.e. molecules. At the macroscopic level, they selected for the formation of galaxies and stars. This gave a certain temporal sequence to the evolutionary process. The basic set of galaxies with their globular clusters was produced only once, and new ones are not forming as in that initial phase. Instead, the old ones are maintained, and what continues is stellar evolution within those galaxies which formed long ago. With regard to the stars themselves, the formation of those with very low metallicity happened only at a certain early time and could not repeat in the second generation, as the stellar nebulae were by then already seeded with heavy elements.
We believe that a similar entropic process is reflected in the emergence of life as we know it. First, we do not see life forming afresh again and again on earth: all life we know of had a single origin, which might not have been on earth. In fact, the archaeal and bacterial lineages, while having a common origin, show evidence of a phase of genetic separation during which there was no sign of the now-commonplace lateral transfer of genetic material. Thus, after a common origin at a distant source, there was likely a phase of spatial separation followed by independent seedings on earth. Second, the study of the proteins of extant organisms points to a “big bang” in which a great section of the stem lineages of all currently present protein domains was produced. We do not see evolutionary “bangs” of that magnitude happening again. Thus, we posit that the entropic diversification resulted in temporal layering — i.e. the early diversity was never reproduced. The diversity that emerged in the early phase was acted upon by natural selection to give rise to the stem lineages of all the major old protein domains. Since then, selection has for a good part been playing the role of maintaining the old lineages, with only sporadic, completely new innovations. This might be compared to the above-mentioned situation with galaxies and intra-galactic stellar evolution in the universe at large.
It would be almost banal to state that the events set rolling in 1859 have had a profound influence on our current way of thinking. It is a story of towering intellectual heroes who covered themselves in glory on the one hand, and of forgotten foot soldiers on the other, comparable to any great military endeavor. For some of them the end came before they could see the full glory of their findings: Riemann was dead at 39; JC Maxwell was dead at 48. The great Boltzmann penetrated many realms and caught glimpses of others, like the treatment of space-time in special relativity and the use of Riemann’s geometry. But by 1906 madness was gripping him, and he committed suicide that year. Planck lived a long life of nearly 90 years, but for a good part of it he had difficulty coming to terms with the theory he had birthed. Four of his children died as adults during his own lifetime. However, it is hard to relate to most of these men at a personal level, as they came from a very alien culture and religion. In any case, one can still relive some of their moments of glory by retracing their scientific paths.
The aftermath of these upheavals brought a lot of drama to science. There was a golden age of physics, which seems to have slowed down closer to our times, with the three pillars still standing firm but no underlying unification yet in sight. In biology the situation is more peculiar. On one hand, from the “tradition” of Darwin there arose a certain mathematicization which did not really bring much insight in terms of biology — not surprising, given the weakness of the philosophical foundations of that particular direction. On the other hand, many practicing biologists themselves often labor on with a poor understanding of what old Charles had so clearly expounded, and blunder into dark ditches. However, those who understand its depths can penetrate deep into the biological science.
Any philosophical system that fails to recognize and engage with the consequences of these upheavals is likely to be in deep trouble. For the followers of the sanātana dharma a start in these directions was given long ago by the traditions of the great Kaṇāda and Pāṇini, but somewhere down the line they chose more sterile paths. In some sense these upheavals may be seen as a return of the principles of those great ancients in a modern guise.