Read. Reflect. Repeat.

Author: yuganka

On Culture

What is culture? How does it come into existence? How does it grow? How does it spread, or sustain?

I would like to draw a distinction between two types of culture – historical culture and experiential culture. Historical culture is merely experiential culture that existed at some point in the past but doesn’t anymore. It has receded into the “collective memory” of the society and lives on through stories, legends, oral traditions and, if the society is lucky, artefacts. Note that if it is still alive through any customs that are followed in the present, it becomes a part of the experiential culture.

Experiential culture is exactly what it sounds like – culture that is experienced – whether through food, clothing, rituals, festivals, music, art, customs, dance forms and so on. It is something that happens in the here and now.

It could be claimed that the notion of “historical culture” as laid out above is, in fact, flawed, since we are always affected by our past culture even if it does not live on in our daily lives – after all, doesn’t it affect our thoughts and behaviour, however subtly, merely by occupying some portion of our consciousness?

Notice, though, that there are a few ways in which we interact with our cultural history. Let’s remember that culture keeps mutating, changing forms or evolving into something different. What, then, really counts as historical?

Certain cultural practices die out with time – Sati, for example. For centuries it was considered a part of Indian culture, resting on the idea that a married woman “always belonged” to her husband; in time, new ideas around the autonomy of women arose and swept the ritual aside – it is now a part of our historical culture, not our experiential culture.

But notice how we engage with the idea. We certainly think about it in the present, horrified that a widow had to sit on her husband’s pyre and sacrifice herself, but we take the ritual as a lesson in how society progresses and discards older systems that clash with the new ideas arising within it – in this case, one particular but important dimension of gender equality. We think of it with some sombreness, even relief, but it does not affect our current experiential culture, at least not directly. Its absence lives on in the form of a larger idea of gender equality, but the ritual in itself, and only by itself, does not affect any experiential aspect of our present life. It lives on only in the lesson it taught us.

Let’s now talk about the opposite case, where a cultural element dies not because there was something inherently wrong with it, but simply because other systems arose that replaced it. To be technically correct, we may not always be able to make normative judgements about cultural elements, and even when we can, it will generally be possible to find both positive and negative aspects in them; so, to be clear, what I am referring to is an overall assessment of that element as something that was, on net, positive or negative, both in its contemporaneous context and in the context of our present.

Let’s take the case of one of the major systems of education in ancient India – the gurukul pratha. Under this system, children were sent from a young age to a residential programme under a sage who imparted to them not only academic knowledge but also social values and principles. The children were under his tutelage and led a disciplined life from their childhood, which stood them in good stead later on in their lives. To be accurate, this system hasn’t completely vanished and lives on in how education is imparted in modern Buddhist monasteries.

Did the gurukul pratha fade away as a result of any social backlash over how it wasn’t aligned with new ideas that were arising in medieval India? Absolutely not. Then what happened? Over time, noticeably during the Gupta empire, more institutionalised centres of learning started to come up or get consolidated, including universities like Nalanda. This gradual shift continued for many centuries until the first major impact came from the influence of Islamic systems of education with the rise of the Delhi Sultanate in the early 13th century. This continued during the period of the Mughal empire from 1526 to the middle of the 19th century, by which time the second major, and most devastating, impact came with the introduction of Macaulayism and Wood’s Despatch of 1854, which officially ushered in Western education in India.

So cultural elements rise and fade with time, and there can be a complex interaction of a whole range of factors which decides which aspects stay and which disappear.

Let’s switch the prism and instead of looking at the past, turn our glance to the future.

Think of any museum that you have gone to and the objects that you may have looked at. The very fact that you are seeing them now means someone in the past took the pains to preserve them for future generations. But how did they decide what to save and what to discard? Keep in mind that preserving any kind of artefact over long durations of time (more than a few decades) requires exceptional care and, importantly, finances. Barring cultural artefacts that could make it to the present without human intervention (like partially corroded items from the Iron Age or Bronze Age, or a shipwreck that dumped items onto the seabed where they got preserved due to low oxygen levels), any item that has made it to the present (Egyptian mummies, paintings, sculptures and so on) needed an intentional act at some point in the past. Some person or entity at that point felt it was important enough to be preserved.

What does that mean? Could that person foresee the importance of that thing for future generations? Or did they simply preserve the thing that was most important for them at that point of time? It can be difficult to ascertain, but it was most likely a combination of the two. In that sense, through that very choice, they were moulding a certain part of the future culture of their society.

Could they have known, though, how it would affect the future generations? The answer is no, they were only trying to do something that they felt was important, even necessary, at that point of time.

And so it happens. When we look into the future, it can be difficult, if not impossible, to gauge what will be important for the society at that point. All we can do in the here and now is try to preserve the things that we think are important, so that our future society may be able to interact with and learn from what we have now. It is our message, our echo into the future.

Culture is not preserved in one day. It takes centuries. But the first act starts in this moment, and if we try to do something now, and build systems with the hope that they can save and sustain those symbols for centuries, then we may be able to bequeath a rich cultural heritage to our future generations.

The Music of the Primes (Marcus du Sautoy)

A beautiful introduction to the Holy Grail of mathematics

Two years back, exactly to this day, I had visited Alliance Française in Delhi with a friend. They were going to add new titles to their library, so some part of the older collection had been put up for sale. As expected, they had books and editions not usually found in bookshops or online bookstores. Amongst the gems I picked up that day was Marcus du Sautoy’s The Music of the Primes. In all honesty, I purchased the book because it was a hardcover on mathematics, with an intact dust jacket, in perfect condition, without a single pen or pencil mark.

The book would turn out to have two interesting connections with my earlier forays into the world of maths, and I was oblivious to both at that time.

The first one was the author of the book. A few years earlier I had watched a BBC Four series titled The Story of Maths. It was presented by du Sautoy and although I hadn’t gathered his name, his face had stuck.

The second was what the book was about – the Riemann Hypothesis, the most important unsolved problem in mathematics.

The importance of the hypothesis can be gauged from the fact that it is the only problem that occurs on both Hilbert’s list of twenty-three problems and the seven Millennium Prize problems – the two most important lists of unsolved problems that have come up over the last century and a quarter, and which have provided an impetus and given a general direction to mathematical research. The former were presented by David Hilbert at the International Congress of Mathematicians in Paris in 1900, and the latter were put forth by the Clay Mathematics Institute in the year 2000.

The Riemann Hypothesis was put forward by Bernhard Riemann, one of the most important mathematicians of all time, in the year 1859. Riemann brought about a shift in perspective in the philosophy of mathematical research. He believed it was more important to understand the hidden structure of maths than to try to solve specific questions. In that sense, he heralded a revolution in the psychology of approaching mathematical problems, a culture that continues to this day. In fact, half a century later, Einstein would discover that Riemann’s new mathematical language, of which the Hypothesis was merely an incidental observation, was perfectly suited to express his transformative ideas of special and general relativity.

Over time, hundreds of other results have come up which proceed by assuming either the truth or falsity of the Riemann Hypothesis. Thus the resolution of this hypothesis, either way, will have huge implications for the mathematical edifice.

At its heart, the Riemann Hypothesis asks a simple question – is there some hidden pattern in the distribution of prime numbers?

Despite being the building blocks of arithmetic, prime numbers are really not that well understood. Why does their distribution seem so random? Are they following some pattern? Is there some logical structure that permeates them, and which could be used to catch a glimpse of their mysterious world? Given a prime number, how long do we have to count upwards till we encounter another one?

Mankind’s quest to understand prime numbers actually has a pretty rich history that goes back over two thousand years to Euclid who, in his Elements (around 300 BCE), provided a very simple argument to prove that there are infinitely many primes. Over time, a number of mathematicians have worked on the problem. In fact, the list of mathematicians whose work has either directly or indirectly helped our understanding of primes reads like a who’s who of the history of mathematics – Euler, Fermat, Gauss, Dirichlet, Fourier, Hilbert, Riemann, Ramanujan, Hardy, Gödel and Turing.

The modern history of the story, as well as this book, starts with Gauss, who brought about the first fundamental shift in how we think about prime numbers. Instead of asking when the next prime number will occur, he asked how many primes occur up to any given number ‘n’. John Napier had introduced logarithms about two centuries earlier, and Gauss realised he could use logarithm tables to convert multiplications of huge numbers into simple additions. He developed his ideas and came up with his path-breaking approximation of n/log(n).
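To get a feel for how good that guess is, here is a minimal sketch (my own illustration, not from the book) that counts primes with a simple sieve and compares the true count with Gauss’s n/log(n):

```python
import math

def count_primes_up_to(n):
    """Sieve of Eratosthenes: return the number of primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return sum(sieve)

for n in (10**3, 10**4, 10**5, 10**6):
    actual = count_primes_up_to(n)   # pi(n), the true prime count
    gauss = n / math.log(n)          # Gauss's early approximation
    print(f"n = {n:>8}   pi(n) = {actual:>7}   n/ln(n) = {gauss:>9.1f}   ratio = {actual / gauss:.3f}")
```

The ratio creeps towards 1 only very slowly as n grows, which is part of what makes the counting of primes such a subtle business.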

It was Riemann, however, who shifted gears and brought about the second fundamental shift by opening up a whole new landscape in which to understand the problem. He transformed the puzzle of the distribution of primes into a question about a certain surface in three dimensions – a “landscape” built over the complex numbers out of what we now call the Riemann zeta function. The Riemann Hypothesis essentially states that the points where the height of this landscape drops to zero all line up along a single straight line.
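For readers who want the standard textbook formulation behind that picture (my summary, not a quotation from the book): the landscape comes from the zeta function

$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}} = \prod_{p\,\text{prime}} \frac{1}{1 - p^{-s}}, \qquad \operatorname{Re}(s) > 1,$$

extended by analytic continuation to the rest of the complex plane; the Riemann Hypothesis asserts that every non-trivial zero s of ζ satisfies Re(s) = 1/2, that is, all such zeros lie on one vertical line, the “critical line”.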

The Music of the Primes provides an extremely enriching and exhilarating vision of the developments in the study of primes over the last two hundred odd years when, really, most of the progress has been made. Among the many interesting things I came to know, two have really stood out.

The first one was learning that in 1976 a group of mathematicians, building on a theorem of the Russian mathematician Yuri Matiyasevich, came up with a polynomial in 26 variables whose positive values are exactly the prime numbers – a formula that, in principle, generates all the primes. The second was realising that even though we have such a formula, it is not as valued today because the focus has shifted following Riemann’s gear change, and it was precisely this striving to make sense of the hidden structure and behaviour of mathematics that has led to connections between, lo and behold, prime numbers, quantum theory and chaos theory. That’s right. Please do yourself the favour of reading the previous sentence again, and then kindly proceed to pick up your jaw that may have fallen to the floor.

Du Sautoy is the Simonyi Professor for the Public Understanding of Science at the University of Oxford, a chair that was created in 1995 and was occupied by Richard Dawkins till 2008, when du Sautoy took over. His choice of title reflects his awareness that putting the Hypothesis on the cover would have restricted his audience to a very niche subset of mathematical enthusiasts, whereas a title such as The Music of the Primes is at once mysterious and evocative, and able to resonate even with the general public.

Four Colours Suffice (Robin Wilson)

A watershed proof in the history of mathematics

While growing up, we sometimes come across certain special mathematical problems which have the following three properties – they are easily stated, are simple enough to understand, and are either unsolved or require knowledge of mathematics well beyond that age to be tackled. Such problems perform an especially important role of stimulating our young minds even as they provide us a fleeting glimpse of the beautiful world of mathematics that lies beyond our school textbooks. Prime examples of such problems are Fermat’s Last Theorem, the Goldbach Conjecture, the Seven Bridges of Königsberg and so on.

The Four Colour Theorem (4CT) falls in this unique category of problems.

The 4CT says that in any map drawn in the plane, in which every country is a single connected region, four colours are always enough to colour the countries such that no two adjoining countries are of the same colour. If two countries meet only at a point, they are not considered to be adjoining.
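In the language of graph theory – an equivalent formulation, not the one the book leads with – each country becomes a vertex, and two vertices are joined by an edge exactly when the corresponding countries share a border; the theorem then says

$$\chi(G) \le 4 \quad \text{for every planar graph } G,$$

where χ(G) is the chromatic number of G: the minimum number of colours needed to colour its vertices so that adjacent vertices receive different colours.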

The story starts in the middle of the nineteenth century in England with two Guthrie brothers – Francis and Frederick, the latter of whom was studying under the famed mathematician Augustus De Morgan, the formulator of the well-known De Morgan’s Laws of set theory.

Francis came up with the Four Colour Conjecture (now theorem) in late 1852 when he noticed that he needed only four colours to colour the counties of England such that adjoining counties were of different colours. He discussed this with Frederick, who then consulted De Morgan. From there on, there was no turning back.

The problem was not particularly popular among mathematicians in the initial decades, and it took another half a century before it started to be taken seriously by mathematicians on the other side of the Atlantic. Nevertheless, all this while, it continued to attract occasional public interest and draw in amateurs and non-mathematicians to dabble in it.

The 4CT was proven by contradiction and at its heart lay the idea of a “minimal criminal” – if the theorem is false, then there must exist a particular smallest possible map that necessarily requires at least five colours to be coloured. Two core ideas formed the bedrock of this method of attack – “unavoidable set”, which is a set of configurations of shapes such that at least one member of such a set must be present in every map, and “reducible configuration”, which is any arrangement of countries that cannot exist in a minimal criminal.

So, if we are able to find a non-empty “unavoidable set of reducible configurations”, which is a set of reducible configurations such that at least one member of the set must be present in every map, then that would prove that a minimal criminal could not exist, thereby proving the 4CT.

As the twentieth century progressed, much of the effort towards proving the theorem was done along these lines, and it was finally in the year 1976 that Kenneth Appel, an Assistant Professor, and Wolfgang Haken, a visiting professor at the University of Illinois at Urbana-Champaign, finally proved the 4CT, having extensively used the computing resources available at the university. They had come up with a candidate set of 1,482 possible configurations and it took over twelve hundred hours of computing time on the university’s supercomputer to verify them all, one by one.

The proof of the 4CT was a watershed moment in the history of mathematics as it was the first major theorem to be proved using a computer. This raised serious philosophical questions at the time about what counts as a proof, and whether something that couldn’t be verified by hand could be trusted.

In addition, the proof’s brute-force method was also criticised for lacking beauty and being inelegant, something even Appel accepted.

Robin Wilson does a pretty good job of preparing the mathematical foundation for the important ideas linked with the proof that come up in the second half and there are ample diagrams to help the reader visualise the arguments that are being presented. In fact, I cannot remember reading any other science book with as high a ratio of diagrams to text.

The first one-third of the book is a breezy read that the reader will sail through, and it is only when the ideas start to come together from that point on that the book demands attention and careful reading to connect the dots. The brute-force nature of the proof means that the reader does lose context a few times when the details are being discussed but, thankfully, Wilson refers back to earlier parts of the text at a few important stages to help the reader realign their thoughts with the path taken to reach the final proof. In fact, barring some brief sections that may be difficult to understand (which is natural, considering the historical significance of the problem), I believe most of the book can be understood, with some effort, even by high school students who are even slightly mathematically inclined.

All in all, Four Colours Suffice is a welcome addition to one’s library.

On Feeling a Word, Rather than Reading It

In recent months I came to associate a very commonly used pronoun with a certain person. Gradually, the word came to depict the set of feelings I had for her. Over time, a particularly interesting thing happened – the word transcended her and became the feeling itself. And then one day, when I was reading an article, I came across that pronoun and felt, for lack of a better term, a mesmerising cognitive dissonance. My brain’s logical side was interpreting the dictionary definition of the word while, simultaneously, its emotive side was feeling all that I had come to associate with that word and, consequently, her.

I wasn’t really reading the word, but feeling it. I was not imagining what the word represented, but actually feeling what it stood for.

The emergence of this sensation is radically different from when a word conjures up some image in your head, a picture in the Wittgensteinian sense of corresponding to some external entity, like how the word “chair” corresponds to “an object that, mostly, has four legs and is able to provide seating to one person” – where the image of such a chair comes up in our mind.

No, this is a very different kind of sensation – a particular mix of feelings rises within you, and it’s not that you feel the word represents those feelings but, rather, that the word is that feeling. It would be like reading the word “pain” and actually feeling pain, whether physical or emotional, rather than imagining what pain is like or what its components are.

What do I mean by “feeling” a word?

The visual cue of that word makes me feel a certain way, not by association or by acting as a medium to something which is the actual cause of that feeling; but as the very feeling itself without the need of any intermediary. In this process, somewhat paradoxically, the visual form of the word has transcended the very entity it was meant to refer to, and has become that entity itself; the experience of this appropriation of identity is peculiar in itself; but trying to convey what it feels like is even trickier.

Let us take a step back.

When we first meet someone, we observe them even before we come to assign any tag to them, like a name. This experience cannot be quantified, for human interactions flow like a wave – they are continuous, imbued with countless subtle variations in their movement, and are multi-layered events with varying degrees of depth.

And then we get to know their name. What does this do?

The word (a name in this case), whether in its written form or in the sound it creates when spoken, is just a symbol with two different modes of communication, but both modes refer or correspond to the same entity – that person.

What exactly is the symbol doing? It basically encapsulates a range of impressions and maps that set of impressions onto itself. Over time, we keep adding more impressions to that set but the word remains the same, and that is how the idea of that person in our minds comes to change over time – the name, akin to a tag, serves as the immutable point of reference.

But what exactly is a name? I think before we even try to answer that question, we must answer an even simpler question – what is a word?

Let us start with an example – the word “rose”.

Notice two important things.

Firstly, a word (whether written by hand or printed on paper) is essentially just a specific combination of lines and curves – only a well-choreographed movement of our pen can give rise to this particular word, and it will make sense only as long as the form it takes stays within a certain error margin. If you were to, for example, accidentally extend the “o” a bit downwards, the word would become gibberish – it would turn into “rpse” or “rqse”, neither of which makes sense.

Secondly, a “word” derives its meaning from a language. The set of lines and curves that exist in the visual structure of the word “rose” carries meaning only in the context of English, and certain other related languages that are in the same language family.

Essentially, a language constrains the set of combinations of lines and curves that will be meaningful to the people who know that language. The symbol “fenêtre” is not a word in English and is, hence, meaningless in the English language, but it is a word with a well-defined meaning in French – “window”.

From among the lexical categories (or parts of speech) of any language, one specific category is particularly interesting from our present perspective – nouns. The word “noun” comes from the Latin word “nomen”, which literally means “name”.

So what is a name? A name is any such combination of lines and curves, a symbol, that refers to some object or concept that exists out there in our shared world. “Rose” is the name of a particular flower, “India” the name of a particular place and so on. Both of these are also nouns.

Notice that technically, not all names are words. Every language has a dictionary, which is nothing but the set of legitimate words of that language. This does not have to be a physical dictionary, but could even be an “in-principle dictionary”. When exactly is a word added to the dictionary? Only when the usage of that word has reached a certain level of widespread use – which is why you may not find the nickname you have given to your cat in a dictionary, unless you named it after something that was already there in the first place.

Now, let us consider a random combination of lines and curves that is not a word in our language – “bfejqgfe”. This symbol is meaningless; it is not a name, and is definitely not a word. The reason I want to start with such an example here, unlike what I had actually personally experienced, is to prevent any already existing meanings of the symbol from forming the basis upon which our subsequent understanding of it grows.

Let us imagine two situations.

In the first case, we assign this symbol as a name to a person. We form a link between a static visual symbol and a dynamic set of qualities that exist in an object, the person, who is out there in the physical world. As we interact with that person, we keep adding, to the initial set of first impressions, bits and pieces of details, opinions, observations, feelings, judgements, biases and a whole range of subjective responses to the idea of that person, and then whenever we see the symbol “bfejqgfe” this person springs up in our imagination with all her traits.

Now let’s imagine another situation. What if, instead, we assign this symbol as a name to a feeling that we have? And what if, over time, we start adding certain other shades of feelings to that same name? What are the implications?

Feelings are inner states of being and do not have a visual form – when we feel a certain way, there is no image that comes up in our minds. So what will happen when I see the symbol “bfejqgfe” somewhere?

Let us consider a word like “car”. I have known this word since I was a child. I did not create this word, so I imbibed its meaning from the different instances in which I was exposed to it around me – from the newspapers, television, relatives and friends talking about it and so on. In the case of such pre-existing words, I am a part of a bigger set of people all of whom have common knowledge about the word and its concept. Consequently, whenever any member of that set uses the word, the corresponding concepts flood into my brain and I am able to understand what he or she is saying. Even if I read a relatively complex statement about cars in a newspaper, for example “barring the three new models launched last week, this car has the fastest acceleration among all the sedans currently in production in India”, I am able to make sense of what is being said since I share a common understanding of this concept with a lot of other people.

Here is the interesting part – this particular chain of events leading to a shared understanding will not happen when it comes to words I have created.

Why?

When I created the word “bfejqgfe”, only I was aware of its existence. I had associated this word with a particular feeling of mine. So, it is not possible for anyone else to use that word unless two conditions are fulfilled. First, I will have to convey this word to someone else and, second, they will have to understand what exactly I mean by it. Naturally, the first will take hardly a few seconds, but accomplishing the second can be tricky, as I will myself have to have a clear idea of the set of feelings that I have included in the concept. However, since in this piece my focus is more on our internal experience of a particular word that has arisen under a specific set of circumstances, I shall limit my consideration to the time in which the word is not mature enough to be properly communicated to other people.

So, since only I am aware of that word, all the instances of that word in the outer physical world (whether written or aural) are my own creation.

Put simply, it is not possible for me to come across novel usages of this word where I could put my own understanding of it to use. All I will ever come across are my own usages of the word, and since I will always know the context of that particular usage, there will be no spontaneous growth in the concept, unless I deliberately decide to add something to it.

When I read the news about the fastest sedan in the newspaper, it is possible that I was encountering the word “sedan” for the first time. I would then have referred to a dictionary to understand what the word meant, and folded that idea into my existing understanding of cars. In other words, my concept of a “car” developed through uncontrolled input from the external world, and it was moulded by the ideas of someone else.

This spontaneous growth of a mental concept is not possible in the case of a word I have created – a word like “bfejqgfe”. This word can grow if and only if I deliberately decide to change its concept and what all it should imbibe.

I am in control of what it means. I can give this word any meaning I want, add to it any feelings I may have.

I can imprint this word with signatures of my experience of both the external physical world, and the internal world of emotions, feelings and moods.

In essence, through this private emotional signature, I can create a permanent bookmark for certain feelings of mine. If I use an already existing word, however, then through such cognitive dissonances as I mentioned in the beginning, I leave open the possibility that the feelings will get eroded away. Not so when I use a word that I have created.

Interestingly, Ludwig Wittgenstein gave a critique of such a “private language”, arguing that it cannot exist. Essentially, his argument goes: since there is no dictionary for such a private language, how can we ever be sure that we are referring to the same concept (a feeling, in our case) when we use the private word across different instances of time?

I hope I can wade into those waters some day, but for today, I’ll let him have the last laugh.

On Knowledge Creation and Propagation

Imagine a telescope that is tasked with observing a certain distant exoplanet, named 27X. The telescope keeps gathering new data points every minute, and logs them in an internal database which is then studied by the scientists involved in the project.

The telescope can be seen as a primary source of data.

Research institutions around the world, microbiologists peering into their microscopes to study a new behaviour observed in a certain microorganism, sociologists studying cultural responses to a pandemic, mathematicians coming up with new variations of existing formulae to tackle a particularly difficult problem – people and instruments working at the absolute edge of human knowledge – these are all primary sources of data. It is when some new insight is derived from this data that they also become primary sources of information since they are telling the human species things we didn’t already know.

Most research institutions around the world share their research findings with the general public after a certain time gap. Indeed, some institutions share their data sets in real time even before someone from their team may have seen it, let alone extracted any new information from it. This has sometimes led to cases where people not in any way linked with the research team have managed to discover something merely by having access to that data. Primary sources of information, thus, lead to discoveries – mostly from the people immediately involved in the process that creates that information, but also, sometimes, from unrelated individuals who have, firstly, access to that data and, secondly, the ability to extract new information, and consequently new knowledge, from it.

Notice that the transition from data to information will not necessarily be a transition to “useful information”, insofar as its ability to expand our knowledge base is concerned. It will, however, still provide us new information in the sense made famous by Edison – it will still tell us that “this particular thing doesn’t work”.

Distinct from, but complementary to, the set of primary sources of information is the set of information carriers which perform the equally crucial task of spreading the knowledge generated from the primary sources to the masses. These I refer to as the secondary sources of information. The secondary sources can be further sub-divided into many other levels, but we can skip that for the present discussion.

Thus, primary sources of information (for example telescopes), which keep logging new data, become repositories from which someone with expertise can sift through and make discoveries (for example that the rotation period of 27X is eight hours).

This discovery is then shared with the rest of the community through secondary sources of information – journals, conferences, educational videos, interviews and so on. Maybe it leads to the creation of a YouTube video where someone introduces 27X and tells the viewers about its properties. Maybe it leads to the creation of a meme – if you lived on 27X, you could have lived a life three times longer!

One subtle but very relevant difference between the two types of information is that while the repetitive consumption of primary sources of information can lead to the creation of new knowledge, secondary sources only ever provide new knowledge the first few times they are consumed. How?

Imagine I am a researcher looking at the data gathered about 27X. My efforts at gleaning new information from the data depend on the kind of knowledge base I already have – I will approach the data with a different mindset if my forte is statistical science, number theory, computer science or something else.

If I am a statistical scientist, I will probably think about plotting the values on a certain kind of graph and derive some information from it. I may even feel that I need a new skill (say, machine learning) to extract information in which case I might learn that new skill and then come back to wrestle with the data, maybe even deriving some new insight in the process!

This is because primary sources of information are created as black boxes – we don’t know what we might find in them.

Secondary sources of information, however, are derived from primary sources of information so there is clarity, at least, about what they are trying to convey, even if this process of conveying may lack quality or coherence. It may take me time to understand what they are saying, but once understood, I will not be able to learn anything more from it – they are not black boxes but fully transparent in what they represent. It may send me, and other consumers of that information, into flights of fancy, but then that will not create new knowledge in the species, only in that individual. If I am an astrophysicist, such a secondary source could even provide me ideas to, for example, change the direction in which I have pointed my telescope. So a secondary source could open avenues for creation of primary sources of information, but it cannot itself be a direct source of it.

YouTube has videos on a wide variety of topics. Let’s take, for example, a fairly specific category that has consistently ranked among the top trending categories online – cat videos.

Cat videos may provide us knowledge about cat behaviour, their attention spans, their curiosity, their social life and social structure, their anatomy, and other such things regarding their species, but they are unlikely to directly lead to a new discovery regarding them.

Primary sources of information exist at the absolute edge of human knowledge. Thus, they generally involve significant financial investment, people with deep domain knowledge and the results often take time to crystallise.

Secondary sources of information, on the other hand, exist in the daily lives of the people. It can be something as simple as a video I shoot from my phone or an article (like this) that I write – and which I then publish on my website. Secondary sources of information can be created by anyone, for they are nothing but derived works from the primary sources.

Thus, for an individual, it is far easier to create secondary sources of information than primary ones. You can pick up your smartphone to shoot and share a video and, voila! You have become a secondary source of information.

But if one wants to become a primary source of information, then, firstly, one will need to display the talent and skill to be allowed access to both the technological and social machinery that is already involved in doing that work and, secondly, even if one has that talent, a multitude of other factors like financial status, social networks and ease of access could prove to be the deciding ones, either way.

In general, human beings are prone to errors when estimating the effort needed to attain something that is beyond their immediate reach, especially when it comes to something like attaining new skills or learning new things. So these hindrances, if they do arise, should not demotivate us from doing something we really want to do for there are two contrasting results that could happen.

It is possible that we end up spending our entire lives just trying to make that transition from being a secondary source of information to a primary one, but never manage to cross that line despite all our efforts. But in that case, when we are on our deathbed, we will be looking back at our lives not with regret, but with the satisfaction of having tried everything we could. In addition, we will be able to better appreciate the gravity or the difficulty of the problem we had undertaken, and will be able to assess our failures in a more pragmatic fashion. We will know how far we really were from being true sources of primary information.

However, the polar opposite can also happen. We could very well manage to make that transition and give to the world, to our species, and to that mutable fabric of human knowledge, something that wasn’t there before. We would have left our mark.

In either case, if you want to make the transition, the importance of trying cannot be over-emphasised.

So, what would you rather do – create new knowledge, or spread that which already exists?

What would you rather be? A creator, or a propagator?

On Love, Strand the Second – On Finding “The One”

I sometimes imagine an evening out with my future partner. We are having dinner at a vibrant restaurant playing light music, with the white noise of the general din of the crowd humming in the background.

I, then, driven by some instinctive urge, get up and proceed to the stage. I start to talk – about what I feel for her and what she means to me, and after some time when I reach the denouement, I say, teary-eyed and in a choking voice – “all the wait, all the years and months and days and minutes I spent alone, yearning for company, were worth it, because today I have you. There was meaning in all of that agony, all the loneliness, all the time I believed. Every time it did not work out, it was for this moment. Today, I have crossed the brimming river, and stand on the other side holding your hand. Today, I am with you.”

And then, I am summarily dismissed by my rational mind which laughs, chides me, asks me to get a grip, and diverts me from the beautiful but frail castle I had been building. As I see the castle – the closest resemblance to perfection I could afford – dissolving into the waves, I am comforted by him. It is but natural to behave non-rationally when any question of love is involved, he says.

The way we meet new people in our lives is inherently non-deterministic in nature since our volition is involved – we may, of our own accord, choose to initiate conversations with any one of several people we meet on any given day. And then, as the days and months pass and we meet more people, forge new bonds, and as the existing ones strengthen or weaken, our life emerges and takes shape. Over time, we are able to discern noticeable changes and realise how different our life has become, driven by our past choices.

In chaos theory, a particularly interesting phrase is used to define chaotic systems – they are defined as “deterministic systems that have a sensitive dependence on initial conditions”, implying that in such a system it is possible to calculate the future state if we know the present state, but even the slightest error in defining the present state will lead to huge discrepancies in the predicted future state. A good example would be the roll of a die.

Our current scientific knowledge is enough to predict the outcome of a roll of a die provided we are able to precisely define its initial state as it falls – something which is inherently very difficult, not least because the die has pointed corners and the surface on which it falls is never perfectly smooth.
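The die itself is hard to simulate honestly, but the underlying idea – sensitive dependence on initial conditions – is easy to see in a toy system. Here is a minimal sketch (my own illustration, using the textbook logistic map rather than a die) in which two trajectories start almost identically and then diverge completely:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a standard example of a chaotic system."""
    return r * x * (1 - x)

# Two initial states differing by one part in a million.
x, y = 0.200000, 0.200001
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  gap = {abs(x - y):.6f}")
```

Both trajectories are fully deterministic; the unpredictability comes entirely from our imperfect knowledge of the starting state, which is exactly the situation with the falling die.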

Although our life is not, in strict terms, chaotic because our free will makes it a non-deterministic system, as a metaphor it does imbibe the beauty of chaos theory in that the smallest change at one stage of our life can lead to a completely different life progression over the years.

So how and where I shall end up meeting my future partner cannot be predicted, for all it needs is the firing of one arbitrary neuron in my brain at any arbitrary point on any given day to change my life path from the default state – maybe I take a different route to office that day, or travel in a different bus, or maybe I just decide to walk; maybe I visit a new restaurant for lunch, or take part in a workshop with people from all over the city; maybe I travel to a nearby city on the weekend and stay in a hostel, or maybe I visit my parents’ house and it just so happens that I end up making eye contact with someone across the street who is visiting her relatives.

It may even happen that I am strictly following my daily routine, but a rogue neuron in my future partner’s brain sends her day into a different trajectory that comes and intersects with mine.

Or it could even be that both of us are following our usual routines, taking the usual route to office in the metro, and we just happen to raise our heads and look at each other and smile. Let me look around, maybe that lady sitting a few rows away from me is my future partner?

The possibilities are endless and our lives exemplify the butterfly effect in practice, uniquely imprinting everything, and tearing and pulling down any pretences at predictability – every action, every decision we make is a potential catalyst that could bring the two of us together. It is not written, for it cannot be written. Your fate could, at least in theory, set you up with any person in the world provided a certain set of circumstances arises; and no set of circumstances, as far as bringing two people into contact is concerned, is impossible – some are merely more probable than others.

So, in reality, my future partner is not going to be that “special someone” because the chaotic nature of a human life betrays any sense of purpose – it is non-teleological. It cannot, so to speak, “work towards bringing to reality a certain chain of events”.

But then how does one reconcile one’s feeling of “having found the one” in the face of such non-determinism? How do we end up finding someone who is “perfectly suited for us”?

It is nothing but our naive human nature at work, which only needs to feel the slightest sensation of butterflies in the stomach to start ascribing all kinds of emotions and rationales and reasons to why the given person entered our life, including the feeling that “the entire universe has conspired to make the two of us meet”, that it was “written”, that you had been “waiting for her for all your life”, that you had “known it all along” and so on. This is basically a catharsis coming in the clothing of hindsight bias.

To be fair to our human nature, though, we should concede that the agonising gut instinct that we will “never again meet someone like them” is actually a fairly well-placed fear, for even if there were other people who were better suited for us, the chances of their lives intersecting with ours to such an extent that it leads to a conversation, are minuscule.

I will end by taking a detour into the world of science, where I find a striking parallel.

In the winter of 1925–26, Erwin Schrödinger, a quantum physicist, came up with the equation now known as the “Schrödinger wave equation”. Very simply put, it governs the wavefunction of any given object and tells us the probability of finding that object at any given point in space.
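For the curious, the standard modern form of the equation for a single non-relativistic particle in a potential V – the textbook statement, not something this essay depends on – is

$$i\hbar\,\frac{\partial \psi(\mathbf{r}, t)}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r}, t) + V(\mathbf{r})\,\psi(\mathbf{r}, t),$$

where ψ is the wavefunction and, by the Born rule, |ψ(r, t)|² gives the probability density of finding the particle at the point r at time t.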

When such a wave is observed, it is said to “collapse” and the object ends up occupying a particular point in space where we can see it. Yes, quantum physics is weird.

I think there is a similar “Schrödinger Love Equation” – there is a wave of uncertainty about who is “the one” for us, and it is only when we “observe” a given person and choose her and take steps towards her that the uncertainty breaks down, the Love Equation collapses and, out of all the millions of women out there, this particular lady becomes our “the one”.

The LinkedIn Song

// to be sung like Yellow by Coldplay

Look at these stars
Look how they smile at you
In the LinkedIn school
Yeah, they are all shallow…

I came along
With a lot of stuff to brood,
And when I saw them, puked
For, it is all shallow…

When I felt heartburn
Saw through their projections
Oh it was so shallow…

Your bios
And oh your headlines
Trying to show oh your perfect lives,
Do you know
You know I know that sometimes…
You, too, do yearn to just cry…

I saw the loss
Could not be honest with you
For you are here to prove
That you are all shallow…

I gave a sigh,
This internal divide,
To show the world you’re fine
Oh you are so shallow!

Your bios
And oh your headlines
Trying to show oh your perfect lives,
Do you know
You know I know that sometimes…
You, too, do yearn to just cry..

With you, I’d rather sit and cry
Yes, I’d rather sit and cry…

It’s true
Look how they smile at you
Look how they smile at you
Look how they smile at
Look how they smile at you
Look how they smile at you
Look how they smile…

Look at these stars
Look how they laugh at you
And all the things that you do!

The Poincaré Conjecture (Donal O’Shea)

An involved introduction to a modern mathematical beauty

The Poincaré Conjecture is one of the seven Millennium problems that were announced by the Clay Mathematics Institute in the year 2000. These were problems deemed to be the most important open problems in mathematics, and whose solutions, or even attempts towards the same, were expected to lead to pivotal advancements in our mathematical knowledge. Although not explicitly stated, the kind of problems chosen and the general acceptance of the list in the mathematical community means it could, potentially, significantly affect the general direction of mathematical research in this century.

The conjecture goes back over a century to 1904, when Henri Poincaré, a leading French mathematician of the time – often considered “The Last Universalist”, i.e. someone who excelled in all the fields of mathematics that existed during his lifetime – came up with the following gem –

“Every simply connected, closed 3-manifold is homeomorphic to the 3-sphere.”

Do not stress if you cannot understand a word of it for, and I admit this with shame, even after completing the book I was not clear what all the terms used above meant, and I could only reach some basic level of clarity once I consulted a friend of mine who has done post-graduate studies in mathematics.

This is a fairly technical book, and although it is possible to understand certain sections and even chapters in their entirety, prior exposure to topological concepts and terms is an essential requirement in order to fully appreciate the beauty of the ideas presented. Of course, I could easily blame the author for writing the book that way, but the truth is that it is immensely difficult to break down such involved mathematics for a genuinely lay audience. This conjecture took over a century to prove; surely I couldn’t have hoped to understand it by reading a few hundred pages?

O’Shea has actually done a decent job in conveying the significance of the problem – the conditions under which it arose; how it affected its field in the years to come; the psychological effect of having a problem that an entire generation of mathematicians knew about, even if they were not directly linked with that discipline; its social and historical context and how its solution, and efforts towards the same, affected all of the above.

To understand the problem, however, in all its technical beauty, would be a bit too much to expect from someone without decent mathematical exposure. Significant knowledge of the discipline is required to understand why the conjecture couldn’t have been solved back then, what tools and techniques arose in the quest for its solution, and how it affected the growth of its own and related fields in the years to come, both in terms of the kind of problems that it solved, and the kind it created for the next generation of mathematicians to ponder over.

While I did understand the historical significance of the conjecture, the technical beauty all but eluded me, and although I do remember and understand some parts of the book, and some vestiges of the arguments do remain in my intellect, they are in a form that defies my ability to pass them on to anyone else in an intelligible manner. In that sense, can I even claim to have read the book?

Having said that, will I suggest reading this book?

My gold standard for writing popular maths is Fermat’s Last Theorem by Simon Singh, against which I have come to compare all the popular mathematics books I have read since. If you have read that one, know that The Poincaré Conjecture demands a bit more effort on the reader’s part.

If you haven’t, and if you have some background in mathematics, or if you loved the subject growing up, you could take up this book, but maybe skip through the parts that get into the intricacies of the mathematical concepts. In some books, I have personally felt the gradual arrival of the moment when my mind says it is all going over my head. In most cases, especially if one doesn’t have the requisite background, heeding that signal is beneficial.

However, if you have studied mathematics after high school, then I would request you to wade through the work and put in slightly more effort. You will still not get everything, but maybe enough to appreciate the sheer beauty of the ideas.

And, if you have been exposed to a bit more advanced mathematics, and if you have some prior knowledge of the topological vocabulary used in the work, it may be worthwhile to hang in tight and read with full concentration. You may just get it! Reading is not always a leisurely way to pass time. It can also be challenging, and that is why we read in the first place, to peer into a field far removed from our daily lives, waiting with mysteries to delight and leave us in awe.

Crime and Punishment (Fyodor Dostoyevsky)

The turning point of my reading career

In the youthful exuberance of our teenage years, with all its ups and downs, the mood swings, and our search for a productive outlet for our energies and our imagination, it needs only the faintest touch from the right kind of author for those suitably inclined to be sent on an entirely different and profound trajectory, as far as their reading habits and choices are concerned.

Dostoyevsky is one such writer.

I clearly remember the day I bought Crime and Punishment, and I think what led me to the book was my recollection of a reference to it in Babe: Pig in the City – the sequel to the film Babe – which I had seen a few years earlier.

Crime and Punishment was the first unabridged classic I read, and it was also my introduction to the genre of philosophical fiction; before that I wasn’t even aware that such a combination existed (I was seventeen, what do you expect!) and had been reading the thrillers of John Grisham, Dan Brown and the occasional Jeffrey Archer; in addition to the omnipresent Harry Potter series.

Over the course of the eighteen or so months that I spent slowly completing the book, I eased into my college studies and experienced a whole new range of emotions, and probing questions, for the first time in my life. All this while, Crime and Punishment was performing a kind of slow baptism by fire in the background, and it completely transformed me as a reader.

Being my first classic, this book was also my introduction to Victorian English (I read Constance Garnett’s 1914 translation) and I just fell in love with the language. Victorian English is often accused of prolix verbiage, of dramatic and extended monologues, and of a proliferation of words and a profusion of sentences so articulate that one would never use them in real life. But I have always found this criticism flawed on two counts.

Firstly, it fails to see just how much, and in what degrees, the expectations from literature have changed even within the relatively short time frame of the past hundred and fifty years; and the whole point of literature – good literature, in any case – is to bring us closer to thoughts, ideas and situations we may never experience in our own lives.

And secondly, this criticism says more about our having gotten used to internet lingo and abbreviations in our everyday conversations in the twenty-first century than it does about the language itself.

How has the book affected me? That is impossible to determine, and yet I can observe a few noticeable changes.

I take my time when reading a book as I want to absorb each sentence, each word, each gesture and wave of a hand. The books I read come to life in my mind in vivid detail. Since I am on Earth for only a finite amount of time, and there are just too many books I want to read, I have become an elitist in my book choices – a natural result, I think, of having gained more from this one book than from probably all the books I had read until that point, combined.

Additionally, with my to-read list being so excruciatingly long, it is a foregone conclusion that I will almost never try out a new author I know little about. This, unfortunately, automatically deprives me of the pleasure of serendipity, but that is a sacrifice I seem willing to make.

Crime and Punishment radically transformed what I came to expect from the written word, as also what I believed were the limits of what could be expressed through this medium. The internal turmoil of the protagonist Rodión Románovich Raskólnikov is beautifully portrayed, as are his frequent bouts of feverish obsession with the crime(s) he has committed. In addition, the portrayal of his crushing poverty, in the backdrop of the depiction of the St. Petersburg of the day, and his interactions with other people provide an extremely enriching reading experience.

I distinctly remember one particular scene where Pyótr Petróvich Lúzhin, a lawyer who is engaged to Raskólnikov’s sister in the beginning of the book, accuses Sónya Marmeládova, the daughter of a drunkard whom Raskólnikov meets in a tavern, and whom circumstances have pushed into prostitution, of stealing a hundred rouble note from him. Katerína Ivánovna Marmeládova, Sónya’s stepmother, rushes to counter Lúzhin’s claim and vouches for Sónya’s innocence. And then, finally, Lúzhin’s roommate Lebeziátnikov enters the scene and counters him, managing to prove the innocence of Sónya with the help of a moving monologue by Raskólnikov.

This scene was spread over five pages and provided me my first experience of, what I can only refer to as, a literary orgasm.

It has been almost a decade since I read Crime and Punishment. I never got around to writing what I felt about the book, for it seemed very difficult to express the myriad ways in which it has affected me; the very fact that I can still speak so clearly about my feelings for the book, even after so much time has passed, is a testament to its impact.

Dostoyevsky belongs to that certain breed of authors where first impressions are nearly impossible to improve upon. Ayn Rand and Khaled Hosseini are others that come to my mind.

I think in the case of such authors, whichever book a given person picks up first will generally remain their favourite work from that author.

I have read two more books by Dostoyevsky in the intervening years, and they were delights to read. But I doubt I will ever be affected this much again, by any other work by him.

Thank you, Fyodor.

The Art of Thinking Clearly (Rolf Dobelli)

Enjoyable crash course on cognitive biases

I avoid self-help books like the plague, and my initial impression was that The Art of Thinking Clearly was one of them.

Fortunately, even my contempt for self-help books does not prevent me from objectively reassessing that label if a book happens to come into my hands. During a visit to a local bookstore, I happened to come across this book and, remembering the praise of a friend of mine, proceeded to skim through the contents page.

There are certain kinds of reference books that any decent library should have. These may not be the best books in their field, but their trademark is their appositeness on two counts.

Firstly, the method of finding something within the book is easy to follow, even if actually finding it may take time. For example, the method of finding a word in a dictionary is very clear, even if it may take some time to locate a specific word in a volume of over two thousand pages.

Secondly, they give clear and concise answers when you have found what you are looking for, thus minimising the time needed to understand the idea and get on with one’s work.

Dobelli does an excellent job on the second count, and an above-average one on the first. Covering nearly a hundred such biases, Dobelli writes straight to the point and devotes around three pages to each bias. His examples, so far as I could figure out, are well chosen – sufficiently detailed to properly explain the cognitive bias without missing the wood for the trees.

However, there are a few idiosyncrasies which I particularly disliked.

He frequently quotes Nassim Nicholas Taleb, the author of books like Fooled by Randomness and Antifragile. In fact, this happened so often that at some points I felt I was reading the footnotes or the appendices of some Taleb book, and that The Art of Thinking Clearly was actually a decoy used by Taleb to popularise his own work.

The same applies to the somewhat less frequent references to Charlie Munger.

Secondly, over the course of the ninety-nine cognitive biases and behavioural aspects, certain examples, and even phrases, appear repeatedly. This could be because these pieces were originally written for weekly columns in certain German, Dutch and Swiss newspapers and were only later compiled into a book.

All in all, The Art of Thinking Clearly is a useful addition to one’s library for quick reference purposes, though you will need to refer to other resources if you want to know about a given cognitive bias in more detail.

© 2024 Yuganka Sharan