
On Feeling a Word, Rather than Reading It

In recent months I came to associate a very commonly used pronoun with a certain person. Gradually, the word came to depict the set of feelings I had for her. Over time, a particularly interesting thing happened – the word transcended her and became the feeling itself. And then one day, when I was reading an article, I came across that pronoun and felt, for lack of a better term, a mesmerising cognitive dissonance. My brain’s logical side was interpreting the dictionary definition of the word while, simultaneously, its emotive side was feeling all that I had come to associate with that word and, consequently, her.

I wasn’t really reading the word, but feeling it. I was not imagining what the word represented, but actually feeling what it stood for.

The emergence of this sensation is radically different from when a word conjures up an image in your head – a picture, in the Wittgensteinian sense, corresponding to some external entity, the way the word “chair” corresponds to “an object that, mostly, has four legs and can seat one person” – and the image of such a chair comes up in our mind.

No, this is a very different kind of sensation – a particular mix of feelings rises within you, and it’s not that you feel that the word represents those feelings but, rather, that the word is that feeling. It would be like reading the word “pain” and actually feeling pain, whether physical or emotional, rather than imagining what pain is like or what its components are.

What do I mean by “feeling” a word?

The visual cue of that word makes me feel a certain way – not by association, nor by acting as a medium to something which is the actual cause of that feeling, but as the very feeling itself, without the need for any intermediary. In this process, somewhat paradoxically, the visual form of the word has transcended the very entity it was meant to refer to and has become that entity itself. The experience of this appropriation of identity is peculiar in itself, but trying to convey what it feels like is even trickier.

Let us take a step back.

When we first meet someone, we observe them even before we come to assign any tag, like a name, to them. This experience cannot be quantified, for human interactions flow like a wave – continuous, imbued with countless subtle variations, multi-layered events with varying degrees of depth.

And then we get to know their name. What does this do?

The word (a name in this case), whether in its written form or in the sound it creates when spoken, is just a symbol with two different modes of communication – but both modes refer, or correspond, to the same entity: that person.

What exactly is the symbol doing? It encapsulates a range of impressions and maps that set of impressions onto itself. Over time, we keep adding more impressions to the set while the word remains the same, and that is how the idea of that person in our minds changes over time – the name, akin to a tag, serves as the immutable point of reference.

But what exactly is a name? I think before we even try to answer that question, we must answer an even simpler question – what is a word?

Let us start with an example – the word “rose”.

Notice two important things.

Firstly, a word (whether written by hand or printed on paper) is essentially just a specific combination of lines and curves – only a well-choreographed movement of our pen can give rise to this particular word, and it will make sense only as long as the form it takes stays within a certain error margin. If you were, for example, to accidentally extend the “o” a bit downwards, the word would become gibberish – either “rpse” or “rqse”, neither of which makes sense.

Secondly, a “word” derives its meaning from a language. The set of lines and curves that constitutes the visual structure of the word “rose” carries meaning only in the context of English and certain other languages in the same family.

Essentially, a language constrains the set of combinations of lines and curves that will be meaningful for the people who know that language. The symbol “chemise” is not a word and is, hence, meaningless in the English language, but it is a word with a well-defined meaning in French – “shirt”.

From among the lexical categories (or parts of speech) of any language, one specific category is particularly interesting from our present perspective – nouns. The word “noun” comes from the Latin word “nomen”, which literally means “name”.

So what is a name? A name is any such combination of lines and curves, a symbol, that refers to some object or concept that exists out there in our shared world. “Rose” is the name of a particular flower, “India” the name of a particular place and so on. Both of these are also nouns.

Notice that, technically, not all names are words. Every language has a dictionary, which is nothing but the set of legitimate words of that language. This does not have to be a physical dictionary; it could even be an “in-principle dictionary”. When exactly is a word added to the dictionary? Only when its usage has become sufficiently widespread – which is why you may not find the nickname you have given your cat in a dictionary, unless you named it after something that was already there in the first place.

Now, let us consider a random combination of lines and curves that is not a word in our language – “bfejqgfe”. This symbol is meaningless; it is not a name, and is definitely not a word. The reason I want to start with such an example, unlike the one I had actually personally experienced, is to prevent any already existing meanings of the symbol from forming the basis upon which our subsequent understanding of it grows.

Let us imagine two situations.

In the first case, we assign this symbol as a name to a person. We form a link between a static visual symbol and a dynamic set of qualities that exist in an object, the person, who is out there in the physical world. As we interact with that person, we keep adding, to the initial set of first impressions, bits and pieces of details, opinions, observations, feelings, judgements, biases and a whole range of subjective responses to the idea of that person, and then whenever we see the symbol “bfejqgfe” this person springs up in our imagination with all her traits.

Now let’s imagine another situation. What if, instead, we assign this symbol as a name to a feeling that we have? And what if, over time, we start adding certain other shades of feelings to that same name? What are the implications?

Feelings are inner states of being and do not have a visual form – when we feel a certain way, there is no image that comes up in our minds. So what will happen when I see the symbol “bfejqgfe” somewhere?

Let us consider a word like “car”. I have known this word since I was a child. I did not create it, so I imbibed its meaning from the different instances in which I was exposed to it – from newspapers, television, relatives and friends talking about it, and so on. In the case of such pre-existing words, I am part of a bigger set of people, all of whom share common knowledge about the word and its concept. Consequently, whenever any member of that set uses the word, the corresponding concepts flood into my brain and I am able to understand what he or she is saying. Even if I read a relatively complex statement about cars in a newspaper – for example, “barring the three new models launched last week, this car has the fastest acceleration among all the sedans currently in production in India” – I am able to make sense of what is being said, since I share a common understanding of the concept with a lot of other people.

Here is the interesting part – this particular chain of events leading to a shared understanding will not happen when it comes to words I have created.

Why?

When I created the word “bfejqgfe”, only I was aware of its existence. I had associated it with a particular feeling of mine. So it is not possible for anyone else to use that word unless two conditions are fulfilled: first, I will have to convey the word to someone else and, second, they will have to understand what exactly I mean by it. Naturally, the first will take hardly a few seconds, but accomplishing the second can be tricky, as we ourselves must have a clear idea of the set of feelings we have included in the concept. However, since my focus in this piece is on our internal experience of a particular word that has arisen under a specific set of circumstances, I shall limit my consideration to the time in which the word is not yet mature enough to be properly communicated to other people.

So, since only I am aware of that word, all the instances of that word in the outer physical world (whether written or aural) are my own creation.

Put simply, it is not possible for me to come across novel usages of this word where I could put my own understanding of it to use. All I will ever come across are my own usages, and since I will always know the context of each particular usage, there will be no spontaneous growth in the concept, unless I deliberately decide to add something to it.

When I read the news about the fastest sedan in the newspaper, it is possible that I was encountering the word “sedan” for the first time. I would have subsequently referred to a dictionary to understand what the word meant, and then merged that idea into my current understanding of cars. In other words, my concept of a “car” developed through uncontrolled input from the external world and was moulded by the ideas of other people.

This spontaneous growth of a mental concept is not possible in the case of a word I have created – a word like “bfejqgfe”. Such a word can grow if and only if I deliberately decide to change its concept and what it should encompass.

I am in control of what it means. I can give this word any meaning I want, add to it any feelings I may have.

I can imprint this word with signatures of my experience of both the external physical world, and the internal world of emotions, feelings and moods.

In essence, through this private emotional signature, I can create a permanent bookmark for certain feelings of mine. If I use an already existing word, however, then through such cognitive dissonances as I mentioned in the beginning, I allow the possibility of those feelings being eroded away. Not so when I use a word I have created.

Interestingly, Ludwig Wittgenstein gave a critique of such a “private language”, arguing that it cannot exist. His argument, essentially, is that since there is no dictionary for a private language, we can never be sure that we are referring to the same concepts (feelings, in our case) when we consider the private word across different instances of time.

I hope I can wade into those waters some day, but for today, I’ll let him have the last laugh.

On Knowledge Creation and Propagation

Imagine a telescope that is tasked with observing a certain distant exoplanet, named 27X. The telescope keeps gathering new data points every minute, and logs them in an internal database which is then studied by the scientists involved in the project.

The telescope can be seen as a primary source of data.

Research institutions around the world, microbiologists peering into their microscopes to study a new behaviour observed in a certain microorganism, sociologists studying cultural responses to a pandemic, mathematicians coming up with new variations of existing formulae to tackle a particularly difficult problem – people and instruments working at the absolute edge of human knowledge – these are all primary sources of data. It is when some new insight is derived from this data that they also become primary sources of information since they are telling the human species things we didn’t already know.

Most research institutions around the world share their research findings with the general public after a certain time gap. Indeed, some institutions share their data sets in real time, before anyone on their team has even seen them, let alone extracted any new information from them. This has sometimes led to cases where people not in any way linked with the research team have managed to discover something merely by having access to that data. Primary sources of information thus lead to discoveries – mostly from the people immediately involved in the process that creates that information, but also, sometimes, from unrelated individuals who have, firstly, access to that data and, secondly, the ability to extract new information, and consequently new knowledge, from it.

Notice that the transition from data to information will not necessarily be a transition to “useful information”, insofar as its ability to expand our knowledge base is concerned. It will, however, still provide us new information in the sense made famous by Edison – it will still tell us that “this particular thing doesn’t work”.

Distinct from, but complementary to, the set of primary sources of information is the set of information carriers which perform the equally crucial task of spreading the knowledge generated from the primary sources to the masses. These I refer to as the secondary sources of information. The secondary sources can be further sub-divided into many other levels, but we can skip that for the present discussion.

Thus, primary sources of information (for example telescopes), which keep logging new data, become repositories from which someone with expertise can sift through and make discoveries (for example that the rotation period of 27X is eight hours).

This discovery is then shared with the rest of the community through secondary sources of information – journals, conferences, educational videos, interviews and so on. Maybe it leads to the creation of a YouTube video where someone introduces 27X and tells the viewers about its properties. Maybe it leads to the creation of a meme – if you lived on 27X, you could have lived a life three times longer!

One subtle but very relevant difference between the two types of sources is that while the repeated consumption of primary sources of information can lead to the creation of new knowledge, secondary sources only ever provide new knowledge the first few times they are consumed. How?

Imagine I am a researcher looking at the data gathered about 27X. My efforts at gleaning new information from the data depend on the kind of knowledge base I already have – I will approach the data with a different mindset depending on whether my forte is statistical science, number theory, computer science or something else.

If I am a statistical scientist, I will probably think about plotting the values on a certain kind of graph and derive some information from it. I may even feel that I need a new skill (say, machine learning) to extract information in which case I might learn that new skill and then come back to wrestle with the data, maybe even deriving some new insight in the process!

This is because primary sources of information are created as black boxes – we don’t know what we might find in them.

Secondary sources of information, however, are derived from primary sources, so there is clarity, at least, about what they are trying to convey, even if the conveying may lack quality or coherence. It may take me time to understand what they are saying, but once I have understood, I will not be able to learn anything more from them – they are not black boxes but fully transparent in what they represent. They may send me, and other consumers of that information, into flights of fancy, but that will create new knowledge only in the individual, not in the species. If I am an astrophysicist, such a secondary source could even give me ideas – for example, to change the direction in which I have pointed my telescope. So a secondary source can open avenues for the creation of primary sources of information, but it cannot itself be a direct source of them.

YouTube has videos on a wide variety of topics. Let’s take, for example, a fairly specific category that has consistently ranked among the top trending categories online – cat videos.

Cat videos may provide us knowledge about cat behaviour, their attention spans, their curiosity, their social life and social structure, their anatomy, and other such things regarding their species, but they are unlikely to directly lead to a new discovery regarding them.

Primary sources of information exist at the absolute edge of human knowledge. Thus, they generally involve significant financial investment, people with deep domain knowledge and the results often take time to crystallise.

Secondary sources of information, on the other hand, exist in the daily lives of the people. It can be something as simple as a video I shoot from my phone or an article (like this) that I write – and which I then publish on my website. Secondary sources of information can be created by anyone, for they are nothing but derived works from the primary sources.

Thus, for an individual, it is far easier to create secondary sources of information than primary ones. You can pick up your smartphone to shoot and share a video and, voilà! You have become a secondary source of information.

But if one wants to become a primary source of information, then, firstly, one will need to display the talent and skill to be allowed access to the technological and social machinery already involved in doing that work and, secondly, even if one has that talent, a multitude of other factors – financial status, social networking, ease of access – could prove decisive, either way.

In general, human beings are prone to errors when estimating the effort needed to attain something beyond their immediate reach, especially when it comes to acquiring new skills or learning new things. So these hindrances, if they do arise, should not demotivate us from doing something we really want to do, for there are two contrasting outcomes.

It is possible that we end up spending our entire lives just trying to make the transition from being a secondary source of information to a primary one, and never manage to cross that line despite all our efforts. But in that case, when we are on our deathbed, we will look back at our lives not with regret, but with the satisfaction of having tried everything we could. In addition, we will better appreciate the gravity and difficulty of the problem we had undertaken, and will be able to assess our failures more pragmatically. We will know how far we really were from being true primary sources of information.

However, the polar opposite can also happen. We could very well manage to make that transition and give to the world, to our species, and to that mutable fabric of human knowledge, something that wasn’t there before. We would have left our mark.

In either case, if you want to make the transition, the importance of trying cannot be over-emphasised.

So, what would you rather do – create new knowledge, or spread that which already exists?

What would you rather be? A creator, or a propagator?

On the Epistemology of our Emotional Responses to Dreams

Nearly two years ago I had a dream which made me think.

I was on a plain, and there were two hills on either side of me. From those yellow hills, huge boulders, three to six meters in diameter, were hurtling towards me. Stuck between these imminent messengers of death, I felt panic. My mind was racing, evaluating my options and, finding none, it was panicking even more. The last thing I remember was the boulders barely a few feet away from me, as I embraced my death-by-sandwiching.

When we experience any feeling, we automatically compare it with our past feelings. As I remembered what I had felt in those moments of terror, I realised I had felt something I had never felt before. The fear of death.

I shook my head. The fear of death? I experienced the fear of death, for the first time, in my dream? But how was that possible? How could my dream supply my mind with information that previously wasn’t there? How could something that never really happened, induce in me a feeling of something new, something I was yet to actually experience in my life?

What was the source of this information? How did this knowledge arise?

I shared this experience with a couple of friends through my preferred mode of communication – email. And then that thread receded into the confines of the past.

This morning, I received a reply on that thread from one of those friends. He had had a terrifying dream, one which he was desperately trying to forget. Towards the end he mentioned ‘…I know now “how will I feel like if I were raped”’.

I felt a conflicted, solemn amazement. The fear of death is a very generic feeling. We may never have actually felt it, or given it conscious attention, but it is something we inherently know, something that silently lurks beneath our awareness. The only constant is change, and the only certainty is death.

But rape?

Rape is abhorrent, the most diabolical crime imaginable. Well into the twenty-first century, we remain centuries behind our own times when it comes to gender equality. Rape has always been a tool of subjugation, a weapon to subdue. For a variety of reasons I won’t go into, so as not to digress, the average woman lives in constant fear of violence, both physical and emotional, from the opposite gender.

My biased mind could not help but see it a bit differently than if a female friend had written about the same thing to me.

Let me clarify.

I contend that the fear of death is a common sub-conscious strand for all people, irrespective of gender.

I then contend that the fear of violence from the opposite gender is a similar strand – albeit, this time, a very conscious and palpable one – specifically for women.

So how could a fear that is normally not associated with the male gender, arise in the dream of a man?

Admittedly, what my friend dreamt could have been a result of experiences from his own life, yet the question above was enough to point me towards what I think is a potential solution to the question posed at the beginning of this piece.

The pivotal observation, and something which has also been extensively covered in recent media, including movies like Inception, is that we are unaware of the process of dreaming. Our subjective experience of a dream while in it is indistinguishable from our experience while we are awake.

How does that make a difference?

It implies that if we faced a particular situation for the first time in real life, which was then deleted from our memory (to prevent it from acting as a benchmark for our reaction the next time), and we were then made to face that same situation again, but in a dream, our emotional response in the two cases would be almost the same because – and this is important – our subjective experience in the two cases is identical.

Put simply, our subjective emotional response to a particular situation will remain the same even if the source (and the very nature of the source) of that experience were changed without our knowledge. This is the same reason why pranks work – you are not aware that it is a prank and take it to be real, in its full intensity.

There is another very important thing to notice here.

My friend had mentioned ‘…I know now “how will I feel like if I were raped”’ (emphasis mine). The dream had only opened up, or brought into his conscious awareness, his own subjective emotional response to a particular situation; it did not give rise to any new objective piece of information.

In other words, even granting that our dreams can supply us new information, they can never be a source of truth, for truth is objective. My friend could never have found out what it feels like to be raped – only how he would feel if it were to happen to him.

So dreams are like recipes. Setting aside, for now, the source of what we see in a particular dream, its content still acts as a stimulant that cooks up a realistic scenario – an experience felt in its full lucidity. Our own life – our memories, thoughts, biases, prejudices, hopes, dreams, aspirations, fears, phobias and residual consciousness – then gives rise to our own subjective emotional response, which is as genuine as if we had lived that dream in real life, for we believe it to be real.

So the next time you wake up from a dream experiencing something for the first time, remember that you have been pranked by your own subconscious.

Free Will (Sam Harris)

Even if free will were an illusion, we would not be aware of this illusion

This book made me think.

Harris’s arguments against the existence of free will rest on two main observations.

The first is that we assign a sense of freedom to ourselves by thinking over our past actions and saying: we could have done it differently. Harris says this ex-post method of granting ourselves the notion of free will is illusory, as we did what we did and there is no way to check that we could have acted differently.

This makes sense, but only because we cannot go into the past. If I were presented with an absolutely identical situation a few days later, I would be free to make a different choice. But here Harris would argue something along these lines: the two situations are not identical, as your brain’s neurons aren’t exactly as they were the last time around. How can one argue against this?

Harris’s second argument is that the choices we make come from a specific subset of all the possible choices out there, and that subset is chosen by unconscious processes in our brain which take place beyond our control. Where is the freedom in that?

This reminds me of something I came across a while back. Many kids do not like to drink milk, and no amount of coaxing or incentives will make them say yes. Parents are given a neat little trick which often works: just ask the kid whether he wants the milk in the red cup, the blue cup or the green cup. He will fall for this choice subset and end up choosing one of the colours and, voilà! The kid chose to drink milk without being consciously aware of it. This is the gist of the Harrisian argument.

Again, I do partly agree with Harris’s premise about our false choices, but that only makes me think that deeper introspection would let the child see through the trick. If the child fails to see the false choice he was forced to make, it wasn’t because he lacked free will, but because he lacked the sense to look deeper into the choices he was presented with – which could have saved him from that glass of milk. Harris confuses an ignorance of initial conditions with an absence of free will, and this is fallacious at best.

There are a few points, though, where Harris is irrefutable. He says that if I make a choice driven by physiological necessity, I am not really free at all. For example, when I extend my hand and pick up a glass of water, bring it to my lips and drink it, I take it as an example of my free will, as I could easily have drunk from any other glass, taken water from any other bottle, or even drunk it at any time before or after that moment. But this freedom is illusory. I had no choice but to drink water – drinking beer wouldn’t have sufficed. I am bound by my biology.

One recurring chain of reasoning that Harris employs is that by repeatedly asking why something happened, we fall into infinite regress, and since some of the stages in that chain are events outside our control, the action itself belies our being “free”.

Why did I choose A over B, given that no physiological reason compelled me? For example, why did I get down from the left side of my bed in the morning and not the right? Maybe it was sheer habit. Or, even if it wasn’t, we chose one due to some spontaneous thought in our brain which we had no control over (we didn’t choose to want to get down from the left side), and then labelled our action as free will on an ex post facto basis.

In short, Harris says events beyond our control (electrical connections in our brain) lead us to some of our choices, and then we retrospectively ascribe them to our free will. We can’t choose what we want to choose.

Our brains have a finite capacity to process information, and to say that my choice of a specific flavour of ice cream from a set of three belies free will, because that set of three was a product of unconscious neurology, is to miss the point. Maybe in my life I have had ten different flavours of ice cream, and I consciously remembered only three of them at that moment. That doesn’t make me less free, even though I unconsciously (and without exercising choice) reduced my possible choices from a set of ten to a set of three.

I think Harris takes our physiological and neurological limitations as filters that drive our choices by presenting us with partial information. I may not be free to choose what I want to choose, but I definitely am free to choose any of the things from the diminished subset my mind provides me. And, in fact, that should be enough: since we aren’t consciously aware of the seven flavours we forgot, we feel free when choosing from the three we consciously had.

And the moment we remember that we had forgotten seven flavours the last time, our next choice will automatically be made from the entire set of ten.

We are not conscious of the “illusion of free will” that Harris talks about. When we are making a choice, we don’t know that we have a diminished subset in front of us and we do feel free. Doesn’t that, then, defeat his argument?

Discourse on Method and Meditations (Rene Descartes)

Modern western philosophy begins with Descartes. Can you afford to miss the opening ceremony?

Descartes has been called the father of modern philosophy. And not without reason.

A little background is necessary to realise the enormity of what he did – the “method” he introduced.

In the Discourse, fully titled “Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences”, Descartes discusses what pushed him towards his quest for a new way of thinking. Aristotelianism had been followed for nearly two millennia, with the result that each successive generation was learning its ideas without applying any critical thinking in the process. Attempts to question the assumptions or arguments put forward by Aristotle were not just discouraged but actively suppressed. In his formative years, Descartes could sense this rigidity of thought in the contemporary establishment and, in his early twenties, he decided to do something about it. On closer inspection, however, he realised he was not yet ready for such an enormous task, and so gave himself a few years in which he travelled far and wide and interacted with people of different cultures and classes of society, all the while observing their customs and ways of thinking.

“Discourse on the Method…” is his exposition of the technique he developed and of the circumstances and reasons that led him to it, while “Meditations…” is his attempt at applying that method in order to find certain and indubitable knowledge.

Among the many strands in his method, the common thread is the “Method of Doubt” – to doubt absolutely everything of which he cannot claim certain knowledge, and then proceed with whatever he has left. In fact, he decided to treat statements that were even slightly doubtful as being on the same footing as statements that were manifestly false. This is a remarkable approach for someone living in the early 1600s.

Descartes starts by doubting everything his senses present to him, for the senses often deceive us – the sun and a street lamp may look the same size when in fact they aren’t. This means he doubts he has a body; he doubts that material things exist; he doubts God, for there is no proof of his existence (a consideration that shows how serious Descartes was in his quest, for religion occupied a very important place in early seventeenth-century society – we all know what happened to Galileo). He even doubts mathematical truths, for there is always the possibility that a demon is deceiving him, feeding his mind the belief that mathematical statements like 2+2=4 are objectively true when in fact they might not be.

However, even after he doubts everything, he notices that he cannot doubt the fact that he is doubting – that he is a thinking being. Thus emerged Cogito ergo sum: “I think, therefore I am”. This is arguably the most famous phrase ever written or spoken by any philosopher.

In the Meditations, Descartes introduces a number of ideas – some original, some rephrased versions of ideas that already existed. For example, the Ontological Proof for the existence of God had existed for a long time, and Descartes gave his own version: God is an entity greater than whom nothing can be conceived; existence is a positive trait; therefore a God without existence would be inferior to a God with existence, and so the very concept of God necessitates his existence.

His work also saw the emergence of two new revolutionary ideas.

The first one was Rationalism, the view that knowledge can be derived from pure reasoning and logic, without any inputs from the external physical world. Descartes never uses this term, but his methodology serves as a perfect example of this technique.

The second one was Dualism, the view that there are two types of substances – mind and matter. Humans, for example, had a thinking non-material mind and a non-thinking material body.

The rise of Empiricism in the British Isles, and Kant’s subsequent struggle to balance the two views, have shaped the course of philosophy ever since.

The first time I heard about his proofs for the existence of God, I wondered why he had been called a rationalist. But what Descartes is trying to say is that God is necessary for us to have any knowledge at all – the concept of a benevolent God ensures that I am justified in accepting the general beliefs that make life possible, for he presents those ideas to me and, being benevolent, he cannot be a deceiver. If I reject his existence, I cannot possibly know anything at all, as I may be being deceived at every instant of my life.

Descartes often uses long sentences, and it is a treat for the involved reader to make sense of them. Often, I had to re-read entire paragraphs just to understand what he was saying, because they amalgamate various issues related to the central message. If nothing else, the book serves as an example of how to coherently present a set of ideas with many strands at each level.

The importance of this work in the history of philosophy cannot be overemphasised. The two works combined barely reach a hundred and fifty pages, and they are indispensable reading for anyone even slightly interested in the history of human thought.

© 2024 Yuganka Sharan
