The Four Types of Information Assimilation

We assimilate information in varying degrees.  I can identify four fundamental stages, from least to most permanent and effective.  It's important to be aware of this spectrum (and of the differences between the stages) because we often make incorrect assumptions about our understanding of something (and, consequently, our ability to use that understanding).

  • Reception -- this is a necessary condition for information assimilation, but it is almost never sufficient.  We need to receive the information (see it, hear it, etc.) but if we don't do anything with it, our brain will simply filter it out.  We can hear things, but if we don't pay attention to what we hear, we will not absorb any of the information -- it will appear in our subconscious and disappear as soon as we context switch into anything else that engages us.
  • Focus -- by listening (as opposed to just hearing), we tell our brain to start processing the information.  Just focusing on it, however, is not indicative of a high absorption rate: our brains need to process it in a way that makes the information fit in with our thought framework.  This is the idea behind the next stage.
  • Understanding -- if we process the information and convince ourselves that it fits in with the rest of our world models, we understand it.  This is where most people end their assimilation process.  In other words, when quizzed, they can explain the information; it makes sense to them; when asked if it applies to a particular circumstance, they can provide the correct answer.  However, this is reactive behavior -- understanding alone is usually not sufficient to naturally pattern match and proactively know when the information is relevant.  For example, we can understand what the word obsequious means, but we won't be able to produce it when we need it.  We need a stronger form of assimilation.
  • Internalization -- this I believe to be the final stage of assimilation.  Once we've internalized something, it becomes a natural part of our world model.  We can produce it on demand; we know when to produce it; it becomes natural to us.  Usually long exposure to something allows us to internalize it.  For example, people understand the mechanics of driving pretty quickly, but it takes practice to internalize the rules, guidelines, and systems they've learned so that driving can become natural, almost common-sensical.

Most of us stop at understanding, which is dangerous because it is a shallow form of assimilation.  The information we've understood is not readily available.  It's important to be aware of the distinction between understanding and internalization, and to know when understanding alone is simply not going to cut it.

On the Ten Commandments

The idea of the Ten Commandments is a great one. It attempts to distill a set of ethical norms, interwoven with principles of the fledgling new monotheistic religion, into a small, easy-to-remember set of fundamental rules. The Ten Commandments can be memorized, recited, and referred to by number, which makes them a great framework for normalizing social behavior.

Let’s analyze them one by one.

  1. No false gods. This is a great one: it epitomizes the most unique aspect of the religion, the idea of a single God. But it transcends religion — it establishes the basis for obedience, the credibility of religion. It enables other commandments. If there were only one rule, that would be it. It’s like saying, “Rule 1: Follow all the rules,” and by emphasizing it (after all, it’s the first of the Ten Commandments) it provides a strong bond and creates shared context that’s easy to understand. This rule makes the religion very hard to dilute or split into offshoots: if there is only one God, you can’t create a more powerful one to win over believers (in fact, after the idea of a single God becomes so ingrained in everyone, the only way to compete is to focus on God’s representative on Earth, or later vary the interpretations of God’s writings). More importantly, it centralizes power (one God means one place of worship), which becomes crucial later in the evolution of organized religion.
  2. No names in vain. At first I found this a strange rule. First of all, why would anyone care if I shouted God’s name for no good reason? And if they did, why should it occupy such an important place, the second of the ten? I think just as the purpose of the first Commandment is to establish credibility, the primary purpose of the second one is to establish hierarchy. Making the name, and thus its use, special — holy — illustrates God’s supremacy (God is no longer some minor deity). It’s an incredibly important Commandment because it shapes how the religion is interpreted by its believers in everyday life — it creates a God that must be feared and loved, a God that demands, a God that punishes. But it also justifies the organized institution behind God (after all, someone needs to be cleared to use God’s name). The focus on language as an important fabric that facilitates the experience of coexistence with God is crucial too, because it further ensures the centralization of power (ensures it is focused in the hands of those who wield the Word well).
  3. Honor your mother and father. This Commandment begins a series of ethical norms. It also establishes hierarchy, but unlike the previous one, it deals with a social one, not a theological one. It introduces the concept of respect and position in society. It is a strong reason why the society it creates is stable — if esteem for the elders is almost as important as esteem for God, the young are less likely to revolt against the status quo. It also makes life holy: you are to respect somebody solely because they gave you life and brought you up.
  4. Respect the Sabbath. Another Commandment that calls for respect, but it’s different in that it asks for respect of a structure (a particular schedule) rather than a deity or a person. It forces the believers to stay connected to their spirituality, to remain believers (and as history shows, the lack of a connection with the religion creates schism). This provides further stability to the religion.

    At first glance, the third and the fourth Commandments may seem reversed (wouldn’t #4 follow logically from #2?). The need for social stability and a strong upbringing was probably deemed more important (and more impactful) than the need for the constancy of religious worship. After all, if you respect your parents, they can teach you the values better than a list of Commandments can.

  5. Don’t kill. This is the first of the Commandments that establish social norms, and, understandably, it focuses on the importance of life (and the irreversibility of death). I think the reason it’s a Commandment may have to do with the difficulty of enforcing this rule early on. I’m not sure it necessarily establishes the sanctity of life.
  6. No adultery. Is it really that important? Compared with the other ones it seems strangely un-lofty. It’s a norm that is certainly murky (less black-and-white than killing a person), but maybe that’s precisely why it’s been included. It creates a society with higher norms than other societies, a more civilized one. It is a rudimentary form of social protection. Of course, in addition to this, it further enables the creation of a stable, conservative society.
  7. Don’t steal. The Commandment establishes the importance of property, and, again, is probably a good rule to include as coming from God as it’s fairly difficult to enforce.
  8. Don’t lie. Actually, the Commandment is more specific — it tells you not to bear a false witness against your neighbor — which in my view points to the fact that not all lying is bad (a pretty progressive thought!). Truth-telling is notoriously difficult to enforce so it makes sense why it would become a Commandment.
  9. Don’t desire your neighbor’s wife…. This I’ve always been baffled by. It’s a rather stringent moral rule that addresses one’s desires (rather than actions). Why curb the desires? It seems preventive, extremely conservative, doesn’t fit with the other commandments, unless one considers that actions are borne from thoughts and the Commandment is really trying to force you to think (and thus be) morally not just behave that way. Morality that’s been internalized is much stronger than one that is an outcome of fear of punishment.
  10. …or any other thing. Really? Why separate it from the previous one? Was it just added because it made a nice set of ten (a natural size, since we have ten fingers)? It seems dangerous because while desiring another’s wife is naturally frowned upon, a rule from God that penetrates all thoughts of desire, even envy, seems too stringent and, as a result, ineffective.

In general, the Commandments are a wonderfully condensed mixture of rules that establish the religion (in terms of its uniqueness–thus stability–but also its role in everyone’s lives, and its self-preservative properties), the social order, and moral norms. It’s not surprising that they aided in the perpetuation of a very strong and stable religion and a society intrinsically linked to it.

Music Recommendations

  • Gorillaz, El Mañana: A wonderfully melancholic chorus: a descending scale that fits in well with the music video
  • Tomoyasu Hotei, Battle Without Honour or Humanity: An instant classic; when I first heard it after Kill Bill came out I couldn't believe the song hadn't been around for a long time and hadn't been used anywhere else
  • Jan Hammer, Crockett's Theme: One of the early favorites; I liked its simplicity and coolness
  • Mozart, Turkish March: This song made me want to play the piano.  I first heard it in the video game Civilization.  I loved how complex it was; it was fast but didn't feel rushed
  • Scarlet, Independent Love Song: The first song I heard on my portable radio.  I was moved by its sadness.  It is resentful and powerful
  • Grieg, In the Hall of the Mountain King: One of my favorite classical tunes, it's short, recognizable and has a lot of velocity
  • Vivaldi, Four Seasons-Spring: Another classical tune, probably the most pleasant four bars (the ones the tune is synonymous with) in all of classical music
  • Queen, We Will Rock You: I love its simplicity and energy
  • Queen, The Show Must Go On: The song, as well as its story, is incredibly powerful
  • Metallica, Nothing Else Matters: My second most favorite song of all time.  It never gets old
  • Harold Faltermeyer, Axel F: My most favorite song of all time.  It even survived my making it into a ringtone (fortunately I no longer do; it destroys songs)
  • Pink Floyd, High Hopes
  • Red Hot Chili Peppers, Under the Bridge
  • Ennio Morricone, Ecstasy of Gold
  • Kansas, Dust in the Wind
  • A3, Too Sick to Pray: Surprisingly I only saw the Sopranos intro some seven years after I first heard this song
  • Radiohead, Everything in its Right Place
  • Santa Esmeralda, Don't Let me be Misunderstood
  • Eagles, Hotel California: Certainly a one-song band.  The live version of this song is breathtaking
  • Carol of the Bells: I'm a huge sucker for this Christmas song.  It's intense and serious, very unlike any other carol
  • Air, Alone in Kyoto
  • Pink Floyd, Comfortably Numb
  • Gnarls Barkley, Crazy
  • Eric Clapton, Layla: I can only imagine what Clapton (and friends) were doing when the chorus of this song started
  • Linkin Park, My December
  • Gorillaz, Feel Good Inc.
  • Beatles, Hey Jude: After all, everyone has a favorite Beatles song
  • Gary Jules, Mad World
  • Moby, Extreme Ways
  • Explosions in the Sky, First Breath After Coma
  • Coldplay, Cemeteries of London: I admit, I like Coldplay
  • Flobots, Handlebars: Probably the only song whose lyrics I could really appreciate.  A great crescendo, too
  • Cab, High Hopes in Velvet
  • Bonobo, Transmission 94: Is it one song?  Or two?
  • Rodrigo y Gabriela, Orion
  • Pearl Jam, Arc: The live version where Eddie loops samples is indescribable
  • Yoshida Brothers, Fukaki Umi no Kanata
  • Sigur Ros, Samskeyti
  • Jude, Prophet
  • Just Jack, Heartburn
  • Protomen, Keep Quiet
  • The View, Unexpected
  • Band of Horses, Funeral
  • Snow Patrol, Shut Your Eyes
  • Kings of Leon, Closer
  • Mumford & Sons, Little Lion Man
  • Placebo, Running Up that Hill

Recording versus Experiencing

Is what I'm experiencing now worth living or should I spend the time recording instead?

I've struggled with this a lot.  Do I experience the moment, running the risk that my volatile memory will fail to maintain the experience, or do I record it, running the risk of not really experiencing it?  And what value is the recall of the experience, anyway?

Taking photos while traveling is a great example.  Most tourists love taking photos, as if somehow reproducing existing photographs but with a much lower quality camera and much less skill and aesthetic sense served any purpose whatsoever.  They see the objects they are photographing through the lens of the viewfinder (or, even worse – I get chills – a 2.5" LCD screen) which is no different than sitting in front of their computer at home and looking at images on Google.  They don't experience these objects; they don't experience being in their presence.

I've traveled a bit and the best experiences I've had were those that I didn't take pictures of.

Most people have this irrational, romanticized idea of going through all the pictures they ever took when they are sixty-five, with their grandchildren on their laps.  First of all, most of the pictures people take are crap; why on Earth other people, especially those two generations away from us, would be interested in seeing them at all baffles me.  We'll likely never see them ourselves either; and again, if we do, we need many fewer to trigger those great memories.

Ultimately, I have settled on roughly a 5/95 split between recording and experiencing.  It's useful to write down a few bullet points to maybe expand on the idea in the future: in fact, most of the posts on this blog came from short phrases I wrote down for my future self to discover and think about further.  But the value of the experience, even if it is only a fleeting, present value, is immense.  When I'm sixty-five, I'll be happy remembering the fact that I've experienced so much, even if the individual experiences have long been lost in the darkness of my fading memory.

The Philosophy of Reductionism

I’ve been longing to write this post. It describes most closely how I make sense of the world.

Throughout many posts, I’ve shown examples of some simple concepts that I think are fairly universal. Those concepts are related in a kind of hierarchy. For example, here are three concepts related to one another:

  • Change: change is good; it’s fundamental and more powerful than any of us; it happens all the time (despite our perception bias related to viewing things, for example history, in a very narrow way)
  • Cyclical behavior: a lot of the change that happens is cyclical (in fact, if one is to take an Occam’s razor view of things, the simplest change in nature is a cyclical one because of the balance that is held between many competing factors; or in math, the simplest function that changes all the time is a sine wave). New ideas are just permutations of old ideas; we often take diametrically opposite views and switch back and forth many times
  • Equivalence: things are instances of higher concepts; those who can see it (we call those people conceptual thinkers) can make more out of the world because they can take the specific things they learn every day, convert them to learnings about the concepts, and then apply the concepts back to the specific.

If you take the concept of equivalence to its logical conclusion, you will realize that everything is related in some kind of hierarchy. In fact, this idea of recursively reducing concretes into concepts is a very powerful one — you can build an entire life philosophy on it. Let’s call it reductionism.

Practicing reductionism, you begin to understand that everyday complexity can be lessened by relating things to one another. In other words, by taking concrete things, grouping them into equivalence classes by the concepts they represent, and then grouping those concepts together, you can travel up a ladder where the concepts are few, simple, and very fundamental. Understanding the fundamentals of the world is a very satisfying feeling. It can also help you make decisions: start with the fundamental concepts, derive the consequences, and keep going until you get to the level of specificity you require. In a way, reductionism is a wonderful framework for knowing what to do, and it’s a wonderful way for you to feel connected to everything.

Of course, there is a trade-off implied in reductionism. The higher up the hierarchy you go, the bigger the distance between your thinking and everyday life. This means that to make specific decisions (and, operating in a very concrete world, we have to make specific decisions every minute of every day), you have to do a lot of thinking: derive a lot of information from the few highly conceptual ideas. While some people I know can do it very well and almost automatically, it seems to me that nature prepared us to deal with the concrete very well — by giving us relatively more scratch space (a kind of cache to keep the details in) than computational ability (there’s only so fast that we can derive these concepts). It probably makes sense, evolutionarily — when you’re chased by a predator, you want to be able to trust your intuition rather than re-derive the idea to jump on a tree from the concepts of survival, physics, and the physical characteristics of the predator.

There are other caveats too. There is more than one way to create a hierarchy of concepts, to reduce a set of things into a much smaller set of more abstract things. There is no right answer when it comes to the most fundamental concepts (after all, those are the different philosophies that, just like apples and oranges, cannot be compared) despite what people tell you. There is no canonical arrangement of all things in the known Universe in a hierarchy of concepts, although a poster that shows one example of such a thing would be a wonderful idea.

In other words, reductionism reshuffles the risk: from millions of tiny errors you could make in the realm of the concrete, to one humongous error you could make in the realm of the super-conceptual. A small difference in the definition of the concept at the very highest level propagates down the ladder in a nonlinear way and can produce an entirely different picture of the world (and thus can easily make you pick a totally opposite view to the one you had before the correction).

What if, despite these caveats, we want to reduce everything that’s around us to as few concepts as possible? At first it seems pretty easy. We reduce a lot of behavior to human nature; we reduce nature to evolution; we reduce the fabric of the Universe to a small set of rules. We reduce the different religions to one concept. Then we reduce the concepts of religion and science. We reduce art to feeling (synthesis) and science to understanding (analysis).

In fact, I believe that we can reduce anything to a set of two concepts that are opposites. Above, synthesis and analysis are opposites. Many things can be reduced to good and evil. Another good pair of opposites to which things can be reduced is change and stasis.

All these concepts are themselves an equivalence class. Let’s conveniently call them yin and yang. So we can reduce the infinite number of objects, ideas, thoughts, and words into just two.

Then what? Can we reduce Two to One?

We can, but reducing Two to One is infinitely more difficult than reducing Infinity to Two.

Programming PCs circa 2000

As a teenager I spent a lot of time programming. I was mostly interested in making games, and for this I had to get involved with fairly low-level structures, such as interrupts, system calls, and I/O ports. While I mostly programmed in C (using DJGPP — a free DOS port of GCC whose extended-memory support finally lifted the limitation of using at most 640 kB of memory in my games), I had to implement some routines in assembly because using a high-level language for them was simply too slow. The most complex graphics routine I implemented copied a rectangular block of pixels from one place (usually an off-screen buffer) to another while treating pixels marked with a special color (a mask) as transparent.
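That masked copy can be sketched in portable C. The function name, signature, and the skip-on-mask convention below are my reconstruction for illustration, not the original assembly routine (which wrote to video memory directly):

```c
#include <stddef.h>
#include <stdint.h>

/* Masked blit: copy a w-by-h block of 8-bit pixels from src to dst,
 * skipping any source pixel equal to the mask color (treated as
 * transparent).  src_pitch/dst_pitch are the row widths of the
 * surrounding buffers, so the block can live anywhere inside them. */
void masked_blit(const uint8_t *src, size_t src_pitch,
                 uint8_t *dst, size_t dst_pitch,
                 size_t w, size_t h, uint8_t mask)
{
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            uint8_t px = src[y * src_pitch + x];
            if (px != mask)                 /* mask color: leave dst alone */
                dst[y * dst_pitch + x] = px;
        }
    }
}
```

The per-pixel branch in the inner loop is presumably what made a straightforward C version too slow on the hardware of the day, hence the hand-written assembly.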

It was a wonderful world — there is something magical about dealing with low-level operations. The information I needed to display pixels on screen or to control the mouse felt in a way like knowing a secret code that unlocks the door with the treasure behind it.

Around 2000 I had enough knowledge (all before I ever got a modem!) to compile it into a kind of cheatsheet. Now, about ten years later, it’s time to share it with the world. Of course, very little of it is relevant anymore — although most if not all of the information should still produce the same effects, thanks to the crippling yet comforting fact that PCs have been painfully backwards compatible for the past decade or more.

The PDF is a little dense, so it deserves a brief walkthrough.

  • Some operations were possible simply by inspecting a fixed location in the computer’s memory. Most of the information could simply be read, but some effects could also be triggered by writing a particular set of codes to specific locations. Putting information directly into the computer’s memory has never been recommended (and these days, virtual memory makes it almost impossible), but by inserting bytes directly into memory the computer’s behavior could be changed wildly — most times it would crash the OS, but sometimes it allowed you to get infinite lives in your favorite game or produce some creepy screen effects. I used to stay up at night and hunt for locations in memory, mucking with which produced the most spectacular effects.
  • Most low-level operations were provided by issuing interrupts to the microprocessor. The OS (in this case DOS) would interpret them in a particular way. Usually you needed to specify additional information — you did that by writing directly to the processor’s registers.
  • I spent a lot of time figuring out how to display graphics on the screen. Back then, around 2000, everyone’s mode of choice was 320×200 with a palette of 256 colors. Since every pixel was a byte, and the colors could be changed globally, this allowed for a number of games that displayed graphics fast and cycled through the colors. Extended modes (called VESA) were also possible, offering higher resolutions and full color spectra (15-bit, 16-bit, or 24-bit).
  • When you wrote a game, you pretty much had to intercept everything that the OS (MS-DOS) tried to do for you. The default mouse driver offered by MS-DOS was awful, the keyboard had a frustrating delay that you couldn’t get rid of, and the OS wouldn’t even inform you when most of the keys were pressed. Fortunately, it was possible to handle the keyboard and the mouse through similar OS interrupts.
  • Finally, the document concludes with some common file formats.
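To make the pixel addressing in the 320×200 mode concrete: each pixel was one byte in a linear framebuffer starting at address 0xA0000, so pixel (x, y) lived at offset y * 320 + x. A minimal sketch of that arithmetic, using an ordinary array to stand in for the video memory (the names here are mine, not from the cheatsheet):

```c
#include <stdint.h>
#include <string.h>

#define SCREEN_W 320
#define SCREEN_H 200

/* Simulated framebuffer; on real hardware this was the 64000-byte
 * region at physical address 0xA0000. */
static uint8_t screen[SCREEN_W * SCREEN_H];

/* Write one palette index at (x, y), clipping to the screen bounds.
 * In mode 13h the color byte is a palette index, not an RGB value. */
void putpixel(int x, int y, uint8_t color)
{
    if (x >= 0 && x < SCREEN_W && y >= 0 && y < SCREEN_H)
        screen[y * SCREEN_W + x] = color;
}
```

Writing the byte at that offset was all it took to light up a pixel; what color a given palette index displayed was configured separately through the VGA’s I/O ports.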

Fortunately, technology today has great abstractions and high-level routines that make it unnecessary to know most of what’s in the cheatsheet. I am proud to have had to figure all this out, though — in an extension of the many Java jokes, I think that in order to be a great programmer, you have to understand the technology stack all the way down to the microprocessor’s instructions. The effect is a kind of deep connection with technology, the ability to make optimizations based on what’s going on deeper in the stack (such as my need to write assembly code back in 2000), but also a good deal of humility.

Here it is: programming-cheatsheet-2000.pdf

Faith

Nothing has been more polarizing in conversations than faith — or, more often, conflicts between members of different religions. I will try to unite everyone now (or make everyone just as angry). I believe we are simply thinking about faith narrowly, which causes all the disagreement.

First, let me define the most fundamental concept, a belief: a statement that an individual makes that that individual considers to be true (and as such, uses as a basis for decision-making), a conclusion about reality as we perceive it. Beliefs don’t have to be axiomatic (I can believe in something that someone else could prove to be derived from another, more fundamental truth) or even consistent (since we don’t mechanistically apply our beliefs to our lives, inconsistencies can persist without any day-to-day conflicts). We all have beliefs at a variety of levels — from a set of grand ones (“I believe that I will be reborn after I die”) to tiny ones (“I believe I deserved that cake”). Sometimes belief is thought of as a conclusion that can be understood or known (as opposed to one that one can only place hope in), but I’ll define it as the broader concept.

There are some concepts that derive from belief that will be useful here. Spirituality is a belief in the immaterial. What is immaterial changes over time (before the discovery of magnetism, the force with which two magnets attract each other could be seen as being of spiritual origin); spirituality also takes many forms, from the abstract (an invisible energy field that permeates every human being) to the specific (ghosts). Faith is a set of beliefs, internal to each person, that deal with the unknowable.

Finally, religion is an institution that proposes a particular framework around faith.

Of all these concepts I think faith is the most interesting one. What is “the unknowable”? In my view it’s precisely the set of statements that no logical person will be able to confirm or refute. For example, “There is life after death” is unknowable — there is no logic that includes the axioms that define the words “life”, “death”, “existence”, and “afterness” that can prove the statement. “2 and 2 is 5” is not unknowable, because a logical person can prove that, given the definitions of “2”, “addition”, “equivalence”, and “5”, the statement is false (interestingly, as Gödel showed, there exist statements that are undecidable, so they could theoretically form a basis for someone’s faith).

A crucially important property of faith is that it’s personal, and, more importantly, one person’s faith cannot in any way be compared to another person’s faith. Specifically, one person’s religion cannot be superior to another person’s religion, because religions are organized around the idea of faith, which is only applicable to a particular individual. Obviously, in reality religions are just institutions, and so in their fight for survival they employ a variety of devices, competitive differentiation (or, in the extreme, instilling hatred of other religions) being one of them. There is nothing surprising about it unless a religion becomes too powerful (as with any institution, a monopoly can have a very negative effect) or becomes a device in the hands of, say, a government (in which case it’s likely an abuse of power).

Faith is also universal. Everyone puts faith in something, because our observations very quickly lead us to the unknowable. We don’t have to go very far either — while you may believe in the current model of the Universe (it’s expanding and finite, by the way), it’s still a model and no logical person can prove it’s a complete model. Furthermore, as of today no logical person can tell why it’s that model and not any other model. It’s a common fallacy of many intelligent people to assume that a belief in the current model of the Universe (or even in the scientific method itself) has nothing to do with faith — after all, science cannot prove the model is right; it can only prove that it’s wrong.

In a way, then, we could create an equivalence of all systems of faith — they all serve the same purpose, they are incomparable, and they are universal. It doesn’t matter what a person’s faith is. They are all the same.

If people didn’t organize themselves into religions, there would be much less conflict, since discussing one’s faith is harder and looks more like trying to compare apples to oranges. However, just as people organize themselves into nations, they will organize themselves into religions because of strong community-based synergies (mostly good ones — a strong support network, a strong shared moral context making society safer as a whole, institutional memory). I wouldn’t be surprised, however, if in the age of rising individualism, made possible by vast improvements in efficiency (we may soon be able to create our own religions online), more people converted away from their religions and toward more fundamental (and thus personalizable) faith systems.

The Asymmetry of Daylight

There is a time of day when the sky is beginning to get brighter; everything is waking up.  There is also a time of day when the sky is beginning to get dark.  There should be a symmetry in the amount of sunlight, and, assuming we could control for environmental factors (such as the number of cars on the road), those two moments should be indistinguishable.

How come we can almost always distinguish between the two?

Humanities, the enemy of Science

When I was younger, I strongly favored sciences and mathematics over humanities. I didn’t enjoy the seeming arbitrariness of what I was learning in humanities, and the fact that what was rewarded didn’t seem conceptual but factual (in sciences and mathematics, I felt I was taught the concepts and the way to derive facts from them; in humanities, I was supposed to regurgitate the facts I was taught — it seemed like memorization). Moreover, I cringed at the thought of the imprecision of humanities (what do you mean there is no exact answer?); if there was no verifiable, universal answer, how could we agree on anything, let alone be assessed on our knowledge of it? Finally, I could not for the life of me understand why everyone around me seemed to prefer humanities. Did people really prefer memorizing dates and causes of wars to deriving results from relatively few theorems?

As I grew older (and as I learned to take deep looks at my observations), I discovered a certain complexity to the above picture which made it not so obvious anymore. First, I realized that mathematics, sciences and humanities (in that order) are disciplines on a continuum and that continuum has several important characteristics. I already knew that as you move from the former to the latter,

The disciplines become less precise and exact, that is, it becomes harder to make statements which can be validated, verified, and agreed upon

I had also observed a long time ago that

They seem to require more information for the same amount of conclusions drawn (memorizing many causes of wars vs knowing only a few mathematical formulae)

However, what was a relatively new realization (and what gave me a rather powerful aha moment) was that

They are increasing in complexity because of the systems they are trying to describe and whose behavior they are trying to predict

In retrospect, this last characteristic is pretty obvious, but it has powerful implications: humanities tackle much more interesting (and important) problems. They deal a lot with human nature, with what makes us us, with interpersonal relationships, with our feelings and intangible abilities (such as the appreciation of art). In a way, humanities take the world for what it is even if they can’t fully grasp it, as opposed to creating a simplifying model of the world and making exact predictions about it.

Let’s take mathematics, for example. What got me very excited about it was how richly it could talk about a world constructed from just a few assumptions — for example, discussing all the numbers existing in nature (and even those that don’t!) by starting with five simple axioms. It could describe an incredibly complex world of geometry by postulating five things (and other, even more complex worlds by tweaking the fifth one). Yes, mathematics is exact — once proven, statements remain proven — but the domain that mathematics deals with is so narrow that it doesn’t really correspond in any meaningful way to the real world; it can’t even get to the kind of complexity we deal with every day.

Similarly, the ethos of all the sciences is that they propose and test models based on consistent observations. A model is a gross oversimplification of some real-world phenomenon; again, the sciences (in the strict definition of the term) are unable to talk richly about any sufficiently complex phenomenon — in fact, physics (probably the purest of all the sciences) chokes on even the simplest (in terms of the amount of complexity) systems: those involving the interaction of inanimate matter in the universe.

So instead of thinking of the humanities as “weaker” forms of the sciences or mathematics, I started thinking of them as their “more ambitious” forms. True, because the complexity mounts so quickly, the specific disciplines we know as “history” or “economics” are vaguer and less precise than the sciences, but fundamentally, the problem is simply much more difficult. Unsurprisingly, more information is required to make the same level of predictions.

Once I realized that the humanities and the sciences are the same conceptual discipline, one that happens to deal with problems of varied complexity, I also realized that while humanities scholars have the humility to point out the inexactness of their disciplines in the search for answers to complex problems, scientists don’t convey the flip side (that the exactness of their answers comes at the cost of transforming what’s around us into something simpler). In a way, then, the problem with the sciences is that the appearance of precision hides a dangerous approximation. Moreover, by forcing you to frame yourself in terms of models, the sciences tend to be escapist and detach you from your nature; wouldn’t you rather feel the answer, even if you can’t write it down, than write down a precise answer to a much simpler question?

A final strength of the humanities is that they don’t constrain themselves to be brittle. In mathematics, out of billions of statements, if you insist on just one being different, you destroy all of mathematics. In physics, a new discovery may force us to rewrite the textbooks we have used to teach generations (this, in fact, happened about a hundred years ago). In a way, past results in the sciences are not indicative of future performance. But the lessons of history, even if imprecise, are a pretty good beacon for its future.

What is Intelligence

Let me try something dangerous and talk about intelligence without really defining it; there are many different kinds of intelligence, and the arguments here will hold for most definitions I can think of. The necessary requirement is that intelligence is an emergent property of individuals (not necessarily humans; not even necessarily biological life forms, though for now constrained to life forms in general — in the sense of mutating auto-replication and the pursuit of survival) that allows them to adapt to changing conditions on an intra-generational scale (evolution, for example, is a mechanism for adaptation on an inter-generational scale). I believe (though, quite frankly, I haven’t thought hard about it) this is sufficient to go on.

Is intelligence a necessary artifact of evolution? To expand on this: what set of circumstances makes intelligence a much more desirable trait than other traits, and how likely is intelligence to emerge? Evolution deals with randomness — it’s a greedy random walk, favoring changes that increase the species’ chances of survival. What makes intelligence better than, say, a stronger set of legs? I have two theories. First, as life forms evolve and strengthen their physical characteristics, it becomes inefficient to continue physical growth; either it leads to massive energy needs which begin to outweigh the individual’s ability to gather food, or it leads to side effects inherent in the mechanics of a body (stronger legs may lead to worse injuries). Evolution, essentially, runs out of avenues to pursue, and non-physical development becomes the most energy-efficient. Secondly (now I realize the two theories are related), evolution’s greatest limitation is its speed — it must act over generations, and with complex enough organisms the generation cannot be very short. If the natural circumstances favor quick adaptability (for example, a series of ice ages come and go too quickly for any single species to evolve around them), evolution must replace itself with intelligence.

Of course, I may be wrong and intelligence could just be a fluke.

Regardless, if I wanted to have a particular characteristic evolve, I could manufacture a world which favors that characteristic and watch nature come up with it through a process of evolution. In the extreme, if all I had was plants and wanted the species to be able to walk, I would provide incentives for the plants to displace themselves (maybe an ever-moving source of food). Early species would probably simply grow fast, or maybe gain the ability to detach themselves from the soil and reattach, propelled by wind. Ultimately species would develop self-propulsion (I could help them by providing a negative incentive to simply go where the wind takes them). Nature would “cheat” and use water as an interim medium — it’s easier to be able to walk if you are already swimming — and so we can see how ultimately we would have species able to walk.
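The manufactured-world experiment above can be sketched as a toy simulation — a deliberately crude greedy random walk in which a single invented "mobility" trait gets selected for because the food source keeps moving (the population size, mutation scale, and fitness rule are all illustrative stand-ins, not a model of real biology):

```python
import random

def evolve(generations=200, population=50, seed=0):
    """Toy greedy random walk: a scalar 'mobility' trait is selected for
    because individuals closest to an ever-moving food source survive."""
    rng = random.Random(seed)
    traits = [0.0] * population              # everyone starts sessile
    for gen in range(generations):
        food = float(gen)                    # the food source keeps moving away
        # selection: the half of the population closest to the food survives
        survivors = sorted(traits, key=lambda t: abs(food - t))[: population // 2]
        # replication with small random mutations -- evolution's only tool
        traits = survivors + [t + rng.gauss(0.0, 1.0) for t in survivors]
    return sum(traits) / len(traits)         # mean mobility after selection
```

Run under these assumptions, the mean trait should climb well above zero as the food recedes; no individual "decides" to move, yet the population as a whole tracks the incentive — the blind, greedy mechanism described above.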

Similarly, what would I have to do to favor intelligence?

I did make an assumption that the life form evolves, that is, that life replicates itself (a “species” composed of a single individual that doesn’t die cannot evolve) with mutations between successive generations. In order for evolution (that is, long-term progression) to take place, there must be survival of the fittest, and with it, the favoring of life over non-life by individuals. That second assumption is interesting because I’m not quite sure how it came about and why it holds true across species. With intelligent species such as humans, you could make the argument that the will to live is an outcome of consciousness — a constantly running narrative of our life, created thanks to the development of memory and the ability to make connections (non-intelligent species have memory, but they can’t connect it into a narrative) — but for all other species, it’s not so black-and-white.

It’s, obviously, just as fascinating to talk about why intelligence exists as how it exists — I think that we tend to focus too much on the how and not enough on the why (and the theories above are just a small step toward the why). But, on the how, we can learn a lot just by drawing a parallel between intelligence and other, non-mental features of evolution. Intelligence requires an environment — and with it sensory inputs and a feedback loop with that environment. Intelligence is a wonderful example of a (relatively — all purists calm down) binary characteristic that nevertheless came about gradually from non-intelligence (just as flight came from non-flight; the outcome is clearly distinct, but it’s not immediately clear how non-flight evolved into flight).

England's success as a colonial power

There are many negative and tragic effects of England's centuries-long colonial drive.  Below I want to focus on one impressive aspect of colonial Britain.

When I think of management, and of examples of the highest possible level of management, the case of the Commonwealth comes to mind.  Here was a very small (in terms of both area and population) country running the world's largest organization.  The kinds of problems England encountered when managing its colonies were unprecedented and massive.  How do you manage an entire people remotely?  What incentives do you use to discourage the people in the colonies from rebelling?

England was successful as a colonial power because it understood man's universal driving forces – money and power.  It built an impressive trade system which likely boosted the colonies economically; I'm not a historian, but it seems to me that England had a near-monopoly on international trade.  England leveraged the political systems of its colonies rather than changing them – the highest positions were British, but below that all was open to the natives.  This provided a kind of power continuum – given enough levels of hierarchy, there was always a more powerful position to aspire to, and with the overwhelming majority of the pyramid belonging to the natives, nobody wondered who actually ran the country.

England during the imperial period is an excellent example of the fact that it's not size, or population, that matters, but leverage.  With its unbeatable navy and a relatively efficient (and definitely ahead-of-its-time) organization, England had control over vast lands and, thus, natural resources.

Schools of the Future

What will the school of the future look like? What should it look like?

Rilke, in one of his letters, wrote:

Each person ought to be guided only to the point where he becomes capable of thinking by himself, working by himself, learning by himself [...]
Schools ought to think about all in terms of individuals, not in terms of grades

and I think that in the future, the educational system will fully embrace that philosophy.

What would it look like, exactly? Well, first, schools need to be personalized. Through teaching various people in wildly different circumstances (teaching math to a seven-year-old; teaching college students; on-boarding new hires), I realized that different people learn differently and that the element of feedback is crucial to continued progress; in other words, if the specific lesson someone took (as opposed to was given – which implies passivity) from a session isn’t reflected on by the teacher and built upon, the teaching is not going to succeed.

Fortunately, technology can make this possible, giving the teacher leverage he or she could never have dreamt of. However, historically the teaching industry has been going through much slower release cycles and so it will probably take a long time before any change is apparent.

This is one part of the “thinking in terms of individuals, not grades” philosophy — instead of standardizing on the outcome, think about how the teaching is internalized by each student. Another part has to do with the goal of education itself — that goal is to increase the intellectual capacity of an individual, not to produce a society that achieves high grades. The latter is just a construct of the educational system and, like any system that overly relies on its metrics, it runs a significant risk of the educators losing sight of the goal (not to mention the reality of any standardized system being game-able by those who have spent a lot of time in it). So in addition to focusing on the individual, we will have to come up with more meaningful success metrics, probably more qualitative ones (since the tuition will be so individualized, we will no longer be able to come up with a single number to describe an entire population).

“Guiding each person up to a point” is just as important. I’ve always thought that the purpose of school is to teach you to think, not to teach you anything specific–the specific may be a side effect, a necessary outcome of a particular educational design (and may be required no matter the design, although we don’t know that), but it should definitely not be a goal unto itself. This also means specific knowledge should not be used as a determinant of how well someone has been taught.

In a way, nobody should ever “fail” an education — the point of education should be to determine someone’s potential and enable them to achieve it by themselves. Of course, an individual may choose not to fulfill that potential, but that is not a failing of the educational system (or, at least, not a primary failing of it); it’s probably a failing of the value system instilled by the parents and the society. Today the educational system also plays a role in providing these values; it would be interesting to decouple the two in order to focus better on the thing an individual has a problem with — and possibly use different techniques for either.

My friend had a particular design for a school of the future: it should teach the concept of a concept, and hopefully at some point the students will understand that this meta-ness is a fundamental block of reasoning and intelligence. I think while it’s an elegant design, it’s impractical — the students need to be bootstrapped first, before they can understand what meta-ness is. Focusing on the concept of a concept for its own sake will probably not lead to a good internalization.

I think back to my education. How much time did it take me to get to the point where I could think for myself? Specifically, when did I internalize that things are related in hierarchies, and that there are different kinds of relationships between the objects in hierarchies, for example an “instance-of” relationship. It took a while, and by the time I understood these concepts viscerally, I could say that the education satisfied an important objective. But the way I got there was certainly complicated and had many diverse and uncorrelated paths — trial and error, learning by example, learning by rote, learning by thinking (surprisingly not much of it!).

We will not get to the school of the future overnight. It probably needs a revolution just like many other industries did. But there is little economic incentive for this to happen — schools are monopolies and profit is not usually correlated with efficiency (which reminds me of the DMV). Teaching by definition takes longer. Unlike e.g. the financial sector, it’s very difficult to come up with good metrics for success. And the barrier for entry is huge (can a startup really revolutionize an educational system?).

There have been instances in the past of people overcoming similar obstacles. So I am hopeful. And while I wait, I may come up with my own syllabus.

A Life's Work

Today's generation is used to instant gratification.

I want to earn lots of money now.
I want to be an expert now.

The idea of a 15-year career sounds crazy to us -- we have no idea what we will be doing in 15 years, but it will almost certainly not be the thing we're doing now.  Because of that, we're limiting ourselves and our potential to the kinds of work that reward bursty, energetic types who don't stick to their guns.

In doing so, we are missing a lot.  The most impressive careers are not those that are made overnight, but those that have substance, purpose and meaning.  Have you seen someone's life work?  A work of someone who has been working on the same thing for 15 years?  How about forty years?  How impressive is it?

Branding

Words have a lifecycle; they are born and they die.  They die for many reasons: some die naturally (they go out of fashion).  Others die a more untimely death (for example, they suddenly acquire a negative connotation).  There are also words that come to be perceived as politically incorrect – they are on life support because there is still a large group of people who use them, if just to spite everyone else.

This is a good thing.  Language is a strange combination of social conventions and explicit structure, and it's dynamic to reflect the changing needs and thoughts of the society.  A language that wouldn't be allowed to change would be dead.  I'm glad unfriend is now a word.

It's interesting to note that the half-life of phrases seems to have decreased over the past fifty years or so – we get tired of words more quickly, perhaps because as consumers, we're not satiated and are looking for the next big thing faster and faster.  We react to words, and a new word can make us switch from one product to a different one.  Inexpensive becomes economical and we suddenly stop thinking of ourselves as cheapos and instead pat ourselves on the back for not wasting money.

Branding is also one way to keep up with class inflation.  We don't want to be gardeners, stewardesses, or waitresses.  We want to be landscapists, flight attendants, and servers.  A different word – borrowed from what used to be a slightly different role – changes our perception enough to trick the very minds that had decided it was somehow less glamorous than it once was to have such a profession.

However, this can also be used as a political tool, to cover up issues or make them seem less like issues.  What does vertically challenged mean?  Do we use it because it makes us appear to do something about the issue?  Or are we ashamed of the fact that the plain characteristic has found its way into popular culture as a signifier, just as retarded has?

 

One of my Favorite Proofs

Proof that \(\pi\) is irrational.

Assume \(\pi\) is rational, that is, assume it is of the form \(\frac{a}{b}\) where \(a\) and \(b\) are both positive integers. Let

\[\begin{align} f(x) &= \frac{x^n (a-bx)^n}{n!} \\ F(x) &= f(x) + \cdots + (-1)^j f^{[2j]}(x) + \cdots + (-1)^n f^{[2n]}(x) \end{align}\]

where \(f^{[k]}\) denotes the \(k\)-th derivative of \(f\).

  1. \(n! \, f(x)\) has integer coefficients
  2. \(f(x) = f(\pi - x)\) (using \(\pi = \frac{a}{b}\))
  3. \(0 \leq f(x) \leq \frac{\pi^n a^n}{n!}\) for \(0 \leq x \leq \pi\)
  4. For \(0 \leq j < n\), the \(j\)-th derivative of \(f\) equals 0 at 0 and \(\pi\)
  5. For \(j \geq n\), the \(j\)-th derivative of \(f\) is an integer at 0 and \(\pi\) (from 1. above)
  6. \(F(0)\) and \(F(\pi)\) are integers (from 4., 5. above)
  7. \(F(x) + F''(x) = f(x)\) (derivatives of \(f\) beyond order \(2n\) vanish)
  8. \((F'(x) \sin x - F(x) \cos x)' = f(x) \sin x\) (from 7. above)
  9. \(\int_0^\pi f(x) \sin x \, dx = F(\pi) + F(0)\) is a positive integer (from 6., 8. above; positive since \(f(x) \sin x > 0\) on \((0, \pi)\))
  10. For large \(n\), this integral is strictly between 0 and 1 (from 3. above)

Contradiction. So \(\pi\) is irrational.
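The punchline (steps 9 and 10) hinges on the factorial in step 3's bound eventually dominating. A quick numeric check — using a = 22 as a purely hypothetical numerator, as if π were 22/7 — finds where the bound π · (πa)^n / n! on the step-9 integral drops below 1 (computed in log space to avoid float overflow):

```python
import math

# Hypothetical value: pretend pi = a/b with a = 22; only the numerator a
# appears in the bound from step 3 of the proof.
a = 22

def log_integral_bound(n):
    """log of pi * (pi*a)^n / n!, an upper bound on the step-9 integral
    (interval length pi times the maximum of f(x) on [0, pi])."""
    return math.log(math.pi) + n * math.log(math.pi * a) - math.lgamma(n + 1)

# n! eventually dominates (pi*a)^n, so the bound falls below 1 (log < 0) --
# yet step 9 says the integral is a positive integer. Contradiction.
first_small_n = next(n for n in range(1, 2000) if log_integral_bound(n) < 0)
```

The bound first grows (while πa exceeds n) and only then collapses, which is why the proof needs "for large n" in step 10.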

Animated Shows

When I was growing up, we had two TV channels; programming started at 6pm and ended at 11pm. At 7pm, all the kids would sit in front of their TVs and watch a great animated show. Sundays were the best — animated shows imported from abroad (mostly from the U.S.), with voiceover and everything!

 

The Moomins


It was never really popular here, but I have some very warm memories of watching the Moomins. The show is serene, soft, and a little creepy.

 

Rescue Rangers


My favorite TV show tune of all time. And, quite frankly, I kind of wanted to be Chip (that was before Teenage Mutant Ninja Turtles, I guess).

 

Gummi Bears


I liked this show primarily because there was a whole colony of “Other” Gummi Bears across the sea that the Gummi Bears desperately tried to find. No matter how close they got, they could never actually find the others. I liked the mystery of it.

Come to think of it, I guess I also liked the fact that those seemingly quite normal bears acquired superpowers after they drank the mystery potion.

 

The Smurfs


I used to like the Smurfs because they were a rather original idea; I’m not sure anybody expected blue creatures in white hats to be so successful. However, the show was a little too predictable; the characters were too stereotypical (one is always clumsy, another is always depressed, etc.). I abandoned the Smurfs in favor of Gummi Bears.

 

Duck Tales


Another great show tune.

Disney did a great job casting their shows. First, the choice of animal was just brilliant: ducks are toony, their faces are expressive (and their beaks are funny), and the great sounds they make when under stress come for free. But the decision to base the show on three brothers and their uncle, a stingy millionaire modeled on Scrooge, is also original and clever: the fact that it’s their uncle and not their dad gives them more freedom to go on their crazy adventures (which are also made possible by their uncle’s wealth) and to make fun of him occasionally (and to point out why it’s bad to be greedy).

 

Darkwing Duck


I haven’t really watched it much, but I really liked the imagery — the protagonist, a duck in a cape, dressed in purple. There was a video game based on the show that I would play religiously.

 

Cancer and Evolution

We all learned about how evolution led to homo sapiens and how, through being more competitive, humans displaced other, similar but inferior, species.  I often wondered how natural selection would manifest itself in the future -- who will replace us humans?  Is evolution per se dead now that we are intelligent and can influence our adaptability through our actions?  Or will it take some other, more nuanced, form?  I think the answer is closely related to cancer.

People think of cancer as a kind of defect that affects healthy cells.  But from a microscopic point of view, where the survival of cells rather than large organisms is relevant, there is an alternative explanation.  At that level, cancer is a higher stage of evolution of cells, since usually those cells have a higher chance of survival (they don't get recycled by the body, they take over other cells, they are much harder to eliminate than healthy cells).  The existence of cancer is an outcome of natural selection of cells – evolutionarily, each cell undergoes mutations and those cells that happen to have superior features survive in a harsh natural environment.  Cancer cells are regular cells that have mutated and gained features that allowed them to push past their harsh natural environment, i.e. a boundary defined by the human body.  In a way, a human being dies of cancer when the cancer cells are too powerful to be confined in the body, that is, when cancer succeeds.  So on a microscopic scale, we see a kind of natural selection of cancer cells over normal cells.  The fact that they end up killing the body is a tragic side effect of the cancer being so successful.

Now let's look at this at a macro level.  We live in an age where our bodies are bombarded by perturbations of many sorts -- electromagnetic waves permeating us, radioactivity, and toxins (such as those in highly processed food) that we -- willingly or unwillingly – absorb.  These perturbations increase the frequency with which our cells mutate, and so, unsurprisingly, the incidence of cancer is much higher now than it ever was in the past.  These perturbations, of course, are a natural consequence of progress – a consequence of our intelligence.  Intelligence, therefore, is a double-edged sword: on one hand it's our greatest weapon, allowing us to reign over all other species; on the other hand, it's our weak spot, crippling us by allowing parts of us to be naturally selected against the body -- and since evolution is a fundamental law of the Universe (it's nothing more, really, than a statistical phenomenon), it's likely we will not be able to fight it.

It's possible we have reached the end of the evolutionary path for species: once a species reaches the Age of Intelligence, it dooms itself.  This instability brought about by the decay may ultimately cause us to lose our position as the Earth's supreme beings; smaller, less intelligent creatures, less susceptible to toxins and disease, would prevail.  From our point of view this would seem like regression, but evolution doesn't have a design in mind, of course.

It's also possible that evolution is circular – the species that survives us will not be an organism but a class of rapidly mutating cells.  Those cells will not need hosts, as they will be so powerful no host will be able to harness them.  Such rapidly mutating cells could start the path anew, giving the lineage of species an unheard-of momentum that encourages great diversity.  And who is to say the same hasn't happened before?

It's also possible that, being intelligent creatures as we are, we'll preempt our dark fate and change our lifestyles.  A culture of "natural living" may become popular, a minimalistic (though not primitive) living aiming to perturb Nature the least.  A culture of men realizing the fragility of life and striving to limit the rate at which they increase entropy in the Universe.

Corruption and capitalism

There is no difference between very efficient corruption and capitalism.

Imagine going to any of a number of shows during a music festival.  There are seats but since it's a festival, seating is sequential and kept in order by the festival staff.  Initially the staff seat the customers on a first-come, first-served basis but pretty quickly into the festival, people start offering bribes to the staff in exchange for a better seat.  The staff has full discretion over seating so they willingly take the bribes.

As more and more people catch on, the staff begin reserving seats in anticipation of future latecomers who may offer a higher bribe.  A secondary market forms where people come to the concert, offer bribes for multiple seats, and resell them later.  These people now have specialized jobs, which allow them to find customers better (and, since they are not staff members, they can openly offer good seats for money).  This also pleases the staff members, because they get bulk pricing and spend less time dealing with money.

If this is an efficient market, there is competition between secondary market makers, and the best ones minimize the risk of having a seat unfilled, so all seats are filled, but each seat now has a price tag attached to it.  A system where staff members were given the power to seat the customers and they succumb to corruption naturally turns into a fully capitalistic system.
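The equivalence can be caricatured with a toy allocation model (the numbers and the willingness-to-pay values are invented): once each seat goes to the highest remaining bribe, the outcome is exactly the assortative matching a price-clearing market would produce, and it yields at least as much total surplus as first-come, first-served:

```python
import random

# Invented toy data: customers arrive in random order, each with a private
# willingness to pay; seats have qualities, best seat listed first.
rng = random.Random(42)
arrivals = [round(rng.uniform(10, 100), 2) for _ in range(8)]   # arrival order
seats = sorted((round(rng.uniform(1, 10), 2) for _ in range(8)), reverse=True)

def total_value(occupants):
    """Total surplus when occupants fill seats best-first: quality * willingness."""
    return sum(q * w for q, w in zip(seats, occupants))

# Honest staff: first come, first served -- arrival order fills the best seats.
honest = list(arrivals)
# Corrupt staff: each seat goes to the highest remaining bribe, which is the
# same allocation as selling seats at whatever price the market will bear.
corrupt = sorted(arrivals, reverse=True)
```

By the rearrangement inequality, matching the highest bribes to the best seats maximizes total surplus, which is exactly the efficient-market outcome; the bribery only differs in who pockets the price.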

The Dramatic... Depressing of the Button

Pop culture taught us the drama of a button press.  Be it a doorbell rung by a stranger on a rainy night, or a nuclear weapon launched by an Army commander, the culminating moment happens when the button is pressed.

Which is why it was somewhat shocking for me to realize that with buttons on a computer screen, what triggers an action is not the moment the button is pressed, but the moment it is released.  Once you press the button, you can still change your mind -- simply move the cursor away.  This creates a very different kind of drama -- in a way a slightly diminished one, as we have one more chance to rethink what we're doing, but also a more suspenseful one, as the thing that now separates us from the action is a natural state -- the release of the button, the lifting of a finger from the mouse.

 

Fake News: Mixing Sodas

We've received reports of middle-school kids all over the U.S. getting high in a new, legal way. Apparently some of the more than fifty Coca Cola and Pepsi flavors, when mixed together and heated up, synthesize a powerful chemical similar in structure to THC, the main active ingredient in marijuana. Which precise soda flavors need to be mixed, in what proportions, and the details of the heating process are unclear; the "recipe," as the youths impudently call it, has managed to be kept secret among the fourteen-year-olds.

Our reporters scoured the Web looking for further explanation; however, seeing as there are about thirty-five hundred Facebook groups, each of which claims to be named after the recipe, it is unclear whether the information will see the light of day.

Spokespersons at Coca Cola Co. and Pepsi Co. refused to comment on this speculation.

In Fairfield County, Connecticut, local government officials, prompted by pressure from the wives of several affluent residents, said that, pending the verification of the reports, they would begin drafting legislation aimed at limiting sales of certain combinations of flavors.  More drastic measures include the introduction of regulations that prevent young people below the age of 21 from purchasing sodas, or a ban on certain flavors altogether.