Hey guys,

I decided to start writing more about business and marketing in addition to my more philosophical ramblings, so I moved everything over to a new domain – TaylorPearson.me.

I just published a post on Table Selection and why ideas might not be quite as useful as I thought.

Head over there and subscribe if you’d like to keep getting notified when I publish new stuff.

As always, I’d love to hear your thoughts.


The Black Box Theory

A small garden in Tokyo that I stumbled on during a long walk.

I’m always very hesitant to believe in causative theories for why things happen in complex systems. It’s become so clear to me how good I am at rationalizing and creating patterns where none exist.

I was listening to an episode of Planet Money and they gave the example of financial news. If the market is up, all the analysts go find very convincing reasons why. If the market is down they’re able to find equally convincing reasons why.

And yet, over a long period of time, no one consistently beats the market, which would be the true sign of someone actually understanding the system.

Instead of looking for causative factors then, I find myself subscribing more and more to the black box theory.

In complex systems, you can’t actually understand what’s going on in the inside, you can only control the inputs and observe the corresponding outputs.

There are a lot of traditional eastern medicine practices like acupuncture that no one really understands the mechanism for, but it’s quite clear that they do help people achieve their desired outcome.

There are also some more modern Western “practices” – like Modafinil – that fall into the same category. No one understands the mechanism behind Modafinil, but if you’ve taken one, there’s no doubt about what outcome it produces.

Justin Hayes of Superhuman Pursuits recently told me the story of how he tore up his knee in his early 20s and went on a quest to figure out and learn everything he could about fitness and mobility.

He learned from all the leaders in the field and then stepped back and asked, “What do these guys have in common?”

Even though they seemingly disagree with each other on lots of points, all of their systems work to achieve the desired outcome. So he identified the inputs that all of them shared. By focusing on those inputs, he solved his knee pain.

I think that’s the best method for achieving any desired outcome in a system where the cause and effect aren’t obvious.

I read a lot of histories and biographies and I’m always looking for what the common inputs are among people I look up to.

And then I’ll try some things out and see if they work for me. After I listened to Einstein’s biography, I started going on long walks. Now I go on long walks whenever I want to clear my head.

I’m listening to The Rise and Fall of the Third Reich now and it’s striking how deliberate and calculated Hitler was in comparison to the bumbling Allied leaders in the pre-war period.

I realized that it’s hard to imagine any great leader rushing around and acting emotionally. They’re all very deliberate and systematic.

I’ve always liked the story of Lincoln writing a letter to Meade after Meade had defeated Lee at the Battle of Gettysburg. Meade had refused Lincoln’s pleas to pursue Lee after the battle, a move that many believe would have ended the American Civil War.

I can’t imagine the rage that most people would have felt in Lincoln’s position. Yet the tone of the letter is entirely civil (in fact, Lincoln never sent it). Lincoln describes himself not as infuriated, but as “distressed immeasurably.” He closes the letter by saying:

I beg you will not consider this a prosecution, or persecution of yourself. As you had learned that I was dissatisfied, I have thought it best to kindly tell you why.

So I’ve been consciously trying to move slower and more deliberately lately.

To a large extent, though, I think what goes into your black box is just a product of your environment and not something you consciously control.

That’s why everyone likes to say that you are the average of the five people you spend the most time with, or that we are all products of our environment. By changing who you hang out with, you change the system inputs in ways you don’t understand, but that definitively, and in my experience profoundly, affect the outcomes.


The Exponential Increase in the Value of Good Ideas

I’ve been stuck on this notion lately that the value of ideas is increasing faster than the value of execution. I’m reading The Singularity is Near by Ray Kurzweil, and throughout the book one key concept keeps coming up: the relative value of ideas to execution is tilting more and more toward ideas.

In a chapter on the implications of nanotechnology, Kurzweil forecasts that…

…the value of everything in the world, including physical objects, would be based essentially on information.

Information is the result of new ideas. However, the value of ideas, their ability to produce wealth, is always limited by our ability to execute on them. So while you might have 10 great business ideas, they’re only valuable if you’re able to actually execute on them. Of course, personal capacity is limited. You only have so many resources – time, energy, money, etc.

Historically, or at least since the rise of capitalism, the way to increase the value of your ideas was through hiring and delegation. You could put together a group of people who would execute on your ideas for you. As Bertrand Russell said:

Work is of two kinds: first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first is unpleasant and ill paid; the second is pleasant and highly paid.

However, as technology improves, our ability to both distribute and execute on ideas is increasing.

Improved Distribution

Increasing transparency as a result of technology increases the power of good ideas while reducing the power of bad ideas. Blogging, podcasting and social media are all examples of the increasing ability to distribute ideas in a highly transparent way.

I started listening to Robb Wolf’s Paleo Solution Podcast about a year and a half ago. By sitting down, flipping on a microphone, and talking about his ideas, Robb has created value in the lives of hundreds of thousands of people.

He’s done it by efficiently distributing his ideas. And it’s been proven truly valuable because of the transparency of modern-day distribution channels like social media and blogging (testimonials, transformation pictures, etc.).

Value is only going to be more and more closely related to the creation of actual wealth as transparency increases.

Increasing Transparency

Transparency is important here because the ideas actually have to be useful. If Robb were a fly-by-night guy spouting stuff that wasn’t true, it would be easier than ever for people to figure it out.

I think of the example of the itinerant “snake oil” salesman in the American West. Because of the distance between cities and lack of technology to effectively disseminate information, he could make a living moving from town to town deceiving each individual town’s members.

The snake oil salesman today is one viral tweet away from being out of business, though. This isn’t totally true of course – there are still lots of industries that are more complex and opaque – but that’s the direction things are moving.

Increasing Execution and Leverage from Technological Automation

It’s increasingly feasible for a very small group of people with access to technology to execute on large ideas and create tremendous amounts of wealth. In 2010, Google, Amazon, and Craigslist were each able to generate more than $1 million in revenue per employee.

In a purely industrial or knowledge-based economy, those numbers are impossible. It’s only in an idea-driven, entrepreneurial economy (powered by the exponential growth in technology and computing) that those kinds of numbers become a possibility.

Marketing automation software is a clear example of our increasing ability to execute ideas.

I talked to someone based out of Asia recently who had used marketing automation software. He was able to set up a system that, while he was asleep, could capture contact information and follow up via email, SMS, and phone with his customer base in the U.S.

One clever dude and his laptop can be more effective today than entire marketing departments were 20 years ago.

The Rate of Change in the Value of Ideas is Nonlinear

The increasing value of ideas is being driven by improvements in distribution, transparency, and automation created by computing technology – which, following the Moore’s Law trend, has been doubling in computing power and halving in price per performance roughly every 16 months since the integrated circuits of the 1960s.

That means that 16 months from now, all your ideas are TWICE as valuable. Twice! In 32 months, they’re four times as valuable. In 48 months, eight times.
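Taking the assumed 16-month doubling period at face value, the multiplier is just 2 raised to the number of elapsed doubling periods. A quick sketch (the function name is mine, purely for illustration):

```python
# Toy arithmetic for the doubling claim above: value doubles every
# 16 months (the doubling period assumed in this post).
def value_multiplier(months: float, doubling_period: float = 16.0) -> float:
    """How many times more valuable an idea is after `months` months."""
    return 2 ** (months / doubling_period)

print(value_multiplier(16))  # 2.0 (one doubling)
print(value_multiplier(32))  # 4.0 (two doublings)
print(value_multiplier(48))  # 8.0 (three doublings)
```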

This is difficult for us to understand as humans. Our brains didn’t evolve to understand exponential growth.

Ray Kurzweil tells a story of talking with the leading geneticists in the world in the early 1990s and advocating for the Human Genome Project. They all said that it was a waste of time – that decoding the human genome would take 1,000 years.

The smartest, best-educated people in the world, forecasting about their own field, predicted it would take 1,000 years to complete the Human Genome Project.

It took 13. A working draft was announced in 2000, and the complete sequence was published in 2003.

What they didn’t understand was how fast technology would improve.

Even if Moore’s Law and this rate of growth can’t continue indefinitely – and I certainly don’t understand the science well enough to say whether they will or not – I think it’s almost certain that computing technology will continue to improve quickly, at least by historical standards of change.

Because computing power is increasing at an exponential rate, I believe the ability to execute on ideas (and thus their value) is increasing at the same rate.


Impact of the Rise of the Developing World

Technology’s increasing ability to amplify the value of ideas is compounded by the rise of the developing world, which devalues knowledge-worker-level execution even further than technology alone. Hiring is still a huge leverage point, and as the value of human capital in countries outside the West continues to increase, it’s increasingly possible and affordable to hire very smart people who can execute for you.

As we transition from a knowledge-based economy to an entrepreneurial economy, and as developing countries in Eastern Europe, Asia, South America, and Africa continue to build human capital, there’s not going to be any reason to deal with an accountant or project manager in the West when you can get one in Vietnam for 20-40% of the cost.

Geographical barriers to doing business are falling daily due to improvements in computing technology. A few weeks ago, I sat in my parents’ house in Mississippi and had meetings with a developer/designer in the Philippines, a PPC consultant in Vietnam, a CEO in Prague, a product marketer in Boston, and a general manager in San Diego.

So the value of ideas is accelerating at an exponential rate, facilitated by the transparency, distribution, and automation capabilities of computing technology. Furthermore, the rise of human capital in the developing world is making white-collar, knowledge-level work less and less valuable.

The Caveat – A Minimum Viable Level of Execution

However, there’s still a clear balance that has to be struck between ideas and execution if our aim is to maximize our ability to produce value (which is getting easier and easier to convert into wealth).

Growing Your Platform

The reason technology is enabling us to create more value is that it allows us to enlarge our platform and create more leverage for ourselves for far less money than ever before. However, it still takes some money.

Even though it’s less expensive than ever, you still need to be able to buy the technology and hire the knowledge workers. That means that the smaller your platform is, the more value you’re able to create by focusing on execution over ideas.

A rather smart friend expressed it to me as the dreamer vs. the foot soldier.

The dreamer is always talking about the next amazing thing that he’s planning on doing. Every time that you run into him, he has a new project or idea. While it’s likely that a lot of his ideas are good and would be successful if he followed through on them, they aren’t ever executed on and so they never generate any value.

The foot soldier doesn’t have nearly as many good ideas, but in many cases his platform will initially grow more quickly. He’ll have one pretty good idea, execute it ruthlessly, and substantially increase the size of his platform. As one of history’s ultimate practitioners of execution said:

“A good plan, violently executed now, is better than a perfect plan next week.” – George S. Patton

This doesn’t mean that there’s anything wrong with having a lot of good ideas even when your platform isn’t that large (in fact, I’m saying it’s a good thing in the long run), but you’re going to grow your platform more rapidly early on by focusing on one or two ideas and executing than by trying to do too many things at once.

Derek Sivers’ post on ideas as a multiplier of execution illustrates this well:

AWFUL IDEA = -1
WEAK IDEA = 1
SO-SO IDEA = 5
GOOD IDEA = 10
GREAT IDEA = 15
BRILLIANT IDEA = 20

NO EXECUTION = $1
WEAK EXECUTION = $1,000
SO-SO EXECUTION = $10,000
GOOD EXECUTION = $100,000
GREAT EXECUTION = $1,000,000
BRILLIANT EXECUTION = $10,000,000

To make a business, you need to multiply the two.

The most brilliant idea, with no execution, is worth $20.
The most brilliant idea takes great execution to be worth $20,000,000.
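Sivers’ model is literally a single multiplication; a minimal sketch using the scores and dollar values from his post:

```python
# Sivers' model: an idea is just a multiplier of execution.
# Idea scores run from -1 (awful) to 20 (brilliant); execution values
# run from $1 (none) to $10,000,000 (brilliant).
def business_value(idea_multiplier: int, execution_dollars: int) -> int:
    return idea_multiplier * execution_dollars

print(business_value(20, 1))          # brilliant idea, no execution: 20
print(business_value(20, 1_000_000))  # brilliant idea, great execution: 20000000
```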

I think that as your platform expands and you gain more leverage, there is a lot more (exponentially more) to be gained by shifting farther toward the idea side of the spectrum than the execution one. Because of the rate of technological development, it’s possible to grow your platform and your ability to execute faster than ever.

You want to adopt the execution, foot-soldier mindset to the degree that it’s necessary to make consistent, meaningful progress in growing your platform, while recognizing that in the long run, it’s ideas that will become more and more valuable.

I think establishing some basic Key Performance Indicators is the best way to monitor this. If you have a clear vision of where you’re going, you can work backwards to the point you’re at now and set short-term goals that define the level of execution you need to be at to get there.

Those KPIs are measures of your short-term execution and efficiency. As long as you’re making consistent progress on those, you’re increasing the size of your platform and increasing the value of your ideas and future ideas.

Becoming a Person that Has Good Ideas

If you are executing effectively enough to be making consistent, meaningful progress in increasing the size of your platform and you accept the premise that we’re moving towards a world where ideas are increasingly more valuable then the next question becomes how to become the type of person that has good ideas.

I think the reality is that most people don’t have good ideas (which is not to say they couldn’t, just that they don’t currently). Even among people who do have good ideas, most of their ideas are probably crappy. I’m not sure if I have a lot of good ideas, but I certainly have more good ideas than I used to.

There are a few different reasons for that, I think.

Doing Less Stuff – Via Negativa

By doing fewer things, I think it becomes easier to be successful at those things. For one, it means you have to make far fewer decisions, and the decisions you do make involve far fewer choices.

I know I’m not as productive when I travel because a lot of mental bandwidth gets consumed just trying to figure out basic stuff. Where do I eat? Where’s a cafe with wifi? How does the subway work?

Neil Strauss talks about his writing process in an interview with Tim Ferriss. When he’s writing a book, he’s extremely methodical. He eats the same meals from 5 different restaurants, writes at the same time everyday, and only meets up with friends on Wednesday afternoons. He’s allocating all his mental power to expressing his ideas in his books.

Improved Inputs

Nothing of me is original. I am the combined effort of everybody I’ve ever known.

Chuck Palahniuk

Human beings are miraculously complex systems. One of the principles of the popular workout program CrossFit when it was created was the concept of the black box. You alter the inputs to create the outputs you want, even though you don’t really know what’s happening inside. It’s a black box.

I don’t really understand what’s happening inside, but it seems like as I’ve improved the quality of the inputs in my life – people, books, blogs, my environment in general – the quality of outputs goes up correspondingly.

One of the heuristics I’ve been using is the Lindy Effect – the idea that the longer a nonperishable thing (an idea, a book, a technology) has been around, the longer it’s likely to survive. In the case of ideas, this means that ideas that have been around a long time are more likely to offer some value than newer ideas. Basically, I read more old books.

Eliminating the “Grey Zone”

Stop Watching Fucking Lost – Gary V.

I’m not sure where the concept of the grey zone originated, but the basic principle is that most people spend the majority of their time neither working nor resting, but in a “grey zone” between the two.

In my experience, good ideas typically come to me while I’m either in the middle of trying to solve a difficult problem, doing deep work, or completely relaxing. If only someone would create a product for taking notes in the shower…

Up-leveled Mindset

I’ve found that one of the best ways to come up with more valuable ideas is to widen the scope of what I’m dealing with. This is the cat furniture problem in a way.

We often falsely constrict ourselves to think within a limited framework when we’re actually capable of thinking much bigger. This shows up in the Law of the Instrument – if all you have is a hammer, everything looks like a nail.

I am, at best, moderately more capable and skilled than I was 6 months ago, yet I’m working on things that seemed substantially out of my reach 6 months ago. It’s not that I’ve become more capable, it’s merely that I believe those things to now be real possibilities.

So while you can certainly start thinking bigger, it seems that mindset changes at a largely linear rate while technology is increasing at an exponential rate which means we have to learn to think bigger even faster.


Compound Interest

I was out at dinner with a couple of friends this week at a Western style restaurant in Saigon. It’s one of those places that has an almost obnoxiously large menu.

We started talking about why some people take a long time to order and go over all the choices and why some people just pick something and roll with it.

Someone made the argument that it doesn’t really matter what you order because you’re going to order so many times over the course of your life that a single incidence doesn’t really matter.

That’s true. Over the course of your life, the consequence of a single meal, good or bad, is effectively zero (I’m discounting black swan events – like contracting food poisoning or salmonella).

But the counterargument is that what you order that one time sets a pattern. In a way, it determines the type of person you are.

I started thinking about this in the context of business and goals.

In the long run, having an off day where you don’t work towards your goals isn’t that costly because you’ve got lots of days.

But if that day begins to form habits, then the consequences become a lot more profound. It’s no longer a single incident, it’s one of many repeated incidents of the same class.

I recently read this Tucker Max article where he said that the best predictor of future success is past success. I think this is true of pretty much everything. The best predictor of what you’re going to do tomorrow is probably what you did yesterday.

In this sense, what you do every single day is both important and urgent, because it’s charting the course for the rest of your life.

There was this Lao Tzu quote that one of my high school teachers had on her wall that I think of frequently:

“Watch your thoughts; they become words. Watch your words; they become actions. Watch your actions; they become habit. Watch your habits; they become character. Watch your character; it becomes your destiny.”

I’ve recently begun to think more and more about the inputs in my life.

At least for me, my thoughts seem to be overwhelmingly dictated by my environment. So I’ve gone more and more out of my way to create an environment that generates the kind of thoughts that reflect the person I want to become.

Plenty of people far wiser than I am have obviously realized this.

The quote, “You are the sum of the 5 people you spend the most time around” springs to mind.

The dangerous thing, at least for me, is becoming neurotic about every single little thing. I think that’s probably a waste of time and energy.

But, I think it is important to see the broad trends in small decisions that we make. Over time, the compound interest on those becomes enormous.

If 90% of days you wake up and think and do the same things, then those thoughts and actions will become your destiny.

It can be tough though when you’re in the middle of it to keep pushing. Because the very small actions take a long time to compound into something meaningful and visible.

At least for me, the best way to get a feel for this is by looking back on one-year time frames. It’s long enough that you can see real progress from the compound interest of all those little actions, but short enough to show that you’re actively moving in the right direction.
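The compounding here is easy to make concrete. As a toy illustration (the 1%-per-day rate is an assumption I picked for round numbers, not a real measurement), a tiny daily change compounds dramatically over a one-year review window:

```python
# Toy compound-interest illustration for daily habits.
def compound(daily_rate: float, days: int = 365) -> float:
    """Overall multiplier after `days` of compounding at `daily_rate`."""
    return (1 + daily_rate) ** days

print(round(compound(0.01), 1))   # 1% better every day: ~37.8x after a year
print(round(compound(-0.01), 2))  # 1% worse every day: ~0.03x after a year
```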

I’ve found it’s tremendously beneficial to schedule a day or at least half a day every few months to do this kind of review. I think the Weekly Review that David Allen advocated in Getting Things Done works as well, though it can seem too frequent.


The Cat Furniture Problem

One of the concepts that’s been on my mind since I finished reading Nassim Nicholas Taleb’s book Antifragile a few months ago is the Lindy Effect:

For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy. So the longer a technology lives, the longer it can be expected to live.

Taleb gives the example of cookware found in homes in Pompeii. At around 2,000 years old, it looks remarkably similar to the cookware you’d see in any house around the world today. The fact that it’s been around so long means it’s very likely to be around for a long time to come. It’s fundamentally valuable. Cookware creates a lot of value in the world because it’s super useful.

This intersects with a concept I’ve been thinking about lately: choosing the games you’re playing and how important that decision is. The example I’ve been rolling over in my head is the one Dan gave me of a mediocre entrepreneur versus a highly accomplished philosophy professor. Because of the game he chose, the mediocre entrepreneur is in a better position to screw off for four days, read philosophy, and write whatever he wants about it. Ironically, because of the structure of academia, the professor is a lot less free to do that.

It’s basically the businessman-scholar distinction that Taleb presents in Antifragile. Business, by its nature, lends the scholar more freedom than academia because his livelihood isn’t tied up in what he’s saying. He’s more free to openly express himself. He also has skin in the game, so he isn’t talking theoretically. He’s not able to idly spout philosophical BS – he actually has to make decisions and live with the repercussions.

The convergence between the concept of choosing the right game and Taleb’s Lindy effect occurs because my perception of the Lindy effect is that it can be modeled by a reverse exponential curve.

Taleb uses the example of books. On any given reading list, books that are older offer more value since they’ve been around longer and are still being recommended. I’m proposing that books that are well known from 500 years ago are both exponentially rarer AND more valuable than books that are well known from 5 years ago.

Lindy Effect Modeled

If you were to look at a typical person’s reading list, the vast majority of books would be crammed into the recent, low-value portion of the curve while many fewer books would occupy the much larger high-value, older section of the curve.

So your ROI on reading and understanding a concept from 500 years ago is highly likely to be exponentially greater in the long run than one presented only 5 years ago.
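One way to see the shape of this curve is a toy model (the functional forms below are my assumptions for illustration, not Taleb’s math): Lindy says a book’s expected remaining life grows with its age, while the number of well-known surviving books decays with age.

```python
# Toy model of the reading-list curve described above.
def expected_remaining_life(age_years: float) -> float:
    # Lindy effect: every year survived implies roughly another
    # year of expected life.
    return age_years

def surviving_fraction(age_years: float, half_life_years: float = 50.0) -> float:
    # Assumed exponential decay: most books fall out of memory,
    # so old survivors are exponentially rarer.
    return 0.5 ** (age_years / half_life_years)

for age in (5, 50, 500):
    print(age, expected_remaining_life(age), round(surviving_fraction(age), 4))
```

A 500-year-old book that people still recommend is both far rarer and expected to stay relevant far longer than last year’s bestseller, which is the asymmetry the reading-list argument rests on.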

What I’m trying to get at is that the more fundamental, or closer to the source, you move, the better the ROI in the long run. Twenty years from now, you’ll be far more accomplished in whatever you’re pursuing if you spend the next 5 years reading the seminal works in the field as opposed to reading blog posts. I guess it’s sort of strategy vs. tactics: tactics pay off more in the short term, but less in the long term.

So the implication in my mind for lifestyle choices is that by choosing a better game – I’ll use the example of a career path – your ROI in the long run can be much higher than if you chose the wrong game, even if you tactically execute the wrong game better than the right one.

What’s the Heuristic?

If we accept this, the problem then becomes figuring out the heuristic to make those choices. As systems grow more “fundamental,” they also grow more complex.  There are far more possible outcomes and variables involved in choosing a career path versus choosing what to do tomorrow.

I think there are two potentially useful heuristics depending on what type of system you’re looking at.

The Resistance

I saw this on Seth Godin’s blog a few months ago –

“Opportunity exists in the gap between what is perceived as safe (what we’re hardwired to think is safe) and what actually is safe in the real world.”

At the point we’re at in history, it seems like that gap is larger than it’s ever been. There is more opportunity in the world we’re living in than there ever has been in the past. There are more opportunities to generate wealth by creating value because increasing transparency has more closely correlated value and wealth than ever before.

The problem is that our paleolithic brains are hardwired to avoid the areas where the most value stands to be created, because they’re perceived as risky. So much opportunity exists precisely because of the massive disparity between the modern world in which we live and the world in which we evolved.

Because the pace at which the world is changing is accelerating, the value of specialization is decreasing and the value of adaptability is increasing.

The question then becomes how to identify which activities are the highest ROI longterm. Where is the gap between what is perceived as safe and what is actually safe?

It seems to me that the heuristic is the Resistance, as described by Steven Pressfield in The War of Art. The Resistance, at least in my experience, seems to be one of the best indicators that you’re moving towards something that is perceived by the vast majority of people as dangerous but in our current world (the imagination economy) is in fact safe and extremely valuable.


Antifragility

The other framework that is useful to me in understanding this concept at a more systematic level is Antifragility.

In many cases, pushing toward the Resistance ends in failure. However, you can systematically pursue things that feel dangerous but are relatively safe – that is, where the reward is high relative to the risk, even though both are higher than most people are comfortable with. Over time, if you do a lot of antifragile things (which you identify via the presence of the Resistance), the long-term payoff is potentially huge because of the gap between risk perception and actual risk.

Entrepreneurship is the obvious example. Entrepreneurship is the systematic taking of perceived high risks with relatively large payouts.

Our caveman brains combined with how we’ve been socialized identify entrepreneurship as a very high risk activity.

However, because of technology, both the barrier to entry into entrepreneurship and the risk involved are lower than ever before. In 18th-century England, in order to be an entrepreneur, you needed a large amount of capital.

That’s no longer the case. Because of technology – time and energy can be used instead of capital to pursue an entrepreneurial path. So instead of risking capital, you’re able to spend your time and energy to pursue an antifragile path.

This concept doesn’t just apply to more meta level choices (though you gain more leverage by applying it there).

It could be applied on a more micro-level too. Moving jobs or moving cities, even if you’re staying within the same career, can present opportunities to take advantage of the distortion between real and perceived risk.

The Cat Furniture Problem

Within the context of small business, the opportunity is in systematically pursuing opportunities with relatively high reward-to-risk in situations where both are perceived as high by the general populace (or, in this case, by other small business owners or potential small business owners). This comes back to something Rob Hanly made clear to me in a conversation recently.

If you have a business with an established cash flow – it seems like the opportunity is in systematically testing and leveraging the Antifragility of the system and the opportunity created by the real/perceived risk gap.

Though it’s applicable at any level of your business, I think it’s more beneficial in the long run to focus on more macro level concepts (a la the Lindy Effect). So choosing the right industry has better longterm ROI than choosing the right marketing strategy.

An ok marketing strategy in a growing, under-invested industry probably puts you better off than an amazing marketing strategy in a dying, over-crowded industry.

Let’s say – theoretically – you started two businesses. One selling cat furniture and one selling portable bars.

An equally effectively run business in both instances could easily result in a 10x difference in returns because of the business model and the industry you’re operating in. So one unit of resources invested into the portable bar business (the better industry) could result in a 10x return compared to the cat furniture business in the more competitive, lower-margin industry.

So what I’m getting at here – and it’s what Rob basically told me a month ago – is that by systematically investing in antifragile strategies at the most meta level possible, your long-term ROI increases at a larger and larger exponent. That is, an antifragile investment in any convex system produces exponential returns, but if you invest in something more fundamental (i.e., the industry or business model as opposed to the marketing strategy or warehouse procedure), the returns are exponentially larger.


While this all makes sense to me, I still find it really hard to actually implement.

Making it Explicit

For one, it feels really risky. Even though I consciously recognize that I’m falling prey to the risk/reward gap that Seth talks about, my emotional/gut reaction is still to avoid it. This is magnified when you’re making a decision about a more fundamental system – like your career path – since the implications are much greater.

The best way I’ve found to deal with this is to explicitly state the risk/reward on the front-end. I think I stole that from Stoicism, but by explicitly defining the worst possible scenario, it suddenly becomes much less scary.

If this whole entrepreneurship thing doesn’t pan out for me, the worst case scenario is that I spend a couple of years hanging out in cafes in South East Asia, meet some really interesting people and move back into my parents’ basement.

I like cafes in Asia, interesting people, and my parents have a pretty nice basement – so the downside there is pretty limited. The upside however is potentially enormous.

Identifying the Right Systems

The other difficult part is identifying systems in your life and business where the gap between real and perceived risk is large.

Here are a few heuristics/characteristics that I’ve come up with:

  • The presence of The Resistance in all forms.
  • Almost always highly influenced/made by man – nature produces robust systems, man produces fragile ones, so natural systems have less upside since there are fewer black swan events.
  • They haven’t hit the mainstream. I don’t know jack about bitcoin, but my interest in it is much lower now that it’s been on CNN. There is, I suspect, much less opportunity there since the gap between real and perceived risk has been dramatically narrowed since mainstream media legitimized it. The exception to this, I guess, could be when the mainstream media is demonizing or exalting something; then opportunity is created by moving in the opposite direction, if the real risk is much lower.

You Will Not Out Longball Me

I’ve been thinking a lot about the concept of long ball recently as I’ve been listening to a biography of George Washington on Audible.

It started when Dan made a convincing and impassioned argument to me about a month ago to listen to Ron Chernow’s George Washington: A Life.

I was talking with another friend in Saigon about Washington last weekend and he mentioned that Washington’s character in the John Adams HBO series doesn’t come across as that intelligent or intellectual. Having (almost) finished the biography, I would say that’s right.

Washington wasn’t that intelligent. Compared to the intellectual giants of the Founding Fathers (Jefferson, Franklin, Hamilton), he wasn’t even on the same level.

Washington wasn’t terribly well-read, and when he did read, it seems like he spent most of his time reading about agriculture or surveying.

Beyond not being terribly intelligent, he also wasn’t really very good at anything he did. I kept listening to the biography waiting for the turning points in his career as a general, planter or president and they never came.

As a Virginia planter, he was perpetually in debt and scraping to get by.

For as well regarded as he was as a general and a president, he didn’t really do anything in either of those roles that was terribly impressive either.

As a general, his surprise attack on the British when he crossed the Delaware was the only significant victory he can be credited with, and it did more to raise morale than to actually turn anything militarily to America’s advantage.

Yorktown, the battle that effectively ended the war, can’t really be credited to him. The French fleet made the siege possible, and Washington didn’t know anything about sieges, so the French general – Rochambeau – basically ran the whole show.

As president, it seems that he effectively let his cabinet (mainly Hamilton and Jefferson) run the show. While a lot of the ways he behaved as president set highly influential precedents, they weren’t impressive in any intellectual sense.

Yet, in the American and I would say even world memory, Washington is seen as a giant. He’s revered.

I was trying to figure out why this was and I keep coming back to what Dan said when he was talking about the biography. You couldn’t out-longball Washington. He was so genuinely committed to his principles. He trusted in them and believed that committing to them would serve him in the long run.

To a significant extent, luck played a role in this. Washington caught a lot of breaks early in life that gave him the economic position to be able to play long ball.

But I think the main thing is that he just took the attitude that he wouldn’t be out longballed.

He became president in large part due to his unassailable commitment to his principles.

There’s a story in the biography of King George asking American-born painter Benjamin West whether Washington would choose to be at the head of the state or the military  now that he had won the war.

“Oh,” said West, “they say he will return to his farm.”

“If he does that,” said the king, “he will be the greatest man in the world.”

And he was.

His integrity over long periods of time ground down everyone’s suspicion. He earned their trust. He had the trust of a nation. And that proved more valuable than the intellectual dominance of even a Franklin or a Jefferson.

He gained it because he never thought short term. When he made a decision, he thought about second- and third-order consequences.

In fact, I’m not even sure he really thought about it at all. He just stuck to his principles and trusted that they would bear themselves out.

That’s so hard to do. And I’m not even sure it was a smart thing to do in Washington’s case. I suspect there’s a lot of survivorship bias. We see Washington and conclude that sticking to your principles is the right thing to do.

But there’s a lot of people that have stuck to their principles and gotten screwed by the powers that be.

There’s a relevant comparison that Gladwell makes in Outliers.

He compares Robert Oppenheimer with Christopher Langan, a guy with an equally ridiculously high IQ who lives alone on a farm in Montana.

Oppenheimer achieved renown where Langan never has because he had the social intelligence, combined with his intellectual abilities, to work the system to his advantage. In that case, principles didn’t really come into play.

So is it valuable to take this mindset? To refuse to be out-longballed? To be that principle-driven? Or is it just being naive?

I’ve been listening to Mitch Joel’s Six Pixels of Separation a lot recently. He did an interview a few weeks ago with Peppers and Rogers about trust. They talked about the increasing value of trust. As transparency in the market place and in our personal lives increases, trust is harder to gain and even harder to maintain. It’s becoming scarcer and, consequently, more valuable.

It seems that the current construct of society encourages short-term thinking even as long-term, long-ball, principle-based decision making becomes more valuable.

And valuable not just in the moralistic sense, but even in the purely economic sense.
Trust has more and more value. And, on a long enough timeline, value can always be converted into wealth.

So, I’ve been asking myself that a lot lately. How can I out-longball you?

I was thinking back about significant things that I’ve accomplished in my life. And I realized that the common thread among all of them was consistency. In everything significant I’ve accomplished, I’ve just out-showed up other people.

I ended up playing football in college despite not being a terribly good athlete. (It was at a small D3 school in Alabama and we weren’t any good the two years that I played so that’s much less impressive than it sounds.)

However, there were probably 30 guys on my high school football team that were better natural athletes than me and only a couple of them ended up playing anywhere after high school.

I just out-showed them up. You have a two-week beach vacation before two-a-days? I’m in the weight room. You’re 30 minutes late to practice every day to go to study hall? I’m there early. I did my homework during lunch.

At least for me, consistency – just showing up over and over – is a huge part of playing long ball.

Playing long ball feels scary though (probably why it’s potentially so valuable). It’s much easier to think in shorter time frames. The benefits are more obvious. You can see the reward. It’s a lot harder to think in 20 year time frames than 6 month time frames.

It’s harder to make sacrifices today when you’re just going on faith and trust that those sacrifices have exponential returns on a 20 year timeline.

In part I suspect I’m attracted to long ball because I don’t have the social intelligence to out-clever people in the short term. I don’t bother playing that game because I already know that I’ll lose.

But, long ball? I think I have a shot at that one.

Is it worth it? I think so. But honestly, I’m not sure. I’ll get back to you in a couple decades.


A Purpose-Driven, Productive, Detouristified Life

I was lying in bed watching a movie a couple of weeks ago. It was Saturday night, around maybe 9pm. A friend texted me to see what I was planning on doing. I told her that I was probably going to bed.

I sort of felt guilty. Maybe I should go out?

I don’t go out that much anymore. At most a couple of nights a week. If I do go out, at most I’ll usually have 2-3 drinks and I’m pretty much always back home and in bed by 2am.

I’m generally way more worried about things like health and productivity and how things I do affect it than I used to be.

A big part of this is because I’ve really resonated with Viktor Frankl’s logotherapy concept. I read Man’s Search for Meaning about two years ago and the book was a fundamental paradigm shift for me.  The basis of logotherapy is that the fundamental drive in humans is not pleasure (Freud) or power (Adler), but meaning.

I resonate with what Joseph Conrad had to say about work and meaning:

“No, I don’t like work. I had rather laze about and think of all the fine things that can be done. I don’t like work — no man does — but I like what is in the work, the chance to find yourself. Your own reality — for yourself, not for others — what no other man can ever know. They can only see the mere show, and never tell what it really means.”

– Joseph Conrad, Heart of Darkness

Since I’ve reached a similar conclusion for myself, I’ve increasingly felt that what gives life purpose is meaning and striving towards bigger things. So getting drunk and being hungover for a day seems like a real waste.

That’s time and energy that I don’t have to work on things that are more meaningful to me and so I think more rewarding in the long-term.

Defined Long-Term Goals vs. Unpredictable Upside

I believe good habits are foundational to achieving long-term goals. By putting in place habits, systems in your life, you can consistently move yourself towards meaningful long term goals.

One example is diet and exercise. All the most delicious, short-term stuff isn’t going to pan out very well in the long term if you eat it consistently. I spent most of my life eating a crappy diet and the cumulative effect was, not surprisingly, feeling like crap.

From that perspective, I think putting systems into place that involve foregoing short term pleasure makes sense.

But there seems to be a major problem with this…

Strict systems and habits decrease optionality

The main problem with this is that habits, routines, and systems eliminate optionality. Optionality, a concept Taleb advances in Antifragile, is the idea that instead of trying to predict what is going to happen, you stand to gain more by positioning yourself in such a way that you always have options. That way, regardless of what happens, all you have to do is evaluate the situation once you have all the information and make a rational decision.

I watched a discussion between Taleb and Daniel Kahneman in which Taleb gives the example of the rational flaneur vs. the tourist (or touristification as a concept).

The tourist’s schedule is set in place. If something unexpected happens, it can only cause negative consequences, like making him late for an appointment or delaying a tour.

The rational flaneur has an entirely different perspective. If something unexpected happens, he merely evaluates and decides, with the full benefit of hindsight, how to take advantage.

Taleb explains the idea as it relates to education or learning:

“…soccer moms try to eliminate the trial and error, the antifragility, from children’s lives, move them away from the ecological and transform them into nerds working on preexisting (soccer-mom-compatible) maps of reality.

Good students, but nerds— that is, they are like computers except slower. Further, they are now totally untrained to handle ambiguity. As a child of civil war, I disbelieve in structured learning— actually I believe that one can be an intellectual without being a nerd, provided one has a private library instead of a classroom, and spends time as an aimless (but rational) flâneur benefiting from what randomness can give us inside and outside the library.

Provided we have the right type of rigor, we need randomness, mess, adventures, uncertainty, self-discovery, near-traumatic episodes, all these things that make life worth living, compared to the structured, fake, and ineffective life of an empty-suit CEO with a preset schedule and an alarm clock. Even their leisure is subjected to a clock, squash between four and five, as their life is sandwiched between appointments. It is as if the mission of modernity was to squeeze every drop of variability and randomness out of life— with (as we saw in Chapter 5) the ironic result of making the world a lot more unpredictable, as if the goddesses of chance wanted to have the last word.

Only the autodidacts are free. And not just in school matters— those who decommoditize, detouristify their lives. Sports try to put randomness in a box like the ones sold in aisle six next to canned tuna— a form of alienation.”

[Emphasis is mine]

What I’m trying to figure out is what “the right type of rigor” looks like in real life. How do you construct systems that constitute the “right type of rigor” while still allowing for “randomness, mess, adventures, uncertainty, self-discovery, near-traumatic episodes, all these things that make life worth living.”

I’ve been really habit-based lately. My life over the last few months has looked a lot more like a tourist’s than a flaneur’s.

There are a lot of things I could be doing that offer lots of optionality: trying new food, traveling more, spending more time making and deepening relationships.

The reason I haven’t been doing these things, I’ve realized, is mainly fear. I haven’t traveled as much as I probably would have liked because I don’t want to lose a lot of productive work time. I haven’t tried enough new food because I don’t want to fall out of my healthier habits.

My fear is that once I start breaking a habit, it stays broken and becomes a long-term problem.

This is, from a more meta perspective, quite stupid. The long-term difference between 365 and 340 days a year of productive work isn’t that big. But the optionality gained from having 25 days where I do cool new stuff like travel, versus 0, is enormous.

The problem for me is that of hard vs. soft rules. I usually like to have hard rules for myself because they eliminate decision making. I get to conserve that willpower for more important stuff. The problem with hard rules, however, is that they destroy optionality.

I guess the way that I’ve approached this for the last few months is that I always plan to stick to my hard/long-term schedule, but try not to feel guilty if I want to go try something new.

I went to the Reunification palace in Ho Chi Minh City on a Monday afternoon a few weeks ago. I had planned on working, but a friend asked if I wanted to go. I did. I’m a history nerd. I love that kind of stuff.

I went, and it was really cool. The museum was interesting. I had a good time hanging out with my friend and getting to know her better. I didn’t regret it. I’d worked Saturday and Sunday so taking some time off Monday didn’t make me feel guilty.

But if something cool to do had come up the next day, I probably wouldn’t have gone, I would have felt guilty, like I needed to work.

I know that five years from now, looking back, I’ll be happier if I move marginally slower on long-term goals like work and health and allow myself more optionality to do stuff like travel more, try new food and meet new people.

In fact, I believe that I’ll have made more progress on my long-term goals by allowing for those things, because they’re all convex. Meeting one new key person could provide more upside than hundreds of hours grinding away on work.

The solution I’ve come up with is to set up a structure and framework that builds habits you strongly believe will let you realize meaningful, long-term goals, but without making that schedule so rigid or oppressive that there’s no time for the unexpected, the serendipitous, the highly convex.

I’m still not sure exactly what this actually looks like though. One of the main problems I have with Taleb’s concepts is that they resonate with me tremendously at a philosophical level and in the examples he gives. However, they’re so counter to the way my brain has been conditioned to think, it’s hard to come up with ways to apply them in my life.

Here’s the implementation ideas that I’ve come up with:

  • Once a week, I do a GTD style weekly review. I added a reflective portion on how I deal with people after reading How to Win Friends and Influence People. I added a question in there to ask myself: “Was I a tourist or a flaneur this week? Why?”
  • I plan to spend most all of my time sticking to my systems and working towards long-term goals, but consistently allowing time and energy when I see clear upside. This does leave the problem of not creating optionality, merely allowing it to happen when it presents itself though.

This is pretty weak overall, but I haven’t come up with anything better. As always, I would love to hear any other opinions/thoughts.


Antifragile Book Notes

These are the notes that I put together for myself after reading Antifragile. I wanted a (relatively) quick reference point for the key concepts that Taleb elaborates in the book and particular concepts that I really like.

If you haven’t read the book, I expect they’ll be of limited use to you. The concepts he explains throughout the book are quite complex and he’s far more adept at articulating them than I am.

If you have read the book though or just want to glance through the notes, you may find these useful.

Most all of these notes are pulled directly from the book. I’ve added a few explanations and clarifications in italics and also added, at the bottom of this post, some ways that I’m trying to implement the concepts Taleb presents.

If anyone else has read the book and has come up with different ways on how to practically implement Taleb’s concepts, I would love to hear about it.

Key Concepts

The Triad – Fragile, Robust, Antifragile

This is Taleb’s central concept in both Antifragile and the Black Swan. All systems can be categorized as one of these three. Antifragile systems are those which he advocates we move towards. They are systems that improve or get stronger when unexpected, volatile events happen (like the airline industry, see below).

Fragile things are exposed to volatility, robust things resist it, antifragile things benefit from it.

The fragile is the package that would be at best unharmed, the robust would be at best and at worst unharmed. And the opposite of fragile is therefore what is at worst unharmed.

Fragility implies more to lose than to gain, equals more downside than upside, equals (unfavorable) asymmetry. Antifragility implies more to gain than to lose, equals more upside than downside, equals (favorable) asymmetry. You are antifragile for a source of volatility if potential gains exceed potential losses (and vice versa). Further, if you have more upside than downside, then you may be harmed by lack of volatility and stressors.

Good systems such as airlines are set up to have small errors, independent from each other— or, in effect, negatively correlated to each other, since mistakes lower the odds of future mistakes. This is one way to see how one environment can be antifragile (aviation) and the other fragile (modern economic life with “earth is flat” style interconnectedness). If every plane crash makes the next one less likely, every bank crash makes the next one more likely. We need to eliminate the second type of error— the one that produces contagion— in our construction of an ideal socioeconomic system.

The first step toward antifragility consists in first decreasing downside, rather than increasing upside; that is, by lowering exposure to negative Black Swans and letting natural antifragility work by itself.

The difference between a thousand pebbles and a large stone of equivalent weight is a potent illustration of how fragility stems from nonlinear effects. Nonlinear? Once again, “nonlinear” means that the response is not straightforward and not a straight line, so if you double, say, the dose, you get a lot more or a lot less than double the effect— if I throw at someone’s head a ten-pound stone, it will cause more than twice the harm of a five-pound stone, more than five times the harm of a two-pound stone, etc. It is simple: if you draw a line on a graph, with harm on the vertical axis and the size of the stone on the horizontal axis, it will be curved, not a straight line. That is a refinement of asymmetry. Now the very simple point, in fact, that allows for a detection of fragility: For the fragile, shocks bring higher harm as their intensity increases (up to a certain level).
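
The stone example can be sketched in a few lines of Python. The squared harm function is my own toy assumption, chosen only because it is convex; Taleb doesn’t specify a formula.

```python
# Toy model of convex harm: the exponent 2 is an arbitrary assumption,
# picked only to make the response curve convex rather than a straight line.
def harm(weight_lb):
    return weight_lb ** 2

# One ten-pound stone vs. two five-pound stones vs. five two-pound stones:
# equal total weight, very different total harm.
big = harm(10)          # 100
two_fives = 2 * harm(5)  # 50
five_twos = 5 * harm(2)  # 20

print(big, two_fives, five_twos)  # 100 50 20
```

The single large shock does more than twice the damage of the two medium ones and five times the damage of the five small ones, which is exactly the fragility signature described in the passage.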

For the fragile, the cumulative effect of small shocks is smaller than the single effect of an equivalent single large shock. This leaves me with the principle that the fragile is what is hurt a lot more by extreme events than by a succession of intermediate ones. Finito— and there is no other way to be fragile. Now let us flip the argument and consider the antifragile. Antifragility, too, is grounded in nonlinearities, nonlinear responses. For the antifragile, shocks bring more benefits (equivalently, less harm) as their intensity increases (up to a point).

Another Example – Weightlifting – When you lift heavier, your body compensates to be able to lift even heavier the next time.

The Teleological Fallacy

Our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness.

Experience is devoid of the cherry-picking that we find in studies, particularly those called “observational,” ones in which the researcher finds past patterns, and, thanks to the sheer amount of data, can therefore fall into the trap of an invented narrative.

Antifragility loves randomness and uncertainty. It’s better to create an antifragile structure and learn from trial and error than to try to be right all the time in a fragile ecosystem. Prediction is impossible.

Mediocristan vs. Extremistan – Knives vs. Atomic Bombs

Taleb explains this more in the Black Swan. Mediocristan is the world we evolved in, where volatility was much less than in the modern, Extremistan, world.

Fragilistas – People who encourage you to engage in policies and actions, all artificial, in which the benefits are small and visible, and the side effects potentially severe and invisible.

Naive Intervention and Iatrogenics – Iatrogenics is Greek for “caused by the healer.” We have a predisposition to do something instead of nothing, even when nothing may be the better option. We create fragile systems in our attempts to reduce volatility in the short term.

Example – Treating patients with blood pressure medication who are only slightly outside of norms.

We should do nothing to those experiencing mild volatility but be wildly experimental with those experiencing extreme volatility.

It’s much easier to sell “Look what I did for you” than “Look what I avoided for you.”

The first principle of iatrogenics is as follows: we do not need evidence of harm to claim that a drug or an unnatural via positiva procedure is dangerous.

Iatrogenics, being a cost-benefit situation, usually results from the treacherous condition in which the benefits are small, and visible— and the costs very large, delayed, and hidden. And of course, the potential costs are much worse than the cumulative gains.

Another principle of iatrogenics: it is not linear. We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.

The Barbell Strategy

A dual attitude of playing it safe in some areas (robust to negative Black Swans) and taking a lot of small risks in others (open to positive Black Swans), hence achieving antifragility. That is extreme risk aversion on one side and extreme risk loving on the other, rather than just the “medium” or the beastly “moderate” risk attitude that in fact is a sucker game.

Antifragility is the combination aggressiveness plus paranoia— clip your downside, protect yourself from extreme harm, and let the upside, the positive Black Swans, take care of itself. We saw Seneca’s asymmetry: more upside than downside can come simply from the reduction of extreme downside (emotional harm) rather than improving things in the middle.

An example is Mark Cuban’s investment strategy. He keeps most of his assets in cash (robust, not going to crash with the market), which allows him to move quickly when he sees large opportunities (antifragile).
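
A toy sketch of the barbell in Python. All the numbers here – the 90/10 split and the 20x payoff – are my own illustrative assumptions, not Cuban’s or Taleb’s; the point is only that the worst case is known in advance while the best case stays open.

```python
# Barbell sketch (illustrative numbers): most capital sits in cash,
# a small sleeve goes into aggressive bets with large potential payoffs.
def barbell_outcomes(capital, safe_frac=0.9, payoff_multiple=20):
    safe = capital * safe_frac
    risky = capital - safe           # the small aggressive sleeve
    worst = safe                     # the risky sleeve goes to zero
    best = safe + risky * payoff_multiple
    return worst, best

worst, best = barbell_outcomes(100)
print(worst, best)  # 90.0 290.0 -- losses capped at 10%, upside open-ended
```

The asymmetry is the whole point: the maximum loss is fixed on the front end (here, 10% of capital), while the upside scales with whatever payoff multiple the positive Black Swan delivers.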


Options, any options, by allowing you more upside than downside, are vectors of antifragility.

If you “have optionality,” you don’t have much need for what is commonly called intelligence, knowledge, insight, skills, and these complicated things that take place in our brain cells. For you don’t have to be right that often. All you need is the wisdom to not do unintelligent things to hurt yourself (some acts of omission) and recognize favorable outcomes when they occur. (The key is that your assessment doesn’t need to be made beforehand, only after the outcome.)

Option = asymmetry + rationality

The mechanism of optionlike trial and error (the fail-fast model), a.k.a. convex tinkering. Low-cost mistakes, with known maximum losses, and large potential payoff (unbounded). A central feature of positive Black Swans.

Central to optionality is Taleb’s assertion that prediction in the modern world is impossible. Instead of trying to predict what is going to happen, position yourself in such a way that you have optionality. That way whatever happens, all you have to do is evaluate it once you have all the information and make a rational decision.

Touristification – an aspect of modern life that treats humans as washing machines, with simplified mechanical responses— and a detailed user’s manual. It is the systematic removal of uncertainty and randomness from things, trying to make matters highly predictable in their smallest details. All that for the sake of comfort, convenience, and efficiency. The opposite of flaneur.

Ex. The Soccer Mom. She attempts to remove all randomness and uncertainty from her kids’ lives and protect them. In doing so, she prevents them from developing the ability to bounce back and adapt to future difficulties.

The Rational Flaneur

The rational flâneur is someone who, unlike a tourist, makes a decision at every step to revise his schedule, so he can imbibe things based on new information, what Nero was trying to practice in his travels, often guided by his sense of smell. The flâneur is not a prisoner of a plan. Tourism, actual or figurative, is imbued with the teleological illusion; it assumes completeness of vision and gets one locked into a hard-to-revise program, while the flâneur continuously— and, what is crucial, rationally— modifies his targets as he acquires information.

The opportunism of the flâneur is great in life and business— but not in personal life and matters that involve others. The opposite of opportunism in human relations is loyalty, a noble sentiment— but one that needs to be invested in the right places, that is, in human relations and moral commitments. The error of thinking you know exactly where you are going and assuming that you know today what your preferences will be tomorrow has an associated one. It is the illusion of thinking that others, too, know where they are going, and that they would tell you what they want if you just asked them. Never ask people what they want, or where they want to go, or where they think they should go, or, worse, what they think they will desire tomorrow. The strength of the computer entrepreneur Steve Jobs was precisely in distrusting market research and focus groups— those based on asking people what they want— and following his own imagination. His modus was that people don’t know what they want until you provide them with it. This ability to switch from a course of action is an option to change. Options— and optionality, the character of the option— are the topic of Book IV. Optionality will take us many places, but at the core, an option is what makes you antifragile and allows you to benefit from the positive side of uncertainty, without a corresponding serious harm from the negative side.

The Soviet-Harvard illusion –

Real knowledge comes from the cycle: Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship → back to Random Tinkering, and so on.

The Soviet-Harvard illusion is the belief that academic knowledge is superior and that we must understand the mechanism in order to understand the effectiveness or phenomenology.

An example would be a lot of the benefits associated with traditional Eastern practices like meditation or Yoga. We don’t understand the mechanism by which they benefit us, but it’s clear that they do.

The Green Lumber Fallacy – the situation in which one mistakes a source of necessary knowledge – the greenness of lumber – for another that is less visible from the outside, less tractable, less narratable.

People with too much smoke and complicated tricks and methods in their brains start missing elementary, very elementary things. Persons in the real world can’t afford to miss these things; otherwise they crash the plane. Unlike researchers, they were selected for survival, not complications. So I saw the less is more in action: the more studies, the less obvious elementary but fundamental things become; activity, on the other hand, strips things to their simplest possible model.

Example – The guy trading green lumber most successfully at a firm in London thought it was lumber painted green. Soviet-Harvard knowledge doesn’t translate to success in business and life. He learned how to trade successfully using the process of Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship without ever actually understanding what green lumber was.

Convexity – if you have favorable asymmetries, or positive convexity, options being a special case, then in the long run you will do reasonably well, outperforming the average in the presence of uncertainty. The more uncertainty, the more role for optionality to kick in, and the more you will outperform. This property is very central to life.

Concavity – the opposite of convexity. These are negative asymmetries that expose you to exponentially more harm as randomness increases.

This graph illustrates Taleb’s concepts of convexity and concavity as they relate to antifragility.
The top half is convex: as variability increases, so do gains.
The bottom half is concave: as variability increases, so do losses.


The technique, a simple heuristic called the fragility (and antifragility) detection heuristic, works as follows. Let’s say you want to check whether a town is overoptimized. Say you measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time now extends by an extra thirty minutes. Such acceleration of traffic time shows that traffic is fragile and you have too many cars and need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or negative convexity effect). Likewise, government deficits are particularly concave to changes in economic conditions. Every additional deviation in, say, the unemployment rate— particularly when the government has debt— makes deficits incrementally worse. And financial leverage for a company has the same effect: you need to borrow more and more to get the same effect. Just as in a Ponzi scheme. The same with operational leverage on the part of a fragile company. Should sales increase 10   percent, then profits would increase less than they would decrease should sales drop 10 percent.
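Taleb’s heuristic boils down to checking the second difference of the response curve: probe the system at equal increments of stress and see whether harm accelerates. Here is a minimal sketch in Python; the travel-time model and all of its numbers are invented for illustration, not taken from the book.

```python
# A minimal sketch of the fragility-detection heuristic: a system is
# fragile (concave) where harm accelerates, i.e., where the second
# difference of the response is positive. The travel-time model below
# is hypothetical.

def second_difference(response, x, dx):
    """Acceleration of the response: f(x + 2dx) - 2*f(x + dx) + f(x)."""
    return response(x + 2 * dx) - 2 * response(x + dx) + response(x)

def is_fragile(response, x, dx):
    """Fragile if each extra increment of stress does more damage than the last."""
    return second_difference(response, x, dx) > 0

def travel_time(cars):
    """Hypothetical minutes of travel time as a function of traffic.
    Roughly linear until the road saturates around 20,000 cars, after
    which delays compound (Taleb's 'extra thirty minutes' case)."""
    linear = 30 + cars / 1000
    congestion = max(0, cars - 20000) ** 2 / 2e7
    return linear + congestion

print(is_fragile(travel_time, 5000, 5000))    # light traffic: False
print(is_fragile(travel_time, 20000, 5000))   # saturated: True
```

The same probe applies unchanged to deficits, leverage, or anything else where you can measure the response to equal-sized shocks.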

Jensen’s inequality

For a convex function f, Jensen’s inequality states that E[f(X)] ≥ f(E[X]): the average payoff over an uncertain input is at least the payoff at the average input. This is the formal statement behind convexity; it is why more uncertainty means more room for optionality to kick in and more outperformance of the average.
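A quick Monte Carlo check makes the inequality concrete. The option-like payoff and the lognormal price model below are assumptions chosen purely for illustration:

```python
# Monte Carlo illustration of Jensen's inequality for a convex payoff:
# E[f(X)] >= f(E[X]). The payoff function and the lognormal model are
# illustrative assumptions, not anything from the book.
import random

random.seed(42)

def payoff(x):
    """Convex, option-like payoff: floored downside, open upside."""
    return max(x - 100.0, 0.0)

# An uncertain future quantity (e.g., a price), centered near 100.
samples = [random.lognormvariate(mu=4.6, sigma=0.3) for _ in range(100_000)]

mean_of_payoffs = sum(payoff(x) for x in samples) / len(samples)  # E[f(X)]
payoff_of_mean = payoff(sum(samples) / len(samples))              # f(E[X])

print(mean_of_payoffs > payoff_of_mean)  # True: the average of the payoffs
                                         # beats the payoff of the average
```

With a concave payoff the inequality flips, which is exactly the fragile case: uncertainty hurts you on average.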


An obsession with the new and a discounting of the old, when it is the old that is more robust: time kills all things equally, so a technology that has survived a long time is likely to survive longer.

With so many technologically driven and modernistic items— skis, cars, computers, computer programs— it seems that we notice differences between versions rather than commonalities. We even rapidly tire of what we have, continuously searching for versions 2.0 and similar iterations. And after that, another “improved” reincarnation. These impulses to buy new things that will eventually lose their novelty, particularly when compared to newer things, are called treadmill effects. As the reader can see, they arise from the same generator of biases as the one about the salience of variations mentioned in the section before: we notice differences and become dissatisfied with some items and some classes of goods. This treadmill effect has been investigated by Danny Kahneman and his peers when they studied the psychology of what they call hedonic states. People acquire a new item, feel more satisfied after an initial boost, then rapidly revert to their baseline of well-being. So, when you “upgrade,” you feel a boost of satisfaction with changes in technology. But then you get used to it and start hunting for the new new thing.

We are obsessed with the newest things when what provides the most utility to us is often the oldest. Taleb gives the example of cooking pots and pans discovered in Pompeii kitchens being nearly identical to the ones we use today.

Via Negativa

Wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.). Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.

The greatest— and most robust— contribution to knowledge consists in removing what we think is wrong— subtractive epistemology. In life, antifragility is reached by not being a sucker.

We know a lot more of what is wrong than what is right, or, phrased according to the fragile/ robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition— given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily. If I spot a black swan (not capitalized), I can be quite certain that the statement “all swans are white” is wrong. But even if I have never seen a black swan, I can never hold such a statement to be true. Rephrasing it again: since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.

Another application of via negativa: spend less, live longer is a subtractive strategy. We saw that iatrogenics comes from the intervention bias, via positiva, the propensity to want to do something, causing all the problems we’ve discussed. But let’s do some via negativa here: removing things can be quite a potent (and, empirically, a more rigorous) action.

If true wealth consists in worriless sleeping, clear conscience, reciprocal gratitude, absence of envy, good appetite, muscle strength, physical energy, frequent laughs, no meals alone, no gym class, some physical labor (or hobby), good bowel movements, no meeting rooms, and periodic surprises, then it is largely subtractive (elimination of iatrogenics).

The Lindy effect

When you see a young and an old human, you can be confident that the younger will survive the elder. With something nonperishable, say a technology, that is not the case. We have two possibilities: either both are expected to have the same additional life expectancy (the case in which the probability distribution is called exponential), or the old is expected to have a longer expectancy than the young, in proportion to their relative age. In that situation, if the old is eighty and the young is ten, the elder is expected to live eight times as long as the younger one.

For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy. So the longer a technology lives, the longer it can be expected to live.
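The arithmetic here can be sketched with a simulation. Assuming a power-law (Pareto) distribution of lifetimes, one standard way to get Lindy-type behavior, the expected remaining life of a survivor grows in proportion to its age; the tail exponent and the time scale below are arbitrary illustrative choices:

```python
# Lindy effect under an assumed Pareto (power-law) lifetime model:
# expected remaining life grows in proportion to age already survived.
# ALPHA and MIN_LIFE are arbitrary illustrative parameters.
import random

random.seed(0)
ALPHA = 2.5      # tail exponent; any alpha > 1 produces the effect
MIN_LIFE = 5.0   # smallest possible lifetime, in "years"

lifetimes = [MIN_LIFE * random.paretovariate(ALPHA) for _ in range(1_000_000)]

def expected_remaining(age):
    """Average additional life among things that have survived to `age`."""
    survivors = [t for t in lifetimes if t > age]
    return sum(t - age for t in survivors) / len(survivors)

young = expected_remaining(10.0)   # theory: 10 / (ALPHA - 1) = 6.7
old = expected_remaining(80.0)     # theory: 80 / (ALPHA - 1) = 53.3
print(f"ratio: {old / young:.1f}") # ~8x, matching Taleb's 10 vs. 80 example
```

For a perishable (say, an exponential lifetime), `expected_remaining` would be flat in age; the proportional growth is what distinguishes the nonperishable.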

People have difficulties grasping probabilistic notions, particularly when they have spent too much time on the Internet (not that they need the Internet to be confused; we are naturally probability-challenged). The first mistake is usually in the form of the presentation of the counterexample of a technology that we currently see as inefficient and dying, like, say, telephone land lines, print newspapers, and cabinets containing paper receipts for tax purposes. These arguments come with anger as many neomaniacs get offended by my point. But my argument is not about every technology…

The second mistake is to believe that one would be acting “young” by adopting a “young” technology, revealing both a logical error and mental bias. It leads to the inversion of the power of generational contributions, producing the illusion of the contribution of the new generations over the old— statistically, the “young” do almost nothing. This mistake has been made by many people, but most recently I saw an angry “futuristic” consultant who accuses people who don’t jump into technology of “thinking old” (he is actually older than I am and, like most technomaniacs I know, looks sickly and pear-shaped and has an undefined transition between his jaw and his neck). I didn’t understand why one would be acting particularly “old” by loving things historical. So by loving the classics (“ older”) I would be acting “older” than if I were interested in the “younger” medieval themes.

Example for Choosing Books: The best filtering heuristic, therefore, consists in taking into account the age of books and scientific papers. Books that are one year old are usually not worth reading (a very low probability of having the qualities for “surviving”), no matter the hype and how “earth-shattering” they may seem to be. So I follow the Lindy effect as a guide in selecting what to read: books that have been around for ten years will be around for ten more; books that have been around for two millennia should be around for quite a bit of time, and so forth.

Empedocles’ Tile

Empedocles, the pre-Socratic philosopher, was asked why a dog prefers to always sleep on the same tile. His answer was that there had to be some likeness between the dog and that tile. (Actually the story might be even twice as apocryphal since we don’t know if Magna Moralia was actually written by Aristotle himself.) Consider the match between the dog and the tile. A natural, biological, explainable or nonexplainable match, confirmed by long series of recurrent frequentation— in place of rationalism, just consider the history of it. Which brings me to the conclusion of our exercise in prophecy. I surmise that those human technologies such as writing and reading that have survived are like the tile to the dog, a match between natural friends, because they correspond to something deep in our nature.

We can’t explain these things, but they are demonstrably true phenomenologically and so there is some fundamental truth present there despite our inability to understand it.

If something that makes no sense to you (say, religion— if you are an atheist— or some age-old habit or practice called irrational); if that something has been around for a very, very long time, then, irrational or not, you can expect it to stick around much longer, and outlive those who call for its demise.

If there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding. So there is a logic to natural things that is much superior to our own. Just as there is a dichotomy in law: innocent until proven guilty as opposed to guilty until proven innocent, let me express my rule as follows: what Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.

The Agency Problem

We live in a world where agency is divorced from consequences. People don’t have skin in the game.

Skin in the Game

Skin in the game is the only true mitigator of fragility. Hammurabi’s code provided a simple solution— close to thirty-seven hundred years ago. This solution has been increasingly abandoned in modern times, as we have developed a fondness for neomanic complication over archaic simplicity. We need to understand the everlasting solidity of such a solution.

A half-man (or, rather, half-person) is not someone who does not have an opinion, just someone who does not take risks for it.

Dignity is worth nothing unless you earn it, unless you are willing to pay a price for it.

Fat Tony has two heuristics. First, never get on a plane if the pilot is not on board. Second, make sure there is also a copilot. The first heuristic addresses the asymmetry in rewards and punishment, or transfer of fragility between individuals. Ralph Nader has a simple rule: people voting for war need to have at least one descendant (child or grandchild) exposed to combat.

For the Romans, engineers needed to spend some time under the bridge they built— something that should be required of financial engineers today. The English went further and had the families of the engineers spend time with them under the bridge after it was built. To me, every opinion maker needs to have “skin in the game.”

The second heuristic is that we need to build redundancy, a margin of safety, avoiding optimization, mitigating (even removing) asymmetries in our sensitivity to risk.

The Robert Rubin Problem

An example of no skin in the game:

Corporate managers have incentives without disincentives— something the general public doesn’t quite get, as they have the illusion that managers are properly “incentivized.” Somehow these managers have been given free options by innocent savers and investors. I am concerned here with managers of businesses that are not owner-operated.

Robert Rubin, former treasury secretary, earned $120 million from Citibank in bonuses over about a decade. The risks taken by the institution were hidden but the numbers looked good  …   until they didn’t look good (upon the turkey’s surprise). Citibank collapsed, but he kept his money— we taxpayers had to compensate him retrospectively since the government took over the banks’ losses and helped them stand on their feet. This type of payoff is very common, thousands of other executives had it.

The Joseph Stiglitz problem

Stiglitz Syndrome = fragilista (with good intentions) + ex post cherry-picking

Never ask anyone for their opinion, forecast, or recommendation. Just ask them what they have— or don’t have— in their portfolio.

The Alan Blinder Problem – complex environments with nonlinearities are easier to game than linear ones with a small number of variables. The same applies to the gap between the legal and the ethical.

The story is as follows. At Davos, during a private coffee conversation that I thought aimed at saving the world from, among other things, moral hazard and agency problems, I was interrupted by Alan Blinder, a former vice chairman of the Federal Reserve Bank of the United States, who tried to sell me a peculiar investment product that aims at legally hoodwinking taxpayers. It allowed the high net worth investor to get around the regulations limiting deposit insurance (at the time, $ 100,000) and benefit from coverage for near-unlimited amounts. The investor would deposit funds in any amount and Prof. Blinder’s company would break it up into smaller accounts and invest in banks, thus escaping the limit; it would look like a single account but would be insured in full. In other words, it would allow the super-rich to scam taxpayers by getting free government-sponsored insurance. Yes, scam taxpayers. Legally. With the help of former civil servants who have an insider edge. I blurted out: “Isn’t this unethical?” I was then told in response “It is perfectly legal,” adding the even more incriminating “we have plenty of former regulators on the staff,” (a) implying that what was legal was ethical and (b) asserting that former regulators have an edge over citizens. It took a long time, a couple of years, before I reacted to the event and did my public J’accuse. Alan Blinder is certainly not the worst violator of my sense of ethics; he probably irritated me because of the prominence of his previous public position, while the Davos conversation was meant to save the world from evil (I was presenting to him my idea of how bankers take risks at the expense of taxpayers). But what we have here is a model of how people use public office to, at some point, legally profit from the public. 
Tell me if you understand the problem in its full simplicity: former regulators and public officials who were employed by the citizens to represent their best interests can use the expertise and contacts acquired on the job to benefit from glitches in the system upon joining private employment— law firms, etc. Think about it a bit further: the more complex the regulation, the more bureaucratic the network, the more a regulator who knows the loops and glitches would benefit from it later, as his regulator edge would be a convex function of his differential knowledge. This is a franchise, an asymmetry one has at the expense of others. (Note that this franchise is spread across the economy; the car company Toyota hired former U.S. regulators and used their “expertise” to handle investigations of its car defects.) Now stage two— things get worse. Blinder and the dean of Columbia University Business School wrote an op-ed opposing the government’s raising the insurance limit on individuals. The article argued that the public should not have the unlimited insurance that Blinder’s clients benefit from.


Cherry-picking has optionality: the one telling the story (and publishing it) has the advantage of being able to show the confirmatory examples and completely ignore the rest— and the more volatility and dispersion, the rosier the best story will be (and the darker the worst story). Someone with optionality— the right to pick and choose his story— is only reporting on what suits his purpose. You take the upside of your story and hide the downside, so only the sensational seems to count.

The asymmetry (antifragility of postdictors): postdictors can cherry-pick and produce instances in which their opinions played out and discard mispredictions into the bowels of history. It is like a free option— to them; we pay for it.

Other (Related) Key Points That I Like

At no point in history have so many non-risk-takers, that is, those with no personal exposure, exerted so much control. The chief ethical rule is the following: Thou shalt not have antifragility at the expense of the fragility of others.

The process of discovery (or innovation, or technological progress) itself depends on antifragile tinkering, aggressive risk bearing rather than formal education.

Our minds are in the business of turning history into something smooth and linear, which makes us underestimate randomness. But when we see it, we fear it and overreact. Because of this fear and thirst for order, some human systems, by disrupting the invisible or not so visible logic of things, tend to be exposed to harm from Black Swans and almost never get any benefit. You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness. – We tell stories to ourselves to make sense of the past. – The teleological fallacy

The fragilista (medical, economic, social planning) is one who makes you engage in policies and actions, all artificial, in which the benefits are small and visible, and the side effects potentially severe and invisible.

Seek Simplicity – simplicity has been difficult to implement in modern life because it is against the spirit of a certain brand of people who seek sophistication so they can justify their profession. Less is more and usually more effective.

“you have to work hard to get your thinking clean to make it simple.” The Arabs have an expression for trenchant prose: no skill to understand it, mastery to write it.

Apophatic (what cannot be explicitly said, or directly described, in our current vocabulary)

Hormesis is essential – difficulty is what wakes up the genius (ingenium mala saepe movent), which translates in Brooklyn English into “When life gives you a lemon  …” The excess energy released from overreaction to setbacks is what innovates!

Many, like the great Roman statesman Cato the Censor, looked at comfort, almost any form of comfort, as a road to waste. 1 He did not like it when we had it too easy, as he worried about the weakening of the will. And the softening he feared was not just at the personal level: an entire society can fall ill.

It is all about redundancy. Nature likes to overinsure itself. Layers of redundancy are the central risk management property of natural systems. We humans have two kidneys (this may even include accountants), extra spare parts, and extra capacity in many, many things (say, lungs, neural system, arterial apparatus), while human design tends to be spare and inversely redundant, so to speak— we have a historical track record of engaging in debt, which is the opposite of redundancy (fifty thousand in extra cash in the bank or, better, under the mattress, is redundancy; owing the bank an equivalent amount, that is, debt, is the opposite of redundancy). Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens— usually.

Causal Opacity: it is hard to see the arrow from cause to consequence, making much of conventional methods of analysis, in addition to standard logic, inapplicable. As I said, the predictability of specific events is low, and it is such opacity that makes it low. Not only that, but because of nonlinearities, one needs higher visibility than with regular systems— instead what we have is opacity.

In the complex world, the notion of “cause” itself is suspect; it is either nearly impossible to detect or not really defined— another reason to ignore newspapers, with their constant supply of causes for things.

Humans tend to do better with acute than with chronic stressors, particularly when the former are followed by ample time for recovery, which allows the stressors to do their jobs as messengers. – Think weight lifting

Some parts on the inside of a system may be required to be fragile in order to make the system antifragile as a result. Or the organism itself might be fragile, but the information encoded in the genes reproducing it will be antifragile. The point is not trivial, as it is behind the logic of evolution. This applies equally to entrepreneurs and individual scientific researchers. – Entrepreneurship is systematically antifragile, but individual efforts are fragile.

If you view things in terms of populations, you must transcend the terms “hormesis” and “Mithridatization” as a characterization of antifragility. Why? To rephrase the argument made earlier, hormesis is a metaphor for direct antifragility, when an organism directly benefits from harm; with evolution, something hierarchically superior to that organism benefits from the damage. From the outside, it looks like there is hormesis, but from the inside, there are winners and losers.

He who has never sinned is less reliable than he who has only sinned once. And someone who has made plenty of errors— though never the same error more than once— is more reliable than someone who has never made any.

My characterization of a loser is someone who, after making a mistake, doesn’t introspect, doesn’t exploit it, feels embarrassed and defensive rather than enriched with a new piece of information, and tries to explain why he made the mistake rather than moving on. These types often consider themselves the “victims” of some large plot, a bad boss, or bad weather.

By disrupting the model, as we will see, with bailouts, governments typically favor a certain class of firms that are large enough to require being saved in order to avoid contagion to other business. This is the opposite of healthy risk-taking; it is transferring fragility from the collective to the unfit. People have difficulty realizing that the solution is building a system in which nobody’s fall can drag others down— for continuous failures work to preserve the system. Paradoxically, many government interventions and social policies end up hurting the weak and consolidating the established.

This is the central illusion in life: that randomness is risky, that it is a bad thing – and that eliminating randomness is done by eliminating randomness

There is another issue with the abstract state, a psychological one. We humans scorn what is not concrete. We are more easily swayed by a crying baby than by thousands of people dying elsewhere that do not make it to our living room through the TV set. The one case is a tragedy, the other a statistic. Our emotional energy is blind to probability. The media make things worse as they play on our infatuation with anecdotes, our thirst for the sensational, and they cause a great deal of unfairness that way. At the present time, one person is dying of diabetes every seven seconds, but the news can only talk about victims of hurricanes with houses flying in the air.

The Great Turkey Problem – A turkey is fed for a thousand days by a butcher; every day confirms to its staff of analysts that butchers love turkeys “with increased statistical confidence.” The butcher will keep feeding the turkey until a few days before Thanksgiving. Then comes that day when it is really not a very good idea to be a turkey. So with the butcher surprising it, the turkey will have a revision of belief— right when its confidence in the statement that the butcher loves turkeys is maximal and “it is very quiet” and soothingly predictable in the life of the turkey.
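The turkey’s mistake can be sketched numerically. As a stand-in for the analysts’ growing confidence, this sketch uses the “rule of three,” a standard naive 95% upper bound of 3/n on the probability of an event never yet observed in n trials; the framing is an illustration, not Taleb’s own math.

```python
# The turkey's statistics, sketched. After an unbroken streak of n days
# of feeding, a naive "rule of three" bound says P(not fed tomorrow) is
# at most 3/n at ~95% confidence -- so measured confidence is maximal
# right before Thanksgiving.

def butcher_risk_bound(days_fed):
    """Naive 95% upper bound on tomorrow's risk after `days_fed`
    consecutive feedings with zero bad days (the 'rule of three')."""
    return 3.0 / days_fed

for day in (10, 100, 1000):
    print(f"day {day:>4}: estimated risk of no feeding <= "
          f"{butcher_risk_bound(day):.4f}")

# Day 1001: the realized outcome is ruin. The estimate was not slightly
# wrong; it was built only from data the butcher chose to generate.
```

The bound shrinks monotonically with every safe day, yet the model has no column for the butcher’s intentions, which is the whole problem.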

Absence of fluctuations in the market causes hidden risks to accumulate with impunity. The longer one goes without a market trauma, the worse the damage when commotion occurs.

The ancients perfected the method of random draw in more or less difficult situations— and integrated it into divinations. These draws were really meant to pick a random exit without having to make a decision, so one would not have to live with the burden of the consequences later. You went with what the gods told you to do, so you would not have to second-guess yourself later. One of the methods, called sortes virgilianae (fate as decided by the epic poet Virgil), involved opening Virgil’s Aeneid at random and interpreting the line that presented itself as direction for the course of action. You should use such a method for every sticky business decision. I will repeat until I get hoarse: the ancients evolved hidden and sophisticated ways and tricks to exploit randomness. For instance, I actually practice such a randomizing heuristic in restaurants. Given the lengthening and complication of menus, subjecting me to what psychologists call the tyranny of choice, with the stinging feeling after my decision that I should have ordered something else, I blindly and systematically duplicate the selection by the most overweight male at the table; and when no such person is present, I randomly pick from the menu without reading the name of the item, under the peace of mind that Baal made the choice for me.

The problem with artificially suppressed volatility is not just that the system tends to become extremely fragile; it is that, at the same time, it exhibits no visible risks. Also remember that volatility is information. In fact, these systems tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface. Although the stated intention of political leaders and economic policy makers is to stabilize the system by inhibiting fluctuations, the result tends to be the opposite. These artificially constrained systems become prone to Black Swans.

Think second- and third-order consequences. Re: Dalio – Principles

The separation of “work” and “leisure” (though the two would look identical to someone from a wiser era)

A theory is a very dangerous thing to have. And of course one can rigorously do science without it. What scientists call phenomenology is the observation of an empirical regularity without a visible theory for it. In the Triad, I put theories in the fragile category, phenomenology in the robust one. Theories are superfragile; they come and go, then come and go, then come and go again; phenomenologies stay, and I can’t believe people don’t realize that phenomenology is “robust” and usable, and theories, while overhyped, are unreliable for decision making— outside physics.

Over-intervention comes with under-intervention. Indeed, as in medicine, we tend to over-intervene in areas with minimal benefits (and large risks) while under-intervening in areas in which intervention is necessary, like emergencies. So the message here is in favor of staunch intervention in some areas, such as ecology or to limit the economic distortions and moral hazard caused by large corporations. What should we control? As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks. These actions may be devoid of iatrogenics— but it is hard to get governments to limit the size of government.

Since procrastination is a message from our natural willpower via low motivation, the cure is changing the environment, or one’s profession, by selecting one in which one does not have to fight one’s impulses. Few can grasp the logical consequence that, instead, one should lead a life in which procrastination is good, as a naturalistic-risk-based form of decision making.

The supply of information to which we are exposed thanks to modernity is transforming humans from the equable second fellow into the neurotic first one. For the purpose of our discussion, the second fellow only reacts to real information, the first largely to noise. The difference between the two fellows will show us the difference between noise and signal. Noise is what you are supposed to ignore, signal what you need to heed.

Noise is a generalization beyond the actual sound to describe random information that is totally useless for any purpose, and that you need to clean up to make sense of what you are listening to.

Just as we are not likely to mistake a bear for a stone (but likely to mistake a stone for a bear), it is almost impossible for someone rational, with a clear, uninfected mind, someone who is not drowning in data, to mistake a vital signal, one that matters for his survival, for noise— unless he is overanxious, oversensitive, and neurotic, hence distracted and confused by other messages. Significant signals have a way to reach you.

Curiosity is antifragile, like an addiction, and is magnified by attempts to satisfy it— books have a secret mission and ability to multiply, as everyone who has wall-to-wall bookshelves knows well.

Excess wealth, if you don’t need it, is a heavy burden. Nothing was more hideous in his eyes than excessive refinement— in clothes, food, lifestyle, manners— and wealth was nonlinear. Beyond some level it forces people into endless complications of their lives, creating worries about whether the housekeeper in one of the country houses is scamming them while doing a poor job and similar headaches that multiply with money.

A man is honorable in proportion to the personal risks he takes for his opinion— in other words, the amount of downside he is exposed to. To sum him up, Nero believed in erudition, aesthetics, and risk taking— little else.

Stoicism makes you desire the challenge of a calamity. And Stoics look down on luxury: about a fellow who led a lavish life, Seneca wrote: “He is in debt, whether he borrowed from another person or from fortune.

Stoicism, seen this way, becomes pure robustness— for the attainment of a state of immunity from one’s external circumstances, good or bad, and an absence of fragility to decisions made by fate, is robustness. Random events won’t affect us either way (we are too strong to lose, and not greedy to enjoy the upside), so we stay in the middle column of the Triad.

I would go through the mental exercise of assuming every morning that the worst possible thing had actually happened— the rest of the day would be a bonus. Actually the method of mentally adjusting “to the worst” had advantages way beyond the therapeutic, as it made me take a certain class of risks for which the worst case is clear and unambiguous, with limited and known downside. It is hard to stick to a good discipline of mental write-off when things are going well, yet that’s when one needs the discipline the most. Moreover, once in a while, I travel, Seneca-style, in uncomfortable circumstances (though unlike him I am not accompanied by “one or two” slaves). An intelligent life is all about such emotional positioning to eliminate the sting of harm, which as we saw is done by mentally writing off belongings so one does not feel any pain from losses. The volatility of the world no longer affects you negatively.

Invest in good actions. Things can be taken away from us— not good deeds and acts of virtue.

The barbell businessman-scholar situation was ideal; after three or four in the afternoon, when I left the office, my day job ceased to exist until the next day and I was completely free to pursue what I found most valuable and interesting. When I tried to become an academic I felt like a prisoner, forced to follow others’ less rigorous, self-promotional programs.

Professions can be serial: something very safe, then something speculative. A friend of mine built himself a very secure profession as a book editor, in which he was known to be very good. Then, after a decade or so, he left completely for something speculative and highly risky. This is a true barbell in every sense of the word: he can fall back on his previous profession should the speculation fail, or fail to bring the expected satisfaction. This is what Seneca elected to do: he initially had a very active, adventurous life, followed by a philosophical withdrawal to write and meditate, rather than a “middle” combination of both. Many of the “doers” turned “thinkers” like Montaigne have done a serial barbell: pure action, then pure reflection.

“f*** you money”— a sum large enough to get most, if not all, of the advantages of wealth (the most important one being independence and the ability to only occupy your mind with matters that interest you) but not its side effects, such as having to attend a black-tie charity event and being forced to listen to a polite exposition of the details of a marble-rich house renovation. The worst side effect of wealth is the social associations it forces on its victims, as people with big houses tend to end up socializing with other people with big houses. Beyond a certain level of opulence and independence, gents tend to be less and less personable and their conversation less and less interesting.

Authors, artists, and even philosophers are much better off having a very small number of fanatics behind them than a large number of people who appreciate their work. The number of persons who dislike the work don't count— there is no such thing as the opposite of buying your book, or the equivalent of losing points in a soccer game, and this absence of negative domain for book sales provides the author with a measure of optionality. Further, it helps when supporters are both enthusiastic and influential. Wittgenstein, for instance, was largely considered a lunatic, a strange bird, or just a b*** t operator by those whose opinion didn’t count (he had almost no publications to his name). But he had a small number of cultlike followers, and some, such as Bertrand Russell and J. M. Keynes, were massively influential. Beyond books, consider this simple heuristic: your work and ideas, whether in politics, the arts, or other domains, are antifragile if, instead of having one hundred percent of the people finding your mission acceptable or mildly commendable, you are better off having a high percentage of people disliking you and your message (even intensely), combined with a low percentage of extremely loyal and enthusiastic supporters. Optionality likes dispersion of outcomes and doesn't care much about the average. – Think Seth Godin and Tribes

Consider two types of knowledge. The first type is not exactly “knowledge”; its ambiguous character prevents us from associating it with the strict definitions of knowledge. It is a way of doing things that we cannot really express in clear and direct language— it is sometimes called apophatic— but that we do nevertheless, and do well. The second type is more like what we call “knowledge”; it is what you acquire in school, can get grades for, can codify, what is explainable, academizable, rationalizable, formalizable, theoretizable, codifiable, Sovietizable, bureaucratizable, Harvardifiable, provable, etc. The error of naive rationalism leads to overestimating the role and necessity of the second type, academic knowledge, in human affairs— and degrading the uncodifiable, more complex, intuitive, or experience-based type. There is no proof against the statement that the role such explainable knowledge plays in life is so minor that it is not even funny. We are very likely to believe that skills and ideas that we actually acquired by antifragile doing, or that came naturally to us (from our innate biological instinct), came from books, ideas, and reasoning. We get blinded by it; there may even be something in our brains that makes us suckers for the point

Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship → Random Tinkering (antifragile) → Heuristics (technology) → Practice and Apprenticeship  …

In parallel to the above loop,

Practice → Academic Theories → Academic Theories → Academic Theories → Academic Theories  …  (with of course some exceptions, some accidental leaks, though these are indeed rare and overhyped and grossly generalized).

People with too much smoke and complicated tricks and methods in their brains start missing elementary, very elementary things. Persons in the real world can’t afford to miss these things; otherwise they crash the plane. Unlike researchers, they were selected for survival, not complications. So I saw the less is more in action: the more studies, the less obvious elementary but fundamental things become; activity, on the other hand, strips things to their simplest possible model.

Evolution is not a competition between ideas, but between humans and systems based on such ideas. An idea does not survive because it is better than the competition, but rather because the person who holds it has survived! Accordingly, wisdom you learn from your grandmother should be vastly superior (empirically, hence scientifically) to what you get from a class in business school (and, of course, considerably cheaper). My sadness is that we have been moving farther and farther away from grandmothers.

If you face n options, invest in all of them in equal amounts. Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: “The payoff can be so large that you can’t afford not to be in everything.”
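The venture capitalist's logic can be sketched with a quick simulation (all the numbers here — a 1% hit probability, 100 options — are illustrative assumptions, not from the book): in Extremistan, a small stake in everything is far more likely to contain the rare winner than one concentrated pick.

```python
import random

def hit_rates(n_options=100, trials=10_000, p_win=0.01, seed=42):
    """Chance of holding at least one big winner: a single concentrated
    pick vs. small equal (1/n) stakes in every option."""
    rng = random.Random(seed)
    conc_hits = spread_hits = 0
    for _ in range(trials):
        winners = [rng.random() < p_win for _ in range(n_options)]
        if winners[rng.randrange(n_options)]:
            conc_hits += 1        # the single pick happened to be a winner
        if any(winners):
            spread_hits += 1      # the 1/n portfolio contains a winner
    return conc_hits / trials, spread_hits / trials

conc, spread = hit_rates()
```

Under these assumptions the concentrated pick holds a winner about 1% of the time, while the 1/n spread does roughly 63% of the time (1 − 0.99^100). The expected payoff is the same either way; what differs is the odds of "being in" the big one.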

The difference between humans and animals lies in the ability to collaborate, engage in business, let ideas, pardon the expression, copulate. Collaboration has explosive upside, what is mathematically called a superadditive function, i.e., one plus one equals more than two, and one plus one plus one equals much, much more than three. That is pure nonlinearity with explosive benefits— we will get into details on how it benefits from the philosopher’s stone. Crucially, this is an argument for unpredictability and Black Swan effects: since you cannot forecast collaborations and cannot direct them, you cannot see where the world is going. All you can do is create an environment that facilitates these collaborations, and lay the foundation for prosperity.

Corporations are in love with the idea of the strategic plan. They need to pay to figure out where they are going. Yet there is no evidence that strategic planning works— we even seem to have evidence against it. A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning— it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.

(i) Look for optionality; in fact, rank things according to optionality, (ii) preferably with open-ended, not closed-ended, payoffs; (iii) Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more (an idea that is part of the modus operandi of the venture capitalist Marc Andreessen); one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so; (iv) Make sure you are barbelled, whatever that means in your business.

Only the autodidacts are free. And not just in school matters— those who decommoditize, detouristify their lives. Sports try to put randomness in a box like the ones sold in aisle six next to canned tuna— a form of alienation.

“much of what other people know isn’t worth knowing.” To this day I still have the instinct that the treasure, what one needs to know for a profession, is necessarily what lies outside the corpus, as far away from the center as possible. But there is something central in following one’s own direction in the selection of readings: what I was given to study in school I have forgotten; what I decided to read on my own, I still remember.

On the primacy of tradition and Naive Rationalism:

FAT TONY: “You are killing the things we can know but not express. And if I asked someone riding a bicycle just fine to give me the theory behind his bicycle riding, he would fall from it. By bullying and questioning people you confuse them and hurt them.”

FAT TONY: “My dear Socrates  …   you know why they are putting you to death? It is because you make people feel stupid for blindly following habits, instincts, and traditions. You may be occasionally right. But you may confuse them about things they’ve been doing just fine without getting in trouble. You are destroying people’s illusions about themselves. You are taking the joy of ignorance out of the things we don’t understand. And you have no answer; you have no answer to offer them.”

Things are too complicated to be expressed in words; by doing so, you kill humans. Or people— as with the green lumber— may be focusing on the right things but we are not good enough to figure it out intellectually.

The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself. Philosophers talk about truth and falsehood. People in life talk about payoff, exposure, and consequences (risks and rewards), hence fragility and antifragility. And sometimes philosophers and thinkers and those who study conflate Truth with risks and rewards.

You decide principally based on fragility, not probability. Or to rephrase, you decide principally based on fragility, not so much on True/False.

If I tell you that some result is true with a 95 percent confidence level, you would be quite satisfied. But what if I told you that the plane was safe with a 95 percent confidence level? Even a 99 percent confidence level would not do, as a 1 percent probability of a crash would be quite alarming (today commercial planes operate with less than one in several hundred thousand probabilities of crashing, and the ratio is improving, as we saw that every error leads to the improvement of overall safety). So, to repeat, the probability (hence True/False) does not work in the real world; it is the payoff that matters.

In spite of what is studied in business schools concerning “economies of scale,” size hurts you at times of stress; it is not a good idea to be large during difficult times.

There are many things without words, matters that we know and can act on but cannot describe directly, cannot capture in human language or within the narrow human concepts that are available to us. Almost anything around us of significance is hard to grasp linguistically— and in fact the more powerful, the more incomplete our linguistic grasp.

A wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.). Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.

We are moving into the far more uneven distribution of 99/1 across many things that used to be 80/20: 99 percent of Internet traffic is attributable to less than 1 percent of sites, 99 percent of book sales come from less than 1 percent of authors … and I need to stop because numbers are emotionally stirring. Almost everything contemporary has winner-take-all effects, which includes sources of harm and benefits. Accordingly, as I will show, a 1 percent modification of systems can lower fragility (or increase antifragility) by about 99 percent— and all it takes is a few steps, very few steps, often at low cost, to make things better and safer.

If someone has a long bio, I skip him— at a conference a friend invited me to have lunch with an overachieving hotshot whose résumé “can cover more than two or three lives”; I skipped to sit at a table with the trainees and stage engineers. Likewise when I am told that someone has three hundred academic papers and twenty-two honorary doctorates, but no other single compelling contribution or main idea behind it, I avoid him like the bubonic plague.

What survives must be good at serving some (mostly hidden) purpose that time can see but our eyes and logical faculties can’t capture. In this chapter we use the notion of fragility as a central driver of prediction. Recall the foundational asymmetry: the antifragile benefits from volatility and disorder, the fragile is harmed. Well, time is the same as disorder.

The prime error is as follows. When asked to imagine the future, we have the tendency to take the present as a baseline, then produce a speculative destiny by adding new technologies and products to it and what sort of makes sense, given an interpolation of past developments. We also represent society according to our utopia of the moment, largely driven by our wishes— except for a few people called doomsayers, the future will be largely inhabited by our desires. So we will tend to over-technologize it and underestimate the might of the equivalent of these small wheels on suitcases that will be staring at us for the next millennia.

I received an interesting letter from Paul Doolan from Zurich, who was wondering how we could teach children skills for the twenty-first century since we do not know which skills will be needed in the twenty-first century— he figured out an elegant application of the large problem that Karl Popper called the error of historicism. Effectively my answer would be to make them read the classics. The future is in the past. Actually there is an Arabic proverb to that effect: he who does not have a past has no future. 

Another mental bias causing the overhyping of technology comes from the fact that we notice change, not statics. The classic example, discovered by the psychologists Daniel Kahneman and Amos Tversky, applies to wealth. (The pair developed the idea that our brains like minimal effort and get trapped that way, and they pioneered a tradition of cataloging and mapping human biases with respect to perception of random outcomes and decision making under uncertainty). If you announce to someone “you lost $10,000,” he will be much more upset than if you tell him “your portfolio value, which was $785,000, is now $775,000.” Our brains have a predilection for shortcuts, and the variation is easier to notice (and store) than the entire record. It requires less memory storage. This psychological heuristic (often operating without our awareness), the error of variation in place of total, is quite pervasive, even with matters that are visual.

A rule on what to read. “As little as feasible from the last twenty years, except history books that are not about the last fifty years,”

The problem with lack of recursion in learning— lack of second-order thinking— is as follows. If those delivering some messages deemed valuable for the long term have been persecuted in past history, one would expect that there would be a correcting mechanism, that intelligent people would end up learning from such historical experience so those delivering new messages would be greeted with the new understanding in mind. But nothing of the sort takes place. This lack of recursive thinking applies not just to prophecy, but to other human activities as well: if you believe that what will work and do well is going to be a new idea that others did not think of, what we commonly call “innovation,” then you would expect people to pick up on it and have a clearer eye for new ideas without too much reference to the perception of others. But they don’t: something deemed “original” tends to be modeled on something that was new at the time but is no longer new, so being an Einstein for many scientists means solving a similar problem to the one Einstein solved when at the time Einstein was not solving a standard problem at all. The very idea of being an Einstein in physics is no longer original. I’ve detected in the area of risk management the similar error, made by scientists trying to be new in a standard way. People in risk management only consider risky things that have hurt them in the past (given their focus on “evidence”), not realizing that, in the past, before these events took place, these occurrences that hurt them severely were completely without precedent, escaping standards. And my personal efforts to make them step outside their shoes to consider these second-order considerations have failed— as have my efforts to make them aware of the notion of fragility.

Only resort to medical techniques when the health payoff is very large (say, saving a life) and visibly exceeds its potential harm, such as incontrovertibly needed surgery or lifesaving medicine (penicillin). It is the same as with government intervention. This is squarely Thalesian, not Aristotelian (that is, decision making based on payoffs, not knowledge). For in these cases medicine has positive asymmetries— convexity effects— and the outcome will be less likely to produce fragility. Otherwise, in situations in which the benefits of a particular medicine, procedure, or nutritional or lifestyle modification appear small— say, those aiming for comfort— we have a large potential sucker problem (hence putting us on the wrong side of convexity effects).

What we call diseases of civilization result from the attempt by humans to make life comfortable for ourselves against our own interest, since the comfortable is what fragilizes. 

Evolution proceeds by undirected, convex bricolage or tinkering, inherently robust, i.e., with the achievement of potential stochastic gains thanks to continuous, repetitive, small, localized mistakes. What men have done with top-down, command-and-control science has been exactly the reverse: interventions with negative convexity effects, i.e., the achievement of small certain gains through exposure to massive potential mistakes. Our record of understanding risks in complex systems (biology, economics, climate) has been pitiful, marred with retrospective distortions (we only understand the risks after the damage takes place, yet we keep making the mistake), and there is nothing to convince me that we have gotten better at risk management. In this particular case, because of the scalability of the errors, you are exposed to the wildest possible form of randomness. Simply, humans should not be given explosive toys (like atomic bombs, financial derivatives, or tools to create life).

If there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding. So there is a logic to natural things that is much superior to our own. Just as there is a dichotomy in law: innocent until proven guilty as opposed to guilty until proven innocent, let me express my rule as follows: what Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.

So the modus operandi in every venture is to remain as robust as possible to changes in theories (let me repeat that my deference to Mother Nature is entirely statistical and risk-management-based, i.e., again, grounded in the notion of fragility).

If true wealth consists in worriless sleeping, clear conscience, reciprocal gratitude, absence of envy, good appetite, muscle strength, physical energy, frequent laughs, no meals alone, no gym class, some physical labor (or hobby), good bowel movements, no meeting rooms, and periodic surprises, then it is largely subtractive (elimination of iatrogenics).

Look at it again, the way we looked at entrepreneurs. They are usually wrong and make “mistakes”— plenty of mistakes. They are convex. So what counts is the payoff from success. – It’s ok to be wrong as long as you’re wrong on a small scale and learn from it. If you’re right BIG enough then you only need to be right once or a few times.

Playing on one’s inner agency problem can go beyond symmetry: give soldiers no options and see how antifragile they can get. On April 29, 711, the armies of the Arab commander Tarek crossed the Strait of Gibraltar from Morocco into Spain with a small army (the name Gibraltar is derived from the Arabic Jabal Tarek, meaning “mount of Tarek”). Upon landing, Tarek had his ships put to the fire. He then made a famous speech every schoolchild memorized during my school days that I translate loosely: “Behind you is the sea, before you, the enemy. You are vastly outnumbered. All you have is sword and courage.” And Tarek and his small army took control of Spain. The same heuristic seems to have played out throughout history, from Cortés in Mexico, eight hundred years later. – No options means you have to succeed

Never listen to a leftist who does not give away his fortune or does not live the exact lifestyle he wants others to follow. What the French call “the caviar left,” la gauche caviar, or what Anglo-Saxons call champagne socialists, are people who advocate socialism, sometimes even communism, or some political system with sumptuary limitations, while overtly leading a lavish lifestyle, often financed by inheritance— not realizing the contradiction that they want others to avoid just such a lifestyle.

Let me make the point clearer: the version of “capitalism” or whatever economic system you need to have is with the minimum number of people in the left column of the Triad. Nobody realizes that the central problem of the Soviet system was that it put everyone in charge of economic life in that nasty fragilizing left column.

The problem of the commercial world is that it only works by addition (via positiva), not subtraction (via negativa): pharmaceutical companies don’t gain if you avoid sugar; the manufacturer of health club machines doesn’t benefit from your deciding to lift stones and walk on rocks (without a cell phone); your stockbroker doesn’t gain from your decision to limit your investments to what you see with your own eyes, say your cousin’s restaurant or an apartment building in your neighborhood; all these firms have to produce “growth in revenues” to satisfy the metric of some slow thinking or, at best, semi-slow thinking MBA analyst sitting in New York.

With the exception of, say, drug dealers, small companies and artisans tend to sell us healthy products, ones that seem naturally and spontaneously needed; larger ones— including pharmaceutical giants— are likely to be in the business of producing wholesale iatrogenics, taking our money, and then, to add insult to injury, hijacking the state thanks to their army of lobbyists. Further, anything that requires marketing appears to carry such side effects. You certainly need an advertising apparatus to convince people that Coke brings them “happiness”— and it works.

Anything one needs to market heavily is necessarily either an inferior product or an evil one. And it is highly unethical to portray something in a more favorable light than it actually is. One may make others aware of the existence of a product, say a new belly dancing belt, but I wonder why people don’t realize that, by definition, what is being marketed is necessarily inferior, otherwise it would not be advertised. – First key to marketing is having a good product so that all you have to do is make others aware.

The glass is dead; living things are long volatility. The best way to verify that you are alive is by checking if you like variations. Remember that food would not have a taste if it weren’t for hunger; results are meaningless without effort, joy without sadness, convictions without uncertainty, and an ethical life isn’t so when stripped of personal risks.

Things that I’m doing differently

A few people have asked me about things that I’m doing differently as a result of reading Antifragile. Part of the trouble with Antifragile, in my mind, is that its implications are so large and so contrary to everything we’ve been conditioned to believe that it’s hard to put them into action.

A few of the things I’m doing differently now:

I’m using the time heuristic for choosing what to consume. I’ve massively reduced the number of podcasts and blogs I follow in exchange for more books and audiobooks, and I’m even biasing those toward older options, all other factors being equal.

I’m using the skin in the game heuristic for deciding what advice to listen to. I did some research a few weeks ago on using LinkedIn for lead generation. I listened to one interview with a reporter who had been covering LinkedIn for 10 years and one with a guy who had started using LinkedIn two years ago when he was jobless, living on his sister’s couch and trying to make something happen. The strategies and tactics of the guy who was trying to put his life together via LinkedIn were infinitely better. Why? Skin in the game.

Understanding the Signal/Noise Ratio – I used to check Google Analytics daily. Now I try to look at most once a week. There is very little to be learned by looking more frequently; I’m just looking at more noise.

Facilitating Collaboration – I’m trying to actively spend more time around other people and collaborate with them. As Taleb says, collaboration gives us huge amounts of optionality, but of course we can’t see it until it’s already happened. Some real world examples of this are living with interesting/cool people, co-working, and cocktail parties (or equivalent, less pretentious, get-togethers).

I think social media is an example of something that’s antifragile. It facilitates collaboration that has the potential for large upside.

I’d love to hear from anyone else about things they’ve changed. As I said, I think the potential implications here are extraordinary and I’m looking for more ways to implement them in my life.

Posted in Uncategorized | 11 Comments

Time as the Best Judge of Empirical Value

If you’ve talked to me in the last two weeks, you probably caught on that I’m mildly (read: extremely) obsessed with a guy named Nassim Nicholas Taleb.

Taleb is the author of The Black Swan and, more recently, Antifragile. The main idea behind both books is that an increasingly large number of systems in our modern world are fragile: when things go wrong, they suffer disproportionately large negative consequences.

I did my senior thesis on American foreign policy during the Cuban Revolution, and the whole structure of American foreign policy during the Cold War is a good example of Taleb’s theory. Washington and the CIA believed they could engineer capitalism in the Third World. A long list of examples, including Cuba, Iran, and Afghanistan, makes it pretty obvious how that worked out in the long run.

We have the delusion that we’re capable of understanding and making predictions when in reality the number of inputs to and outputs from the systems we’re trying to predict is far too great. Taleb instead advocates creating systems that are “antifragile”: ones that gain from disorder.

Entrepreneurship, as a system, is an obvious example. While most start-ups fail, other start-ups are able to learn from those failures, so the system as a whole benefits from the individual’s failure. The Lean Startup movement seems to be an attempt to take this from the macro level of entrepreneurship as a field to the micro level of individual organizations.

He said something in Antifragile that I think is one of the most useful and immediately applicable heuristics I’ve read.

“The best filtering heuristic, therefore, consists in taking into account the age of books and scientific papers. Books that are one year old are usually not worth reading (a very low probability of having the qualities for “surviving”), no matter the hype and how “earth-shattering” they may seem to be. So I follow the Lindy effect as a guide in selecting what to read: books that have been around for ten years will be around for ten more; books that have been around for two millennia should be around for quite a bit of time, and so forth.”

The Lindy Effect is:

“For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy. So the longer a technology lives, the longer it can be expected to live.”

So the idea is that if something has been around a long time, it’s because it is empirically valuable.
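As a toy sketch of the filtering heuristic (the proportionality constant and the book ages below are purely illustrative assumptions, not anything from Taleb):

```python
def lindy_remaining_life(age_years):
    """Lindy effect for nonperishables: expected remaining life grows with
    current age (assumed here to be directly proportional, constant = 1)."""
    return float(age_years)

# Rank a reading list by expected remaining relevance (ages are made up).
books = {"hyped new release": 1, "decade-old classic": 10, "Seneca": 2000}
ranked = sorted(books, key=lambda b: lindy_remaining_life(books[b]), reverse=True)
# -> ['Seneca', 'decade-old classic', 'hyped new release']
```

Oldest first: under this rule, the two-millennia-old book is expected to stay relevant the longest, which is exactly the reading-list filter described above.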

He gives the example of cooking pots and pans found in a Pompeii kitchen from two millennia ago looking nearly identical to modern versions.

I’ll let Taleb explain further.

“Evolution is not a competition between ideas, but between humans and systems based on such ideas. An idea does not survive because it is better than the competition, but rather because the person who holds it has survived! Accordingly, wisdom you learn from your grandmother should be vastly superior (empirically, hence scientifically) to what you get from a class in business school (and, of course, considerably cheaper).

So if a book has been around for 50 years and a lot of people are still recommending it, it’s more likely to be worth reading than one that came out a few weeks ago. (Taleb’s book came out last year and thus does not currently pass his own heuristic test. Irony noted.)

We’re living in a world where the noise in the signal/noise ratio is growing exponentially. There’s that oft-cited statistic that every 24 hours, more content is published on the internet than was published in all of human history before the internet’s invention.

The value is increasingly in being able to separate the signal from the noise. We’re plagued by neomania, an obsession with the new, even though on the whole, the new doesn’t provide nearly the same value as the old. It’s not time-tested.

Putting this whole philosophy into practice though is a lot easier said than done. I always feel compelled to click on each of 23 different blog articles with variations of the title “10 tips to double your productivity.”

Part of the failure of publishing platforms right now is that the things that most demand our attention (blogs, Facebook, Twitter) are the things that provide the least value per unit of our time/attention.

I went back and read some of Paul Graham’s essays a few days ago. Holy shit. That stuff is so good. (I really liked How to Lose Time and Money.) But you have to actively seek it out. Paul Graham isn’t spamming your Twitter feed with it.

The Guardian recently ran a really good piece on why reading the news is bad for you. I used to read the NYT everyday, until I had an epiphany one day that I couldn’t remember a single useful or meaningful thing in the last month.

I’ve been working on setting up systems that account for this.

Some that I’ve found useful:

  • Read a book for at least 30 minutes every day – I like to do it right after I wake up. Take notes on the good ones.
  • Block out things that demand more attention than they provide value – I just did a ruthless purge of my RSS reader and podcast subscriptions and replaced that time with more books and audiobooks. I just signed up for Audible and started listening to Walter Isaacson’s Einstein biography, which I finally caved to and which is awesome. I also use this Chrome plugin to block myself off of all social media until after 6pm.

There is the problem that with a lot of technology platforms, you do need more timely content. I’ve been playing around with some LinkedIn marketing lately and the time heuristic isn’t as useful since the platform is changing so rapidly.

I think the judgment call in that case is that you should go toward whatever the most foundational piece of content in the field is. Consume something that gives you a framework for understanding the platform, then go experiment with it instead of just reading a bunch of blog articles that provide tips instead of frameworks.

P.S. Between starting to write this post and actually publishing it, I’ve read Scientific Advertising and The Boron Letters. If you’re interested in foundational stuff for copywriting, both come highly recommended.

Posted in Uncategorized | 7 Comments

Maybe There Isn’t a Right Answer

Dyed Baby Chickens in Ao Nang, Thailand. 

If you’re stuck on a problem and looking for the right answer, there probably isn’t one. Of course, it depends on what type of problem it is, but for any type of creative endeavor, you can safely assume that there really isn’t any “right” answer.

This mentality of there being a “right” answer is so ingrained in us, though. Seth Godin talks about this ad nauseam, but it’s still difficult for us to accept. We have an industrial-age mentality that’s systematically ingrained in us by society. We’re trying to get the right answer, because in an industrial society, there is a right answer. There’s a way you can tweak the gears in the car so it goes a little bit faster. That’s direct and measurable. It’s better, so it’s “right.”

This is what we’re ingrained with in school. It’s part of the reason I suck so much at accepting that there isn’t a right answer. I was good at school. I got good grades, and everyone in my life was happy with me for that. It was reinforced. My parents were proud, my teachers were proud, the colleges I got into were pretty decent because of my good grades.

I recently heard the quote, “Perfect is the enemy of good.” You don’t need to find perfect; you just need to find something good and then make it work.

I can think of a lot of times where I’ve gotten hung up looking for the “right” answer. When I first moved to Chiang Mai, Thailand, I probably spent two days looking for an apartment. I had a list of criteria of things I wanted: price, location, near a gym, near a grocery store, preferred a kitchen, etc. I created a spreadsheet. I started looking at apartment buildings’ websites and calling them trying to find the perfect one.

I never found the perfect one. I gave up, picked one that looked alright, and rolled with it. It worked great. It didn’t have a kitchen, but I bought an electric grill for 20 bucks that worked perfectly. It wasn’t that close to a gym, so I walked and got to listen to a bunch of podcasts.

I think one of the things that attracted me to studying history in college was that it had that sense of uncertainty. There isn’t a “right” answer when you’re doing history. There is no right interpretation of history. But, at least in our educational system, you still get graded on it. You’re assigned some purportedly objective number that’s supposed to correspond to the quality of the history you did. So there’s a right way of doing history. Your paper is structured in such and such a way. And if you’re crafty, you adjust your paper to fit the professor’s biases and preferences.

If you don’t fit into this, then somehow you’re doing history wrong. There always seemed to be some disdain among a lot of professional historians for amateur historians (basically anyone who doesn’t have a PhD in history).

I remember the first time I listened to the Hardcore History podcast and thinking, “Oh, this is such shit history. This guy is just some amateur historian.” Listening to it now, that podcast is awesome. The guy actually cares. He’s a great storyteller. It’s obvious listening to the podcast that he’s done very real history. He’s gone back into the primary sources and formulated original thoughts and interpretations about them. That’s doing history. The fact that he didn’t spend a decade in some bullshit PhD program doesn’t change that.

This point really hit home with me a couple of weeks ago. I was putting together a newsletter for one of the companies I’ve been working with. It was one of my main goals for February. I wanted to revamp the whole thing and create this awesome process for putting out a new newsletter every month. I read every blog post on B2B email. I read whitepapers, watched slideshares, signed up for newsletters ABOUT newsletters. I watched videos. I studied case studies. I was going to find the “right” answer.

I slaved over the copy of the email. There were maybe 200 words in it. I spent hours writing and re-writing those. I was looking up synonyms in Words that Sell. I read it out loud.

I sent the email out. The result? The worst response of any newsletter in company history. The worst open rate. The worst clickthrough rate. We didn’t make one sale off the newsletter. And it wasn’t an informational newsletter; it was for a product launch. A product that is selling like hotcakes right now. It’s not a shit product. It’s a great product that people are buying. The same kind of people that I emailed.

What the Fuck?

I’m reading Mastery right now. One of the concepts Robert Greene explores is how we’re limited by our language. There are certain things we can visualize and understand conceptually, but our language doesn’t allow us to express them.

I noticed this when I was learning Spanish and Portuguese. In Spanish there’s a word, “ganas.” You can use it with the verb “to have” as a way to express desire or interest. So “tienes ganas de salir?” would translate as “Do you want to go out?” But that’s not really what it means. It means something different; it’s almost asking if you have a feeling of wanting to go out. But this is my point entirely: I can’t explain what it means in English, because there isn’t a word for it in English. When I was hanging out with bilingual friends in Argentina, we always talked in Spanglish. Maybe it was 90% English/10% Spanish, or maybe it was 10% English/90% Spanish, but either way, no single language could let us describe our emotions and experiences as accurately as the two combined.

I was looking for what the perfect newsletter would look like. Guess what? There isn’t a perfect newsletter. It doesn’t exist. There isn’t a “right” answer. You take what you know, step into the unknown, and take a stab at it. And guess what, you probably fuck it up. God knows I did. But you know what, that’s step 1. Step 1 is fucking it up. If you’re trying to figure out the “right” answer to something and you haven’t fucked it up yet, your number one goal should be to fuck it up. After you’ve fucked it up, at least you’re in the game.

I love what I’m doing right now, but it’s not the first thing I set out to do. I didn’t wake up one day and decide I was passionate about online marketing. I tried interpreting. I wasn’t very good at it, it sucked, and it didn’t let me travel. Fuck-up #1. Check.

So I traveled. I went to Brazil to teach English. I was a horrible English teacher. I don’t really like kids, and I couldn’t care less about teaching English to people who just want to learn it to inch their way up the corporate ladder. Fuck-up #2. Check. But I learned something again. I did want to travel, but I wasn’t willing to teach English to do it.

I stumbled across the Adsense Flippers. I started building niche websites monetized by advertising. I powered through it for a few months and learned a lot. I knew how to use WordPress. I understood the basics of SEO. The term cPanel didn’t sound like something from Star Trek.

That knowledge got me an internship at an online marketing agency. Hmmm, this is interesting. I’m working on interesting projects. The people I’m working with are interesting. I’m learning valuable skills that I don’t hate.

This isn’t an original thought, of course. It’s the whole lean startup methodology applied to life. Cal Newport has written a book and a massive blog about it.

You do a little bit of research and take a shot at something. That’s much better than investing a ton of time and energy looking for the right answer when there probably isn’t one.

We’re trained to look for the right answer. That’s an industrial mindset. It doesn’t work anymore. Stop looking. You’re not going to find it. Live in a world of uncertainty and ambiguity. I’ve started hanging out there some. It’s scary as hell at first, but it’s real.

I live in hazard and infinity. The cosmos stretches around me, meadow on meadow of galaxies, reach on reach of dark space, steppes of stars, oceanic darkness and light. There is no amenable god in it, no particular concern or particular mercy. Yet everywhere I see a living balance, a rippling of tension, an enormous yet mysterious simplicity, an endless breathing of light. And I comprehend that being is understanding that I must exist in hazard but that the whole is not in hazard. Seeing and knowing this is being conscious; accepting it is being human.

-John Fowles

Posted in Biznass, Philosophy | 13 Comments