So Much Depends on the Red Wheelbarrow

Andy Manoske
Mar 28, 2017


How asymmetric information around complex topics like Artificial Intelligence leads to problems between investors and entrepreneurs

A whiteboard at the end of the night from one of our computer science club parties in college

In college we had nicknames.

Josh was “Texas” because he was big, tall, and supposedly from there, despite having spent most of his life in suburban Northern California. Flipper got his nickname because when he played Super Smash Brothers he had a tendency to chortle like the eponymous dolphin. Johan was…well, Johan was just Johan.

But for those of us who had common first names, our nickname consisted of our 0-based index in an array of that name. This was computer science after all.

As the third Andy to join the SJSU Computer Science Club, my nickname was Andy[2]. I didn’t get to know the first Andy (Andy[0]) very well; he stopped coming around the club after discovering that his JDM car and unusually strong command of Vietnamese earned him great purchase off campus in San Jose. But I did get to know the second Andy, Andy[1], very well.

Andy[1] put all of us to shame. A young grad student with an undergraduate degree spread across math, computer science, and physics, Andy’s command over the dark arts was absolute. Even after exhausting all of SJSU’s undergrad curriculum for statistics, I struggled to keep up with Andy’s conversations on probability theory because he understood the theory that created the thing I was talking about. In the geek animal kingdom, Andy[1]’s skills made him king of the jungle.

One day in 2008 we got into a conversation on probabilistic statistics. At this point, I’d gone off the deep end and decided to pick up an economics major in the hopes of applying what I was learning in math/CS into studying how people worked. A few beers deep, I was trying to show him how we were using inferential statistics to model agent behavior. We were tossing around how neat it would be to use different techniques from things like graph theory and linear algebra to bisect the data set and gain new revelations.

Eventually Andy paused and pursed his lips. “You know,” he said with his characteristic long pause between statements that seemed to show the magnetic hard drive in his head spooling data, “…this is something you could totally use to power a hidden Markov model.”

“…oh yeah,” I said remembering that I had heard that word before somewhere, “…uhh…totally. How do you…do that again?”

Andy laughed, seeing me tread water. “It’s fine, 2,” he responded with my index number in the data structure of Andy[], “you don’t really do this stuff in Math 161A or 161B. Here, check this out.”

Andy then went through the process of showing me how to build an HMM, laying out an algorithmic way to look for distributions in the data outside of the painstaking regression and probability analysis approach we had been doing in econometrics.

As a fellow CS major who would rather have spent his time coding than drowning in Stata, I was enamored with how elegantly an HMM could be implemented on any kind of numerical data to seemingly powerful effect. But even with Andy calmly explaining the material, I struggled to keep up. I tried to square what Andy was teaching me with the context of its use, a technique I had learned a few years before for making sense out of complex math.
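The elegance is real: the core of an HMM fits in a few dozen lines of code. Below is a minimal sketch of the forward algorithm, which computes the likelihood of an observation sequence under a model. The states, probabilities, and observation symbols here are invented for illustration; they aren’t from Andy’s whiteboard.

```python
# Minimal hidden Markov model sketch (illustrative parameters only).
# The forward algorithm computes P(observation sequence | model)
# via dynamic programming over the hidden states.

states = ["Bull", "Bear"]  # hypothetical hidden states

# Model parameters: initial, transition, and emission probabilities.
initial = {"Bull": 0.6, "Bear": 0.4}
transition = {
    "Bull": {"Bull": 0.7, "Bear": 0.3},
    "Bear": {"Bull": 0.4, "Bear": 0.6},
}
emission = {
    "Bull": {"up": 0.8, "down": 0.2},
    "Bear": {"up": 0.3, "down": 0.7},
}

def forward(obs_seq):
    """Return the total probability of emitting obs_seq."""
    # alpha[s] = probability of being in state s after emitting the prefix so far
    alpha = {s: initial[s] * emission[s][obs_seq[0]] for s in states}
    for obs in obs_seq[1:]:
        alpha = {
            s: sum(alpha[prev] * transition[prev][s] for prev in states)
               * emission[s][obs]
            for s in states
        }
    return sum(alpha.values())

print(forward(["up", "up", "down"]))
```

Real applications add a learning step (e.g. Baum-Welch) to fit these probability tables from data, which is where the “fucking complicated” part comes in.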

“Dude this is cool…but, uh, where do we use it? I mean I’ve never seen this stuff in any of the econ courses, and they barely brought it up in stats.”

Andy[1] stood back from the whiteboard as Josh and Johan loudly came back into the room with more beer. Josh took one look at what we were doing, sighed loudly, and reminded us we had to get to the bars soon and should finish getting ready by shouting and throwing his recently-plucked bottle cap from his Corona at us.

Getting his jacket from the couch, Andy[1] casually responded. “Artificial intelligence. This is all like the foundations of AI, which shows you just how fucking complicated that stuff can get in the real world.”

— —

Six years later I found myself again two beers deep talking about AI, this time with a young VC at a major Sand Hill Road firm. Unquestionably brilliant, he had been thrust into becoming one of his firm’s first infrastructure investors and thrown into the deep end of sourcing and filtering the most technical investments for his firm. And since I was a former infrastructure VC who had recently stepped back into operating, one of his friends had plugged us in to talk shop on things she was interested in.

Since 2013, AI had been riding a rocket ship of cool in the venture community. Major advances in GPU computing and algorithms had given the world the tools to no longer reinvent the wheel in building models like the HMM Andy once showed me in college. Now anyone who had the command over programming and math to know how to use AI could bring the dark arts to bear on a variety of platforms, leading to exciting opportunities for investors across high tech.

But AI is hard. And like most other VCs, my partner in conversation didn’t have a CS, engineering, or math background. So like Andy[1] did in college, I started with the basics. I started explaining an HMM, attempting to abstract the math away into pure concepts. He seemed to grok what I was saying, and became rightfully excited that he was starting to get what some of the terminology being thrown at him by founders meant.

“So we just put all of this data into this model and it works, right? This seems pretty straightforward. Why aren’t more people doing this? I mean, isn’t this easy?”

Remembering how much I struggled with the non-training-wheels version of this conversation, I twitched for a moment. I eventually responded by paraphrasing the end of Andy[1]’s conversation with me six years earlier.

“In the real world, when we’re actually doing math, this shit is fucking complicated.”

I once heard someone say that all of the simple problems had already been solved in computing, and that everything left to explore was the “hard stuff.”

AI is an area that exemplifies this. Built on the back of complex mathematics, simply teaching a system to read and interpret data to look for basic patterns requires a deep command of math and algorithms. While the value that AI can bring is unquestionably great, the intimately technical and mathematical nature of AI has made it difficult for people without STEM educational backgrounds to understand.

This has led to some problems in Silicon Valley.

On one end, many investors (who frequently don’t have technical degrees and come from areas like banking where deep math isn’t essential) have been put in the difficult position of trying to quickly understand the foundations of AI.

This is difficult to grok without a command of foundational math like statistics and algorithms — which not only provides a conceptual understanding of the models that power areas like semi-structured learning and the much-vaunted, nearly-impossible-to-do-but-fun-to-talk-about unstructured learning in AI, but also gives us the terminology necessary to talk about advances in these fields and their applications.

VCs aren’t dumb. In fact, if there’s one thing that all good VCs learn, it’s how to learn complex topics very quickly. So for investors without a STEM education who want to pick up AI without poring over half of CLRS, a good first step is to work backwards from conversation — listening to and learning from people who do have these educational or professional backgrounds, then looking up unfamiliar words and concepts with the context of that conversation and the applications/startups they’ve reviewed in mind.

This technique is called reflective observation (or RO). It’s basically the same thing as I was doing in college when I’d ask Andy[1] for background on the application of AI — I was trying to develop my understanding of what he was talking about based on other experiences and things I knew much better. Every good investor I’ve ever met and worked with is a master of reflective observation.

For most things, RO works great. But RO requires a very critical component to be successful: a good, well-formed basis of communication between parties to transfer information.

And therein lies another problem. While non-technical VC investors attempting to learn AI may have the raw intelligence and skills to learn by RO, the nature of their conversations with founders and other technologists is often colored by another opaque, asymmetric information problem that we don’t talk enough about in Silicon Valley:

It’s fucking scary to talk to VCs.

The popular image of a venture capitalist is one of a daunting tycoon. He or she is capable of opening the doors to success or failure with their firm’s funds. And given that so few of these individuals exist in high tech, every moment that you have with them as an entrepreneur is critical.

For entrepreneurs in AI who don’t have a background in VC (or haven’t been in a boardroom or taken funding themselves), the complex interweave of reflective observation and general socializing/establishing credibility can be very confusing. In fact, it’s not unlike the difficulty faced by investors who don’t have a technical background in artificial intelligence.

The result is a pretty interesting situation: both sides try to employ RO to learn each other’s craft and vocabulary while trying to establish strong credibility and leverage in their encounter. Unfortunately neither side knows they’re both employing reflective observation, and the result is usually less than ideal.

Usually this results in buzzwordism. Common phrases in AI like “Natural Language Processing” get repeated ad nauseam in a real-life race condition: non-technical VCs employ RO to suss out what these terms mean, while entrepreneurs trump up the application of these fields (however tenuous it might seem) because VCs keep bringing them up in conversation. This leads to a comically dark circular loop, where both investors and founders repeatedly throw terminology at each other in order to derive some kind of contextual meaning from how the other side reacts.

Too much buzzwordism at scale can lead to serious negative externalities in the macroeconomy of startup tech. As one of my good friends once highlighted, he felt like every pitch he saw at a conference “involving basic if/then-style logic was framed in the context of artificial intelligence and machine learning.”

Over time, the fatigue of dealing with this race-condition-esque buzzwordism from both investors and operators attempting to discreetly employ RO can lead to cynicism on both sides of the fence.

This makes it much more difficult to get excited about certain companies if you’re a cynical investor. And as a cynical entrepreneur, it makes it difficult for you to want to spend the time highlighting your company’s unique technology and talents if you believe that the sole road to success is to chant phrases like “semi-structured learning” five times as if you’re trying to summon Bloody Mary.

All of this is by no means unique to artificial intelligence. Having been on both sides of the operating/investing fence in security, I’ve seen investing around complex technical topics like cryptography elicit a similar buzzwordism and eventual cynical fatigue for both investors and entrepreneurs. Other math-heavy areas of infrastructure like storage and advanced computing suffer similarly from this accidental race condition of both investors and entrepreneurs trying to discreetly employ reflective observation at the same time.

While this is an old problem — as old as Silicon Valley itself — there is light at the end of the rainbow. For the first time in Silicon Valley’s history, there is a large influx of STEM-educated technologists entering the venture ecosystem. Thanks to the newly in-vogue status of CS, there is a boom in young associates entering Silicon Valley VC firms with professional and educational foundations in these technical topics (or at least the building blocks necessary to learn them).

Similarly, a wave of big VC exits in the last five years is powering lots of former technical entrepreneurs to “come back” to VC, either as partners in firms or as advisors who help guide existing investors in understanding bleeding-edge technologies like modern artificial intelligence.

Asymmetric information remains one of the biggest problems in VC investing. But things seem to be getting better. And as content marketing is now a critical part of a VC firm’s success in gaining access (see: A16Z), venture is becoming a much less opaque practice to technologists.

Overall, it’s always going to be difficult to bridge the technology divide on complex, technical topics with people who haven’t operated within or studied these fields. But as we talk more, and spend more time openly communicating between investors and operators, it’s going to get easier finding a common language to discuss how so much truly depends on the red wheelbarrow.



Andy Manoske

Principal PM for Cryptography and Security Products @HashiCorp. Formerly Defense/NatSec & Crypto @NetApp, VC @GGVCapital + @AmplifyPartners