Welcome to the 24th edition of Black Box. If you missed it, I recently completed a series on emerging research related to generative AI. This time, I trade theory for practice.
Nine months ago, I joined Nathan Baschez in building Lex as chief of staff and employee #1, starting on day one. (Many of you probably know him as the founder of Every, the tech and business newsletter collective, or as the co-creator of Product Hunt.) I have worked with early-stage startups for years as an investor and advisor, so I had ideas about what would happen. But the lessons I have picked up are not at all what I expected. Here are some of the main ones:
Users are surprisingly tolerant of one-off issues even if they’re big, but surprisingly intolerant of persistent issues even if they’re small
Lex is a team of four today, and it was just Nathan and me until late last year. We don't have the bandwidth to make sure every feature works perfectly before we ship it, which inevitably means some will fail after release. The vast majority of the resulting issues are small, and only a couple of people notice them. But in a few rare instances, a whole function breaks, the emails come flooding in, and we go into firefighting mode.
I always fear that this could cause a spike in churn or otherwise hurt Lex, but (knock on wood!) it has yet to happen. As long as we fix the problem quickly (it usually takes us one to three hours), users give us a pass and forget about it. Perhaps this is thanks to the goodwill we've earned before the incident, or because they understand (and even expect, to some extent) that a new product built by a small team will occasionally break.
Conversely, issues that I consider small are why some people leave Lex. For example, one user told me they were frustrated that we had not added support for a language they wanted. We deprioritized it because this was the first time the request had come up, and because the main thing language affects is the labels on the buttons. (Lex is an AI-powered word processor; the AI suggestions automatically match the language the document is written in.) But after they requested it a second time to no avail, it must have seemed that we were willfully ignoring them.
Onboarding and user education are actually product pitches
I used to think of onboarding as just teaching users how to use the product. Using the product well would help them understand its value and therefore improve retention and conversion. This was the point of all user education, right?
Not quite. Using a product well is correlated with understanding its value, but it is the effect rather than the cause. After manually onboarding dozens of users, I believe the goal of onboarding is to pitch why this product is better than whatever they are currently using. And as a pitch, it should explicitly explain the advantage and convince users by getting them to experience it firsthand as quickly as possible. Contrary to my original “teaching” view, a good onboarding sequence should not highlight too many features, as this dilutes the pitch.
User education post-onboarding continues this pitch. It should prioritize secondary features that support the value proposition and explicitly tie relevant new features back to it. Of course, there are also basic mechanics that user education does need to teach for the sake of teaching, but I find that users who “get it” have generally figured those out themselves by the time we show them. So focus first on why they should use you!
User research is about recreating an experience, thought process, or workflow, not collecting literal feedback
As chief of staff to a solo technical founder, I am responsible for everything that is not writing code. This is an extremely wide scope, but when I am not reaching out to design partners, helping users, paying contractors, and the rest, you will find me talking to users to understand both how we can make Lex better for them and how they can use Lex better.
Despite user research being the focus of my job, I was not initially good at it. I asked a lot of users the same questions and summarized the answers. But Nathan kept asking for specific examples, explaining that abstracting the feedback made it less helpful. I interpreted this advice as a push for more detail and increased my follow-ups, but when we sat down to review my notes, Nathan still had questions. He would zero in on what I had dismissed as an offhand comment while skimming past an elaboration that ran on for several emails. I was confused.
After many review sessions, I realized that what Nathan was really after was a recreation of the user's experience, thought process, or workflow. He wanted to know what a user thought, sure, but also what happened, why they were doing what they did, and how they had gotten there. That context is often complex and difficult to explain directly, so my role is to tease it out with some alternative method. That might be follow-up questions, a screen recording, a hypothetical (“what would you do if…”), or a step-by-step walkthrough. Getting feedback for the sake of it misses the point.
Leverage the fact that users look to software to figure out their own processes
Many successful software companies sell not just software but a complete workflow and philosophy. For example, Notion shows a new user different templates depending on the use case they select in onboarding. Work users are greeted with kanban boards, product specs, and investor updates, while academic users can get started with calendars, notes, and to-do lists. This provides structures on which a company could run its operations or a student could organize their life.
Linear goes even further with a manifesto of best practices for building and managing products. These ideas are not just a public resource but are built into Linear itself, such that teams that use Linear are directly buying into Linear's method. (We at Lex are big fans; in fact, I use Linear as my personal to-do and prioritization tool.)
The same goes for Lex. Because we believe the best writing happens when AI writes with you, not for you, Lex is a word processor instead of a chat app. You can access the AI through a chatbot that can “see” your document, so there is no need to paste it in, or by tagging “@Lex” in a comment like you would a human collaborator. And the Lex method is an iterative writing workflow that uses AI feedback and versions to help you from outlining to editing.
I could keep going, but I’ll save the other lessons for a part two in a couple of months. Things are moving fast so I’m sure that I’ll have new lessons to share then as well. Until then, please check out Lex and let me know what you think. (And yes, I did draft this in Lex!) ∎
If you work at an early-stage startup, I'd love to hear whether any of these resonate with you, and where you disagree. Let me know @jwang_18 or on LinkedIn!