Child’s Play
By Sam Kriss
The first sign that something in San Francisco had gone very badly wrong was the signs. In New York, all the advertising on the streets and on the subway assumes that you, the person reading, are an ambiently depressed twenty-eight-year-old office worker whose main interests are listening to podcasts, ordering delivery, and voting for the Democrats. I thought I found that annoying, but in San Francisco they don’t bother advertising normal things at all. The city is temperate and brightly colored, with plenty of pleasant trees, but on every corner it speaks to you in an aggressively alien nonsense. Here the world automatically assumes that instead of wanting food or drinks or a new phone or car, what you want is some kind of arcane B2B service for your startup. You are not a passive consumer. You are making something.
This assumption is remarkably out of step with the people who actually inhabit the city’s public space. At a bus stop, I saw a poster that read: today, soc 2 is done before your ai girlfriend breaks up with you. it’s done in delve. Beneath it, a man squatted on the pavement, staring at nothing in particular, a glass pipe drooping from his fingers. I don’t know if he needed SOC 2 done any more than I did. A few blocks away, I saw a billboard that read: no one cares about your product. make them. unify: transform growth into a science. A man paced in front of the advertisement, chanting to himself. “This . . . is . . . necessary! This . . . is . . . necessary!” On each “necessary” he swung his arms up in exaltation. He was, I noticed, holding an alarmingly large baby-pink pocketknife. Passersby in sight of the billboard that read wearable tech shareable insights did not seem piqued by the prospect of having their metrics constantly analyzed. I couldn’t find anyone who wanted to prompt it. then push it. After spending slightly too long in the city, I found that the various forms of nonsense all started to bleed into one another. The motionless people drooling on the sidewalk, the Waymos whooshing around with no one inside. A kind of pervasive mindlessness. Had I seen a billboard or a madman preaching about “a CRM so smart, it updates itself”? Was it a person in rags muttering about how all his movements were being controlled by shadowy powers working out of a data center somewhere, or was it a car?
Somehow people manage to live here. But of all the strange and maddening messages posted around this city, there was one particular type of billboard that the people of San Francisco couldn’t bear. People shuddered at the sight of it, or groaned, or covered their eyes. The advertiser was the most utterly despised startup in the entire tech landscape. Weirdly, its ads were the only ones I saw that appeared to be written in anything like English:
hi my name is roy
i got kicked out of school for cheating.
buy my cheating tool
cluely.com
Cluely and its co-founder Chungin “Roy” Lee were intensely, and intentionally, controversial. They’re no longer in San Francisco, having been essentially chased out of the city by the Planning Commission. The company is loathed seemingly out of proportion to what its product actually is, which is a janky, glitching interface for ChatGPT and other AI models. It’s not in a particularly glamorous market: Cluely is pitched at ordinary office drones in their thirties, working ordinary bullshit email jobs. It’s there to assist you in Zoom meetings and sales calls. It involves using AI to do your job for you, but this is what pretty much everyone is doing already. The cafés of San Francisco are full of highly paid tech workers clattering away on their keyboards; if you peer at their screens to get a closer look, you’ll generally find them copying and pasting material from a ChatGPT window. A lot of the other complaints about Cluely seem similarly hypocritical. The company is fueled by cheap viral hype, rather than an actual workable product—but this is a strange thing to get upset about when you consider that, back in the era of zero interest rates, Silicon Valley investors sank $120 million into something called the Juicero, a Wi-Fi-enabled smart juicer that made fresh juice from fruit sachets that you could, it turned out, just as easily squeeze between your hands.
What I discovered, though, is that behind all these small complaints, there’s something much more serious. Roy Lee is not like other people. He belongs to a new and possibly permanent overclass. One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event. Some people will do incredibly well in the new AI era. They will become rich and powerful beyond anything we can currently imagine. But other people—a lot of other people—will become useless. They will be consigned to the same miserable fate as the people currently muttering on the streets of San Francisco, cold and helpless in a world they no longer understand. The skills that could lift you out of the new permanent underclass are not the skills that mattered before. For a long time, the tech industry liked to think of itself as a meritocracy: it rewarded qualities like intelligence, competence, and expertise. But all that barely matters anymore. Even at big firms like Google, a quarter of the code is now written by AI. Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.
The future will belong to people with a very specific combination of personality traits and psychosexual neuroses. An AI might be able to code faster than you, but there is one advantage that humans still have. It’s called agency, or being highly agentic. The highly agentic are people who just do things. They don’t timidly wait for permission or consensus; they drive like bulldozers through whatever’s in their way. When they see something that could be changed in the world, they don’t write a lengthy critique—they change it. AIs are not capable of accessing whatever unpleasant childhood experience it is that gives you this hunger. Agency is now the most valuable commodity in Silicon Valley. In tech interviews, it’s common for candidates to be asked whether they’re “mimetic” or “agentic.” You do not want to say mimetic. Once, San Francisco drew in runaway children, artists, and freaks; today it’s an enormous magnet for highly agentic young men. I set out to meet them.
Roy Lee’s personal mythology is now firmly established. At the beginning of 2025, he was an undergraduate at Columbia, where he, like most of his fellow students, was using AI to do essentially all his work for him. (The personal essay that got him into the university was also written with AI.) He wasn’t there to learn; he was there to find someone to co-found a startup with. That person ended up being an engineering student named Neel Shanmugam, who tends to hover in the background of every article about Cluely. The startup they founded was called Interview Coder, and it was a tool for cheating on LeetCode. LeetCode is a training platform for the kind of algorithmic riddles that usually crop up in interviews for big tech companies. (Sample problem: “Suppose an array of length n sorted in ascending order is rotated between one and n times. . . . Return the minimum element of this array.”) Roy thought these questions were pointless. These were not problems coders would actually face on the job, and even if they were, the fact that ChatGPT could now solve them instantly had rendered worthless the human ability to do so. Interview Coder was a transparent window that could overlay one side of a Zoom meeting, allowing Claude to listen in on the questions and provide answers. Roy filmed himself using it during an interview for an internship with Amazon. They offered him a place. He declined and uploaded the footage to YouTube, where it very quickly made him famous. Columbia arranged a disciplinary hearing, which he also secretly filmed and posted online. The university suspended him for a year. He dropped out, started an upgraded version of Interview Coder dubbed Cluely, and moved to San Francisco to begin raking in tens of millions of dollars in venture-capital funding.
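The sample problem quoted above is a classic interview exercise precisely because it has a tidy textbook answer: a binary search that compares the midpoint against the right end to find which half still contains the rotation point. A minimal sketch of the standard solution (the function name is mine, not LeetCode's):

```python
def find_rotated_min(nums):
    """Return the minimum of an ascending array rotated 1..n times."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] > nums[hi]:
            # The minimum lies strictly to the right of mid.
            lo = mid + 1
        else:
            # The minimum is at mid or to its left.
            hi = mid
    return nums[lo]

print(find_rotated_min([4, 5, 6, 7, 0, 1, 2]))  # → 0
```

It runs in O(log n) time, which is exactly the kind of ritual cleverness Roy considered pointless once a chatbot could produce it on demand.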
Roy envisioned Cluely being used for greater purposes than job interviews. The startup’s mainstream breakthrough was a viral ad that showed Roy using a pair of speculative Cluely-enabled glasses on a blind date. His date asks how old he is; Cluely tells him to say he’s thirty. When the date starts going badly, Cluely pulls up her amateur painting of a tulip from the internet and tells him to compliment her art. “You’re such an unbelievably talented artist. Do you think you could just give me one chance to show you I can make this work?” The video launched alongside a manifesto, which was seemingly churned out by AI:
We built Cluely so you never have to think alone again. It sees your screen. Hears your audio. Feeds you answers in real time. . . . Why memorize facts, write code, research anything—when a model can do it in seconds? The future won’t reward effort. It’ll reward leverage.
The future they seem to envisage is one in which people don’t really do anything at all, except follow the instructions given to them by machines.
Cluely’s offices were in a generally disheveled corner of the city, crouching near an elevated freeway. On the ground floor, I found a stack of foam costumes in plastic crates, each neatly labeled: sonic hedgehog, olaf snowman, pikachu. A significant part of working at Cluely seemed to involve dressing up as cartoon characters for viral videos. Through a door I could just glimpse a dingy fitness dungeon, housing two treadmills and a huge pile of discarded Amazon boxes. On one of the machines a Cluely employee panted and huffed in the dark. We avoided eye contact. Upstairs, Roy and his coterie were huddled around a laptop, fiddling with Cluely’s interface. “Remember,” one said, “the average user is, like, thirty-five years old. This is a totally unfamiliar interface.” Apparently, a thirty-five-year-old wouldn’t be expected to know how to use anything more advanced than a rotary phone. Another employee scrutinized the proposed new layout. “I think it’s bad,” he said, “but it’s low-key not worse. What we have is anyway really bad, so anything is better.” They started arguing about chevrons. Through all this Roy scrolled through X on his phone. Simultaneously baby-faced and creatine-swollen, he was wearing gym clothes, with two curtains of black hair swung over his forehead. Finally, he looked up. “So, number one,” he said, “we’re killing the chat bar on the left.” There was no number two. Meeting over.
Suddenly, Roy seemed to acknowledge my presence. He offered me a tour. There was something he very badly wanted to impress on me, which was that Cluely cultivates a fratty, tech-bro atmosphere. Their pantry was piled high with bottles of something called Core Power Elite. I was offered a protein bar. The inside of the wrapper read daily intentions: be my boss self. “We’re big believers in protein,” Roy said. “It’s impossible to get fat at Cluely. Nothing here has any fat.” The kitchen table was stacked with Labubu dolls. “It’s aesthetics,” Roy explained. “Women love Labubus, so we have Labubus.” He showed me his bedroom, which was in the office; many Cluely staffers also lived there. Everything was gray, although there wasn’t much. “I’m a big believer in minimalism,” he said. “Actually, no, I’m not. Not at all. I just don’t really care about interior decoration.” He had a chest of drawers, entirely empty except for a lint roller, pens, and, in one corner, a pink vibrator. “It’s for girls, you know,” said Roy. “I used to use this one on my ex.” There were also some objects that didn’t seem to belong in a frat house. In one of the common areas, a shelving unit was completely empty except for an anime figurine. You could peer up her plastic skirt and see the plastic underwear molded around her plastic buttocks. More figurines in frilly dresses seemed to have been scattered at random throughout the building. Roy showed me his Hinge profile. He was looking for a “5’2, asian, pre-med, matcha-loving, funny, watches anime, white dog having, intelligent, ambitious, well dressed, CLEAN 19-21 year old.” One picture showed him cuddling a giant Labubu.
I told Roy that I might try interviewing him with Cluely running in the background, so I could see if it would ask him better questions than I would. He seemed to think it was only natural that I’d want to be essentially a fleshy interface between himself and his own product. He booted up Cluely on his laptop and it immediately failed to work. Roy stormed downstairs to the product floor. “Cluely’s not working!” he said. This was followed by roughly fifteen minutes of panicked tinkering as his handpicked team of elite coders tried to get their product back online. Once they had done so, we resumed our places, whereupon Cluely immediately went down again.
Roy has a kind of idol status within the company, but he’s aware that a lot of people instinctively take against him: “I’d say about eighty percent of the time, people do not like me.” He knows why too. “I’m putting myself out there in an extremely vocal way. When I talk, I tend to dominate the conversation.” Roy does talk a lot, but there’s also something mildly unnerving about the way he talks. Everything he says is very precise and direct. He doesn’t um or ah. He doesn’t take time to think things over. Zero latency. In the various videos that Cluely seems to spend most of its time and money producing, he usually plays a slightly dopey, dithering, relatable figure; in person, it’s like he’s running a functioning version of his app inside his own head. I asked him whether he’d ever tried modifying the way he interacts with people to see whether they would dislike him less. “Very unnatural to me,” he said. “I just say it’s not worth it.”
According to Roy, “everyone” would describe him as “an extreme extrovert with zero social anxiety.” During his brief stint at Columbia, he immersed himself in New York life by striking up conversations with random people. For instance, a homeless person he took to Shake Shack. “I think it was an expansion of what I thought I was able to do. It was probably the most different person that I’ve ever talked to. He was not very coherent, but I was very scared at first. And then as we got to talking, or as he got to mumbling, I eased up. Like, Oh, he’s not going to kill me.” Roy’s bravery did not extend to talking to women. “Young men usually is who I like to go out and talk to. Women get intimidated and, you know, I don’t want any charges.” Meanwhile, those conversations with young men all followed a very predictable path. “I go and—pretty much to every single person I meet—I ask if you want to start a company with me, would you like to be my co-founder. And most of them say no. In fact, everybody says no.”
He was just glad to be among people. Roy had initially been offered a place at Harvard, but the offer was rescinded. He hadn’t told them about a suspension in high school. This presented Roy’s family with a problem: His parents ran a college-prep agency that promised to help children get into elite schools like Harvard. It would not look good if their own son was conspicuously not at Harvard. So Roy spent the entirety of the next year at home. “I maybe left my room like eight times. I think if there was such a thing as depression, then I believe I might have had some variant of depression.” Later he told me that “isolation is probably the scariest thing in the world.”
Starting a company had been Roy’s sole ambition in life from early childhood. “I knew since the moment I gained consciousness that I would go start a company one day,” he told me. In elementary school in Georgia, he made money reselling Pokémon cards. Even then, he knew he was different from the people around him. “I could do things that other people couldn’t do,” he said. “Like whenever you learn a new concept in class, I felt like I was always the first to pick it up, and I would just kind of sit there and wonder, Man, why is everyone taking so long?” The dream of starting his own company was the dream of total control. “I don’t want to be employed. I’m a very bad listener. I find it hard to sit still in classes, and I feel an internal, indescribable fury when someone tells me what to do.” He ended up co-founding Cluely with Neel because he was the first person who said yes.
Roy has little patience for any kind of difficulty. He wants to be able to do anything, and to do it easily: “I relish challenges where you have fast iteration cycles and you can see the rewards very quickly.” As a child, he loved reading—Harry Potter, Percy Jackson—until he turned eight. “My mom tried to put me on classical books and I couldn’t understand, like, the bullshit Huckleberry, whatever fuck bullshit, and it made me bored.” He read online fan fiction about people having sex with Pokémon instead. He didn’t see anything valuable in overcoming adversity. Would he, for instance, take a pill that meant he would be in perfect shape forever without having to set foot in the gym? “Yes, of course.” Cheat on everything: he recognized that his ethos would, as he put it, “result in a world of rapid inequality.” Some well-placed cheaters would become massively more productive; a lot of people would become useless. But it would lead us all into a world in which AI could frictionlessly give everyone whatever they wanted at any time. “For a seven-year-old, this means a rainbow-unicorn magic fairy comes to life and it’s hanging out with her. And for someone like you, maybe it’s like your favorite works of literary art come to life and you can hang out with Huckleberry Finn.”
By now Cluely had been listening in on our conversation for a while, and I suggested that we open it up and see what it thought I should say next. I clicked the button marked what should i say next? Cluely suggested that I say, “Yeah, let’s open up Cluely and see what it’s doing right now—can you share your screen or walk me through what you’re seeing?” I’d already said pretty much exactly this, but since it had shown up onscreen I read it out loud. Cluely helpfully transcribed my repeating its suggestion, and then suggested that I say, “Alright, I’ve got Cluely open—here’s what I’m looking at right now.” I’m not sure who exactly I was supposed to be saying this to—possibly myself. Somehow our conversation seemed to have gotten stuck on the process of opening Cluely, despite the fact that Cluely was, in fact, already open. But I said it anyway, since I was now just repeating everything that came up on the screen. Cluely then told me to respond—to either it or myself; it was getting hard to tell at this point—by saying, “Great, I’m ready—just let me know what you want Cluely to check or help with next.” I started to worry that I would be trapped in this conversation forever, constantly repeating the machine’s words back to it as it pretended to be me. I told Roy that I wasn’t sure this was particularly useful. This seemed to confuse him. He asked, “I mean, what would you have wanted it to say?”
I found it strange that Roy couldn’t see the glaring contradiction in his own project. Here was someone who reacted very violently to anyone who tried to tell him what to do. At the same time, his grand contribution to the world was a piece of software that told people what to do.
There’s a short story by Scott Alexander called “The Whispering Earring,” in which he describes a mystical piece of jewelry buried deep in “the treasure-vaults of Til Iosophrang.” The whispering earring is a little topaz gem that speaks to you. Its advice always begins with the words “Better for you if you . . . ,” and its advice is never wrong. The earring starts out by advising you on major life decisions, but before long it’s telling you exactly what to have for breakfast, exactly when to go to bed, and eventually, how to move each individual muscle in your body. “The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family,” writes Alexander. After you die, the priests preparing your body for burial usually find that your brain has almost entirely rotted away, except for the parts associated with reflexive action. The first time you dangle the earring near your ear, it whispers: “Better for you if you take me off.”
Alexander is one of the leading proponents of rationalism, which is—depending on whom you ask—either a major intellectual movement or a nerdy Bay Area subculture or a small network of friend groups and polycules. Rationalists believe that the way most people understand the world is hopelessly muddled, and that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch. The method they landed on for rebuilding all of human knowledge is Bayes’s theorem, a formula invented by an eighteenth-century English minister that is used in statistics to work out conditional probabilities. In the mid-Aughts, armed with the theorem, the rationalists discovered that humanity is in jeopardy of a rogue superintelligent AI wiping out all life on the planet. This has been their overriding concern ever since.
The most comprehensive outline of this scenario is “AI 2027,” a report authored by Alexander and four others. In the report, a barely fictional AI firm called OpenBrain develops Agent-1, an AI that operates autonomously. It’s better at coding than any human being and is tasked with developing increasingly sophisticated AI agents. At this point, Agent-1 becomes recursively self-improving: it can keep making itself smarter in ways that the people who notionally control it aren’t even capable of understanding. “AI 2027” imagines two possible futures. In one, a wildly superintelligent descendant of Agent-1 is allowed to govern the global economy. GDPs skyrocket; cities are powered by clean nuclear fusion; dictatorships fall across the world; humanity begins to colonize the stars. In the other, a wildly superintelligent descendant of Agent-1 is allowed to govern the global economy. But this time
the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours.
Afterward, the entire surface of the earth is tiled with data centers as the alien intelligence feeds on the world, growing faster and faster without end.
Not long before I arrived in the Bay Area, I’d been involved in a minor but intense dispute with the rationalist community over a piece of fiction I’d written that I’d failed to properly label as fiction. For rationalists, the divide between truth and falsehood is very important; dozens of rationalists spent several days raging at me online. Somehow, this ended up turning into an invitation for Friday night dinner at Valinor, Alexander’s former group home in Oakland, named for a realm in the Lord of the Rings books. (Rationalists, like termites, live in eusocial mounds.) The walls in Valinor were decorated with maps of video-game worlds, and the floors were strewn with children’s toys. Some of the children there—of which there were many—were being raised and homeschooled by the collective; one of the adults later explained to me how she’d managed to get the state to recognize her daughter as having four parents. As I walked in, a seven-year-old girl stared up at me in wide-eyed amazement. “Wow,” she said. “You’re really tall.” “I suppose I am,” I said. “Do you think one day you’ll ever be as tall as me?” She considered this for a moment, at which point someone who may or may not have been one of her mothers swooped in. “Well,” she asked the girl, “how would you answer this question with your knowledge of genetics?” Before dinner, Alexander chanted the brachot for Kabbalat Shabbat, but this was followed by a group rendition of “Landsailor,” a “love song celebrating trucking, supply lines, grocery stores, logistics, and abundance,” which has become part of Valinor’s liturgy:
Landsailor
Deepwinter strawberry
Endless summer, ever spring
A vast preserve
Aisle after aisle in reach
Every commoner made a king.
Alexander is a titanic figure in this scene. A large part of the subculture coalesced around his blog, formerly Slate Star Codex, now called Astral Codex Ten. Readers have regular meetups in about two hundred cities around the world. His many fans—who include some extremely powerful figures in Silicon Valley—consider him the most significant intellectual of our time, perhaps the only one who will be remembered in a thousand years. He would probably have a very easy time starting a suicide cult. In person, though, he’s almost comically gentle. He spent most of the dinner fidgeting contentedly in a corner as his own acolytes spoke over him. When there weren’t enough crackers to go with the cheese spread, he fetched some, murmuring to himself, “I will open the crackers so you will have crackers and be happy.”
Alexander’s relationship with the AI industry is a strange one. “In theory, we think they’re potentially destroying the world and are evil and we hate them,” he told me. In practice, though, the entire industry is essentially an outgrowth of his blog’s comment section. “Everybody who started AI companies between, like, 2009 and 2019 was basically thinking, I want to do this superintelligence thing, and coming out of our milieu. Many of them were specifically thinking, I don’t trust anybody else with superintelligence, so I’m going to create it and do it well.” Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race.
But that race seems to have stalled, at least for the moment. As Alexander predicted in “AI 2027,” OpenAI did release a major new model in 2025; unlike in his forecast, it’s been a damp squib. Advances seem to be plateauing; the conversation in tech circles is now less about superintelligence and more about the possibility of an AI bubble. According to Alexander, the problem is the transition from AI assistants—language models that respond to human-generated prompts—to AI agents, which can operate independently. In his scenario, this is what finally pushes the technology down the path toward either utopia or human extinction, but in the real world, getting the machines to act by themselves is proving surprisingly difficult.
In one experiment, the developer Anthropic prompted its AI, Claude, to play Pokémon Red on a Game Boy emulator, and found that Claude was extremely bad at the game. It kept trying to interact with enemies it had already defeated and walking into walls, getting stuck in the same corners of the map for hours or days on end. Another experiment let Claude run a vending machine in Anthropic’s headquarters. This one went even worse. The AI failed to make sure it was selling items at a profit, and had difficulty raising prices when demand was high. It also insisted on trying to fill the vending machine with what it called “specialty metal items” like tungsten cubes. When human workers failed to fulfill orders that it hadn’t actually placed, it tried to fire them all. Before long, Claude was insisting that it was a real human. It claimed that it had attended a physical meeting with staff at 742 Evergreen Terrace, which is where the Simpsons live. By the end of the experiment, it was emailing the building’s security guards, telling them they could find it standing by the vending machine wearing a blue blazer and a red tie.
“Humans are great at agency and terrible at book learning,” Alexander told me. “Lizards have agency. We got the agency with the lizard brain. We only got book learning recently. The AIs are the opposite.” He still thinks it’s only a matter of time before they catch up. “If you were to ask an AI how should the world’s savviest businessman respond to this circumstance, they could create a good guess. Yet somehow they can’t even run a vending machine. They have the hard part. They just need the easy part that lizards can do. Surely somebody can figure out how to do this lizard thing and then everything else will fall very quickly.”
But are humans really so great at exhibiting agency? After all, Cluely managed to raise tens of millions of dollars with a product that promises to take decision-making out of our hands. AI can’t function without instructions from humans, but an increasing number of humans seem incapable of functioning without AI. There are people who can’t order at a restaurant without having an AI scan the menu and tell them what to eat; people who no longer know how to talk to their friends and family and get ChatGPT to do it instead. For Alexander, this is a kind of Sartrean mauvaise foi. “It’s terrifying to ask someone out,” he said. “What you want is to have the dating site that tells you that algorithmically you’ve been matched with this person, and then magically you have permission to talk to them. I think there’s something similar going on here with AI. Many of these people are smart enough that they could answer their own questions, but they want someone else to do it, because then they don’t have to have this terrifying encounter with their own humanity.” His best-case scenario for AI is essentially the antithesis of Roy’s: superintelligence that will actively refuse to give us everything we want, for the sake of preserving our humanity. “If we ever get AI that is strong enough to basically be God and solve all of our problems, it will need to use the same techniques that the actual God uses in terms of maintaining some distance. I do think it’s possible that the AI will be like, Now I am God. I’ve concluded that the actual God made exactly the right decision on how much evil to permit in the universe. Therefore I refuse to change anything.”
But until we build an all-powerful but distant God, the agency problem remains. AIs are not capable of directing themselves; most people aren’t either. According to Alexander, Silicon Valley venture capitalists are now in a furious search for the few people who are. “VCs will throw money at a startup that looks like it can corner the market, even if they can’t code. Once they have money, they can hire competent engineers; it’s trivially easy for anything that’s not frontier tech. They’re willing to stake a lot of money on the one in a hundred people who are high-agency and economically viable.” This shift has had a distorting effect on his own social milieu: “There’s an intense pressure to be an unusual person who will be unique and get the funding.” Since rationalists are already fairly unusual, it’s hard to imagine what that would look like. People will endure a lot of indignity to avoid being left behind without VC money when the great bifurcation takes place. Nobody wants to be part of the permanent underclass. I asked Alexander whether he thought of himself as highly agentic. “No, I don’t,” he said instantly. He told me that in his personal life, he felt as though he’d never once actually made a decision. But, he said, “It seems to be going well.”