What I’m reading
A few reads from the weekend.
Where did all the affordable cars go?
What started in 1964 as a retaliatory strike against European duties on American poultry grew over time into an impenetrable shield to safeguard domestic automakers’ sales of light trucks and United Auto Workers’ jobs from a rising tide of foreign imports. Both political parties participated; in 1981 the Reagan administration pressured the Japanese government to cap vehicle exports, leading the Japanese to shift to more expensive vehicles that would increase profit. Detroit, naturally, raised prices as well.
Even during the free trade era of NAFTA — initially proposed by President Ronald Reagan, negotiated by President George H.W. Bush and ultimately pushed through by President Bill Clinton — the United States maintained a tariff on passenger cars from outside North America. During this period, lawmakers set fuel economy standards for trucks and S.U.V.s that were roughly six to eight miles per gallon less stringent than those for cars. They’d hoped the change would keep costs low for farmers and tradespeople who needed larger engines for heavy work, but it ultimately helped drive Detroit to dump the fuel-efficient sedan for the large, high-profit-margin S.U.V.
Decades of protectionism shielded Detroit from the robust global competition that would have forced it to match the quality, fuel efficiency and pricing of its foreign rivals — and had the unintended consequence of forcing millions of Americans to pay well above market prices elsewhere in the world.
Looking back at an old Kurzweil post on personal AI companions
There have been other attempts to show AIs as humans (albeit not biological) that you can have a relationship with; for example, Steven Spielberg’s 2001 film AI. That movie suffered from an all-too-common flaw of science futurism movies: it introduced a single futuristic technology — human-level cyborgs — onto an otherwise unchanged world. Her is better in this dimension, although not completely successful. It does portray a somewhat futuristic world in which the leap to human-level AIs is not so implausible.
I would place some of the elements in Jonze’s depiction at around 2020, give or take a couple of years, such as the diffident and insulting videogame character he interacts with, and the pin-sized cameras that one can place like a freckle on one’s face. Other elements seem more like 2014, such as the flat-panel displays, notebooks and mobile devices.
Samantha herself I would place at 2029, when the leap to human-level AI would be reasonably believable.
Paying to get into college – but also paying to get your kids a job after
Career coaching for college students can cost a few hundred dollars an hour for interview rehearsals and application strategies, with more comprehensive packages typically ranging from $3,000 to $10,000. But New York City-based Priority Candidates says some parents are paying upwards of $30,000 for intensive support and subject-matter experts to prepare their children for entry-level jobs in finance and similarly ultra-competitive industries; the price tags at other companies go up from there.
People are paying more and getting less. This is what I call the upper middle class trap.
Right now, the upper middle class is in fierce competition for a marginal improvement in lifestyle. They’re working more and relaxing less to purchase products and services with clearly declining quality. It’s a financial arms race that doesn’t make any sense.
You have people making six-figure incomes going into a frenzy for nicer homes, better schools, and more luxurious travel experiences. What’s the end result of this status contest? Overpaying, and by a lot.
This same competitiveness partially explains why college tuition and private school costs have grown twice as fast as overall inflation over the past few decades. With more students applying to roughly the same number of spots, you can keep raising prices.
This is especially true at the top universities. Since 2015, the number of college applicants has gone up 78% while acceptance rates at elite colleges have plummeted.
This increasing struggle for scarce positional goods keeps the upper middle class overworked and trapped in the rat race.
Orthographic skeletons
The study by Ataman, Beyersmann, Castles, and Wegener explores a simple question: when we hear a new word, do we start forming a guess about how it is spelled before we ever see it written down? The authors call these guesses “orthographic skeletons”.
The concept, first proposed by Wegener and colleagues in 2018, is based on a deceptively simple insight: when a child learns a new word orally (hearing it spoken, understanding its meaning, using it in conversation), their knowledge of how sounds map onto letters (phoneme-grapheme correspondences) allows them to generate an expectation about how that word might be spelled. Not a complete, fully formed spelling, but a partial sketch: a skeleton.
To test this, researchers use invented nonsense words like “vish” or “jayf,” words that no participant has ever encountered before, so the experimenters can be certain that any spelling expectations were formed purely from oral training rather than prior reading experience. So take a reader who has been taught the spoken word “vish,” its meaning and its use in sentences, but has never seen it written down. Their knowledge of English spelling patterns tells them that the /v/ sound is typically written as “v,” the /ɪ/ sound as “i,” and the /ʃ/ sound as “sh.” Without ever seeing the word in print, they have already begun to assemble its orthographic form.
When that reader later encounters “vish” written on a page, the word is not entirely novel. It arrives into a cognitive space that has been prepared for it, a space where expectation meets confirmation. The result, demonstrated across multiple studies using both lexical recognition tasks and eye-tracking, is faster processing, shorter fixation times, and more efficient reading. The skeleton has done its invisible work.
macOS Battery Notifications
Maybe it’s just me, but I move around so much at the office that I often end the day with my MacBook at low single-digit battery levels. Things slow to a painful crawl. Maybe all these AI tools I have running are big drains on the battery. (Or maybe I just need a newer MacBook!)
I wanted simple iOS-like low battery notifications on macOS, so I vibed a quick script to do exactly that. It will remind you to find a charger at the 20% mark and then again at 10%. Find it here: https://github.com/naveen/battery_monitor.
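The script itself is in the repo, but the idea is a simple poll-and-notify loop. Here’s a minimal Swift sketch of that idea (an approximation on my part, not the repo’s actual code; it reads the battery level from IOKit’s power-source API and shells out to osascript for the notification):

```swift
import Foundation
import IOKit.ps

// Read the current battery percentage via IOKit's power-source API.
func batteryPercent() -> Int? {
    guard let info = IOPSCopyPowerSourcesInfo()?.takeRetainedValue(),
          let sources = IOPSCopyPowerSourcesList(info)?.takeRetainedValue() as? [CFTypeRef]
    else { return nil }
    for source in sources {
        guard let desc = IOPSGetPowerSourceDescription(info, source)?
                .takeUnretainedValue() as? [String: Any],
              let current = desc[kIOPSCurrentCapacityKey] as? Int,
              let max = desc[kIOPSMaxCapacityKey] as? Int, max > 0
        else { continue }
        return current * 100 / max
    }
    return nil
}

// Post a notification by shelling out to osascript (no app bundle needed).
func notify(_ message: String) {
    let p = Process()
    p.executableURL = URL(fileURLWithPath: "/usr/bin/osascript")
    p.arguments = ["-e", "display notification \"\(message)\" with title \"Battery\""]
    try? p.run()
}

// Fire once per threshold crossing; reset once we're back above 20%.
var warned = Set<Int>()
while true {
    if let pct = batteryPercent() {
        for threshold in [20, 10] where pct <= threshold && !warned.contains(threshold) {
            notify("Battery at \(pct)%: find a charger")
            warned.insert(threshold)
        }
        if pct > 20 { warned.removeAll() }
    }
    Thread.sleep(forTimeInterval: 60)  // check once a minute
}
```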
maps.naveen.com
I’ve been wanting to pull together all my lists & favorite places from different sources: mainly Foursquare (going back years) and, more recently, saved items in Google Maps. So I built an aggregated place for all of them. The site has built-in search and is mobile-friendly (you know, for when you need to look up one of my recommendations on the go).
Have a play at maps.naveen.com.
ClawCon LA

ClawCon in LA was a pleasant surprise! I only realized yesterday around 3 PM that it was happening, and I’m glad I caught it. It was inspiring to see the vibrant community that @msg and the team have put together in such a short time.
A few highlights:
- here.now by Adam Ludwin. This is one of the first “agent-first” tools I started using months ago—a fast way for your agents to host a webpage. The most fun part about his presentation was that it reminded me very much of John Britton’s first-ever Twilio Live Demo at New York Tech Meetup (2010): get on stage, fire up a terminal window, prompt the crowd with a question and show off how your product solves it in minutes – live!
- Friend Jonathan Wegener (of Timehop fame! disclosure: I’m one of the first investors) showed off how Claude led him to a radio device that could remotely read his electricity monitor. That reignited my interest in ADS-B – turns out some of these devices can also read and report back on those signals. (I’ve been meaning to spin up a quick hack around this so I can get alerted to helicopters and planes flying over my house.)
- seafloor.bot – A quick way to host an openclaw in the cloud (reminds me of exe.dev). I mention it because it’s probably a very easy way for a newbie (someone without much tech experience who doesn’t want to spin up a Mac Mini) to start exploring an agent.
- chaosmarkets.ai – A few people are wondering what arbitrage opportunities are out there: what if I can feed an agent all sorts of data about a particular vertical, let it keep crawling while I’m asleep, and then derive insights/edges that I can use to invest? The founder is building a cool platform for assembling multiple agents, each focused on a particular data set and vertical. It was a very polished pitch, and it made me wonder: if your regular hacker is doing these things at home, imagine what the teams on Wall Street are doing right now with all these new tools. Or have they always had all this, and we regular folks are just now tapping into it because we can fire up a team of agents and point them somewhere?
I ran into four or five friends, who each introduced me to a few more. It’s rare for me in LA to have five friends from different parts of the city all in one room (without our kids!). It was genuinely great to feel that kind of spontaneous, buzzing crowd energy again.
Great job to @msg, Wegener and team for pulling this one off.
Exporting Chrome’s reading list
I found that I had a few hundred saved links in my Chrome reading list, so I vibe-coded a quick way to export the links so that I could crawl each one, sort them and actually figure out which ones to read. I also use multiple Chrome profiles (personal, family, work, investments), so the script shows a summary of your profiles and allows you to choose which ones you want to export.
```
┌───────────────────────────────────┐
│                                   │
│    Export Chrome Reading List     │
│                                   │
└───────────────────────────────────┘
↑/↓ navigate • enter select • esc quit
▸ Profile 1 (12 items)
  Profile 5 (150 items)
  Profile 8 (3 items)

✓ Exported 150 URLs to reading_list.csv
```
Find it at: https://github.com/naveen/export-chrome-reading-list
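If you want to roll your own, the profile-picker half is the easy part: Chrome records every profile in its “Local State” JSON file. Here’s a minimal Swift sketch of just that piece (my own sketch, not the repo’s code; the reading-list entries themselves live inside each profile directory, and parsing their on-disk format is the part I’ll leave to the actual script):

```swift
import Foundation

// Chrome lists every profile in its "Local State" JSON file,
// under "profile" -> "info_cache", keyed by profile directory name.
let chromeDir = FileManager.default.homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/Google/Chrome")

guard let data = try? Data(contentsOf: chromeDir.appendingPathComponent("Local State")),
      let root = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
      let profiles = (root["profile"] as? [String: Any])?["info_cache"] as? [String: Any]
else {
    fatalError("Couldn't read Chrome's Local State file")
}

// Print "<directory>: <display name>" for each profile so the user can pick one.
for (directory, meta) in profiles.sorted(by: { $0.key < $1.key }) {
    let name = (meta as? [String: Any])?["name"] as? String ?? directory
    print("\(directory): \(name)")
}
```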
In need of stories
I loved this recent post by @ashleymayer on how we need better stories about the future in tech – and small companies should be the ones to step up to tell them.
Just because capital is concentrated in a few of the biggest startups (nearly all AI companies) doesn’t mean they get to be the only ones to tell the big stories and use their larger megaphones. They keep telling stories about which model is the best, which one is growing the fastest, who has the most Github stars and so on.
There are all sorts of great insights in her post, but one in particular stood out to me:
Many of this technology wave’s most impressive companies have also made what I believe is a profound narrative error. They’ve cast themselves as the heroes in their own stories, and in doing so, risk becoming the villain in everyone else’s.
Historically, the best brands have made someone else the hero of their story. Apple was in service of the creative misfit, Nike celebrated the everyday athlete. When you build a story around your company as the hero, you risk turning your customers or users into NPCs. It signals an inherently transactional relationship, or worse, predatory (in the case of AI or robotics: we’ll replace you, just give us time).
The best brands make someone else the protagonist. Somewhere along the way, tech (and, sometimes, those who write about tech) has lost that idea.
Additionally, the AI wave right now is perhaps in need of the same type of storytelling that climate needs:
Telling this story requires a different way to tell a story […] As Wallace-Wells writes, we need an alternative: many problems we face now aren’t just one person’s problems where they go out into the world, selfishly solve it for themselves and come back home victorious. Most big problems are hard to define and hard to tell stories about. Global climate change, in particular, is known as a super wicked problem. We just may need some super wicked stories.
We all want to know what comes next: what happens to our jobs, what will we be doing, what does a new kind of information abundance mean, and how do creativity, taste and the human side of things fit into it?
Terminal romantics (It feels like play)

There’s a specific feeling I remember from the early days of the internet — maybe 1993, 1994, somewhere in there. It was shortly after we moved to the US and bought our first computer. People were making things and trying things online just because: ASCII art. Chat bots. Personal homepages about, well, whatever, because you knew you just had to have a presence online, you knew you had to play in order to be a part of it all, to not get left behind. It was early enough that you got to try it all – BBSes, Gopher, WWW – so early that you didn’t know which of those methods to connect online was going to “win” (or, which would still be a cool, second place gathering spot). A lot of it was text-based and inside terminal interfaces.
It felt like play – a game.
I got a little bit of that same feeling during the crypto years of 2020-2022. Everyone was stuck inside during COVID, playing with money that didn’t feel quite real. (What’s the harm in trying stuff with house money?) Most of it seemed crazy (apes on a (blockchain) plane?) and some of it mattered (stablecoins). All of it had that same energy: people doing weird things because it was fun and the ceiling wasn’t visible yet.
The state of AI has felt exactly like that for the past few months. Open a terminal, fire up claude or codex and start playing. Take cool ideas, half-baked concepts, and try them out, just because. Text your openclaw agent anything and everything. You don’t necessarily know which approach or model or framework is the one that’s going to win, but you may as well play with them all. The cost of trying new ideas is low, and it’s so much fun to boot.
The only difference this time is the play is also the work.
The early internet was playful, but the “useful” took probably the rest of the decade to arrive for everyone. Crypto was playful, but for most people the useful arguably never really came. With AI, both are happening at the same time. We’re actually shipping ideas and features faster. Not a day goes by without a friends/parents group thread or team conversation about how to make the most of it. The game is producing real output.
By the way, given a lot of it is now happening in the terminal, I’d get your prompts to use Bubble Tea (or Gum) from the team at Charm. They make some really cool open-source tools for building beautiful terminal UIs. (* I am a small investor.)
West side space

Anyone I know have cool office space in Culver City or thereabouts? I want to spend one day a week on the west side, and it would be great to have a desk.
(Culver reminds me of Dogpatch; feels like the cool place for startups & tech right now)
In return: jam/hack sessions; help you with something you’ve got going on; welcome to hang with us in WeHo.
What I’m reading (Work)
Kalina on (endurance) durational art
Last week I went to see Tehching Hsieh’s retrospective at Dia Beacon. The show is called Lifeworks 1978-1999 and it’s up through 2027.
Have you ever heard of Tehching Hsieh? He is, in my opinion, one of the greatest durational artists of the 20th century.

I had never heard of Tehching Hsieh when I started my “everyday” project. I was 20 years old and I just thought it would be interesting to take a picture of my face every day. That was basically the whole idea. It wasn’t influenced or inspired by anyone.
I would only learn about Hsieh later. First, I learned about the Time Clock Piece. From April 11, 1980 to April 11, 1981, he punched a time clock every hour, on the hour, and photographed himself each time. 8,760 punches in a year.
Karpathy on AI exposure by occupation
You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
Rate the occupation’s overall AI Exposure on a scale from 0 to 10.
AI Exposure measures: how much will AI reshape this occupation? Consider both direct effects (AI automating tasks currently done by humans) and indirect effects (AI making each worker so productive that fewer are needed).
A key signal is whether the job’s work product is fundamentally digital. If the job can be done entirely from a home office on a computer — writing, coding, analyzing, communicating — then AI exposure is inherently high (7+), because AI capabilities in digital domains are advancing rapidly. Even if today’s AI can’t handle every aspect of such a job, the trajectory is steep and the ceiling is very high. Conversely, jobs requiring physical presence, manual skill, or real-time human interaction in the physical world have a natural barrier to AI exposure.
(AI) Power to the people.
This is why retirees are lining up in Shenzhen. This is why people with no GitHub account are showing up at ClawCons. For the first time, they can feel AI’s intelligence, even if it is not very good. Yet. Not a demo. Not a keynote promise. Not big boys burning billions of dollars a month. A thing that actually does things on their behalf. Closing the gap between what you want done and what gets done has always required either your own time or someone else’s labor. OpenClaw makes that gap feel smaller. That feeling, even in its rough and half-broken form, is new.
It has been almost a month since I published How AI Goes To Work. “What OpenClaw shows is how AI will work in the background,” is what I wrote. “And that is what the ‘AI’ future looks like for normal people. Not a separate AI app. Intelligence woven into tools you already use. Doing work you used to do yourself. Or used to hire someone to do, done by software.”
MicCheck (Testing 1 2 1 2)

I wanted a quick system-wide menu item that shows my microphone’s current mute state, plus an easy way to change that state globally from the very same menu item. Sure, I can do this from a specific app (Zoom, Teams, Meet, …), but I didn’t want to go hunting from app to app looking for the mute button.
So: MicCheck. It is a tiny macOS menu bar app with one job: keep your microphone muted until you decide otherwise. It sits up in the menu bar showing ON AIR or OFF AIR. When something tries to unmute your mic without permission — a browser, a background app, whatever — MicCheck catches it at the system (CoreAudio) level and reverts it within milliseconds.
I built it with one feature that I couldn’t find elsewhere: a whitelist for allowed apps. For example, I use Superwhisper constantly for voice-to-text. The problem with a hard mute enforcer is that it would block Superwhisper too. So MicCheck has an allowed apps list — you add Superwhisper (or any other app you trust), and MicCheck steps aside when that app needs the mic. The moment your whitelisted app finishes recording and releases the input device, MicCheck re-mutes automatically. No button press. No forgetting to mute again. It just goes back to where it was.
The whitelist works at the audio session level, not just the mute property level. Most apps don’t touch the system mute flag — they open an audio stream and expect audio to flow. MicCheck watches for that too, so whitelisted apps get real audio while everything else gets silence.
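For the curious, the watch-and-revert core is a small amount of CoreAudio. The sketch below is my minimal approximation of the pattern, not MicCheck’s actual source: find the default input device, listen for changes to its mute property, and flip it back. It omits the whitelist and the stream-level session handling, and note that not every input device exposes a mute control:

```swift
import CoreAudio
import Foundation

// Look up the default input device.
var device = AudioObjectID(0)
var size = UInt32(MemoryLayout<AudioObjectID>.size)
var defaultInput = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDefaultInputDevice,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)
AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                           &defaultInput, 0, nil, &size, &device)

// The mute control on that device's input scope.
var mute = AudioObjectPropertyAddress(
    mSelector: kAudioDevicePropertyMute,
    mScope: kAudioDevicePropertyScopeInput,
    mElement: kAudioObjectPropertyElementMain)

func setMuted(_ muted: Bool) {
    var value: UInt32 = muted ? 1 : 0
    AudioObjectSetPropertyData(device, &mute, 0, nil,
                               UInt32(MemoryLayout<UInt32>.size), &value)
}

// Mute now, then revert any outside change the moment it happens.
setMuted(true)
AudioObjectAddPropertyListenerBlock(device, &mute, DispatchQueue.main) { _, _ in
    var value: UInt32 = 0
    var valueSize = UInt32(MemoryLayout<UInt32>.size)
    AudioObjectGetPropertyData(device, &mute, 0, nil, &valueSize, &value)
    if value == 0 { setMuted(true) }  // something unmuted the mic: flip it back
}
RunLoop.main.run()
```

Everything else in the app (the allowed-apps gate, the ON AIR / OFF AIR menu item, the stream watching) hangs off that listener.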
It’s built with SwiftUI and CoreAudio, targets macOS 13+, and lives entirely in the menu bar — no Dock icon, no windows unless you open Preferences. Global hotkey (⌥⇧M by default, fully remappable), optional sounds and notifications, launch at login.
The source is on GitHub. Build it yourself or just grab the app download.