
Ten years after publishing The Subtle Art of Not Giving a F*ck, bestselling author and blogger Mark Manson is turning to A.I. to tackle some of his audience’s toughest life questions. He recently co-founded Purpose, an A.I.-powered mentor designed to deliver practical life advice—something Manson says most general chatbots, like ChatGPT, aren’t built to do.
Manson is also known for Everything Is F*cked: A Book About Hope and for co-authoring Will with actor Will Smith, a memoir chronicling the celebrity’s personal struggles and growth. He began his career in 2008, launching a blog shortly after graduating from Boston University. What started as a dating advice column quickly evolved into a platform for deeper reflections on happiness, success and modern self-help. That blog would launch Manson’s publishing career and, over time, earn him nearly two million followers on Instagram.
Since A.I. entered the mainstream, Manson has been bullish on its potential to enhance the way people seek guidance. After exploring ways to enter the market, including the possibility of acquiring an existing company, he chose instead to build something new with tech entrepreneur Raj Singh, founder of the Google-backed hospitality startup Go Moment. Singh’s company was later acqui-hired by Revinate in 2021; after leaving in 2024, he turned his focus to mental health technology. Purpose’s engineering lead, William Kearns, formerly headed A.I. at meditation and wellness platform Headspace.
Purpose has launched both a website and an iOS app, with an Android version expected later this month. So far, roughly 50,000 people have joined the platform, with about one in four paying for a premium subscription that costs $20 per month or $150 annually.
Observer spoke with Manson about mental health safety, what A.I. gets right and wrong in the advice space, and where the line truly lies between mentorship and therapy.
The following conversation has been edited for length and clarity.
How did you and your co-founder, Raj, connect? Who came to whom with this problem that they wanted to solve?
We sat next to each other at a poker game, so it was completely random. I was actually trying to buy another A.I. startup, and I hit a roadblock. Raj had just exited his previous company and had already decided independently that, whatever he did next, he wanted it to be in mental health and A.I. We both realized that we were very bullish on A.I. in terms of helping people. I’d say, a month later in March 2025, we had a business.
How do you use A.I. chatbots in your own life, and what are your favorites?
I use A.I. all the time instead of Googling things or asking business questions, health questions. I was watching the movie Hamnet the other night and paused it to have a conversation with Claude about Shakespeare, and it was absolutely riveting. Claude is definitely a favorite in terms of taste and the quality of writing. Being a writer, the quality of writing matters a lot to me.
I’ve had a lot of fun messing around with some of the Character.AI-type products. It’s almost like fan fiction. But for daily use cases, I mostly use Claude and Gemini.
You mentioned that the Purpose team cares about mental health. I have written about A.I. psychosis and related issues. Purpose does clarify it’s not a therapist and limits access while results “sink in,” so I see you’re placing constraints on communication. I’m curious about the concerns you have about A.I. companions creating dependencies or reinforcing unhealthy thought patterns, and how you’ve tried to mitigate that in your app.
If you look at A.I. psychosis cases, a lot of it seems to be driven by sycophancy. The A.I. is just agreeing with whatever you say. It’s like, “Oh, you think you’re the queen of England. That’s awesome. Tell me more about that.” They’re not disagreeable enough; they’re not willing to challenge you, to kind of keep you grounded in reality.
One of the first things we considered when designing Purpose was that it needs to challenge the user. It can’t just agree with everything the user says. That also fits our mission. You grow from being wrong about things. You grow from reevaluating your beliefs and questioning your assumptions. That was hugely important for us to make sure that we are challenging the users actively and forcing them to reevaluate some of their preconceived notions.
On top of that, we have some pretty strict guardrails. Anything that seems like it could potentially be a clinical-level situation, Purpose is designed to refer the user to a local professional.
There’s actually a new industry benchmark for mental health safety in A.I. It’s called Vera MH, and it runs 400 simulated clinical conversations and judges whether the A.I. is safe or not. We scored 100 percent risk detection across all 400 conversations, and we scored in the top 0.5 percent of A.I. systems that have been evaluated with that benchmark.
How skeptical are you about A.I. for emotional support, relationships or life advice? And how are you attempting to address these concerns with your own product?
The large A.I. companies woke up last year to the need for safety precautions against negative side effects. I do think that A.I. has a ton of potential to create value for people in this space. The technology is not there yet, but it’s getting better.
What would it take for the technology to get there?
At Purpose, we’ve modified A.I.’s mission. That’s not that hard. I think anybody with six months to develop an app can probably do something similar. What’s really hard is when you get into memory and pattern matching.
The way LLMs work is that the more information you give them, the less accurate they become, and this is why ChatGPT’s memory, or Claude’s memory, is not very good, because they have so much random information on you that it’s hard for them to keep track of what’s useful for this conversation and what’s not.
The second piece of it is salience. Obviously, if a user is talking about their mother, that’s probably a very important thing in their life, and it’s definitely more important than what they had for breakfast or what kind of car they drive, but right now, A.I. doesn’t know how to prioritize one fact about somebody over another. You have to find ways to programmatically do that. Otherwise, A.I. will fixate on a random fact about you.
I don’t think memory has really been solved by anybody, especially the big A.I. companies. When you think about personal growth and life advice, memory is so important. If you have a conversation with Purpose about something that happened when you were 17, that’s probably a really important thing to remember when you come back three months later. I would say right now the biggest hurdle is memory.
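Manson doesn’t describe Purpose’s internals, but the idea of programmatically prioritizing one stored fact over another can be sketched as salience-weighted retrieval: each memory carries an importance score, recency discounts it, and only the top few survive into the next conversation. The names, weights and half-life below are illustrative assumptions, not Purpose’s implementation:

```python
from dataclasses import dataclass, field
import time

@dataclass
class Memory:
    text: str
    salience: float  # 0..1, how central this fact is to the user's life
    created_at: float = field(default_factory=time.time)

def retrieve(memories: list[Memory], k: int = 3,
             half_life_days: float = 90.0) -> list[Memory]:
    """Rank stored facts by salience, discounted by age, and keep the top k."""
    now = time.time()

    def score(m: Memory) -> float:
        age_days = (now - m.created_at) / 86400
        decay = 0.5 ** (age_days / half_life_days)  # exponential recency decay
        return m.salience * decay

    return sorted(memories, key=score, reverse=True)[:k]

memories = [
    Memory("Has a strained relationship with their mother", salience=0.9),
    Memory("Had oatmeal for breakfast", salience=0.1),
    Memory("Drives a blue hatchback", salience=0.2),
]
top = retrieve(memories, k=1)
print(top[0].text)  # the mother fact outranks breakfast and the car
```

The point is the one Manson makes: without an explicit score, a model treats the breakfast fact and the mother fact as equally retrievable, so some ranking layer has to sit between raw memory and the conversation.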
Where do you think we should draw the line on using A.I. in intimate parts of our lives, and in what ways are we seeing A.I. companies around the world miss the mark on this front?
It’s inevitable that people are going to use A.I. for personal stuff. If you’re stressed out and lying awake at one in the morning, you’re not gonna call a therapist, you’re not gonna call a friend on a Tuesday in the middle of the night, but an A.I. is there. To me, the biggest thing is privacy and making sure that user data is anonymized and respected.
While Purpose says it’s not a therapist, when I used it, it did remind me of therapy in the sense that it doesn’t tell you what to do, but asks you questions that lead you to your decision about how to move forward in your life. How are you walking the line between therapy and simple advice?
There are two different therapy use cases. Some people go to therapy because they’re in crisis and they’ve got a major life issue. Others go to therapy for maintenance or mental hygiene. A.I. can do a good job with the latter use case. Like, “I had a fight with my partner. What do you think about this?” You can get a lot of mileage out of an A.I. in those situations, especially given the accessibility, the affordability, the consistency.
Where we draw the line is when people are in that crisis category and are exhibiting very severe signs of distress or depression. That’s where we direct them to go seek a professional. I would not feel comfortable using A.I. for that use case yet.
I have a person in my life who, in the past, has struggled with eating disorders. They were using Purpose, and when they started talking about some of the issues they’ve been going through, not only did it correctly identify that they were probably more likely to have an eating disorder, but it sent them a directory of clinicians in their area who specialize in those disorders. I was very happy when I heard that. It’s doing exactly what it should be doing.
Would the version of you that wrote The Subtle Art of Not Giving a F*ck be surprised at this venture that you’re doing?
I actually don’t think so. I launched my first online course around 2010, and around the time the book came out in 2016, I had this dream of doing a choose-your-own-adventure self-help course. It frustrated me that every course was on rails, like you had to start here, and you had to go in order. So many people would drop off because it didn’t relate to them anymore. I actually started designing one around 2017 and got maybe a month in before it was clear that it was going to be so complicated and impractical that I abandoned it.
When ChatGPT blew up, and I started messing around with it, I realized this is the technology that makes a choose-your-own-adventure course possible.
