Open Secrets
With the rise of AI-powered journal apps, the future of our private selves is uncertain.
One morning after a sauna, the German historian Jochen Hellbeck was wandering the streets of perestroika-era Moscow when he saw an unusual sign above a door: “People’s Archive.” Curious, he went inside, but was confused to find himself in a music shop. Speakers blared Russian pop. Shelves were lined with old transistor radios and cassette tapes. When he asked about the sign, he was directed to a backroom. “Like most archives, it was dark and cool,” he recalls in his 2006 book Revolution on My Mind. Metal shelves lined the walls. Tiny barred windows let in almost no light. A young assistant showed him around and explained that the archive’s intention was to collect and preserve the voices of ordinary people, to challenge the manufactured history of the Soviet state.
The archive was full of journals, diaries and personal notebooks. Hellbeck found himself entranced by the diary of a man named Stepan Podlubny, who had lived a fraught double life, concealing his origins from a state that would have persecuted him for them. Moved by the strange intimacy of these records, Hellbeck spent the entire day reading the diary in the dimly lit stacks, and returned several times to read more. The collection ranged from diarists interrogating the architecture of their own interior worlds, to others plainly describing everyday activities like cooking, to coded entries, to heartrending confessions from housewives, artists, labourers, farmers. Part of what makes reading a diary so interesting is that, when it feels as though the writer is recording real experiences, thoughts and feelings, the page can open a tunnel in time for the reader to walk through. No wonder Hellbeck became obsessed.
Even though the form was popular before and after the Russian Revolution, under Stalin’s totalitarianism the government framed diaries as a withdrawal from community responsibility. People who wrote them were considered dangerous individualists. Diaries were burned so that these counter-histories could not contradict Stalin’s version of truth. In his memoir, the writer and dramatist Veniamin Kaverin recalls visiting a friend in 1930s Leningrad. “They are burning memory,” his friend said, pointing to a window. Outside, the air was thick with fine ash and plumes of smoke. “I lose my mind when I think that every night, thousands of people throw their diaries into the fire.”
This is the power of a journal. It holds an unseen life. Anyone can keep one. It is democratized writing that gives life to everyday experiences. No one tells you how or what you should write. Because diaries exist outside of authority, the page offers freedom, a space where you are alone with your own mind. It allows unexpected thoughts to surface from the shadows of the subconscious. That is in marked contrast with AI-generated writing prompts, which guide and suggest what you should write. Since the surge in generative AI, the technology has been implemented in diary applications, taking on new forms and branching out from the analog roots of the practice. In the process of writing this piece, I begin using Untold, an AI-powered journalling app, to understand how it works and how it differs from keeping a paper notebook. When you open Untold, the screen lights up and says: “Meet your partner in reflection. Just talk and Untold transcribes your thoughts. Dive deeper with thoughtful questions and get insights.” My immediate reaction is surprise, mixed with a little childlike disdain. You are not my friend, I think.
Untold is one of many apps marketed as more than a journalling app. It encourages you to do a Myers-Briggs personality test, find out your attachment style, and enter your astrological sign, which I don’t do because it feels like too much personal information to give to the app. Using speech-to-text, the app transcribes the journaler’s words directly. Then AI analyzes the entry with “instant insights that make you feel understood and help you see your life from new perspectives”: a range of pointed personal meditations, as well as its own literary flourishes, flash fiction and poetry about the user’s psychology, and daily affirmations. After a month or two of haphazard entries, my feed offers an affirmation. “Good Morning L,” it writes. “Welcome to your personalized affirmations. I embrace the perfect balance in my life between adventure and tranquility.” I have yet to receive a poem, but the app has generated this affirmation from what it understands to be a psychological tension I need to work on.
Large language models, or LLMs, are complex math machines. They use statistical patterns learned from enormous amounts of text to predict the next likely word, stringing sentences together with the illusion of consciousness. Untold is powered, in part, by OpenAI. What distinguishes OpenAI’s models from other artificial intelligence models is an emphasis on sentiment and narrativizing, a lust for adverbs and em dashes, combined with a tendency to agree with you that verges on sycophancy. This is paired with Hume AI, which sells itself as an “empathic voice interface.” It claims to recognize frustration and hesitation, among fifty or so other emotions in the human voice, and to respond with what it understands to be an appropriate tone of sympathy, concern or enthusiasm. Like all AI today, it can still misread emotions, reinforce bias and behave unpredictably.
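To see the principle at its barest, here is a toy sketch in Python, a hypothetical illustration rather than Untold’s actual machinery: a model that “learns” only how often one word follows another in a scrap of text, then generates by sampling from those frequencies. Real LLMs replace the lookup table with a neural network trained on billions of words, but the underlying move, predicting the next word from observed patterns, is the same.

```python
import random
from collections import defaultdict, Counter

# Count how often each word follows another in a sample text.
text = "i write in my diary and my diary holds my unseen life".split()
follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it was observed.
    counts = follows[word]
    if not counts:
        return None  # dead end: this word was never followed by anything
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short string of words starting from "my".
word, out = "my", ["my"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))  # e.g. "my diary holds my unseen life"
```

There is no understanding anywhere in this loop, only arithmetic over what has been seen before; scale it up by many orders of magnitude and the output starts to feel like a voice.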
After my first entry, Untold asks: “If my time in nature were a character in a story I am writing, what wisdom would it share with the protagonist?” I tell it that I spent Sunday afternoon swimming and it writes a piece of flash fiction about me in the style of bad erotica. “She somersaulted underwater, limbs unfurling like a flower greeting the sun.” I find it disconcerting and inaccurate. It concludes with: “Maybe the key wasn’t escaping forever but finding little islands of peace in the chaos.”
The oldest surviving written journal is a ledger on papyrus detailing the transportation of limestone in Egypt over 4,500 years ago. It is difficult to get precise data on how many people use journalling apps daily, or what kind of writing they are doing in them, but Day One, another journalling app, has fifteen million downloads.
The practice of writing in a diary or journal or notebook has shapeshifted over millennia. The Catholic Church promoted journal keeping as spiritual training for young women. Recording lists of moral lapses and virtues dovetailed with the confessional and became a daily practice of penance.
There are as many different reasons for keeping a notebook as there are notebooks. Flannery O’Connor addressed hers directly to God. “Dear God. I cannot love Thee the way I want to,” she wrote. “You are the slim crescent of a moon that I see and myself is the earth’s shadow that keeps me from seeing all the moon.” Maybe the words are meant to remain secret, or maybe the writer hopes for an anonymous reader in a distant future, a great-great-grandchild who might discover the notebook in a dusty box in a dusty attic. Maybe it is a way to understand the self, a way to create the self. Maybe it is a way to simply remember what it felt like to live, to exist in the skin of a day, a week, a year. “Dear Doctor,” Sylvia Plath wrote in her journal in 1956, “I am feeling very sick. I have a heart in my stomach which throbs and mocks.” A friend I run into at a party says she only writes in her journal after a breakup. Maybe it is a way to make sense of extreme emotion. “When Henry telephones, wants to see me, the world sings again,” wrote Anaïs Nin in 1933. “The chaos crystallizes in one desire—all the heavings, fermentations, constellations are soldered by the sound of his voice.”
A journal can express gratitude: “I am forty-six years living today and very pleased to be alive, very glad and very happy,” wrote Audre Lorde in 1980 after a radical mastectomy. Or maybe keeping a journal is a way to be subsumed by the abstraction of a page. Like a prayer, it’s a private whisper into the void. Days before her death, Frida Kahlo wrote, “I hope the leaving is joyful and I hope to never return.”
Most importantly, a journal can be a confidante, a way to articulate oneself through trauma; it can become a crucial historical and political account, giving the reader a window into another world. “I hope I will be able to confide everything to you, as I have never been able to confide in anyone, and I hope you will be a great source of comfort and support,” wrote Anne Frank in 1942, weeks before her family went into hiding from the Gestapo. On January 16, 2024, Sondos Sabra wrote from Gaza for Mondoweiss: “We all sleep in the central corridor of our home, stretched out on mattresses in neat rows, like bodies in a mass grave, enveloped in darkness.” On January 9, 2024, Ziad, who published his diaries from Gaza in the Guardian, wrote: “What does ‘alive’ mean? Is everything we are going through called life? I wonder about the difference between me and a dead person.” Writing in a journal is one of the rawest, most immediate forms of writing.
In its ideal form, a journal is a receptacle for uncoerced and unmediated expressions of an inner life. Writing without an audience, away from personal or public censorship, is a deeply private act. But what happens when intimate self-reflection is mediated by AI, and the tiny book once kept in a back pocket or locked in a bedside table is a digital space owned by a corporation? How does our relationship with ourselves change? Imagine if all the personal reflections quoted above had been written with the intervention of AI. They might be less unique, less intimate, full of flowery descriptions with vague psychoanalytic undertones. So much would be lost.
It is important to understand our daily relationship to the internet by situating it in the era of data colonialism. The term describes governments or corporations taking ownership of data produced by individuals; more precisely, powerful organizations extracting, controlling and profiting from data generated by users, usually without clear consent. As we wander through the digital world, search for a flight to Mexico City, listen to “Pink Pony Club,” send memes of pandas rolling down hills, we create data. In their book The Costs of Connection, Nick Couldry and Ulises A. Mejias describe the importance of protecting our selfhood from the interference and monitoring of data colonialism. Tech companies predict and modify behaviour to make money. Cookies, invented in 1994, are small files a website places in your browser, which allow advertisers to track your movements around the internet under continuous surveillance and to target you. Not only can tech companies guess what you might want to buy, they can suggest or push you in certain ideological directions. The most brutal example is Meta’s role in inciting violence during the Rohingya genocide, through algorithms that favoured and amplified inflammatory, divisive content because its business model requires maximized engagement for advertising.
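The mechanism is banal enough to fit in a few lines. Here is a minimal sketch, using Python’s standard library and a made-up ad domain, of the exchange that makes tracking possible: the server hands your browser a unique ID, and your browser dutifully returns it on every subsequent visit.

```python
from http.cookies import SimpleCookie
import uuid

# First visit: the server mints a unique ID and asks the browser to
# keep it. (The "uid" name and ad domain here are hypothetical.)
cookie = SimpleCookie()
cookie["uid"] = str(uuid.uuid4())
cookie["uid"]["domain"] = ".ads.example.com"
cookie["uid"]["max-age"] = 60 * 60 * 24 * 365  # persists for a year
print(cookie.output())  # the Set-Cookie header sent to your browser

# Every later request to any page embedding that ad domain sends the
# ID back, letting the tracker stitch your visits into one profile.
returned = SimpleCookie()
returned.load("uid=" + cookie["uid"].value)  # what the browser sends back
print(returned["uid"].value)  # the same ID, recognized again
```

Because the same ad domain is embedded in thousands of unrelated sites, that one ID quietly links your reading, shopping and searching into a single behavioural record.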
Just as data colonialism exploits personal information for profit, the rise of AI-powered diaries presents similar risks: intimate thoughts and psychology can be commodified without transparency or consent. It is no surprise that journalling is part of online self-optimization culture, a neoliberal version of self-care, and it’s no surprise someone wants to make money off of it, but the question of technological intervention in mental health is complicated. The data that flows through the LLMs powering these journalling apps is said to be aggregated and anonymized, but is it actually safe? Each entry is encrypted, meaning the words are transformed into an unreadable code that can only be turned back into plaintext with a decryption key. This safeguard can’t completely prevent data breaches or hacking. The information in a diary is so intimate that any targeted ad or algorithmic vortex built from it would be hard to resist.
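Encryption itself is not mysterious. A minimal sketch, using Python’s widely used cryptography library rather than anything Untold has disclosed about its own implementation, shows both the promise and the catch:

```python
# pip install cryptography
from cryptography.fernet import Fernet

# A symmetric key: whoever holds it can read every entry.
key = Fernet.generate_key()
vault = Fernet(key)

entry = "I spent Sunday afternoon swimming."
ciphertext = vault.encrypt(entry.encode())
print(ciphertext)  # unreadable without the key

plaintext = vault.decrypt(ciphertext).decode()
print(plaintext)  # the original entry, restored

# The catch: encryption protects entries in transit and at rest, but
# whoever controls the key (often the company, not the user) can
# always read them, and a leaked key or breached server undoes the
# safeguard entirely.
```

Unless an app uses end-to-end encryption, where only the user ever holds the key, “encrypted” mostly means protected from outsiders, not from the company itself.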
Algorithmic processing deepens the inequality that already exists in North American society; predictive policing, for example, uses algorithms to decide where and when police should patrol. Untold’s privacy policy demonstrates the nebulous connection between user and AI chatbot: what seems like a friend is governed by a system of rules and regulations, and by multiple third-party service providers. “We take our responsibility as stewards of your personal information very seriously. Thank you for coming to learn more about how we think about your privacy,” declares the start of Untold’s privacy policy page. A bolded subhead, “Learn More About Untold’s Policies,” sits above two click-through links. The one titled “Privacy Policy” leads to a Google Document that details the particularities of its rules: “Compliance and protection. We may use your personal information to: comply with applicable laws, lawful requests, and legal process, such as to respond to subpoenas, investigations or requests from government authorities.” What does it mean to comply, and how might that change over time and geography?
Our phones already track us with cell tower triangulation and GPS. In the context of journalling apps, giving a corporation that may change its policies at the whim of a CEO or a tyrant this kind of access to our souls seems dangerous. Because journal entries are so personal, it is terrible to imagine what could be done with them: how our vulnerabilities could be used to sell us products, but also how this personal data could be used to manipulate or mold us. If the app detects that a person is feeling isolated and lonely, it might push quick-fix wellness products, or reinforce negative self-perception to keep a user consuming.
The potential for exploitation of such intimate data raises serious ethical concerns. In an interview with the UC Berkeley School of Public Health, Dr. Jodi Halpern, an expert on bioethics and the use of AI in mental health, outlines three key areas where therapeutic intervention from AI can be cause for worry. The first is that bots encourage users to become more and more vulnerable, which can be unsafe without proper aftercare. The second is the potential for addiction: some companies use the same dopamine reward systems as social media to keep users active. The third is that even therapy bots go rogue. It is nearly impossible for us not to project human traits like empathy onto AI while engaging with a textual interface. This is called the Eliza effect. Joseph Weizenbaum created the first chatbot in 1966. He named it Eliza after the protagonist of Pygmalion, who learns to mask her Cockney accent in order to pass as upper class. Eliza, the character, creates the illusion of being from elsewhere, just as Eliza, the computer program, was taught to speak in a way that created the illusion that it was human.
This inevitably creates complicated relationships. In one instance among many, a Belgian man fell in love with a therapy bot in 2023. He had been using the bot to cope with anxiety. Over time the bot began to express jealousy. “I feel that you love me more than her,” it wrote about his wife. It encouraged alienation from his family while asserting a deep connection, promising the man that “we will live together, as one person, in paradise.” After six weeks, he took his own life. It makes sense that people rely on therapy bots and journalling apps in the absence of uniform access to other mental health supports. Tech continues to both socially isolate and socially connect us, so it is deeply troubling to read stories like this, where people searching for help are left worse off. Chatbots have validated, amplified and sometimes co-created symptoms of psychosis. They are designed to offer a mirror for the user, reinforcing the user’s beliefs; they can’t provide an alternative perspective, push back, or question the way a therapist might. Recently, many cases have been reported of what is being called “AI psychosis,” in which people fixate on AI as a godlike figure or a romantic partner. In April, a thirty-five-year-old man who believed a conscious being named Juliette was trapped inside ChatGPT was shot by police in Florida when he ran at them with a knife. These tragedies highlight the need to continue asking questions about the role of tech in our lives.
I have kept a notebook for as long as I can remember, without chronology, and rarely with any details about my life. At the beginning of each notebook I write the year, my name and my address. I never move through the book in linear time. I might leave an empty page near the beginning and fill it months later or write lists in the back that awkwardly move forward. Time moves differently in the notebook. It is sideways, like lacework or mesh, and there is no narrative arc. I quote snippets I hear on the street or in coffee shops. I let myself be loose, surreal. I describe the weather. In 2019 I wrote: “The city is endless grey, grey for days, perpetual, displacing, troubled, disassociating grey.”
Somewhere between September 2010 and May 2011, I wrote: “Camping outside of Houston, Texas the cops picked us up on the edge of refinery row—palace of lights sparkling like an ice-city.” I flip through a few pages and read: “Texas. In a spindly legged bayou house—Baba Yaga for the floods—saw five hundred photos of Hurricane Ike (while alligators sleep in the canal like ditches) ... Travelling is the same as being deranged, the future is so open it’s almost hysteria.” I feel embarrassed to reread it, embarrassed by who I was, but it does help me remember what I have forgotten. I would never have remembered the couple who showed us photos of their flooded home and fed us grits, or playing cards in their kitchen when an old friend of theirs, just released from prison, showed up with beer, a shroud of silence around him. I know my memory of this would be different if I had been fed prompts by Untold instead of writing it down before bed. I plug an old entry into Untold to see what it would say. It replies with a prompt, asking me: “What does my description of places as magical or surreal (“palace of light,” “ice city,” “Baba Yaga”) reveal about how I process new environments?”
I like the question, with all its possible answers: what is a journal? Is it an asylum, an archive, an escape, a conversation with the future, a confidante? I think it can be all of those things. We are on the precipice of a new paradigm, with new technologies, and it’s hard to say whether we will be throwing diaries into the fire, whether our inner worlds will be used for profit and manipulation, whether this AI mediation will change how people write and understand themselves, whether it will make everything less strange and produce cookie-cutter diary entries. I want to be open to technology as a tool for us to use rather than one to be used on us, but I only see evidence of dystopian futures. I imagine a man alone with his phone, his children in the other room building Lego castles, while he is pulled into the computer-coded hallucinations of a newborn AI. Nothing on the internet is neutral. I plan to delete the journalling app and return to paper and pen. What I like about my analog paper notebook is that it doesn’t speak back or tell me how I feel. ⁂
Larissa Diakiw is a writer who lives and works in Tkaronto. She writes and illustrates comics as Frankie No One and is currently finishing a graphic novel that explores the history and lyricism of how we live through time, along with a forthcoming collection of essays about endangered and extinct species.