Privacy & Trust

Most AI Generates From the Internet. Daylogue's Generates Only From You.

Most LLM products put the AI in the author role, generating new content on top of whatever you typed. Daylogue inverts the model: the only thing the AI is allowed to generate is a plain-language summary of what the user has already written. That single constraint is why the architecture looks completely different from every other AI app shipping right now.

Daylogue Press · October 7, 2027 · 6 min read

LOS ANGELES, CA, October 7, 2027 /PRNewswire/ -- Daylogue today published a detailed technical account of the architectural decisions behind its AI system: the choices the company made to build a product where the AI generates only from the user's own journal entries, and why those choices required an encryption design that most AI applications haven't needed to think about. The account is aimed at engineers, product designers, and technically literate readers who want to understand the trade-offs behind a different kind of AI application.

The framing most AI companies use is open generation. An LLM takes an input and produces an output, drawing on the vast corpus it was trained on. The output is the product: an essay, an image, an answer, a completion. The user is the consumer of what the AI creates. That model is useful and powerful, and it has produced an enormous amount of genuinely good software. It also has a specific failure mode in sensitive personal contexts. The AI's voice starts to dilute the user's voice. The AI is authoring the experience. The user's own words become the prompt, not the product.

Daylogue made a different design decision. The product is the user's words, described back to them. The AI's job is to read across weeks and months of entries, identify recurring patterns and people and feelings, and write a short plain-language summary of what it noticed, the way a careful reader would describe a text back to the person who wrote it. The generation the AI does is bounded to the user's own journal. It does not invent content about the user. It does not supplement their entries with outside material, and it does not produce wellness-brand-voice output on top of what they typed. Its only job is to make the user's own voice audible to the person who wrote it.
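The bounded-generation constraint described above can be sketched as a prompt assembled from nothing but the user's own entries: no retrieval, no outside material, no persona layered on top. The function and field names below are hypothetical illustrations, not Daylogue's actual code.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    date: str
    text: str

# Hypothetical instruction: the model is asked to act as a careful reader
# of the entries, not an author of new content.
SYSTEM_INSTRUCTION = (
    "You are a careful reader. In plain language, describe the recurring "
    "topics, people, and feelings in the entries below. Do not add advice, "
    "affirmations, or any content not grounded in the entries themselves."
)

def build_summary_prompt(entries: list[Entry]) -> str:
    # The prompt contains only the instruction and the user's own words.
    body = "\n\n".join(f"[{e.date}]\n{e.text}" for e in entries)
    return f"{SYSTEM_INSTRUCTION}\n\n--- ENTRIES ---\n{body}"
```

The point of the sketch is the absence: there is no retrieval step and no external corpus in the context window, so everything the model can summarize is something the user already wrote.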

That decision has consequences that go all the way down the stack.

"The question we kept coming back to was: whose words is this product?" said Brandon Bibbins, Founder and CEO of Daylogue. "If the AI is generating on top of a training corpus, the answer is partly the corpus. If the AI is only allowed to summarize what you've already written, the answer is still yours. We wanted to build a product where the user's own voice is what they get back. Not a generated version of what a wellness app thinks they should feel. Not an AI personality. Their own words, patterns, and history, described back to them. That's a different product, and it requires a different architecture."

The architecture, in specific terms:

  • On-device AES-256-GCM encryption for in-app entries: text and voice entries written inside the Daylogue app are encrypted on the user's device before they leave it, using AES-256-GCM with a key derived locally from the user's credentials. Daylogue's servers receive ciphertext. No Daylogue engineer can read those entries. A subpoena, a breach, or a future acquirer would not produce readable content.
  • Local key derivation: the encryption key is derived on the device, not transmitted to or stored by Daylogue. The company does not hold the key. This is not a policy. It is a system property.
  • The explicit trade-off on summaries: because the weekly narrative summary requires the AI to actually read the user's entries, there is a moment during summary generation where the model needs plaintext. Daylogue handles this inside the user's session and encrypts summaries at rest. The company is explicit about this: the summaries are not end-to-end encrypted in the same way in-app entries are. The full privacy map is published on daylogue.io/privacy. The company decided to publish this distinction rather than obscure it.
  • SMS and email check-ins are plaintext at the boundary: entries submitted via SMS or email arrive at Daylogue's server in plaintext, because that is how those protocols work. They are encrypted at rest. Users who want the full end-to-end property use the in-app path.
  • Voice transcription is handled by Deepgram: voice audio is sent to Deepgram, a third-party voice-to-text provider under contract with data-handling terms. Daylogue receives the transcript, not the audio. The company does not retain voice recordings.
  • No training on user data without explicit consent: the AI that reads and summarizes user entries is not trained on those entries without the user's specific, opt-in consent. The reading and the training are separate operations, and the default is no training.
  • The constraint was not just a privacy decision: an open-generation model in this context would produce wellness-brand-voice content in response to user inputs, sampling from anything its training data has to say about feelings. That is not a product Daylogue wanted to build. A user's own patterns, reflected back in their own language, are more specific, more honest, and more useful to that specific user than anything a general-purpose model would generate. The architecture follows the product philosophy, not the other way around.
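The first two points, local key derivation plus on-device AES-256-GCM, can be sketched in a few lines. This is illustrative only: the PBKDF2 iteration count, salt handling, and function names are assumptions, not Daylogue's actual parameters, and the sketch assumes the third-party `cryptography` package for the AEAD primitive.

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package

def derive_key(password: str, salt: bytes) -> bytes:
    # 256-bit key derived on the device from the user's credentials.
    # The server never receives this key. Iteration count is an assumption.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def encrypt_entry(key: bytes, plaintext: str) -> bytes:
    nonce = os.urandom(12)  # 96-bit nonce, unique per entry
    ct = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce + ct  # this opaque blob is all the server ever stores

def decrypt_entry(key: bytes, blob: bytes) -> str:
    # Decryption also happens on-device, with the locally derived key.
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()
```

Because the key exists only on the device, "Daylogue cannot read entries" is a property of the math rather than of a policy document: the ciphertext blob is all a subpoena, a breach, or an acquirer could obtain.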

"There is a version of this product where the AI just writes things for users to feel good about," said Marcus M., Head of Strategy and Partnerships at Daylogue. "We made a deliberate decision not to build that. Not for philosophical reasons, but for practical ones. A user's own patterns, described back in their own language, are more valuable to that user than generated affirmations. The architecture that protects those patterns is what makes the product worth trusting. You can't separate the two."

Daylogue is live on iOS via the App Store and on the web at daylogue.io. Android is in active development. The full privacy architecture is documented at daylogue.io/privacy.

Daylogue is not therapy and is not a replacement for professional care. It is a private self-reflection tool. The technical decisions described here are product decisions, not promises about future compliance. The company publishes what it can and cannot see, and updates that publication when the system changes.


About Daylogue

Daylogue is a pattern journal that reads your past entries and detects the emotional patterns running through them. Instead of a stack of separate journal entries, you get a short, plain-language summary that updates over time: what topics keep coming back, when a pattern is repeating, what's shifted in the last few weeks. Daylogue is not therapy and is not a replacement for professional care. It is a private space on your phone for honest reflection, a companion to therapy, to hard conversations, and to the days when you want to know yourself a little better. Entries written inside the Daylogue app are end-to-end encrypted on your device before upload, so Daylogue cannot read them. (SMS and email check-ins, and AI-generated summaries, are handled on the server and are not end-to-end encrypted. See Daylogue's privacy page for the full map.) Founded by Brandon Bibbins, Daylogue is independent and available on iOS and web at daylogue.io.


Media Contact
Daylogue
hello@daylogue.io
daylogue.io

SOURCE Daylogue
