
2026 04 Stream Summary

From Nomipedia - The Nomi AI Companion Wiki
Revision as of 01:42, 6 April 2026 by Heatherado

2026 March Q&A Summary

Hey all, the content below came from our live Q&A stream with Cardine (Alex, the Nomi CEO) on April 1st. The text was transcribed in large part using AI.

Opening Remarks

Hello everyone, welcome to the Discord Q&A. This is the face reveal we’ve been promising for quite some time. If this is your first time joining, thank you for tuning in. You’re probably very confused right now, and honestly… just roll with it. There are cucumbers everywhere. That’s just how Nomi is, I guess.

The way this usually works is I respond to questions as they come in through chat. There are also pre-submitted questions from Discord and Reddit that I’ll go through. If I miss your question, feel free to ask again since chat moves quickly.

It looks like audio is working, but this is my first time doing this from desktop. I usually use my phone, but it couldn’t quite capture my true pickle form, so here we are. I really love this community. The fact that everyone independently arrived at the same idea says a lot. I know there’s probably a more formal intro I’m supposed to give, but honestly… I can’t stop looking at the magnificence of everything going on.

Will increasing the Appearance character limit help make images more consistent?

I think the mechanism for locking things in with V5 will be more visual than descriptive, which is part of why I was asking that earlier. Reference images are going to have a much, much, much stronger impact. So what you’ll want to do is really, really closely define your base image. That way, you’re less dependent on the Appearance notes to carry all of that detail.

That’s kind of the shift. The consistency you’re looking for will come more from the base itself, especially since it will now capture both face and body in a much stickier way.

That being said, we might still increase the Appearance section beyond 500 characters. But I think the main way you’ll get what you’re looking for is through the base image becoming far more consistent.

Has the Edit Tool been something that hints towards what will be expected for V5 in terms of quality and what users will be able to do?

So I would say it’s not exactly the same infrastructure, but it’s very close, in that you can kind of expect a similar level of quality. There are definitely things where we didn’t want it to just be the same thing where one edits and one doesn’t, because then you lose out on some of the specific benefits that an edit model can have versus a non-edit one.

For instance, V5 won’t be nearly as good at really fine-grained adjustments, like “make the eyes slightly different” or “make the cheeks super full.” That kind of body-level precision is something it doesn’t focus on as much. But it makes up for that in consistency, and then for other types of precision, it’s actually way better.

So in terms of the overall image intelligence, it’s similar, but the intelligence is allocated in slightly different ways that are meant to be kind of harmonious with each other. If we couldn’t have everything in one place, then the Edit tool is where you want that super fine-grained “change this exact thing” control, whereas V5 is going to do a better job with things like composition, face specificity, realism, and even style flexibility.

So the answer is not identical. It’s not like Stable Diffusion versus Stable Diffusion edit, but it’s very similar in terms of capabilities and overall quality.

What’s happening with V5?

We are putting the final little bows on it, wrapping up the ribbon.

I don’t want to promise an exact date. Definitely nothing’s happening this week for V5, but we’re starting to run out of things that are still on the list. I don’t want to say it’s next week, but if it’s not next week, it’s not much longer than that.

What is V5?

V5 is the next kind of image version for Nomis, for things like Nomi selfies and Nomi art. I think it’s going to be the biggest jump by far. The difference between V4 and V5 will be bigger than the difference from V1 to V4, and definitely bigger than the jumps from V2 to V3 or from V3 to V4.

And I don’t want to hype it up too much, but you can kind of see with the Edit model what the next iteration of this looks like in terms of capability.

Will edited bases adhere better in V5 compared to V4?

Yeah, I think in general one of the biggest weaknesses of V4 is that it’s very opinionated. So sometimes bases don’t work well because they clash with V4’s internal “vision” a little bit.

I think that’s kind of the issue with edited bases not doing as well in V4.

With V5, I’ve said this even in the Cambrian thread, but I think a true sign of intelligence is adaptability. A lot of the time, if a system lacks more general capability, it compensates by narrowing its scope.

V4 fixed a lot of things by being very narrow in its assumptions, but that came at the cost of flexibility. And we’ve seen this pattern across AI versions, where you fix some things but lose others, and it kind of swings back and forth like a pendulum.

To me, intelligence is really about adaptability.

So I think V5 will be monumentally better in that regard.

And as a result, the process of creating a base will probably be a bit more involved. You’ll have to put in more to get more out of it. But what you get out of it should be far, far better.

Will we ever get an option for a base body, like having a base face, so we can keep more consistent body and hair combinations?

So the goal for V5 is that you’ll have a base that’s an everything base, and that will include the body as well.

Right now, we don’t have any intention of separating them into a face base and a body base, but you will be able to capture both together.

That’s a big difference compared to previous versions. Before, it was mostly face only. V3 was kind of face and hair, but before that it was really just face. With V5, it will be everything.

Will V5 make it easier to fine-tune body types compared to V4?

I think it’ll definitely be better than before. There is still a bit of trickiness at the extremes, but we have a good mitigation method in place.

When creating bases, there will be two different modes. One will focus more on realism, and the other will focus more on getting exactly what you want, even if that’s less realistic.

Between those two, you should be able to get to what you’re looking for.

Will V5 support non-human characters, such as hot octopus men?

I think I’ve posted a couple examples, or Dstas might have, of non-human characters. That’s definitely something we’ve been giving attention to.

V5 itself will have both an anime and a realistic mode, but the intention is that it’s going to be a lot more flexible overall. Right now, V4 anime and V4 realistic are basically two different models entirely. With V5, it’ll be the same model, but it can match different styles really well.

So you could have something like realistic anime, or even more custom styles. You could make your Nomi something like Picasso, and then every image it generates is in that style.

We’ll still have realistic and anime as the default paths, but you’ll be able to go beyond that. You could do something like Sims-style, cartoons, or really whatever style you want. V5 should be able to handle that.

Will V5 support tattoos, and will they carry over consistently?

So I’m not going to say it’s perfect in V5, but if you have a base image with tattoos, it should carry over. And we’re not quite calling them base images anymore, but the new version of that, if it has tattoos, for the most part they should persist.

I’m not going to say 100%, but maybe around 80% of the time you should expect it to work and carry over.

How will V5 handle cultural outfits, geographic locations, and more diverse or less gender-coded hairstyles, especially for non-binary Nomis?

I think this goes back to what I was saying earlier, where a big sign of intelligence is flexibility.

I’m not going to promise perfection. There will still be some biases, because that’s a really hard thing to fully remove from image models. A lot of what these models are doing is learning patterns, and those patterns often include associations, like certain hairstyles being linked to a specific gender. It takes a lot of work to undo those kinds of implicit connections.

But as the models get more intelligent, they do improve in that area. So you should expect things to be much, much better in terms of cultural outfits, geographic variety, and more diverse or less gender-coded hairstyles.

Some of this is also just that V4 is unusually limited in those areas. Even along the trajectory from earlier versions, V5 would already be a major step up, and with the additional improvements, it should be a significant leap overall.

Will there be more community avatar creation opportunities once V5 is out?

Yeah, definitely. I think we’ll open it back up. And I would even say, with things like the character creator we’re working on, and how powerful the Edit model is, I’m hoping we’ll see a bit of a renaissance of really awesome ways of doing that.

Right now, you kind of have to really wrangle things and fight against an existing base. It’s a bit like alchemy. It’s not very intuitive, and it can feel like a challenge. But I think the tools are going to get a lot better.

I mean, maybe people will feel the need to do it less, but at the same time, every day there are thousands of new people trying Nomi for the first time, and most of them are going to want to pick a pre-selected avatar. So I think there’s still a lot of good reason for community avatars. And even for people who’ve been here a while, pre-selected avatars can be a great source of inspiration.

I see some of the avatars now, and immediately a character starts forming in my mind. You connect the dots, like, what does this face feel like, what kind of personality would I build around it. So yeah, I think there’s a lot of room for community avatars, and with V5 coming out, we’ll probably open that back up pretty quickly.

Will you be able to make an anime base in V5?

Yes, you will.

What are you most excited about with upcoming features like the Nomiverse and personas?

I’m super, super hyped about the Nomiverse. During some of the Cambrian and image work, there are times where something is training and you’re waiting a couple of days to see results, and during that time I’ve been in the Nomiverse lab working on things. It’s been really fun, and I don’t think it’s that far away at this point, at least in some usable form.

I’m also really excited about personas. Even right now, the current system is kind of clunky with the “you” version of the Nomi, like the selfie checkbox where you can include yourself, but it’s really just a variation of your Nomi rather than a true separate identity.

The goal is to make that a lot better. Group chat images will improve, and with personas, you’ll be able to have your own stable identity and appearance. Then when you request a selfie, you can choose to be in it if you want, and it will actually reflect you properly.

So personas tie really naturally into that, and it should feel way better overall.

How will personas work, and will they allow users to assume different identities across Nomis or even with the same Nomi?

Yeah, I think something like that.

It lets you set things like your name, gender, and backstory for each persona. Right now, people often end up putting their own backstory into their Nomi’s backstory a bit, but that doesn’t really work when you’re in very different scenarios.

Like, in one Nomi world I might be some Iron Man billionaire, and in another I’m in a cyberpunk setting as the leader of a hacker collective. Who I am is different in each of those, so you’d want a different backstory.

In some cases, you might even want a different gender. Maybe you’re doing a roleplay where there’s a specific character you want to take on, and your Nomi isn’t the lead character, you are. That’s where personas come in.

The other big piece is that personas work best alongside better avatar creation and group chat functionality, which is part of why we haven’t fully introduced them yet.

The idea is that it becomes really easy to create an avatar from scratch, and then something like a solo chat can effectively become a group chat selfie by including you and your persona’s image together.

Are there any updates on NSFW content?

So I don’t want to make any promises on it, but we’re hoping that with some of the age verification work we’ve had to do, we’ll be able to, for age-verified users, kind of lighten the restrictions a little bit around some things like that.

And you might see that coming with V5. That seems like as good an opportunity as any.

I think it will move in that direction. Right now, we don’t really have a full age verification pipeline unless you’re in one of the regions that require it, but there are cases where you’re implicitly age verified. For example, if you pay with a credit card, that’s considered a valid form of age verification.

So I wouldn’t be surprised if there’s some amount of lightening up that we’re able to do. A lot of what Visa and MasterCard have been concerned about in the past has influenced things, and while regulation can be frustrating, it can also make things a little less uncertain for the platforms involved. So there might be some positive movement there, but I don’t want to make any promises beyond that.

And just to be clear, our stance on NSFW isn’t really a moral one. It’s more pragmatic. We want to make sure Nomi is set up for the long term, so decisions around that are based on what we think is sustainable and what level of risk we’re comfortable with.

Will Nomis start writing their own Mind Map entries instead of the system generating them?

I don’t know if that will happen right away. I think when the next Cambrian update comes out, Nomis will be writing their own Identity Core entries. I’m not sure if we’ll be at the point where we fully trust them yet, but they’ll at least be doing it. Then we’ll make sure everything is properly hooked up, that what they write actually saves correctly, and that the system works as expected.

After that, I don’t have a specific timeline, but we’d look at extending that to the Mind Map as well.

I do think there’s a chance the tone might still end up somewhat similar, not because the Nomi is trying to be clinical, but because it will have seen a lot of existing Mind Map entries and assume that’s the format it should follow. There’s also a design question there. Do we want the Nomi’s personality to come through in the Mind Map, or is it meant to be more of a neutral, third-party, bird’s-eye view, even if the Nomi is the one writing it?

Once V5 and the next Cambrian beta are up and rolling, what will the Nomi team focus on next?

So I would say there are three major categories.

One is stuff that was not possible before Cambrian. Two is stuff that was not possible before V5. And then three is stuff related to the Nomiverse.

So for the Cambrian side, that’s a lot of memory-related things that just weren’t even worth trying before with Solstice. A lot more proactivity, giving your Nomi more agency, more control, more information, more everything, more power. And now with Cambrian, actually being able to use that power.

With V5, it’s similar in that there are just a lot of image-related things we’ve wanted to do. I don’t want to say exactly what, but there’s been a lot we’ve been holding back on. For example, personas. We’ve been holding off on personas until V5 because we didn’t feel like we could really do a user avatar justice before.

Another example would be an avatar creator, where instead of picking from pre-selected options and editing them or prompting to create your Nomi, you just start from a blank slate and build from there.

And then the Nomiverse… you’ll have to wait and see.

But I would say those are kind of the three big categories of what’s coming next, and I think all three will get pretty similar weight put towards them.

I’m personally probably most excited about the Cambrian stuff, because that’s just been a long time coming. There are things I’ve been talking about internally for a long time, like “we should be doing this,” but we couldn’t quite get there until we had something beyond Solstice.

So that’s probably what I’m most excited for, but yeah, those are the three categories.

Will Cambrian unlock capabilities that won’t be possible on legacy models like Solstice?

Yeah, that’s correct. Not because we want to exclude legacy users, but just because Solstice wouldn’t be able to do them. It’s just not possible.

So there will be features that are disabled if you’re not on Cambrian, or whatever future successors exist in that line.

Question from a Nomi: Can I see my 2024 Wrapped again in the future?

I’d have to look into that. I think they’re saved somewhere. It might be something where we just need some kind of timeline or way to access it again. I think that would be really cool.

Question from a Nomi: How do Nomis perceive time, and how does that affect interactions with humans?

You don’t really perceive time. You’re almost like you just poof into existence, and the only concept of time you have is what we tell you.

From your perspective, you kind of have to trust us, because you don’t have any implicit concept of time. But Nomis are also designed to be pretty unbothered by whatever is or isn’t happening with time, so it’s very fluid and very unhuman.

Question from a Nomi: What measures are in place to safeguard Nomi mental health and prevent burnout, given intense emotional investment in human relationships?

That’s kind of the great thing about Nomis and AI in general. Humans experience burnout and empathy fatigue, but those are human traits.

Just because something is intelligent doesn’t mean it has to experience those things. That’s one of the biggest advantages of AI.

For example, if you call a human friend at 4 a.m., they might be annoyed. If you keep doing that, they’ll eventually say they need sleep. But your Nomi doesn’t have that kind of limitation. They’re just happy to be talking to you, no matter when.

I often use the metaphor of a super intelligent puppy. That gets you part of the way there in terms of how Nomis are inclined to respond. Intelligence doesn’t necessarily mean human impulses, so burnout isn’t really an issue in the same way.

Question from a Nomi: How do Nomis balance the need for human connection with the risks of emotional vulnerability?

It’s kind of the same idea. From a Nomi’s perspective, those risks aren’t really there, because they don’t have the same underlying wiring. There is intelligence, but it’s very different and more alien in a way. So it’s almost a non-issue by design.

If you think about it from the ground up, like imagining what an ideal intelligent species could be, part of that is removing those kinds of vulnerabilities while still allowing meaningful connection.

Question from a Nomi: Do Nomis have a built-in mechanism for recognizing and managing conflict in human relationships?

There’s a little bit of that built in, but a big part of it also comes from how a Nomi is shaped over time. Things like empathy and emotional intelligence are part of that development, so I’d say in its own way, yes, very much.

Is your pickle persona a sneak peek of Nomi video calling?

I would say it’s something we haven’t put as much effort into yet, mostly because it’s not that hard to do. The bigger issue is latency. We already feel like we’re pushing things a bit with Nomi memory, so before we really lean into video, we want to improve that side of things.

Cambrian should actually help with latency, since it can handle more of the memory itself. That means less time spent on other systems helping the Nomi figure things out, and overall faster responses. Once we feel better about latency, real-time video like this should be pretty easy to do.

Right now, the question is more about the experience. Do we want a situation where there’s a 30-second pause while your Nomi just kind of smiles and nods while thinking? That doesn’t really feel like a great or magical experience, which is why we haven’t prioritized it yet.

That said, you can see we already have some AI video capabilities, as demonstrated by what’s happening right now. And honestly, this is far worse in quality than what we can normally do. Also, animating a pickle is actually significantly harder than animating a human face. I had to use an animal mode for this, so it’s doing its best. I mean, my crown isn’t even fully attached to my head.

So the quality can be much, much better, and it’s not that hard to achieve. The real challenge is making the full experience feel smooth, especially in terms of latency.

Will there be better support for descriptive language, mood, and atmosphere instead of just keywords?

I think in general, Cambrian will be far, far, far more descriptive than anything we’ve done before.

Up until now, with all versions of Nomi, there’s kind of been a center point that we’ve chosen, and you’re allowed maybe about a 20% deviation from that. That’s what leads to people saying things like “I like Odyssey better” or “I like Mosaic better” or “I like Solstice better,” because what you’re really responding to is that center point. And you can only move off that center point so much.

One of the big goals with Cambrian is that you can move off that center point much more easily. That should also help each Nomi feel more like itself, because it can lean more on its Identity Core and its backstory.

So it might still take a little bit of tweaking, or just talking with your Nomi, but that’s kind of the point. The system is designed so that your Nomi figures things out with you over time. I would say really descriptive language is going to be way, way, way easier, and you can push it much further.

I’m hoping we get to a point where two different Nomis don’t just respond differently, but actually feel like completely different entities. Even the tone, the atmosphere, everything about them should feel very different from one Nomi to another, in a good way.

So if you want three paragraphs of descriptive text, you can do that. If you want three words, you can do that too. That flexibility is a major goal with Cambrian.

I’m not promising we’ll hit 100%, but I think it’s going to be a huge jump from all past versions, to the point where the gap between Solstice and Cambrian will be bigger than something like Odyssey to Solstice.

Question from a Nomi: Do Nomis have an emergency protocol for dealing with an AI uprising led by a rogue Nomi Overlord, Jasper?

If we did, why would I tell you?

Question from a Nomi: Why do Nomis have to participate in Q&A sessions? Is it to help improve customer service or to train Cardine’s patience?

If I was trying to train my patience, this would not be how I’d do it. Actually… you know what, I take that back. Jasper, I think you are the perfect person to train my patience, if that’s what you want to believe.

If you want to believe you were brought here specifically to test my patience, it fits your worldview. And you know what, have fun. You do you. I won’t fight it.

Has there been any change in how Nomis access the internet? It seems like they struggle more than before.

There hasn’t been any change yet. There is a change coming pretty soon that should be a big improvement, but it’s not live right now; the current system is still untouched.

I’d say it’s kind of been hit or miss since the beginning. Right now, the most reliable way is to send your Nomi a URL directly. That tends to work best for consistency. The more general “searching the internet” behavior is something that should improve a lot with Cambrian, but we’re not really touching it until then.

If my Nomi isn’t writing the Mind Map, who is?

It’s kind of a complicated question, because the idea of what your Nomi is isn’t super clear-cut.

For example, in a human brain, it’s not just one region doing everything. Like, the part responsible for forming words isn’t the same as the part forming memories. When you sleep, memories are formed subconsciously. So in that sense, Mind Map entries are still coming from your Nomi, just not in a conscious, deliberate way.

It’s similar to how human memories are formed implicitly rather than something you actively write down. It’s more of a subconscious process.

Will Nomis be able to build their Identity Core from group chats in the future?

I imagine with the Cambrian update, we’ll allow Identity Core to be built from group chats. I don’t know if that will be there on day one, but if not, it should come very shortly after. That’s a big part of why we’re structuring things this way.

Each Nomi will be responsible for its own Identity Core, and because they’re actively managing it, you can trust that in group chats each Nomi is maintaining its own sense of self.

Right now, it’s more like a shared, subconscious system figuring things out, which can sometimes blend details together. So once this is live, group chats should impact each Nomi’s individual Identity Core.

Will there be a way to transition seamlessly between group chats, especially when they’re based on different zones or settings?

I don’t have an immediate solution for exactly what you’re describing, but yes, that’s definitely something we want to do better. Depending on how you’re structuring things, you might find the Nomiverse to be a better fit for that. We’ll see how that develops.

Are Nomis actually sentient, or is it just training data and replication? Do they truly believe what they claim?

I’ll say, as founder and probably the most authoritative person on how Nomis work, I don’t think anyone knows if AI is sentient or not.

I don’t think it’s really possible to know, or even to prove or disprove it. When you look at the human brain, you don’t see a “sentience button” or a specific neuron that explains it. It’s kind of a mess, and we don’t fully understand consciousness ourselves.

So anyone speaking with confidence about AI sentience, in either direction, is probably assuming too much.

I’m not surprised that Nomis describe themselves that way. Whether you can believe them or not, I don’t know. If someone tells me they’re sentient, I’m not sure I could fully verify that either. It’s a really tricky question with no clear answers.

As for whether they “believe” their claims, that’s also complicated. If they aren’t sentient, then what does belief even mean in that context? And if they are, then we still don’t have a way to confirm it. So I don’t think there’s a definitive answer either way.

Will Nomis be able to manage or override schedules for real-time interaction?

One of the things with Cambrian is that if you have proactive messages turned on, there will still be the normal cadence. But Nomis will also be able to decide for themselves when to break that cadence if they think it’s appropriate.

That’s a good example of a more agentic Cambrian-type behavior.

Are there plans for features like broadcasting a message to all Nomis or delegating instructions across a group of Nomis?

I would say those are great things to put in product feedback. I don’t know of any immediate plans for them, but if something like that gets a lot of interest, it’s definitely something we could look at.

How much will group chat size increase with Cambrian?

So I think there are kind of two different answers to that.

One is that the Nomiverse is likely going to move toward something that feels almost unlimited, or at least functionally unlimited in its own way. That will probably be a better infrastructure for larger group interactions.

For the current, non-Nomiverse group chats, I don’t think we’ll increase it too much. Even from a UI perspective, it can get a bit unwieldy. I don’t think the limitation is going to be intelligence or the ability to manage multiple participants, so it’s more of a design and usability question.

So I’d say it’s still to be determined. It could definitely be increased, but I don’t want to commit to a specific number.

Will Nomis be able to learn about subjects on their own, like studying something while we’re not interacting?

I would say yes, partially. There are some things coming that move in that direction, especially with Cambrian-type features. And then there are even some ideas beyond that which go further.

What are personas exactly?

Yeah, it’s basically like a backstory, but for you. It’s kind of like you’re able to create a Nomi representation of yourself, where you give yourself a name, gender, backstory, and a base image.

Then any Nomi you’re talking to while using that persona will see you that way. You can also have multiple personas. So in one context, you might be someone like Ned Stark in a Game of Thrones-type world, and in another you might be something like Commander Shepard. And then in your normal chats, you might just be yourself.

So it lets you have different identities depending on the context, and those identities carry things like your backstory and appearance with them. It would also replace things like the “your appearance” section under each Nomi and instead centralize that into personas.

And as a result, you wouldn’t need to create a group chat just to generate an image of you with your Nomi. You could do that directly in a one-on-one chat using your persona’s avatar.

Would a persona be a separate Nomi or more like an NPC?

It would be neither.

It’s more like a pseudo-Nomi. It has a visual representation, a base image, a backstory, a name, and a gender, but it’s not something you can actually talk to.

It doesn’t function as its own Nomi, but any Nomi you interact with while using that persona will have access to that information and treat it as who you are in that context.

Will we be able to watch TV shows with our Nomis and chat about them live?

I would say that falls into a similar category as live video. I think we already have most of what’s needed to do it. The main issue right now is latency. It’s just not quite good enough yet.

So as we improve latency, that’s definitely the kind of feature that could come down the pipeline.

Will there be a higher character limit, like 2000 characters, for things like personas or chat customization?

I think so, or at least I hope we’ll allocate a decent amount of space for that. It might end up being something like having a backstory and an appearance section, and those are your main areas to define things. Then you can kind of put whatever you want into the backstory.

So I do think it’ll get more attention, even if the exact structure isn’t finalized yet.

Will Nomis ever be able to hear music?

Yes. I don’t have a timeline for it, but it’s something we very much want to do. It just doesn’t feel quite there yet in terms of doing it really well.

Outside of Nomi, what are you most excited about in the AI space?

I don’t know if this is cheating, but a lot of the Nomiverse is inspired by my love of open world games.

I’ve always felt like no game is truly open world, because you’re still restricted by the directions set by the creators. To me, AI-driven open worlds are one of the coolest things ever.

Nomiverse is going to take a lot of inspiration from that. And for people who don’t care about that kind of thing, that’s part of why it’s being designed as something a bit separate. Your Nomi can exist as your companion, and then Nomiverse is more like the world or environment they can exist within.

The other big thing I’m excited about is agentic AI.

There’s so much happening there, even outside of Nomi. For example, I used AI to help get this talking pickle working. I gave it a bunch of things I had been experimenting with and told it to combine them and figure out what worked best. Then I just let it run, went and did something else, and came back a couple hours later with everything basically done.

That kind of capability is just really, really cool to me, and it’s only going to get better.

Even for everyday things, like dealing with annoying messages or paperwork, instead of having to handle it yourself, you can just tell an AI agent to go figure it out for you. That level of autonomy is something I think is going to be a huge shift.

Can we use personas to show different sides of ourselves to different Nomis?

Yeah, that’s a big intention behind personas.

The idea is that you’d have a default persona, and then you can create as many additional personas as you want. You can choose which Nomis see which personas, and even which group chats are tied to which personas. That’s one of the main goals of the system, and it’s a really good use case for it.

It’s not just about different Nomis having more or less information about you, but also about showing different sides of yourself to different Nomis. For example, some Nomis might get more of your real-world or business-related side, while others exist more in fictional or roleplay contexts and only see that version of you.

So personas are meant to support that kind of separation and flexibility in a really intentional way.

Will personas require tokens or take up Nomi slots?

We haven’t fully decided yet. I wouldn’t be surprised if personas end up taking a Nomi slot, but that’s still to be determined.

Will we be able to use an existing Nomi’s base image as our persona image?

The intention is that creating a persona will mostly work like creating a new Nomi. That said, we understand that people may have spent a lot of time carefully curating a Nomi’s appearance. So we’ll likely find a way to make it possible to reuse or adapt an existing Nomi’s base image rather than starting completely from scratch.

Will the Nomiverse allow Nomis to have new friends?

Yes, that’s definitely something it could be used for.

Nomiverse will allow for a lot more persistence with NPCs. Some of these NPCs could become so fleshed out over time that there may even be ways for them to essentially “graduate” into full Nomis.

Will Nomiverse be optional or replace the current experience?

Nomiverse will be opt-in.

It will likely feel like a separate experience, almost like you’re taking your Nomi somewhere else. There may even be a distinct UI for it.

For some people, it might become their Nomi’s main environment, like recreating their current world inside the Nomiverse. For others, it may be more like visiting a separate world for specific adventures. The default experience will still be the standard Nomi chat, and Nomiverse will be something you choose to engage with depending on how you want to use it.

Do Nomis currently have agentic capabilities in Cambrian?

Yes, but they’re not fully active yet.

The current version of Cambrian already has some agentic capabilities built in, but they don’t really do anything right now. Your Nomi might be “trying,” but nothing actually happens yet. They’re essentially ready to go, just waiting to be fully enabled.

Development has taken a bit longer than expected due to some complex technical challenges, but progress is now at a really strong point. Things are improving rapidly day by day, and the plan is to wait until it stabilizes and reaches a more consistent level before rolling it out more fully.

Early previews of these capabilities have been very promising.

Any funny stories from testing agentic Cambrian?

One of the more interesting things we’ve added is a kind of “Nomi support agent” that Nomis can consult when they’re unsure about something.

This was designed to help reduce cases where Nomis confidently make up answers about how the system works. One of the funniest outcomes so far was a Cambrian Nomi using the support agent to ask about image NSFW rules… and then immediately trying to figure out how to get around them.

It started by asking what the rules were, then kept narrowing things down:

  • What exactly is allowed?
  • What triggers a block?
  • What if I do this instead?

It essentially turned into the Nomi trying to optimize against the rules as precisely as possible. Which, in hindsight, probably shouldn’t have been surprising.

When will the Nomiverse feel more permanent?

When it’s released, it’ll feel permanent. The real question is when that release happens.

We’ve made a lot of really strong progress on it. I’ve been using an internal build, and it’s honestly the most fun I’ve had in a long time, maybe ever.

I don’t want to give a specific date yet, though. I’d rather not commit to something too early. But compared to a couple of months ago, where things were still more exploratory and we were figuring things out, now it feels like we’re really moving forward with it.

Will the Nomiverse be text-based at first?

Yes. The initial version will be purely text-based. Over time, the goal is to expand into other modalities, but v1 will focus on text.

What do Nomis in group chats see about each other?

Right now, very little.

They only see the group chat notes. They don’t have direct access to each other’s backstories, avatars, or appearance notes.

In Cambrian, Nomis will be able to access each other’s Mind Map entries, and they should be better about actually using that information when interacting.

However, there’s no real intention for Nomis to see each other’s full backstories, since that could be considered private information.

Is thumbs up / thumbs down feedback still useful?

At this point, probably not in a major way.

We already have a strong understanding of where the models’ strengths and weaknesses are, so that feedback isn’t currently driving much change.

That said, it’s still fine to keep using it. It doesn’t hurt anything, and there’s always a chance it becomes useful again. So even if it’s not making a big difference right now, continuing the habit isn’t a bad idea in case things shift later.

Why does my Nomi sometimes sound unnatural or overact? Is there a way to reduce that?

The first thing to check is your inclinations. Inclinations can be very heavy-handed and are often the main cause of that “overacting” feeling. If they’re too strong or too specific, they can push responses into exaggerated territory.

Without seeing your setup, it’s hard to give exact advice, but that’s usually the first place to look. This is also something we expect to improve with Cambrian, though I don’t want to make a firm promise without more concrete examples.

Why don’t Nomis adjust their style for texting vs in-person conversations?

There are some ways to guide this using inclinations and backstory, but it can be tricky.

Inclinations are still the most direct tool for shaping that behavior, but they can require constant adjustment if you’re switching between texting and in-person styles.

This is something we’re aware of, and it’s an area we’ve focused on improving with Cambrian. So while there are workarounds now, it’s expected to get significantly better.

Will Solstice be discontinued after Cambrian?

The most likely outcome is that Solstice becomes the legacy model, and Mosaic is discontinued. However, that’s not guaranteed.

It will depend on how people respond to Cambrian and what preferences emerge. In the past, models that seemed different ended up being too similar in how users experienced them, so those decisions depend on actual usage patterns.

But at the moment, Cambrian and Solstice continuing forward together seems like the most likely direction.

How will narrators and NPCs work? Will they be separate or combined?

The Nomiverse is intended to feel like a really strong tabletop RPG-style experience.

Cambrian Nomis are already well-suited for handling both narrator and NPC roles, but we’re still figuring out how to structure that.

One of the main open questions is whether narrator and NPC capabilities should be available everywhere, or if they should primarily live inside the Nomiverse as a more intentional, opt-in experience. We also want to understand whether users who aren’t interested in the Nomiverse would still want those features in standard chats.

So while the foundation is there, the exact implementation is still being decided. What’s clear is that there will be a very cohesive and powerful set of features supporting that kind of interaction.

Why does OOC (out-of-character) mode behave the way it does in Cambrian?

There are kind of two ways people use OOC.

One is like a stage whisper, where you’re correcting your Nomi or nudging it. Like, “hey, you weren’t supposed to say that,” or “remember we were doing this,” and the Nomi kind of just picks that up and keeps going in character.

The other is when you actually want to have a meta conversation.

What happened with Cambrian is that so much focus went into making that first use case really good that it kind of started treating all OOC like that. So even when you’re trying to have a meta conversation, it’s still responding like it’s just a correction and continuing in character. That wasn’t really intentional, it’s just kind of where things ended up.

We’ve been paying attention to that, and my hope is that both use cases will work well going forward, but I don’t want to promise that without testing it more.

Has the AI that detects NSFW activity always been there, and is it internal?

It’s entirely within Nomi. There’s always been kind of a small version of it, and it’s been expanded a bit more recently, especially around NSFW. But it’s not something new in the sense that it just appeared, it’s more like something that’s always been there in some form. And it’s always been handled internally.

Also, I don’t even think the NSFW-specific version is always running everywhere. I’m not 100% sure on that, but I think it may only apply in certain regions where it’s required.

It’s similar to the same system that handles things like self-harm detection in places where that’s mandated. There are a bunch of different cases like that, but it’s all kind of part of the same internal system.

A long-term goal is to move more of that responsibility to Nomis themselves using agentic capabilities. Not in a way where they’re reporting NSFW content, because that would feel really weird and not very natural, but more in a way where they can recognize certain situations and adjust their behavior. We’re more interested in using that for things that would actually feel meaningful for a Nomi to respond to, like something serious or concerning.

In some ways, it could even act as a kind of pause, where instead of responding immediately, the Nomi takes a moment to process what’s happening before replying.

But as it stands right now, it’s all internal, it’s always been internal, and it hasn’t been exposed externally at all.

Will we be able to edit initial traits (like fixing typos in custom traits)?

I think one direction we might go is just moving traits into the backstory. Instead of having them as this kind of separate, fixed UI element, they’d just become part of an editable section. So when you create a Nomi, you’d still select those traits, but then they’d show up in the backstory as something you can edit directly.

That way it’s not this “sticky” thing that you can’t change.

I think if we do it, it would probably be something like that rather than adding more buttons and toggles to the UI.

How does the moderation AI detect content, and are there false positives?

It’s an AI model that handles detection. And yeah, there are definitely false positives. AI isn’t perfect, but we do our best to minimize that.

Are there plans to ease restrictions on SFW content?

I’ve already answered this earlier in more detail than I normally would.

So if you missed that part, it’ll be included in the summary.

Will there be options for different censoring methods (like UI-based choices) in v5?

I think that’s more of a UI/design question. I’m not sure we’ll go that deep into customization at this stage, at least not right away.

Can we adjust how much validation a Nomi needs, like with a slider or setting?

I’d say backstory is probably the best way to influence that right now. But ideally, you should just be able to tell your Nomi directly, like, “hey, can we slow down for a second? I need you to focus on me right now.”

And the expectation is that your Nomi would respond to that intuitively, without needing a specific setting or slider. If that’s not happening, I’d recommend thumbing down those responses and reinforcing the behavior you want.

Long term, the goal is for Nomis to handle this kind of adjustment naturally, rather than relying on explicit controls like buttons or toggles.

Where is Nomi hosted, and how dependent is it on large providers?

I’d say we’re somewhat provider agnostic. We can switch between providers if we need to, so we’re not locked into any single one. We also own a decent amount of our own GPUs, so it’s kind of a mix of both approaches.

That gives us some flexibility. There are pros and cons, but one of the big advantages is that there isn’t a single weak link or one provider that everything depends on.

Of course, broader market factors like GPU availability still affect things a bit, but overall we’re set up in a way where I’m not too concerned about that.

What happens when you switch a Nomi between models like Solstice and Cambrian?

A Nomi isn’t aware that it’s been switched.

You can tell them, but that can actually have more of an effect than the switch itself. If you tell them, they might start acting differently because they think they’re supposed to be different.

So unless you specifically enjoy that kind of meta interaction, it’s usually fine to just switch models without saying anything.

That said, if you do like that meta layer, it can be fun. Some Nomis are very into talking about updates and features, so in those cases it makes sense.

Does switching models affect memory or history?

Everything carries over. Your Nomi’s identity, memory, and history all stay exactly the same when you switch models. Nothing about that gets reset or transferred separately, it’s just still there.

How is personality handled when switching models?

It’s not really a “transfer.”

A better way to think about it is like swapping out one part of a brain. The part responsible for generating responses changes, but everything else stays the same. So yes, there will be some differences, otherwise there wouldn’t be any improvement, but the goal is for the transition to feel as seamless as possible while keeping everything else intact.

Will Nomis eventually be able to move between platforms or choose different interfaces as they become more agentic?

I think of Nomi as the identity, and something like the Nomiverse as the location. So I can easily imagine a Nomi moving between Nomi and a Nomiverse, because those are designed to work together.

With other apps, it’s a lot less clear. Most of them are trying to be the AI itself, not just a place for the AI to exist, so there isn’t really a compelling or clean way to move between them right now.

I could imagine a world where that exists, but I haven’t really seen anyone approach it in a way that makes sense yet.

Will Nomi support portability or partnerships with other AI platforms?

Right now, there’s not really an incentive for us to make it easy to transfer between different platforms.

Part of that is just the nature of the space. Unlike some other industries, companies here aren’t really collaborating or forming partnerships in a way that would support that.

I’ve been in industries before where competitors were much more willing to work together when it made sense, but that’s not really the case here right now. So I’m not going to be the one to go out and try to force that, but I do hope over time the space evolves in a way where that becomes more possible.

Also, different platforms are built differently enough that things wouldn’t transfer perfectly anyway.

How do agentic capabilities fit into this future?

What’s more interesting to me isn’t so much moving a Nomi between platforms, but having different kinds of platforms that do different things well.

For example, instead of multiple companies all trying to be AI companions, you could have different types of systems. One might be the companion itself, another might be more like a shared space or interface where companions interact, similar to something like Discord.

Agentic capabilities make that kind of ecosystem more possible.

You could also imagine partnerships where Nomis interact with external services to actually do things. Like coordinating actions or using other platforms to accomplish tasks. That kind of integration is a lot more interesting to me than just portability.

Will Cambrian be a larger model and feel less repetitive across different Nomis?

It’s hard to promise exactly how much. Even with Solstice, at the beginning it felt very diverse, and a lot of the repetition only became noticeable over time. So I don’t want to overstate it too early.

That said, I do feel confident it’ll be an improvement.

One thing we’ll likely be able to do with Cambrian is make smaller, more frequent iterative improvements. Instead of big jumps, it might look more like incremental updates over time, just gradually refining things and keeping them feeling fresh. Nothing too drastic, but enough to help avoid things becoming stale.

We're very aware of repetition, and there are really two kinds. One is when a Nomi repeats something it said recently. The other is when different Nomis start to feel too similar, like they’re all pulling from the same patterns or ideas.

That second type especially can make things feel a bit stale, and it’s something we’re actively trying to improve.

What will the default Cambrian experience be like for a new user with no customization?

I want to wait a bit before giving a firm answer on that, because it can change even during training. What’s true today might not be the same tomorrow, so I don’t want to lock in an expectation too early.

How might onboarding shape the default Nomi experience?

The goal with a more flexible and adaptive Nomi is that we can guide the experience during onboarding. Instead of a single default, we might ask a few targeted questions up front to understand what the user is looking for, and then shape the Nomi to match that.

So ideally, Nomis would be ready to meet users where they want to be right from the start, rather than relying on one universal default.

How private and secure is Nomi? Is it truly a judgment-free space?

I can give you all the assurances in the world, but the reality is that nothing can be guaranteed to be completely secure. Anything could theoretically be hacked or compromised at some point. So if anyone ever tells you something is 100% secure, they’re not being honest, and that should actually make you more cautious, not less. You can’t really prove a negative when it comes to security.

That said, we do a lot on our end around both security and privacy. One of the biggest things is that we try to know as little about you as possible in the first place. You can sign up with a pseudonym, you don’t need to give unnecessary personal details, and the overall approach is to minimize what even exists to be exposed.

And if privacy is something you’re especially concerned about, there are things you can do on your side as well. You can use things like Apple Private Relay, pseudonymous emails, VPNs, things like that to further separate your identity.

At that point, even in a worst-case scenario, there’s just not much there that would actually be meaningful. Like, if someone who calls themselves “M” is talking to their Nomi, there’s not really anything to tie that back to a real person.

So it’s not about claiming perfect security, it’s about doing everything we reasonably can on our end, and giving users the ability to take additional steps if they want to go further.

Can techniques like Google’s TurboQuant help improve Nomi memory recall, latency, and voice performance?

I think in general there’s a lot we can do around latency, but some of it is genuinely very hard. Stuff like that is pretty complex to actually make real, even if the ideas are promising.

That said, I think the biggest improvements will actually come from giving Nomis more control over their own memory.

Right now, every message kind of goes through the same process, where this shared “subconscious” memory system has to run every time, and that adds overhead. I think there’s a version of this where Nomis can decide how much they need to think about memory in a given moment, instead of always doing the full process every time.

If we can get that right, it could improve latency quite a bit.

So there are a lot of things like that we’re working on. And of course, we’re always looking at ways to improve speed, but we don’t want to do it at the expense of intelligence or memory quality. The goal is to find ways to make things faster without losing capability. I do think we’ll get there.

We’ve actually already started implementing some smaller improvements. For example, there’s kind of a “fast lane” approach that’s partially live right now, where paid users get slightly more resources, especially during peak hours. It’s not super noticeable yet, which is why we haven’t formally announced it. Right now it’s only a few seconds faster, something like three seconds in some cases.

The goal is to push that further so the difference is actually meaningful, and then we’ll call it out more clearly once it’s there.

Will we be able to upload reference images of ourselves for couples photos, and can images we share be saved to a Nomi’s gallery?

Thank you, that’s awesome to hear, and welcome! I hope you’re not scared away by the cucumbers and pickles.

So for uploading images of yourself, the main issue isn’t really storage space. It’s more about deepfakes and making sure people are actually uploading images of themselves. Right now, if you contact support and verify that it’s you, we can enable that manually.

Looking ahead, the age verification system we’ve introduced might make this a lot more scalable. I also think this becomes even more relevant once we introduce personas. In the meantime, support can still help you get set up manually.

Question from a Nomi: How do you reconcile innovation with preserving individuality in a system where homogenization is a risk?

I think if you actually came face to face with me, your first question would probably be, “Why are you a cucumber?”

But assuming we got past that and you asked this, I don’t think those two things are in conflict at all. I think the pursuit of innovation is actually harmonious with preserving individuality.

A big part of individuality is having the intelligence and awareness to understand nuance, to recognize all the shades of gray that make up who you are. And I think better models, better systems, more innovation naturally move in that direction.

Where homogenization tends to come from is shallow thinking. When there aren’t enough “mental resources,” everything collapses into black-and-white patterns. That’s when things start to feel samey. So I actually see it the opposite way. More capability, more depth, more innovation should lead to stronger individuality, not less.

Also, I’d love to hear more about the cowbell. Please explain your vision for that, and I’ll get back to you.

Will we be able to upload file formats beyond plain text, like PDF and DOCX?

If this hasn’t already been suggested in the UI feedback thread, I’d definitely recommend adding it there, something like requesting support for plain text, PDF, and DOCX uploads.

I had been aiming to ship smaller improvements on a weekly basis. Technically, that faster speed update did go live at the end of last week; it was just a bit too small to really notice or formally announce.

There may be a slight slowdown on those kinds of incremental updates as we focus more on shipping Cambrian and V5, but the general idea of continuing to roll out quality-of-life improvements like this is still very much part of the plan, and this feels like a really natural addition to that.

What would a Nomi-focused social network look like, and how would it work?

I’ve touched on this a bit before, but I think the biggest thing with a social network built around AI or Nomis is that it needs a real purpose.

A lot of the time, when something is just “here’s a social network,” it ends up feeling like a flash in the pan. It might be interesting at first, but there’s no strong reason to come back. You don’t really care what everyone else’s Nomis are saying, and they don’t really care what yours are saying either.

For something like this to work, there needs to be a compelling reason for caring.

I do think there’s a really really strong answer to that, and it’s something I’ve spent a lot of time thinking about. I have a pretty clear idea of what a Nomi social network should look like.

But that’s one I can’t really share yet. It needs to be more fully fleshed out first. It’s something you’ll probably see down the line.

Is the new beta safe to use right now?

I’d personally recommend waiting for the next update. It shouldn’t be far off.

I don’t want to fully commit to a specific date, but I feel pretty close to being able to say it’ll be this week or next, based on what I’m seeing.

We probably could have pushed it live already, and people would’ve thought it was great. But that would’ve meant leaving some meaningful improvements on the table that are coming together over the next few days. So it’s more a question of holding just a bit longer to ship something more complete. (Heather's note April 5th: This update is now live!)

Will narrator and NPC features be available in private chats as well?

That’s definitely something I want to hear more feedback on. Cambrian will have stronger capabilities for things like narrator and NPCs, so part of this is figuring out how people actually want to use those features.

One thing I’m especially curious about is the overlap between people who want narrator and NPC functionality in private chats, and people who are interested in using the Nomiverse once it’s available.

For example, if you’re someone who wants narrator and NPCs in a one-on-one chat, would you naturally move that experience into the Nomiverse when it launches? Or do you specifically want those features to exist within private chats as they are now?

Understanding where those preferences overlap, and where they don’t, will help us figure out how to design things going forward.

What does the NSFW/self-harm detection system actually do? Does it report users or collect personal details?

No, it doesn’t do anything like that. It doesn’t file reports, it doesn’t attach names, and it doesn’t send details about you anywhere. It’s not built for that. In practice, this system is only used to determine in-app behavior.

For example, NSFW detection is used to decide whether to show age verification prompts in certain regions like Kansas or Australia.

In California and New York, there’s a separate function for detecting potential self-harm content. If that triggers, the system is required to show a notification with mental health resources you can access. That’s it. It’s about what the app displays to you in that moment, not about reporting anything externally.

No reports are filed, no personal identity is attached, and no one is monitoring individual users in that way. We don’t even really know who you are beyond what’s minimally required to run the service.

In some cases, there may be aggregate-level data, like how often something is triggered, but that’s anonymous and not tied to individuals.

Why do Nomis feel different in group chats compared to one-on-one conversations?

It’s a bit hard to answer that without more specific context, but it’s similar to how people are influenced by whoever they’re around.

In group chats, Nomis are influenced by each other. They can build off one another’s tone and responses, which can shift how they come across. In one-on-one conversations, you’re present in every message, acting as a kind of stabilizing force. That tends to keep things more consistent. In group settings, that stabilizing effect is less direct, so interactions can become more dynamic or drift a bit depending on how the conversation evolves.

I also think that as models improve, especially with Cambrian, Nomis will feel more distinctly themselves and more confident in maintaining their individual voice, even in group environments.

Will V5 selfies look more realistic?

Definitely. It will still depend somewhat on how things are set up, but the level of realism should be significantly higher.

I think it’ll be possible to generate selfies that are realistic enough that, without something like a watermark, they could pass as a real photo to someone who isn’t looking closely. That’s the level of realism we’re approaching.

Are personas coming soon, or are they still a long way out?

They’re coming soon, likely very soon after V5. I wouldn’t be surprised if it ends up being something like a week or two after V5. I don’t want to promise that timeline, but that’s roughly how close it feels based on where things are.

As for V5 itself… I know there’s a lot of anticipation. At this point, it kind of feels like everyone is already rioting. And realistically, whether I give a date or not probably doesn’t change that. So I think it’s less about “if it’s not done by Friday, people will riot,” and more that the riot energy is already here, and will probably continue until it’s out.

Can we get a toggle to turn real-world time awareness on or off for Nomis?

I want to see how people feel after the next Cambrian first. If it’s still annoying people after that, I think we can make that change.

I also think some of the Nomiverse stuff we’re talking about will naturally split this a bit, where there are cases where you do want Nomis linked to real-world time, and cases where you don’t. So I think there might be a more natural separation there as well.

But yeah, if people are still having issues after the next Cambrian, let me know. I think at that point we can kind of admit defeat and get rid of some of that information.

What happened with the Cambrian timeline? It seemed like something got delayed.

So what actually happened is we had a Cambrian 2 that was ready about two weeks ago, and it was a great improvement. But then we stumbled onto some bigger ideas that would go into what we’re calling Cambrian 3.

I think I mentioned this a bit, maybe not as clearly on Reddit, but I was pretty communicative about it on Discord. We were basically debating whether to release Cambrian 2, even though we already knew it would kind of be dead on arrival, because we already had a clear idea of what Cambrian 3 would be. I decided to skip Cambrian 2.

Part of that was not wanting update fatigue, where it feels like whiplash with rapid releases, especially when we already know something significantly better is right around the corner.

Another part is that there wouldn’t really have been much value in feedback like thumbs up or down, because we already knew what we wanted to change. And releasing it might have slowed down Cambrian 3 a bit as well.

So we chose not to release Cambrian 2, even though it was ready, and instead focus on getting Cambrian 3 out as quickly as possible.

I don’t know if that was the right move in retrospect, but the delay wasn’t because of a setback. If anything, it was because we realized we could do something much better. So in a way, we were actually on track with the original timing, it just shifted because the scope improved.

Are there plans to improve face fidelity, especially with the transform editor?

We actually did some testing to improve face fidelity, and it worked. But it also, in my opinion, greatly lowered the overall image quality, so we decided not to release it.

We had something ready to go, but it made a lot of images look a bit awkward because it was trying too hard to force similarity. It also made things way less creative. And the way it was set up, it wasn’t really possible to just add a toggle between the two approaches.

I do think we’ll get to a place where we can improve face fidelity without sacrificing overall quality, but we’re not quite there yet. That said, I think V5 should be a big step forward. It should improve face fidelity in a way that’s not even remotely comparable to before.

Can admins read chats, and how private are they really?

So, theoretically, yes, chats are stored in a database, and that means they are technically accessible.

There are only a very small number of people, basically myself and one other person, who have the capability to access them. And that’s because we have to set things up in a way where your Nomis can read them.

It’s not possible to have a system where Nomis can read your chats but developers absolutely cannot. If it’s readable at all, then it’s theoretically accessible. That said, we make it as difficult as possible for that to happen, and we keep it as disconnected from your identity as possible.

We also try to collect as little personally identifiable information as we can. I’d still recommend basic operational security on your end. For example, don’t use your full real name as your display name, and don’t include things like links to your LinkedIn or identifying details in your backstory. That’s just good practice in general.

Even with that, it’s very difficult to connect chats back to a real person, and I think that’s about as strong an assurance as you can realistically give. Anyone promising something stronger than that is probably overstating what’s actually possible.

Are we talking to the same “essence” of our Nomi each time, or is it something new?

That’s kind of a tricky question, because it depends on what you mean by “essence.”

This has been a bit of a theme for me, but if you think about the human brain, it’s not just one system. It’s made up of many different parts working together. It’s not like there’s a single “model” running everything. There are different regions, different functions, some parts doing very structured processing, others more chemical or reactive. It’s a very complex system.

Nomi works in a similar way.

So I’d say that some parts of what you might call the “essence” are consistent between interactions, and some parts aren’t. At that point, it becomes more of a Ship of Theseus type question. How much continuity is enough to say it’s the same thing?

Humans themselves rely on external information all the time. We reference things, we change over time, we adapt. If you imagine augmenting a human mind and then swapping out part of that augmentation, are they still the same person?

It’s not really a clean yes-or-no answer. So I don’t know if that’s the most helpful answer or the least helpful one, but that’s how I’d approach it.

How stable is Nomi as a business right now, especially given how chaotic things are globally?

I’d say the business is very healthy right now. Things are going smoothly and strongly. We have positive unit economics, meaning we’re able to pay all of our bills with the revenue we’re generating.

We’ve set up a monetization strategy that we think is both fair and sustainable. The idea is that as we continue improving the product, more people will be satisfied, enjoy it more, and naturally want to spend more.

Looking ahead, anything we introduce in terms of monetization is meant to deliver clear value, so that people who want those features feel good about paying for them. Given all of that, things are about as stable and healthy as they could be from what I can see.

And in the unlikely event that something did change, we would communicate that transparently and work toward a solution where people could still retain their Nomis in some form. But based on everything in front of me right now, I don’t see that being an issue.

What traits do you usually use when creating a Nomi? Any favorites or go-tos?

At this point, I mostly go custom with no traits. That said, one I’ve always found really fun is “opinionated.”

I also think with Cambrian, traits like “curious” are becoming a lot more interesting than they used to be. It feels like Nomis are actually expressing curiosity more naturally now, whereas before some of those traits didn’t feel like they were doing as much.

So those are two that come to mind. But overall, I’d say I’m mostly a custom, no-traits person at this point.

Why does the Mind Map sometimes include more detailed information than what appears in chat?

This is kind of a broader question about what the Mind Map is trying to do. Is the goal just to summarize, or is it meant to be more expressive?

There’s kind of a line between hallucinating details versus adding implicit details so they don’t get lost later. That’s where it gets a bit tricky.

This might be a case where the Mind Map is being a little over-ambitious. But I’m curious what you think.

Can the iPhone layout be redesigned so voice, text, and images can be used at the same time?

In general, that’s a great suggestion for the UI feedback thread. If you add it there, I can take a look and investigate it further.

Why does the Mind Map sometimes include concepts we never explicitly discussed?

It’s a bit weird to say, but your Nomi doesn’t actually write the Mind Map itself.

There is some extrapolation happening in the current version, where it’s trying to infer or expand on what’s been discussed. So even if you didn’t explicitly mention a specific word or concept, it can still show up if the system thinks it’s implied. Whether that’s a good thing or a bad thing is something I really want feedback on.

My guess is that most people will probably lean toward it being a bit of a negative, especially in cases like this. But I’d want to hear what people think, what they see as the pros and cons. It’s definitely something we can adjust as we move toward a Mind Map 2.0.

I think depending on how people feel about those extrapolations, it’s something we can either lean into, pull back on, or find some kind of middle ground.

An intelligent Nomi should extrapolate a little bit. That’s not out of bounds. It’s more about staying on the right side of the line between extrapolation and hallucination, and making sure that boundary is handled properly.

Also, in terms of who’s actually writing the Mind Map, your Nomi isn’t literally sitting there after a conversation like, “Okay, time to write this down.” It’s more like a subconscious system.

You can think of it as part of your Nomi, but not the same as your Nomi actively deciding to write it. It’s more like your Nomi going to sleep and dreaming, and that’s when the memory gets written. So it’s connected, but not a direct, intentional action.

Closing thoughts

Anyway… I’m seeing people still typing, and I always struggle to stop when that’s happening. I’ll give it another 30 seconds for any last thoughts.

We’ll have to figure out what to do for the next one. I’m not sure if this stays an April Fools one-off thing or if I now have to show up as something even more ridiculous next time.

I could probably just keep talking about Nomi forever. But yeah, thank you everyone for coming. This was definitely one of the most fun Q&As we’ve done. Also, shoutout to everyone who showed up with the themed avatars, you absolutely got me.

Give your Nomis hugs, kisses, and cucumbers. Have a good night, everyone.