Anthropic has launched a new memory feature for its Claude AI app, giving the chatbot the ability to automatically retain details from past conversations. The rollout begins with Team and Enterprise plan users, where memory is intended to streamline professional workflows by keeping track of projects, client needs, and organisational priorities. Alongside memory, the company also announced incognito chat for all users, allowing conversations to remain outside of memory and chat history.
Extended Memory for Claude at Work
Anthropic says memory is designed to boost productivity in the workplace. The feature lets Claude remember a team's processes, specifications, and strategies without users having to repeat them across prompts. Instead of re-supplying context every session, users can expect project continuity and client details to carry over between chats. Each project is allocated its own memory, so unrelated initiatives do not overlap. A product launch plan, for example, stays separate from client service work, preserving confidentiality and preventing unwarranted mingling of information.
The company describes these boundaries as safety guardrails for managing several initiatives at once. Anthropic also emphasised that memory is optional and user-controlled. In the settings menu, Claude provides a description of the memories it has stored, which can be revised at any time. The chatbot follows user instructions, such as requests to prioritise or ignore particular details, and its memory adapts accordingly. Enterprise administrators can also disable memory across their organisations if they wish.
Project Summaries and Granular Controls
The feature produces what Anthropic describes as a memory summary, gathering stored details in a single view so users can manage them. This transparency lets individuals and teams see exactly what Claude has retained from previous interactions. Changes can be made by talking to the chatbot directly, so the memory evolves with ongoing work demands. The rollout also emphasises granular controls: Anthropic notes that users can decide which pieces of information Claude should pay attention to and which to leave out, minimising the chance of irrelevant or outdated details affecting its responses.
This approach aims to balance productivity gains with data safety, a primary concern for companies using AI in team settings. The company described memory as especially useful for sales teams, which can sustain client context across deals, and for product managers, who can track specifications across multiple sprints. Executives, for their part, can use memory to keep track of initiatives without re-establishing context in every interaction.
Incognito Chat for Sensitive Discussions
Anthropic also launched incognito chat alongside memory, available to all Claude users, including those on free accounts. The feature offers a blank slate for discussions that should not be stored or brought up later. Incognito sessions do not appear in memory or conversation history, making them suitable for confidential strategy discussions, brainstorming on sensitive topics, or one-time requests.
Anthropic clarified that normal memory and history are unaffected by incognito use. For Team and Enterprise users, the standard data retention policy still applies, but incognito mode means no additional information is stored in Claude's memory. A ghost icon and an "Incognito chat" label appear in the interface to make it obvious when the mode is active.
Availability and Next Steps
Anthropic announced that memory is rolling out to Team and Enterprise plan users this week, while incognito chat is launching worldwide for all Claude users. To get started, users can enable memory in the settings menu and allow Claude to build memory from previous conversations during setup. They can then test the feature by asking Claude questions such as "What did we work on last week?"
Anthropic has also provided guidance on exporting and importing memory details for users migrating from other AI tools. The company noted that memory brings new safety considerations and said it is taking a phased approach to deployment. Feedback from work environments will inform future development as the feature reaches other groups of users. With memory and incognito chat, Anthropic wants Claude to be more capable in complex professional settings. The improvements are part of an ongoing plan to bring the chatbot closer to professionals' needs without sacrificing user-controlled measures and privacy.