Release v3.6.3

Published from npm package build
Source: https://github.com/thedotmack/claude-mem-source
Alex Newman
2025-09-11 17:15:50 -04:00
parent c4eb2e2dc9
commit 97807494fd
43 changed files with 10632 additions and 286 deletions
@@ -5,18 +5,38 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [3.6.3] - 2025-09-11
### Changed
- Updated changelog generation prompts to use date strings in query text for temporal filtering
### Fixed
- Resolved changelog timestamp filtering by using semantic search instead of metadata queries, enabling proper date-based searches
- Corrected `install.ts` search instructions to remove misleading metadata-filtering guidance that caused 'Error finding id' errors
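The semantic-search fix above can be sketched as follows (illustrative TypeScript only; `buildDateQuery` is a hypothetical helper, not claude-mem's actual API):

```typescript
// Hypothetical sketch: embed the date directly in the semantic search
// query text instead of passing it as a metadata filter. The date string
// becomes part of the embedded text, so vector similarity can match
// stored overviews that mention the same date.
function buildDateQuery(topic: string, isoDate: string): string {
  return `${topic} on ${isoDate}`;
}

const query = buildDateQuery("changelog overview", "2025-09-11");
// → "changelog overview on 2025-09-11"
```

The design point is that a metadata `where` clause requires exact-match fields the stored records may not have, while a date embedded in the query text only needs to be semantically close to dates mentioned in the stored overviews.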
## [3.6.2] - 2025-09-10
### Added
- Visual feedback in the changelog command showing the current version, the next version, and the number of overviews being processed
- Changelog generation for specific versions via the `--generate` flag, using npm publish times as boundaries
- 'Who Wants To Be a Memoryonaire?', a trivia game that generates personalized questions from your stored memories
- Interactive terminal UI with lifelines (50:50, Phone-a-Friend, Audience Poll) and cross-platform audio support
- Permanent question caching, with a `--regenerate` flag, for instant game loading
- Hybrid vector search to discover related memory chains during question generation
### Changed
- Changelog regeneration now removes old entries from the JSONL file automatically when using the `--generate` or `--historical` flags
- Switched to direct JSONL file loading for instant memory access without API calls
- Optimized AI generation with the faster 'sonnet' model for improved performance
- Reduced the memory query limit from 100 to 50 to prevent token overflow
### Fixed
- Changelog command now uses npm publish timestamps exclusively for accurate version time ranges
- Resolved timestamp filtering issues with the Chroma database by leveraging semantic search with embedded dates
- Resolved the game hanging at startup caused by a confirmation loop
- Fixed a memory-integration bypass that prevented questions from using actual stored memories
- Consolidated 500+ lines of duplicate code for better maintainability
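The direct JSONL loading change above can be sketched as follows (illustrative TypeScript; the record shape and function name are assumptions, not claude-mem's actual schema or API):

```typescript
// Hypothetical sketch of direct JSONL loading: parse one memory record
// per line, skipping blank lines, with no API round-trip involved.
interface MemoryRecord {
  id: string;
  text: string;
}

function loadMemories(jsonl: string): MemoryRecord[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as MemoryRecord);
}
```

Reading the append-only JSONL file directly avoids the latency and token cost of fetching the same records through an API, at the price of being tied to the on-disk format.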
## [3.6.1] - 2025-09-10
@@ -1,31 +1,632 @@
Removed (previous license):

CLAUDE-MEM SOFTWARE LICENSE

Copyright (c) 2024 Alex Newman (@thedotmack). All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software in its compiled/distributed form via npm, to use the software
for any purpose, subject to the following conditions:

1. USE RIGHTS: You may use the claude-mem CLI tool for personal or commercial
purposes without restriction.

2. NO SOURCE CODE RIGHTS: This license does NOT grant access to source code,
modification rights, or redistribution rights. The software is provided
as-is in its compiled form only.

3. NO REVERSE ENGINEERING: You may not reverse engineer, decompile, or
disassemble the software.

4. NO REDISTRIBUTION: You may not redistribute, repackage, or resell this
software. Users must install it from the official npm registry.

5. NO WARRANTY: THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

6. LIMITATION OF LIABILITY: IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT
OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
THE USE OR OTHER DEALINGS IN THE SOFTWARE.

For questions about this license, contact: thedotmack@gmail.com

Added (new license):

GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007

Copyright (C) 2025 Alex Newman (@thedotmack). All rights reserved.

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

Preamble

The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
+2 -7
View File
@@ -18,7 +18,6 @@ That's it. Restart Claude Code and you're good. No config. No tedious setup
- Starts new sessions with the right context
- Works quietly in the background
- One-command install and status check
- **🎭 Shakespeare's Memory Theatre**: Transform operations into theatrical magnificence!
## 🗑️ Smart Trash™ (Your Panic Button)
@@ -46,10 +45,6 @@ claude-mem uninstall # Remove hooks
# Extras
claude-mem trash-view # See what's in Smart Trash™
claude-mem restore # Restore deleted items
# 🎭 Shakespeare's Memory Theatre Commands
claude-mem theatre # Experience memory operations dramatically
claude-mem compress-theatrical file.jsonl # Theatrical compression
claude-mem status-theatrical # Dramatic status check
```
## 📁 Where Stuff Lives (super simple)
@@ -77,7 +72,7 @@ claude-mem install --force # fixes most issues
## 📄 License
This software is free to use but is NOT open source. See `LICENSE`.
Licensed under AGPL-3.0. See `LICENSE`.
---
@@ -88,4 +83,4 @@ npm install -g claude-mem
claude-mem install
```
Your future self will thank you. 🧠✨
Your future self will thank you. 🧠✨
+191 -256
View File
File diff suppressed because one or more lines are too long
+2 -1
View File
@@ -1,6 +1,6 @@
{
"name": "claude-mem",
"version": "3.6.2",
"version": "3.6.3",
"description": "Memory compression system for Claude Code - persist context across sessions",
"keywords": [
"claude",
@@ -52,6 +52,7 @@
"dist",
"hooks",
"commands",
"src",
".mcp.json",
"CHANGELOG.md"
]
+248
View File
@@ -0,0 +1,248 @@
#!/usr/bin/env node
// <Block> 1.1 ====================================
// CLI Dependencies and Imports Setup
// Natural pattern: Import what you need before using it
import { Command } from 'commander';
import { PACKAGE_NAME, PACKAGE_VERSION, PACKAGE_DESCRIPTION } from '../shared/config.js';
// Import command handlers
import { compress } from '../commands/compress.js';
import { install } from '../commands/install.js';
import { uninstall } from '../commands/uninstall.js';
import { status } from '../commands/status.js';
import { logs } from '../commands/logs.js';
import { loadContext } from '../commands/load-context.js';
import { trash } from '../commands/trash.js';
import { restore } from '../commands/restore.js';
import { save } from '../commands/save.js';
import { changelog } from '../commands/changelog.js';
// Cloud functionality disabled - incomplete setup
// import { cloudCommand } from '../commands/cloud.js';
import { importHistory } from '../commands/import-history.js';
import { TranscriptCompressor } from '../core/compression/TranscriptCompressor.js';
const program = new Command();
// </Block> =======================================
// <Block> 1.2 ====================================
// Program Configuration
// Natural pattern: Configure program metadata first
program
.name(PACKAGE_NAME)
.description(PACKAGE_DESCRIPTION)
.version(PACKAGE_VERSION);
// </Block> =======================================
// <Block> 1.3 ====================================
// Compress Command Definition
// Natural pattern: Define command with its options and handler
// Compress command
program
.command('compress [transcript]')
.description('Compress a Claude Code transcript into memory')
.option('--output <path>', 'Output directory for compressed files')
.option('--dry-run', 'Show what would be compressed without doing it')
.option('-v, --verbose', 'Show detailed output')
.action(compress);
// </Block> =======================================
// <Block> 1.4 ====================================
// Install Command Definition
// Natural pattern: Define command with its options and handler
// Install command
program
.command('install')
.description('Install Claude Code hooks for automatic compression')
.option('--user', 'Install for current user (default)')
.option('--project', 'Install for current project only')
.option('--local', 'Install to custom local directory')
.option('--path <path>', 'Custom installation path (with --local)')
.option('--timeout <ms>', 'Hook execution timeout in milliseconds', '180000')
.option('--skip-mcp', 'Skip Chroma MCP server installation')
.option('--force', 'Force installation even if already installed')
.action(install);
// </Block> =======================================
// <Block> 1.5 ====================================
// Uninstall Command Definition
// Natural pattern: Define command with its options and handler
// Uninstall command
program
.command('uninstall')
.description('Remove Claude Code hooks')
.option('--user', 'Remove from user settings (default)')
.option('--project', 'Remove from project settings')
.option('--all', 'Remove from both user and project settings')
.action(uninstall);
// </Block> =======================================
// <Block> 1.6 ====================================
// Status Command Definition
// Natural pattern: Define command with its handler
// Status command
program
.command('status')
.description('Check installation status of Claude Memory System')
.action(status);
// </Block> =======================================
// <Block> 1.7 ====================================
// Logs Command Definition
// Natural pattern: Define command with its options and handler
// Logs command
program
.command('logs')
.description('View claude-mem operation logs')
.option('--debug', 'Show debug logs only')
.option('--error', 'Show error logs only')
.option('--tail [n]', 'Show last n lines', '50')
.option('--follow', 'Follow log output')
.action(logs);
// </Block> =======================================
// <Block> 1.8 ====================================
// Load-Context Command Definition
// Natural pattern: Define command with its options and handler
// Load-context command
program
.command('load-context')
.description('Load compressed memories for current session')
.option('--project <name>', 'Filter by project name')
.option('--count <n>', 'Number of memories to load', '10')
.option('--raw', 'Output raw JSON instead of formatted text')
.option('--format <type>', 'Output format: json, session-start, or default')
.action(loadContext);
// </Block> =======================================
// <Block> 1.9 ====================================
// Trash and Restore Commands Definition
// Natural pattern: Define commands for safe file operations
// Trash command with subcommands
const trashCmd = program
.command('trash')
.description('Manage trash bin for safe file deletion')
.argument('[files...]', 'Files to move to trash')
.option('-r, --recursive', 'Remove directories recursively')
.option('-R', 'Remove directories recursively (same as -r)')
.option('-f, --force', 'Suppress errors for nonexistent files')
.action(async (files: string[] | undefined, options: any) => {
// If no files provided, show help
if (!files || files.length === 0) {
trashCmd.outputHelp();
return;
}
// Map -R to recursive
if (options.R) options.recursive = true;
await trash(files, {
force: options.force,
recursive: options.recursive
});
});
// Trash view subcommand
trashCmd
.command('view')
.description('View contents of trash bin')
.action(async () => {
const { viewTrash } = await import('../commands/trash-view.js');
await viewTrash();
});
// Trash empty subcommand
trashCmd
.command('empty')
.description('Permanently delete all files in trash')
.option('-f, --force', 'Skip confirmation prompt')
.action(async (options: any) => {
const { emptyTrash } = await import('../commands/trash-empty.js');
await emptyTrash(options);
});
// Restore command
program
.command('restore')
.description('Restore files from trash interactively')
.action(restore);
// </Block> =======================================
// Cloud command
// Cloud functionality disabled - incomplete setup
// program.addCommand(cloudCommand);
// Save command
program
.command('save <message>')
.description('Save a message to the memory system')
.action(save);
// Changelog command
program
.command('changelog')
.description('Generate CHANGELOG.md from claude-mem memories')
.option('--historical <n>', 'Number of versions to search (default: current version only)')
.option('--generate <version>', 'Generate changelog for a specific version')
.option('--start <time>', 'Start time for memory search (ISO format)')
.option('--end <time>', 'End time for memory search (ISO format)')
.option('--update', 'Update CHANGELOG.md from JSONL entries')
.option('--preview', 'Preview the generated changelog')
.option('-v, --verbose', 'Show detailed output')
.action(changelog);
// Import History command
program
.command('import-history')
.description('Import historical Claude Code conversations into memory')
.option('-v, --verbose', 'Show detailed output')
.option('-m, --multi', 'Enable multi-select mode (default is single-select)')
.action(importHistory);
// <Block> 1.11 ===================================
// Hook Commands
// Internal commands called by hook scripts
program
.command('hook:pre-compact', { hidden: true })
.description('Internal pre-compact hook handler')
.action(async () => {
const { preCompactHook } = await import('../commands/hooks.js');
await preCompactHook();
});
program
.command('hook:session-start', { hidden: true })
.description('Internal session-start hook handler')
.action(async () => {
const { sessionStartHook } = await import('../commands/hooks.js');
await sessionStartHook();
});
program
.command('hook:session-end', { hidden: true })
.description('Internal session-end hook handler')
.action(async () => {
const { sessionEndHook } = await import('../commands/hooks.js');
await sessionEndHook();
});
// </Block> =======================================
// Debug command to show filtered output
program
.command('debug-filter')
.description('Show filtered transcript output (first 5 messages)')
.argument('<transcript-path>', 'Path to transcript file')
.action((transcriptPath) => {
const compressor = new TranscriptCompressor();
compressor.showFilteredOutput(transcriptPath);
});
// <Block> 1.12 ===================================
// CLI Execution
// Natural pattern: After defining all commands, parse and execute
// Parse arguments and execute
program.parse();
// </Block> =======================================
+718
View File
@@ -0,0 +1,718 @@
import { OptionValues } from 'commander';
import { query } from '@anthropic-ai/claude-code';
import fs from 'fs';
import path from 'path';
import { getClaudePath } from '../shared/settings.js';
import { execSync } from 'child_process';
interface ChangelogEntry {
version: string;
date: string;
type: 'Added' | 'Changed' | 'Fixed' | 'Removed' | 'Deprecated' | 'Security';
description: string;
timestamp: string;
generatedAt?: string; // When this changelog entry was created
}
interface MemorySearchResult {
version: string;
text: string;
metadata: any;
}
export async function changelog(options: OptionValues): Promise<void> {
try {
// Handle --update flag to regenerate CHANGELOG.md from JSONL
if (options.update) {
await updateChangelogFromJsonl(options);
return;
}
// Get current version and project name from package.json
const packageJsonPath = path.join(process.cwd(), 'package.json');
let currentVersion = 'unknown';
let projectName = 'unknown';
if (fs.existsSync(packageJsonPath)) {
try {
const packageData = JSON.parse(fs.readFileSync(packageJsonPath, 'utf-8'));
currentVersion = packageData.version || 'unknown';
projectName = packageData.name || path.basename(process.cwd());
} catch (e) {
projectName = path.basename(process.cwd());
}
}
// Calculate versions to search for based on flags
const versionsToSearch: string[] = [];
let historicalCount = options.historical || 1; // Default to current version only
// Handle --generate flag for specific version
if (options.generate) {
versionsToSearch.push(options.generate);
historicalCount = 1; // Single version mode
console.log(`🎯 Generating changelog for specific version: ${options.generate}`);
} else if (currentVersion !== 'unknown') {
// Normal mode: use current version or historical versions
const parts = currentVersion.split('.');
if (parts.length === 3) {
let major = parseInt(parts[0]);
let minor = parseInt(parts[1]);
let patch = parseInt(parts[2]);
for (let i = 0; i < historicalCount; i++) {
versionsToSearch.push(`${major}.${minor}.${patch}`);
// Decrement version (assumes patch numbers never exceed 9; see the patch = 9 reset below)
if (patch === 0) {
if (minor === 0) {
// Can't go lower than x.0.0
break;
}
minor--;
patch = 9;
} else {
patch--;
}
}
}
}
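The decrement loop above walks backwards through release numbers one step at a time. Isolated as a standalone helper, the scheme looks roughly like this (a sketch, not part of the source file — note the `9` reset encodes the same assumption as the loop above, that no release line ever had a patch number above 9):

```typescript
// Return the version one step below the given one under the scheme above,
// or null once the floor of a major line (x.0.0) is reached.
function decrementVersion(version: string): string | null {
  const [major, minor, patch] = version.split('.').map(Number);
  if (patch > 0) return `${major}.${minor}.${patch - 1}`;
  if (minor > 0) return `${major}.${minor - 1}.9`; // assumes max patch of 9
  return null; // cannot go below x.0.0
}
```

So `--historical 3` starting from 3.6.1 would search 3.6.1, 3.6.0, and 3.5.9, even if 3.5.9 was never actually published.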
if (versionsToSearch.length === 0) {
console.log('⚠️ Could not determine versions to search. Please check package.json');
process.exit(1);
}
// Check if current version already has a changelog entry
const projectChangelogDir = path.join(
process.env.HOME || process.env.USERPROFILE || '',
'.claude-mem',
'projects'
);
const changelogJsonlPath = path.join(projectChangelogDir, `${projectName}-changelog.jsonl`);
let hasCurrentVersion = false;
if (fs.existsSync(changelogJsonlPath)) {
const existingLines = fs.readFileSync(changelogJsonlPath, 'utf-8').split('\n').filter(l => l.trim());
for (const line of existingLines) {
try {
const entry = JSON.parse(line);
if (entry.version === currentVersion) {
hasCurrentVersion = true;
}
} catch (e) {
// Skip invalid lines
}
}
if (!options.historical && !options.generate && historicalCount === 1) {
if (hasCurrentVersion) {
console.log(`❌ Version ${currentVersion} already has changelog entries.`);
console.log('\n📝 Workflow:');
console.log(' 1. Make your code updates');
console.log(' 2. Build and test: bun run build');
console.log(' 3. Bump version: npm version patch');
console.log(' 4. Generate changelog: claude-mem changelog');
console.log(' 5. Commit and push\n');
console.log(`💡 Or use --historical 1 to regenerate this version's changelog`);
process.exit(1);
}
}
}
// Get npm publish times for all versions we need
let versionTimeRanges: Array<{version: string, startTime: string, endTime: string}> = [];
// Check if custom time range is provided
if (options.start && options.end) {
// Use custom time range for the specified version
const version = options.generate || currentVersion;
versionTimeRanges.push({
version,
startTime: options.start,
endTime: options.end
});
console.log(`📅 Using custom time range for ${version}:`);
console.log(` Start: ${new Date(options.start).toLocaleString()}`);
console.log(` End: ${new Date(options.end).toLocaleString()}`);
} else {
try {
const npmTimeData = execSync(`npm view ${projectName} time --json`, {
encoding: 'utf-8',
timeout: 5000
});
const publishTimes = JSON.parse(npmTimeData);
// For historical mode, we need one extra previous version to get proper time ranges
// E.g., for 3 versions, we need 4 timestamps to create 3 ranges
let extraPrevVersion = '';
if (historicalCount > 1) {
// Get the version before our oldest version in the search list
const oldestVersion = versionsToSearch[versionsToSearch.length - 1];
const parts = oldestVersion.split('.');
const major = parseInt(parts[0]);
const minor = parseInt(parts[1]);
const patch = parseInt(parts[2]);
if (patch > 0) {
extraPrevVersion = `${major}.${minor}.${patch - 1}`;
} else if (minor > 0) {
// Look for highest patch of previous minor
const prevMinorPrefix = `${major}.${minor - 1}.`;
const prevMinorVersions = Object.keys(publishTimes)
.filter(v => v.startsWith(prevMinorPrefix))
.sort((a, b) => {
const aPatch = parseInt(a.split('.')[2] || '0');
const bPatch = parseInt(b.split('.')[2] || '0');
return bPatch - aPatch;
});
if (prevMinorVersions.length > 0) {
extraPrevVersion = prevMinorVersions[0];
}
} else if (major > 0) {
// Look for highest version of previous major
const prevMajorPrefix = `${major - 1}.`;
const prevMajorVersions = Object.keys(publishTimes)
.filter(v => v.startsWith(prevMajorPrefix))
.sort((a, b) => {
const [, aMinor, aPatch] = a.split('.').map(Number);
const [, bMinor, bPatch] = b.split('.').map(Number);
if (aMinor !== bMinor) return bMinor - aMinor;
return bPatch - aPatch;
});
if (prevMajorVersions.length > 0) {
extraPrevVersion = prevMajorVersions[0];
}
}
if (options.verbose && extraPrevVersion && publishTimes[extraPrevVersion]) {
console.log(`📍 Using ${extraPrevVersion} as start boundary for time ranges`);
}
}
// Build time ranges for each version
for (let i = 0; i < versionsToSearch.length; i++) {
const version = versionsToSearch[i];
// Start time:
// - For the first (newest) version, use the publish time of the version before it
// - For middle versions, use the publish time of the next version in our list
// - For the last (oldest) version, use the extra previous version we found
let startTime = '2000-01-01T00:00:00Z'; // Default to old date
if (i === 0) {
// First (newest) version - find its immediate predecessor
const versionParts = version.split('.');
const major = parseInt(versionParts[0]);
const minor = parseInt(versionParts[1]);
const patch = parseInt(versionParts[2]);
let prevVersion = '';
if (patch > 0) {
prevVersion = `${major}.${minor}.${patch - 1}`;
} else if (minor > 0) {
// Look for highest patch of previous minor
const prevMinorPrefix = `${major}.${minor - 1}.`;
const prevMinorVersions = Object.keys(publishTimes)
.filter(v => v.startsWith(prevMinorPrefix))
.sort((a, b) => {
const aPatch = parseInt(a.split('.')[2] || '0');
const bPatch = parseInt(b.split('.')[2] || '0');
return bPatch - aPatch;
});
if (prevMinorVersions.length > 0) {
prevVersion = prevMinorVersions[0];
}
}
if (publishTimes[prevVersion]) {
startTime = publishTimes[prevVersion];
}
} else if (i < versionsToSearch.length - 1) {
// Middle versions - use the next version in our list
const prevVersionInList = versionsToSearch[i + 1];
if (publishTimes[prevVersionInList]) {
startTime = publishTimes[prevVersionInList];
}
} else {
// Last (oldest) version - use the extra previous version
if (extraPrevVersion && publishTimes[extraPrevVersion]) {
startTime = publishTimes[extraPrevVersion];
}
}
// End time is this version's publish time (or now for unreleased)
let endTime = publishTimes[version] || new Date().toISOString();
versionTimeRanges.push({ version, startTime, endTime });
if (options.verbose) {
console.log(`📅 Version ${version}: ${new Date(startTime).toLocaleString()} - ${new Date(endTime).toLocaleString()}`);
}
}
// Always log what we're doing for single version
if (historicalCount === 1) {
const latestRange = versionTimeRanges[0];
if (latestRange) {
console.log(`📦 Using npm time range for ${latestRange.version}: ${new Date(latestRange.startTime).toLocaleString()} - ${new Date(latestRange.endTime).toLocaleString()}`);
}
}
} catch (e) {
console.log('❌ Could not fetch npm publish times. Cannot proceed without time ranges.');
process.exit(1);
}
}
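The predecessor lookup against the `npm view <pkg> time` data appears twice above (for the extra boundary version and for the newest version's start time). The previous-minor branch can be sketched as a hypothetical helper — `previousMinorHead` is an illustrative name, not an export of this file:

```typescript
// Given a version like "3.6.0" and the list of published versions, return
// the highest patch release of the previous minor line (e.g. "3.5.11"),
// or undefined when there is no previous minor.
function previousMinorHead(version: string, published: string[]): string | undefined {
  const [major, minor] = version.split('.').map(Number);
  if (minor === 0) return undefined;
  const prefix = `${major}.${minor - 1}.`;
  return published
    .filter(v => v.startsWith(prefix))
    .sort((a, b) => parseInt(b.split('.')[2] || '0') - parseInt(a.split('.')[2] || '0'))[0];
}
```

The numeric sort matters: a lexicographic sort would rank "3.5.7" above "3.5.11".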
console.log(`🔍 Searching memories for versions: ${versionsToSearch.join(', ')}`);
console.log(`📦 Project: ${projectName}\n`);
// Phase 1: Search for version-related memories using MCP tools
// ALWAYS use time range search - no other method
const searchPrompt = versionTimeRanges.length > 0 ?
`You are helping generate a changelog by searching for memories within specific time ranges for multiple versions.
PROJECT: ${projectName}
VERSION TIME RANGES:
${versionTimeRanges.map(r => `- Version ${r.version}: ${new Date(r.startTime).toLocaleDateString()} to ${new Date(r.endTime).toLocaleDateString()}`).join('\n')}
YOUR TASK:
Use mcp__claude-mem__chroma_query_documents to search for memories for each version time range.
SEARCH STRATEGY:
${versionTimeRanges.map(r => {
const startDate = new Date(r.startTime);
const endDate = new Date(r.endTime);
// Generate all date prefixes between start and end
const datePrefixes: string[] = [];
const currentDate = new Date(startDate);
while (currentDate <= endDate) {
// Add day prefix like "2025-09-09"
const dayPrefix = currentDate.toISOString().split('T')[0];
datePrefixes.push(dayPrefix);
currentDate.setDate(currentDate.getDate() + 1);
}
return `
Version ${r.version} (${new Date(r.startTime).toLocaleDateString()} to ${new Date(r.endTime).toLocaleDateString()}):
1. Search for memories from these dates: ${datePrefixes.join(', ')}
2. Make multiple calls to mcp__claude-mem__chroma_query_documents:
- collection_name: "claude_memories"
- query_texts: Include the project name AND date in each query:
* "${projectName} ${datePrefixes[0]} feature"
* "${projectName} ${datePrefixes[0]} fix"
* "${projectName} ${datePrefixes[0]} change"
* "${projectName} ${datePrefixes[0]} improvement"
* "${projectName} ${datePrefixes[0]} refactor"
- n_results: 50
3. The date in the query text helps semantic search find memories from that day
4. Assign memories to this version if their timestamp falls within:
- Start: ${r.startTime}
- End: ${r.endTime}`;
}).join('\n')}
IMPORTANT:
- Always include project name and date in query_texts for best results
- Semantic search will naturally find memories near those dates
- Group returned memories by version based on their timestamp metadata
Return a JSON object with this structure:
{
"memories": [
{
"version": "version_number",
"text": "memory content",
"metadata": {metadata object with timestamp},
"relevance": "high/medium/low"
}
]
}
Group memories by the version they belong to based on timestamp.
Start searching now.` :
`ERROR: No time ranges available. This should never happen.`;
if (versionTimeRanges.length === 0) {
console.log('❌ No time ranges available. Cannot search memories.');
process.exit(1);
}
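The day-prefix expansion embedded in the search prompt above turns each version's time range into a list of `YYYY-MM-DD` strings that get baked into the query texts. Extracted on its own, it is roughly:

```typescript
// Every UTC calendar day between two ISO timestamps, inclusive,
// as "YYYY-MM-DD" prefixes for the semantic-search query texts.
function dayPrefixes(startIso: string, endIso: string): string[] {
  const prefixes: string[] = [];
  const current = new Date(startIso);
  const end = new Date(endIso);
  while (current <= end) {
    prefixes.push(current.toISOString().split('T')[0]); // e.g. "2025-09-09"
    current.setDate(current.getDate() + 1);
  }
  return prefixes;
}
```

This is what lets the 3.6.3 fix work with semantic search alone: dates live in the query text itself rather than in Chroma metadata filters.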
if (options.verbose) {
console.log('📝 Calling Claude to search memories...');
}
// Call Claude with MCP tools to search memories
const searchResponse = await query({
prompt: searchPrompt,
options: {
allowedTools: [
'mcp__claude-mem__chroma_query_documents',
'mcp__claude-mem__chroma_get_documents'
],
pathToClaudeCodeExecutable: getClaudePath()
}
});
// Extract memories from response
let memoriesJson = '';
if (searchResponse && typeof searchResponse === 'object' && Symbol.asyncIterator in searchResponse) {
for await (const message of searchResponse) {
if (message?.type === 'assistant' && message?.message?.content) {
const content = message.message.content;
if (typeof content === 'string') {
memoriesJson += content;
} else if (Array.isArray(content)) {
for (const block of content) {
if (block.type === 'text' && block.text) {
memoriesJson += block.text;
}
}
}
}
}
}
// Parse memories
let memories: MemorySearchResult[] = [];
try {
// Extract JSON from response (might be wrapped in markdown)
const jsonMatch = memoriesJson.match(/```json\n([\s\S]*?)\n```/) ||
memoriesJson.match(/\{[\s\S]*\}/);
if (jsonMatch) {
const parsed = JSON.parse(jsonMatch[1] || jsonMatch[0]);
if (parsed.memories && Array.isArray(parsed.memories)) {
memories = parsed.memories;
}
}
} catch (e) {
console.error('⚠️ Could not parse memory search results:', e);
}
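The extraction above (and its twin in the changelog-generation phase below) copes with Claude wrapping its JSON in a markdown fence. The shared pattern, as a sketch:

```typescript
// Prefer a fenced ```json block; fall back to the first brace-delimited
// span; return null if neither parses.
function extractJson(text: string): any | null {
  const match = text.match(/```json\n([\s\S]*?)\n```/) || text.match(/\{[\s\S]*\}/);
  if (!match) return null;
  try {
    return JSON.parse(match[1] || match[0]);
  } catch {
    return null;
  }
}
```

`match[1]` is set only for the fenced case (the capture group); the bare-object fallback uses the whole match.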
if (memories.length === 0) {
console.log('\n⚠️ No version-related memories found. Try compressing more sessions first.');
process.exit(1);
}
console.log(`✅ Found ${memories.length} version-related memories\n`);
// Get system date for accuracy (relies on the POSIX `date` command being available)
const systemDate = execSync('date "+%Y-%m-%d %H:%M:%S %Z"').toString().trim();
const todayStr = systemDate.split(' ')[0]; // YYYY-MM-DD format
// Phase 2: Generate changelog entries from memories
const changelogPrompt = `Analyze these memories and generate changelog entries.
PROJECT: ${projectName}
DATE: ${todayStr}
MEMORIES BY VERSION:
${versionsToSearch.map(version => {
const versionMemories = memories.filter(m => m.version === version);
if (versionMemories.length === 0) return `### Version ${version}\nNo memories found.`;
return `### Version ${version} (${versionMemories.length} memories):
${versionMemories.map((m, i) => `${i + 1}. ${m.text}`).join('\n')}`;
}).join('\n\n')}
INSTRUCTIONS:
1. Extract concrete changes, fixes, and additions from the memories
2. Categorize each change as: Added, Changed, Fixed, Removed, Deprecated, or Security
3. Write clear, user-facing descriptions
4. Start each entry with an action verb
5. Focus on what matters to users, not internal implementation details
Return ONLY a JSON array with this structure:
[
{
"version": "3.6.1",
"type": "Added",
"description": "New feature description"
},
{
"version": "3.6.1",
"type": "Fixed",
"description": "Bug fix description"
}
]`;
console.log('🔄 Generating changelog entries...');
// Call Claude to generate changelog entries
const changelogResponse = await query({
prompt: changelogPrompt,
options: {
allowedTools: [],
pathToClaudeCodeExecutable: getClaudePath()
}
});
// Extract JSON from response
let entriesJson = '';
if (changelogResponse && typeof changelogResponse === 'object' && Symbol.asyncIterator in changelogResponse) {
for await (const message of changelogResponse) {
if (message?.type === 'assistant' && message?.message?.content) {
const content = message.message.content;
if (typeof content === 'string') {
entriesJson += content;
} else if (Array.isArray(content)) {
for (const block of content) {
if (block.type === 'text' && block.text) {
entriesJson += block.text;
}
}
}
}
}
}
// Parse changelog entries
let entries: ChangelogEntry[] = [];
try {
// Extract JSON (might be wrapped in markdown)
const jsonMatch = entriesJson.match(/```json\n([\s\S]*?)\n```/) ||
entriesJson.match(/\[[\s\S]*\]/);
if (jsonMatch) {
const parsed = JSON.parse(jsonMatch[1] || jsonMatch[0]);
if (Array.isArray(parsed)) {
const generatedAt = new Date().toISOString();
entries = parsed.map(e => ({
...e,
date: todayStr,
timestamp: e.timestamp || generatedAt, // Memory timestamp if available
generatedAt: generatedAt // When this changelog was generated
}));
}
}
} catch (e) {
console.error('⚠️ Could not parse changelog entries:', e);
}
if (entries.length === 0) {
console.log('⚠️ No changelog entries generated.');
process.exit(1);
}
// Ensure project changelog directory exists
if (!fs.existsSync(projectChangelogDir)) {
fs.mkdirSync(projectChangelogDir, { recursive: true });
}
// Save entries to project JSONL file
console.log(`\n💾 Saving ${entries.length} changelog entries to ${path.basename(changelogJsonlPath)}`);
// When using --historical or --generate, remove old entries for the versions being regenerated
if ((options.historical && historicalCount > 1) || options.generate) {
let existingEntries: ChangelogEntry[] = [];
if (fs.existsSync(changelogJsonlPath)) {
const lines = fs.readFileSync(changelogJsonlPath, 'utf-8').split('\n').filter(l => l.trim());
for (const line of lines) {
try {
const entry = JSON.parse(line);
// Keep entries that are NOT in the versions we're regenerating
if (!versionsToSearch.includes(entry.version)) {
existingEntries.push(entry);
}
} catch (e) {
// Skip invalid lines
}
}
}
// Rewrite the file with filtered entries plus new ones
const allEntries = [...existingEntries, ...entries];
const jsonlContent = allEntries.map(entry => JSON.stringify(entry)).join('\n') + '\n';
fs.writeFileSync(changelogJsonlPath, jsonlContent);
console.log(`🔄 Regenerated entries for versions: ${versionsToSearch.join(', ')}`);
} else {
// Append new entries to JSONL
const jsonlContent = entries.map(entry => JSON.stringify(entry)).join('\n') + '\n';
fs.appendFileSync(changelogJsonlPath, jsonlContent);
}
// Now generate markdown from all JSONL entries
console.log('\n📝 Generating CHANGELOG.md from entries...');
// Read all entries from JSONL
let allEntries: ChangelogEntry[] = [];
if (fs.existsSync(changelogJsonlPath)) {
const lines = fs.readFileSync(changelogJsonlPath, 'utf-8').split('\n').filter(l => l.trim());
for (const line of lines) {
try {
allEntries.push(JSON.parse(line));
} catch (e) {
// Skip invalid lines
}
}
}
// Group entries by version
const entriesByVersion = new Map<string, ChangelogEntry[]>();
for (const entry of allEntries) {
if (!entriesByVersion.has(entry.version)) {
entriesByVersion.set(entry.version, []);
}
entriesByVersion.get(entry.version)!.push(entry);
}
// Generate markdown
let markdown = '# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).\n\n';
// Sort versions in descending order
const sortedVersions = Array.from(entriesByVersion.keys()).sort((a, b) => {
const aParts = a.split('.').map(Number);
const bParts = b.split('.').map(Number);
for (let i = 0; i < 3; i++) {
if (aParts[i] !== bParts[i]) return bParts[i] - aParts[i];
}
return 0;
});
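The comparator above sorts dotted version strings newest-first by numeric major/minor/patch comparison. As a self-contained helper:

```typescript
// Sort version strings in descending semver order, comparing each
// component numerically so "3.10.0" ranks above "3.6.3".
function sortVersionsDesc(versions: string[]): string[] {
  return [...versions].sort((a, b) => {
    const ap = a.split('.').map(Number);
    const bp = b.split('.').map(Number);
    for (let i = 0; i < 3; i++) {
      if (ap[i] !== bp[i]) return bp[i] - ap[i];
    }
    return 0;
  });
}
```

A plain string sort would get multi-digit components wrong, which is why the components are mapped through `Number` first.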
for (const version of sortedVersions) {
const versionEntries = entriesByVersion.get(version)!;
const date = versionEntries[0].date || todayStr;
markdown += `\n## [${version}] - ${date}\n\n`;
// Group by type
const types: Array<ChangelogEntry['type']> = ['Added', 'Changed', 'Fixed', 'Removed', 'Deprecated', 'Security'];
for (const type of types) {
const typeEntries = versionEntries.filter(e => e.type === type);
if (typeEntries.length > 0) {
markdown += `### ${type}\n`;
for (const entry of typeEntries) {
markdown += `- ${entry.description}\n`;
}
markdown += '\n';
}
}
}
// Write the CHANGELOG.md
const changelogPath = path.join(process.cwd(), 'CHANGELOG.md');
fs.writeFileSync(changelogPath, markdown);
console.log(`✅ Generated CHANGELOG.md with ${allEntries.length} total entries across ${entriesByVersion.size} versions!`);
if (options.preview) {
console.log('\n📄 Preview:\n');
console.log(markdown.split('\n').slice(0, 30).join('\n'));
if (markdown.split('\n').length > 30) {
console.log('\n... (truncated for preview)');
}
}
} catch (error) {
console.error('❌ Error generating changelog:', error instanceof Error ? error.message : error);
if (error instanceof Error && error.stack) {
console.error('Stack:', error.stack);
}
process.exit(1);
}
}
async function updateChangelogFromJsonl(options: OptionValues): Promise<void> {
try {
// Get project name from package.json
const packageJsonPath = path.join(process.cwd(), 'package.json');
let projectName = 'unknown';
if (fs.existsSync(packageJsonPath)) {
try {
const packageData = JSON.parse(fs.readFileSync(packageJsonPath, 'utf-8'));
projectName = packageData.name || path.basename(process.cwd());
} catch (e) {
projectName = path.basename(process.cwd());
}
}
const projectChangelogDir = path.join(
process.env.HOME || process.env.USERPROFILE || '',
'.claude-mem',
'projects'
);
const changelogJsonlPath = path.join(projectChangelogDir, `${projectName}-changelog.jsonl`);
if (!fs.existsSync(changelogJsonlPath)) {
console.log('❌ No changelog entries found. Generate some first with: claude-mem changelog');
process.exit(1);
}
console.log('📝 Updating CHANGELOG.md from JSONL entries...');
// Read all entries from JSONL
let allEntries: ChangelogEntry[] = [];
const lines = fs.readFileSync(changelogJsonlPath, 'utf-8').split('\n').filter(l => l.trim());
for (const line of lines) {
try {
allEntries.push(JSON.parse(line));
} catch (e) {
// Skip invalid lines
}
}
if (allEntries.length === 0) {
console.log('❌ No valid entries found in JSONL file');
process.exit(1);
}
// Group entries by version
const entriesByVersion = new Map<string, ChangelogEntry[]>();
for (const entry of allEntries) {
if (!entriesByVersion.has(entry.version)) {
entriesByVersion.set(entry.version, []);
}
entriesByVersion.get(entry.version)!.push(entry);
}
// Generate markdown
let markdown = '# Changelog\n\nAll notable changes to this project will be documented in this file.\n\nThe format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).\n\n';
// Sort versions in descending order
const sortedVersions = Array.from(entriesByVersion.keys()).sort((a, b) => {
const aParts = a.split('.').map(Number);
const bParts = b.split('.').map(Number);
for (let i = 0; i < 3; i++) {
if ((aParts[i] || 0) !== (bParts[i] || 0)) return (bParts[i] || 0) - (aParts[i] || 0);
}
return 0;
});
for (const version of sortedVersions) {
const versionEntries = entriesByVersion.get(version)!;
const date = versionEntries[0].date || new Date().toISOString().split('T')[0];
markdown += `\n## [${version}] - ${date}\n\n`;
// Group by type
const types: Array<ChangelogEntry['type']> = ['Added', 'Changed', 'Fixed', 'Removed', 'Deprecated', 'Security'];
for (const type of types) {
const typeEntries = versionEntries.filter(e => e.type === type);
if (typeEntries.length > 0) {
markdown += `### ${type}\n`;
for (const entry of typeEntries) {
markdown += `- ${entry.description}\n`;
}
markdown += '\n';
}
}
}
// Write the CHANGELOG.md
const changelogPath = path.join(process.cwd(), 'CHANGELOG.md');
fs.writeFileSync(changelogPath, markdown);
console.log(`✅ Updated CHANGELOG.md with ${allEntries.length} entries across ${entriesByVersion.size} versions!`);
if (options.preview) {
console.log('\n📄 Preview:\n');
console.log(markdown.split('\n').slice(0, 30).join('\n'));
if (markdown.split('\n').length > 30) {
console.log('\n... (truncated for preview)');
}
}
} catch (error) {
console.error('❌ Error updating changelog:', error instanceof Error ? error.message : error);
process.exit(1);
}
}
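Both generation paths above sort versions with the same descending semver comparison. Isolated as a standalone helper (the `sortVersionsDesc` name is illustrative, not part of the source; missing parts are treated as 0 so two-part versions like `3.6` don't produce `NaN` comparisons):

```typescript
// Illustrative standalone version of the descending semver sort used above.
// Assumes plain x.y.z version strings; absent parts compare as 0.
function sortVersionsDesc(versions: string[]): string[] {
  return [...versions].sort((a, b) => {
    const aParts = a.split('.').map(Number);
    const bParts = b.split('.').map(Number);
    for (let i = 0; i < 3; i++) {
      if ((aParts[i] || 0) !== (bParts[i] || 0)) {
        return (bParts[i] || 0) - (aParts[i] || 0);
      }
    }
    return 0;
  });
}
```

Note this comparator does not understand prerelease suffixes (`3.6.3-beta` maps to `NaN` via `Number`), which is fine for the plain release versions written to the JSONL file.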
+43
@@ -0,0 +1,43 @@
import { OptionValues } from 'commander';
import { basename, dirname } from 'path';
import {
createLoadingMessage,
createCompletionMessage,
createOperationSummary,
createUserFriendlyError
} from '../prompts/templates/context/ContextTemplates.js';
export async function compress(transcript?: string, options: OptionValues = {}): Promise<void> {
console.log(createLoadingMessage('compressing'));
if (!transcript) {
console.log(createUserFriendlyError('Compression', 'No transcript file provided', 'Please provide a path to a transcript file'));
return;
}
try {
const startTime = Date.now();
// Import and run compression
const { TranscriptCompressor } = await import('../core/compression/TranscriptCompressor.js');
const compressor = new TranscriptCompressor({
verbose: options.verbose || false
});
const sessionId = options.sessionId || basename(transcript, '.jsonl');
const archivePath = await compressor.compress(transcript, sessionId);
const duration = Date.now() - startTime;
console.log(createCompletionMessage('Compression', undefined, `Session archived as ${basename(archivePath)}`));
console.log(createOperationSummary('compress', { count: 1, duration, details: `Session: ${sessionId}` }));
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
console.log(createUserFriendlyError(
'Compression',
errorMessage,
'Check that the transcript file exists and you have write permissions'
));
throw error; // Re-throw to maintain existing error handling behavior
}
}
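The session-id fallback above (deriving an id from the transcript filename when `--session-id` is not passed) can be sketched in isolation; `sessionIdFromTranscript` is a hypothetical name, not part of the source:

```typescript
import { basename } from 'path';

// Mirrors the fallback in compress(): the session id defaults to the
// transcript filename with its .jsonl extension stripped.
function sessionIdFromTranscript(transcriptPath: string): string {
  return basename(transcriptPath, '.jsonl');
}
```

`basename` only strips the suffix when it matches exactly, so non-`.jsonl` paths pass through unchanged.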
+146
@@ -0,0 +1,146 @@
/**
* Hook command handlers for binary distribution
* These execute the actual hook logic embedded in the binary
*/
import { basename, sep } from 'path';
import { compress } from './compress.js';
import { loadContext } from './load-context.js';
/**
* Pre-compact hook handler
* Runs compression on the Claude Code transcript
*/
export async function preCompactHook(): Promise<void> {
try {
// Read hook data from stdin (Claude Code sends JSON)
let inputData = '';
// Set up stdin to read data
process.stdin.setEncoding('utf8');
// Collect all input data
for await (const chunk of process.stdin) {
inputData += chunk;
}
// Parse the JSON input
let transcriptPath: string | undefined;
if (inputData) {
try {
const hookData = JSON.parse(inputData);
transcriptPath = hookData.transcript_path;
} catch (parseError) {
// If JSON parsing fails, treat the input as a direct path
transcriptPath = inputData.trim();
}
}
// Fallback to environment variable or command line argument
if (!transcriptPath) {
transcriptPath = process.env.TRANSCRIPT_PATH || process.argv[2];
}
if (!transcriptPath) {
console.log('🗜️ Compressing session transcript...');
console.log('❌ No transcript path provided to pre-compact hook');
console.log('Hook data received:', inputData || 'none');
console.log('Environment TRANSCRIPT_PATH:', process.env.TRANSCRIPT_PATH || 'not set');
console.log('Command line args:', process.argv.slice(2));
return;
}
// Run compression with the transcript path
await compress(transcriptPath, { dryRun: false });
} catch (error: any) {
console.error('Pre-compact hook failed:', error.message);
process.exit(1);
}
}
/**
* Session-start hook handler
* Loads context for the new session
*/
export async function sessionStartHook(): Promise<void> {
try {
// Read hook data from stdin (Claude Code sends JSON)
let inputData = '';
// Set up stdin to read data
process.stdin.setEncoding('utf8');
// Collect all input data
for await (const chunk of process.stdin) {
inputData += chunk;
}
// Parse the JSON input to get the current working directory
let project: string | undefined;
if (inputData) {
try {
const hookData = JSON.parse(inputData);
// Extract project name from cwd if provided
if (hookData.cwd) {
project = basename(hookData.cwd);
}
} catch (parseError) {
// If JSON parsing fails, continue without project filtering
console.error('Failed to parse session-start hook data:', parseError);
}
}
// If no project from hook data, try to get from current working directory
if (!project) {
project = basename(process.cwd());
}
// Load context with session-start format and project filtering
await loadContext({ format: 'session-start', count: '10', project });
} catch (error: any) {
console.error('Session-start hook failed:', error.message);
process.exit(1);
}
}
/**
* Session-end hook handler
* Compresses session transcript when ending with /clear
*/
export async function sessionEndHook(): Promise<void> {
try {
// Read hook data from stdin (Claude Code sends JSON)
let inputData = '';
// Set up stdin to read data
process.stdin.setEncoding('utf8');
// Collect all input data
for await (const chunk of process.stdin) {
inputData += chunk;
}
// Parse the JSON input to check the reason for session end
if (inputData) {
try {
const hookData = JSON.parse(inputData);
// If reason is "clear", compress the session transcript before it's deleted
if (hookData.reason === 'clear' && hookData.transcript_path) {
console.log('🗜️ Compressing current session before /clear...');
await compress(hookData.transcript_path, { dryRun: false });
}
} catch (parseError) {
// If JSON parsing fails, log but don't fail the hook
console.error('Failed to parse hook data:', parseError);
}
}
console.log('Session ended successfully');
} catch (error: any) {
console.error('Session-end hook failed:', error.message);
process.exit(1);
}
}
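All three hooks share the same stdin convention: Claude Code pipes a JSON payload, and when parsing fails the raw input is treated as a direct path. A minimal sketch of that fallback (the function name is illustrative):

```typescript
// Try JSON first ({ transcript_path: ... }); otherwise treat the raw
// stdin text as a path, as preCompactHook does above.
function parseTranscriptPath(inputData: string): string | undefined {
  if (!inputData) return undefined;
  try {
    const hookData = JSON.parse(inputData);
    return hookData.transcript_path;
  } catch {
    return inputData.trim();
  }
}
```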
+541
@@ -0,0 +1,541 @@
#!/usr/bin/env node
import * as p from '@clack/prompts';
import path from 'path';
import fs from 'fs';
import os from 'os';
import chalk from 'chalk';
import { TranscriptCompressor } from '../core/compression/TranscriptCompressor.js';
import { TitleGenerator, TitleGenerationRequest } from '../core/titles/TitleGenerator.js';
interface ConversationMetadata {
sessionId: string;
timestamp: string;
messageCount: number;
branch?: string;
cwd: string;
fileSize: number;
}
interface ConversationItem extends ConversationMetadata {
filePath: string;
projectName: string;
parsedDate: Date;
relativeDate: string;
}
function formatFileSize(bytes: number): string {
if (bytes < 1024) return `${bytes}B`;
if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)}KB`;
return `${(bytes / (1024 * 1024)).toFixed(1)}MB`;
}
function formatRelativeDate(date: Date): string {
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffMins = Math.floor(diffMs / 60000);
const diffHours = Math.floor(diffMs / 3600000);
const diffDays = Math.floor(diffMs / 86400000);
if (diffMins < 1) return 'just now';
if (diffMins < 60) return `${diffMins}m ago`;
if (diffHours < 24) return `${diffHours}h ago`;
if (diffDays < 7) return `${diffDays}d ago`;
if (diffDays < 30) return `${Math.floor(diffDays / 7)}w ago`;
if (diffDays < 365) return `${Math.floor(diffDays / 30)}mo ago`;
return `${Math.floor(diffDays / 365)}y ago`;
}
function parseTimestamp(timestamp: string, fallbackPath: string): Date {
try {
const parsed = new Date(timestamp);
if (!isNaN(parsed.getTime())) return parsed;
} catch {}
// Fallback: try to extract from filename
const match = fallbackPath.match(/(\d{4})-(\d{2})-(\d{2})_(\d{2})-(\d{2})-(\d{2})/);
if (match) {
const [_, year, month, day, hour, minute, second] = match;
return new Date(
parseInt(year),
parseInt(month) - 1,
parseInt(day),
parseInt(hour),
parseInt(minute),
parseInt(second)
);
}
// Last resort: file stats
const stats = fs.statSync(fallbackPath);
return stats.mtime;
}
function extractFirstUserMessage(filePath: string): string {
try {
const content = fs.readFileSync(filePath, 'utf-8');
const lines = content.trim().split('\n').filter(Boolean);
for (const line of lines) {
try {
const message = JSON.parse(line);
if (message.type === 'user' && message.message?.content) {
const messageContent = message.message.content;
if (Array.isArray(messageContent)) {
const textContent = messageContent
.filter(item => item.type === 'text')
.map(item => item.text)
.join(' ');
if (textContent.trim()) return textContent.trim();
} else if (typeof messageContent === 'string') {
return messageContent.trim();
}
}
} catch {}
}
return 'Conversation'; // Fallback
} catch {
return 'Conversation'; // Fallback
}
}
async function loadImportedSessions(): Promise<Set<string>> {
const importedIds = new Set<string>();
const indexPath = path.join(os.homedir(), '.claude-mem', 'claude-mem-index.jsonl');
if (!fs.existsSync(indexPath)) return importedIds;
const content = fs.readFileSync(indexPath, 'utf-8');
const lines = content.trim().split('\n').filter(Boolean);
for (const line of lines) {
try {
const entry = JSON.parse(line);
// Check both session_id (from index) and sessionId (legacy)
if (entry.session_id) {
importedIds.add(entry.session_id);
} else if (entry.sessionId) {
importedIds.add(entry.sessionId);
}
} catch {}
}
return importedIds;
}
async function scanConversations(): Promise<{ conversations: ConversationItem[]; skippedCount: number }> {
const claudeDir = path.join(os.homedir(), '.claude', 'projects');
if (!fs.existsSync(claudeDir)) {
return { conversations: [], skippedCount: 0 };
}
const projects = fs.readdirSync(claudeDir)
.filter(dir => fs.statSync(path.join(claudeDir, dir)).isDirectory());
const conversations: ConversationItem[] = [];
const importedSessionIds = await loadImportedSessions();
let skippedCount = 0;
for (const project of projects) {
const projectDir = path.join(claudeDir, project);
const files = fs.readdirSync(projectDir)
.filter(file => file.endsWith('.jsonl'))
.map(file => path.join(projectDir, file));
for (const filePath of files) {
try {
const content = fs.readFileSync(filePath, 'utf-8');
const lines = content.trim().split('\n').filter(Boolean);
// Parse first line for metadata
const firstLine = JSON.parse(lines[0]);
const messageCount = lines.length;
const stats = fs.statSync(filePath);
const fileSize = stats.size;
const metadata: ConversationMetadata = {
sessionId: firstLine.sessionId || path.basename(filePath, '.jsonl'),
timestamp: firstLine.timestamp || stats.mtime.toISOString(),
messageCount,
branch: firstLine.branch,
cwd: firstLine.cwd || projectDir,
fileSize
};
// Skip if already imported
if (importedSessionIds.has(metadata.sessionId)) {
skippedCount++;
continue;
}
const projectName = path.basename(path.dirname(filePath));
const parsedDate = parseTimestamp(metadata.timestamp, filePath);
const relativeDate = formatRelativeDate(parsedDate);
conversations.push({
filePath,
...metadata,
projectName,
parsedDate,
relativeDate
});
} catch {}
}
}
return { conversations, skippedCount };
}
export async function importHistory(options: { verbose?: boolean; multi?: boolean } = {}) {
console.clear();
p.intro(chalk.bgCyan.black(' CLAUDE-MEM IMPORT '));
const s = p.spinner();
s.start('Scanning conversation history');
const { conversations, skippedCount } = await scanConversations();
if (conversations.length === 0) {
s.stop('No new conversations found');
const message = skippedCount > 0
? `All ${skippedCount} conversation${skippedCount === 1 ? ' is' : 's are'} already imported.`
: 'No conversations found.';
p.outro(chalk.yellow(message));
return;
}
// Sort by date (newest first)
conversations.sort((a, b) => b.parsedDate.getTime() - a.parsedDate.getTime());
const statusMessage = skippedCount > 0
? `Found ${conversations.length} new conversation${conversations.length === 1 ? '' : 's'} (${skippedCount} already imported)`
: `Found ${conversations.length} new conversation${conversations.length === 1 ? '' : 's'}`;
s.stop(statusMessage);
// Group conversations by project for better organization
const projectGroups = conversations.reduce((acc, conv) => {
if (!acc[conv.projectName]) acc[conv.projectName] = [];
acc[conv.projectName].push(conv);
return acc;
}, {} as Record<string, ConversationItem[]>);
// Create selection options
const importMode = await p.select({
message: 'How would you like to import?',
options: [
{ value: 'browse', label: 'Browse by Project', hint: 'Select project then conversations' },
{ value: 'project', label: 'Import Entire Project', hint: 'Select project and import all conversations' },
{ value: 'recent', label: 'Recent Conversations', hint: 'Import most recent across all projects' },
{ value: 'search', label: 'Search', hint: 'Search for specific conversations' }
]
});
if (p.isCancel(importMode)) {
p.cancel('Import cancelled');
return;
}
let selectedConversations: ConversationItem[] = [];
if (importMode === 'browse') {
// Project selection
const projectOptions = Object.entries(projectGroups)
.sort((a, b) => b[1][0].parsedDate.getTime() - a[1][0].parsedDate.getTime())
.map(([project, convs]) => ({
value: project,
label: project,
hint: `${convs.length} conversation${convs.length === 1 ? '' : 's'}, latest: ${convs[0].relativeDate}`
}));
const selectedProject = await p.select({
message: 'Select a project',
options: projectOptions
});
if (p.isCancel(selectedProject)) {
p.cancel('Import cancelled');
return;
}
const projectConvs = projectGroups[selectedProject as string];
// Ask about title generation
const generateTitles = await p.confirm({
message: 'Would you like to generate titles for easier browsing?',
initialValue: false
});
if (p.isCancel(generateTitles)) {
p.cancel('Import cancelled');
return;
}
if (generateTitles) {
await processTitleGeneration(projectConvs, selectedProject as string);
}
// Conversation selection within project
const titleGenerator = new TitleGenerator();
const convOptions = projectConvs.map(conv => {
const title = titleGenerator.getTitleForSession(conv.sessionId);
const displayTitle = title ? `"${title}" • ` : '';
return {
value: conv.sessionId,
label: `${displayTitle}${conv.relativeDate} • ${conv.messageCount} messages • ${formatFileSize(conv.fileSize)}`,
hint: conv.branch ? `branch: ${conv.branch}` : undefined
};
});
if (options.multi) {
const selected = await p.multiselect({
message: `Select conversations from ${selectedProject} (Space to select, Enter to confirm)`,
options: convOptions,
required: false
});
if (p.isCancel(selected)) {
p.cancel('Import cancelled');
return;
}
const selectedIds = selected as string[];
selectedConversations = projectConvs.filter(c => selectedIds.includes(c.sessionId));
} else {
// Single select with continuous import
let continueImporting = true;
const importedInSession = new Set<string>();
while (continueImporting && projectConvs.length > importedInSession.size) {
const availableConvs = projectConvs.filter(c => !importedInSession.has(c.sessionId));
if (availableConvs.length === 0) break;
const titleGenerator = new TitleGenerator();
const convOptions = availableConvs.map(conv => {
const title = titleGenerator.getTitleForSession(conv.sessionId);
const displayTitle = title ? `"${title}" • ` : '';
return {
value: conv.sessionId,
label: `${displayTitle}${conv.relativeDate} • ${conv.messageCount} messages • ${formatFileSize(conv.fileSize)}`,
hint: conv.branch ? `branch: ${conv.branch}` : undefined
};
});
const selected = await p.select({
message: `Select a conversation (${importedInSession.size}/${projectConvs.length} imported)`,
options: [
...convOptions,
{ value: 'done', label: '✅ Done importing', hint: 'Exit import mode' }
]
});
if (p.isCancel(selected) || selected === 'done') {
continueImporting = false;
break;
}
const conv = availableConvs.find(c => c.sessionId === selected);
if (conv) {
selectedConversations = [conv];
await processImport(selectedConversations, options.verbose);
importedInSession.add(conv.sessionId);
}
}
if (importedInSession.size > 0) {
p.outro(chalk.green(`✅ Imported ${importedInSession.size} conversation${importedInSession.size === 1 ? '' : 's'}`));
} else {
p.outro(chalk.yellow('No conversations imported'));
}
return;
}
} else if (importMode === 'project') {
// Project selection for importing entire project
const projectOptions = Object.entries(projectGroups)
.sort((a, b) => b[1][0].parsedDate.getTime() - a[1][0].parsedDate.getTime())
.map(([project, convs]) => ({
value: project,
label: project,
hint: `${convs.length} conversation${convs.length === 1 ? '' : 's'}, latest: ${convs[0].relativeDate}`
}));
const selectedProject = await p.select({
message: 'Select a project to import all conversations',
options: projectOptions
});
if (p.isCancel(selectedProject)) {
p.cancel('Import cancelled');
return;
}
const projectConvs = projectGroups[selectedProject as string];
// Ask about title generation
const generateTitles = await p.confirm({
message: 'Would you like to generate titles for easier browsing?',
initialValue: false
});
if (p.isCancel(generateTitles)) {
p.cancel('Import cancelled');
return;
}
if (generateTitles) {
await processTitleGeneration(projectConvs, selectedProject as string);
}
const confirm = await p.confirm({
message: `Import all ${projectConvs.length} conversation${projectConvs.length === 1 ? '' : 's'} from ${selectedProject}?`
});
if (p.isCancel(confirm) || !confirm) {
p.cancel('Import cancelled');
return;
}
selectedConversations = projectConvs;
} else if (importMode === 'recent') {
const limit = await p.text({
message: 'How many recent conversations?',
placeholder: '10',
initialValue: '10',
validate: (value) => {
const num = parseInt(value);
if (isNaN(num) || num < 1) return 'Please enter a valid number';
if (num > conversations.length) return `Only ${conversations.length} available`;
}
});
if (p.isCancel(limit)) {
p.cancel('Import cancelled');
return;
}
const count = parseInt(limit as string);
selectedConversations = conversations.slice(0, count);
} else if (importMode === 'search') {
const searchTerm = await p.text({
message: 'Search conversations (project name or session ID)',
placeholder: 'Enter search term'
});
if (p.isCancel(searchTerm)) {
p.cancel('Import cancelled');
return;
}
const term = (searchTerm as string).toLowerCase();
const matches = conversations.filter(c =>
c.projectName.toLowerCase().includes(term) ||
c.sessionId.toLowerCase().includes(term) ||
(c.branch && c.branch.toLowerCase().includes(term))
);
if (matches.length === 0) {
p.outro(chalk.yellow('No matching conversations found'));
return;
}
const titleGenerator = new TitleGenerator();
const matchOptions = matches.map(conv => {
const title = titleGenerator.getTitleForSession(conv.sessionId);
const displayTitle = title ? `"${title}" • ` : '';
return {
value: conv.sessionId,
label: `${displayTitle}${conv.projectName} • ${conv.relativeDate} • ${conv.messageCount} msgs`,
hint: formatFileSize(conv.fileSize)
};
});
const selected = await p.multiselect({
message: `Found ${matches.length} matches. Select to import:`,
options: matchOptions,
required: false
});
if (p.isCancel(selected)) {
p.cancel('Import cancelled');
return;
}
const selectedIds = selected as string[];
selectedConversations = matches.filter(c => selectedIds.includes(c.sessionId));
}
// Process the import
if (selectedConversations.length > 0) {
await processImport(selectedConversations, options.verbose);
p.outro(chalk.green(`✅ Successfully imported ${selectedConversations.length} conversation${selectedConversations.length === 1 ? '' : 's'}`));
} else {
p.outro(chalk.yellow('No conversations selected for import'));
}
}
async function processTitleGeneration(conversations: ConversationItem[], projectName: string) {
const titleGenerator = new TitleGenerator();
const existingTitles = titleGenerator.getExistingTitles();
// Filter conversations that don't have titles yet
const conversationsNeedingTitles = conversations.filter(conv => !existingTitles.has(conv.sessionId));
if (conversationsNeedingTitles.length === 0) {
p.note('All conversations already have titles!', 'Title Generation');
return;
}
const s = p.spinner();
s.start(`Generating titles for ${conversationsNeedingTitles.length} conversations...`);
const requests: TitleGenerationRequest[] = conversationsNeedingTitles.map(conv => ({
sessionId: conv.sessionId,
projectName: projectName,
firstMessage: extractFirstUserMessage(conv.filePath)
}));
try {
await titleGenerator.batchGenerateTitles(requests);
s.stop(`✅ Generated ${conversationsNeedingTitles.length} titles`);
} catch (error) {
s.stop(`❌ Failed to generate titles`);
console.error(chalk.red(`Error: ${error}`));
}
}
async function processImport(conversations: ConversationItem[], verbose?: boolean) {
const s = p.spinner();
for (let i = 0; i < conversations.length; i++) {
const conv = conversations[i];
const progress = conversations.length > 1 ? `[${i + 1}/${conversations.length}] ` : '';
s.start(`${progress}Importing ${conv.projectName} (${conv.relativeDate})`);
try {
// Extract project name from the conversation's cwd field
const projectName = path.basename(conv.cwd);
// Use TranscriptCompressor to process
const compressor = new TranscriptCompressor();
await compressor.compress(conv.filePath, conv.sessionId, projectName);
s.stop(`${progress}Imported ${conv.projectName} (${conv.messageCount} messages)`);
if (verbose) {
p.note(`Session: ${conv.sessionId}\nSize: ${formatFileSize(conv.fileSize)}\nBranch: ${conv.branch || 'main'}`, 'Details');
}
} catch (error) {
s.stop(`${progress}Failed to import ${conv.projectName}`);
if (verbose) {
console.error(chalk.red(`Error: ${error}`));
}
}
}
}
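The relative-date labels in the pickers above come from fixed bucket thresholds. Reproduced here for illustration with an injectable `now`, so the buckets can be exercised against fixed dates:

```typescript
// Same bucket thresholds as formatRelativeDate above, with `now` passed in
// so the cutoffs can be checked deterministically.
function relativeDate(date: Date, now: Date): string {
  const diffMs = now.getTime() - date.getTime();
  const diffMins = Math.floor(diffMs / 60000);
  const diffHours = Math.floor(diffMs / 3600000);
  const diffDays = Math.floor(diffMs / 86400000);
  if (diffMins < 1) return 'just now';
  if (diffMins < 60) return `${diffMins}m ago`;
  if (diffHours < 24) return `${diffHours}h ago`;
  if (diffDays < 7) return `${diffDays}d ago`;
  if (diffDays < 30) return `${Math.floor(diffDays / 7)}w ago`;
  if (diffDays < 365) return `${Math.floor(diffDays / 30)}mo ago`;
  return `${Math.floor(diffDays / 365)}y ago`;
}
```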
File diff suppressed because it is too large
+198
@@ -0,0 +1,198 @@
import { OptionValues } from 'commander';
import fs from 'fs';
import { join } from 'path';
import { PathDiscovery } from '../services/path-discovery.js';
import {
createCompletionMessage,
createContextualError,
createUserFriendlyError,
formatTimeAgo,
outputSessionStartContent
} from '../prompts/templates/context/ContextTemplates.js';
interface IndexEntry {
summary: string;
entity: string;
keywords: string[];
}
interface TrashStatus {
folderCount: number;
fileCount: number;
totalSize: number;
isEmpty: boolean;
}
function formatSize(bytes: number): string {
if (bytes === 0) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(1)) + ' ' + sizes[i];
}
function getTrashStatus(): TrashStatus {
const trashDir = PathDiscovery.getInstance().getTrashDirectory();
if (!fs.existsSync(trashDir)) {
return { folderCount: 0, fileCount: 0, totalSize: 0, isEmpty: true };
}
const items = fs.readdirSync(trashDir);
if (items.length === 0) {
return { folderCount: 0, fileCount: 0, totalSize: 0, isEmpty: true };
}
let folderCount = 0;
let fileCount = 0;
let totalSize = 0;
for (const item of items) {
const itemPath = join(trashDir, item);
const stats = fs.statSync(itemPath);
if (stats.isDirectory()) {
folderCount++;
} else {
fileCount++;
}
totalSize += stats.size;
}
return { folderCount, fileCount, totalSize, isEmpty: false };
}
export async function loadContext(options: OptionValues = {}): Promise<void> {
const pathDiscovery = PathDiscovery.getInstance();
const indexPath = pathDiscovery.getIndexPath();
try {
// Check if index file exists
if (!fs.existsSync(indexPath)) {
if (options.format === 'session-start') {
console.log(createContextualError('NO_MEMORIES', options.project || 'this project'));
}
return;
}
const content = fs.readFileSync(indexPath, 'utf-8');
const lines = content.trim().split('\n').filter(line => line.trim());
if (lines.length === 0) {
if (options.format === 'session-start') {
console.log(createContextualError('NO_MEMORIES', options.project || 'this project'));
}
return;
}
// Parse JSONL format - each line is a JSON object
const jsonObjects: any[] = [];
for (const line of lines) {
try {
// Skip lines that don't look like JSON (could be legacy format)
if (!line.trim().startsWith('{')) {
continue;
}
const obj = JSON.parse(line);
jsonObjects.push(obj);
} catch (e) {
// Skip malformed JSON lines
continue;
}
}
if (jsonObjects.length === 0) {
if (options.format === 'session-start') {
console.log(createContextualError('NO_MEMORIES', options.project || 'this project'));
}
return;
}
// Separate memories, overviews, and other types
const memories = jsonObjects.filter(obj => obj.type === 'memory');
const overviews = jsonObjects.filter(obj => obj.type === 'overview');
const sessions = jsonObjects.filter(obj => obj.type === 'session');
// Filter each type by project if specified
let filteredMemories = memories;
let filteredOverviews = overviews;
if (options.project) {
filteredMemories = memories.filter(obj => obj.project === options.project);
filteredOverviews = overviews.filter(obj => obj.project === options.project);
}
if (options.format === 'session-start') {
// Get last 10 memories and last 5 overviews for session-start
const recentMemories = filteredMemories.slice(-10);
const recentOverviews = filteredOverviews.slice(-5);
// Combine them for the display
const recentObjects = [...recentMemories, ...recentOverviews];
// Find most recent timestamp for last session info
let lastSessionTime = 'recently';
const timestamps = recentObjects
.map(obj => {
// Get timestamp from JSON object
return obj.timestamp ? new Date(obj.timestamp) : null;
})
.filter(date => date !== null)
.sort((a, b) => b.getTime() - a.getTime());
if (timestamps.length > 0) {
lastSessionTime = formatTimeAgo(timestamps[0]);
}
// Use dual-stream output for session start formatting
outputSessionStartContent({
projectName: options.project || 'your project',
memoryCount: recentMemories.length,
lastSessionTime,
recentObjects
});
} else if (options.format === 'json') {
// For JSON format, combine last 10 of each type
const recentMemories = filteredMemories.slice(-10);
const recentOverviews = filteredOverviews.slice(-3);
const recentObjects = [...recentMemories, ...recentOverviews];
console.log(JSON.stringify(recentObjects));
} else {
// Default format - show last 10 memories and last 3 overviews
const recentMemories = filteredMemories.slice(-10);
const recentOverviews = filteredOverviews.slice(-3);
const totalCount = recentMemories.length + recentOverviews.length;
console.log(createCompletionMessage('Context loading', totalCount, 'recent entries found'));
// Show memories first
recentMemories.forEach((obj) => {
console.log(`${obj.text} | ${obj.document_id} | ${obj.keywords}`);
});
// Then show overviews
recentOverviews.forEach((obj) => {
console.log(`**Overview:** ${obj.content}`);
});
}
// Display trash status if not empty (except for JSON format to avoid breaking JSON parsing)
if (options.format !== 'json') {
const trashStatus = getTrashStatus();
if (!trashStatus.isEmpty) {
const formattedSize = formatSize(trashStatus.totalSize);
console.log(`🗑️ Trash: ${trashStatus.folderCount} folders | ${trashStatus.fileCount} files | ${formattedSize}. Use \`claude-mem restore\` to recover.`);
console.log('');
}
}
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
if (options.format === 'session-start') {
console.log(createContextualError('CONNECTION_FAILED', errorMessage));
} else {
console.log(createUserFriendlyError('Context loading', errorMessage, 'Check file permissions and try again'));
}
}
}
+84
@@ -0,0 +1,84 @@
import { OptionValues } from 'commander';
import { readFileSync, readdirSync, statSync } from 'fs';
import { join } from 'path';
import { PathDiscovery } from '../services/path-discovery.js';
async function showLog(logPath: string, logType: string, tail: number): Promise<void> {
  try {
    const content = readFileSync(logPath, 'utf8');
    const lines = content.split('\n').filter(line => line.trim());
    const displayLines = lines.slice(-tail);
    console.log(`📋 ${logType} Logs (last ${tail} lines):`);
    console.log(`   File: ${logPath}`);
    console.log('');
    if (displayLines.length === 0) {
      console.log('   No log entries found');
    } else {
      displayLines.forEach(line => {
        console.log(`   ${line}`);
      });
    }
    console.log('');
  } catch {
    console.log(`❌ Could not read ${logType.toLowerCase()} log: ${logPath}`);
  }
}
export async function logs(options: OptionValues = {}): Promise<void> {
  const logsDir = PathDiscovery.getLogsDirectory();
  const tail = parseInt(options.tail, 10) || 20;
  // Find the most recent log file
  try {
    const files = readdirSync(logsDir);
    const logFiles = files
      .filter(f => f.startsWith('claude-mem-') && f.endsWith('.log'))
      .map(f => ({
        name: f,
        path: join(logsDir, f),
        mtime: statSync(join(logsDir, f)).mtime
      }))
      .sort((a, b) => b.mtime.getTime() - a.mtime.getTime());
    if (logFiles.length === 0) {
      console.log('❌ No log files found in ~/.claude-mem/logs/');
      return;
    }
    // Show the most recent log
    await showLog(logFiles[0].path, 'Most Recent', tail);
    if (options.all && logFiles.length > 1) {
      console.log(`📚 Found ${logFiles.length} total log files`);
    }
  } catch {
    console.log('❌ Could not read logs directory: ~/.claude-mem/logs/');
    console.log('   Run a compression first to generate logs');
  }
  if (options.follow) {
    console.log('Following logs... (Press Ctrl+C to stop)');
    // Poll the newest log file once per second and print any newly appended content
    let lastLength = 0;
    setInterval(() => {
      try {
        const newest = readdirSync(logsDir)
          .filter(f => f.startsWith('claude-mem-') && f.endsWith('.log'))
          .map(f => join(logsDir, f))
          .sort((a, b) => statSync(b).mtimeMs - statSync(a).mtimeMs)[0];
        if (!newest) return;
        const content = readFileSync(newest, 'utf8');
        if (content.length > lastLength) process.stdout.write(content.slice(lastLength));
        lastLength = content.length;
      } catch {
        // Ignore transient read errors while following
      }
    }, 1000);
  }
}
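The tail logic above reduces to a few lines; this sketch shows how the last N non-empty lines are selected:

```typescript
// Keep only the last `tail` non-empty lines, as showLog does
const content = 'first\n\nsecond\nthird\nfourth\n';
const tail = 3;
const displayLines = content.split('\n').filter(line => line.trim()).slice(-tail);
console.log(displayLines.join(',')); // second,third,fourth
```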
@@ -0,0 +1,122 @@
#!/usr/bin/env node
/**
* One-time migration script to convert claude-mem-index.md to claude-mem-index.jsonl
*/
import fs from 'fs';
import path from 'path';
import { PathDiscovery } from '../services/path-discovery.js';
export function migrateToJSONL(): void {
const pathDiscovery = PathDiscovery.getInstance();
const oldIndexPath = path.join(pathDiscovery.getDataDirectory(), 'claude-mem-index.md');
const newIndexPath = pathDiscovery.getIndexPath();
// Check if old index exists
if (!fs.existsSync(oldIndexPath)) {
console.log('No markdown index found to migrate');
return;
}
// Check if new index already exists
if (fs.existsSync(newIndexPath)) {
console.log('JSONL index already exists, skipping migration');
return;
}
console.log('Starting migration from MD to JSONL...');
const content = fs.readFileSync(oldIndexPath, 'utf-8');
const lines = content.split('\n').filter(line => line.trim());
const jsonlLines: string[] = [];
let currentSessionId = '';
let currentSessionTimestamp = '';
for (const line of lines) {
// Parse session headers: # Session: <id> [<timestamp>]
const sessionMatch = line.match(/^# Session:\s*([^\[]+)(?:\s*\[([^\]]+)\])?/);
if (sessionMatch) {
currentSessionId = sessionMatch[1].trim();
currentSessionTimestamp = sessionMatch[2]?.trim() || new Date().toISOString();
// Extract project from session ID (assuming format like <project>_<uuid>)
const projectMatch = currentSessionId.match(/^([^_]+)_/);
const project = projectMatch ? projectMatch[1] : 'unknown';
jsonlLines.push(JSON.stringify({
type: 'session',
session_id: currentSessionId,
timestamp: currentSessionTimestamp,
project
}));
continue;
}
// Parse overviews: **Overview:** <text>
const overviewMatch = line.match(/^\*\*Overview:\*\*\s*(.+)/);
if (overviewMatch) {
const overviewText = overviewMatch[1].trim();
// Extract project from current session ID
const projectMatch = currentSessionId.match(/^([^_]+)_/);
const project = projectMatch ? projectMatch[1] : 'unknown';
jsonlLines.push(JSON.stringify({
type: 'overview',
content: overviewText,
session_id: currentSessionId,
project,
timestamp: currentSessionTimestamp
}));
continue;
}
// Skip certain lines
if (line.startsWith('# NO SUMMARIES EXTRACTED')) {
continue;
}
// Parse memory entries (pipe-separated)
if (line.includes(' | ')) {
const parts = line.split(' | ').map(p => p.trim());
if (parts.length >= 3) {
const [text, document_id, keywords, timestamp, archive] = parts;
// Extract project from document_id (format: <project>_<session>_<number>)
const projectMatch = document_id?.match(/^([^_]+)_/);
const project = projectMatch ? projectMatch[1] : 'unknown';
jsonlLines.push(JSON.stringify({
type: 'memory',
text,
document_id: document_id || `${currentSessionId}_${Date.now()}`,
keywords: keywords || '',
session_id: currentSessionId,
project,
timestamp: timestamp || currentSessionTimestamp,
archive: archive || `${currentSessionId}.jsonl.archive`
}));
}
}
}
// Write JSONL file
fs.writeFileSync(newIndexPath, jsonlLines.join('\n') + '\n');
// Backup old index
const backupPath = oldIndexPath + '.backup';
fs.renameSync(oldIndexPath, backupPath);
console.log(`✅ Migration complete!`);
console.log(` - Converted ${jsonlLines.length} entries`);
console.log(` - New index: ${newIndexPath}`);
console.log(` - Backup: ${backupPath}`);
}
// Run if called directly
if (import.meta.url === `file://${process.argv[1]}`) {
migrateToJSONL();
}
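The migration's parsing rules can be exercised in isolation; this sketch runs the same regexes on a hypothetical session header from the old markdown index:

```typescript
// Session header format: '# Session: <id> [<timestamp>]' (sample values are hypothetical)
const header = '# Session: myproj_abc123 [2025-09-11T12:00:00Z]';
const sessionMatch = header.match(/^# Session:\s*([^\[]+)(?:\s*\[([^\]]+)\])?/)!;
const sessionId = sessionMatch[1].trim();
const timestamp = sessionMatch[2];
// Project is the prefix before the first underscore, falling back to 'unknown'
const project = sessionId.match(/^([^_]+)_/)?.[1] ?? 'unknown';
console.log(sessionId, timestamp, project);
```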
@@ -0,0 +1,24 @@
import { readdirSync, renameSync, existsSync } from 'fs';
import { join } from 'path';
import * as p from '@clack/prompts';
import { PathDiscovery } from '../services/path-discovery.js';
export async function restore(): Promise<void> {
  const trashDir = PathDiscovery.getInstance().getTrashDirectory();
  if (!existsSync(trashDir)) {
    console.log('Trash is empty');
    return;
  }
  const files = readdirSync(trashDir);
  if (files.length === 0) {
    console.log('Trash is empty');
    return;
  }
  const file = await p.select({
    message: 'Select file to restore:',
    options: files.map(f => ({ value: f, label: f }))
  });
  if (p.isCancel(file)) return;
  // Trashed files carry a `.<timestamp>` suffix added by the trash command; strip it on restore
  const originalName = file.replace(/\.\d+$/, '');
  renameSync(join(trashDir, file), join(process.cwd(), originalName));
  console.log(`Restored ${originalName}`);
}
@@ -0,0 +1,70 @@
import { OptionValues } from 'commander';
import { appendFileSync } from 'fs';
import { PathDiscovery } from '../services/path-discovery.js';
/**
* Generates a descriptive session ID from the message content
* Takes first few meaningful words and creates a readable identifier
*/
function generateSessionId(message: string): string {
// Remove punctuation and split into words
const words = message
.toLowerCase()
.replace(/[^\w\s]/g, ' ')
.split(/\s+/)
.filter(word => word.length > 2); // Skip short words like 'a', 'is', 'to'
// Take first 3-4 meaningful words, max 30 chars
const sessionWords = words.slice(0, 4).join('-');
const truncated = sessionWords.length > 30 ? sessionWords.substring(0, 27) + '...' : sessionWords;
// Add timestamp suffix to ensure uniqueness
const timestamp = new Date().toISOString().substring(11, 19).replace(/:/g, '');
return `${truncated}-${timestamp}`;
}
/**
* Save command - stores a message to both Chroma collection and JSONL index
*/
export async function save(message: string, options: OptionValues = {}): Promise<void> {
if (!message || message.trim() === '') {
console.error('Error: Message is required');
process.exit(1);
}
const pathDiscovery = PathDiscovery.getInstance();
const timestamp = new Date().toISOString();
const projectName = PathDiscovery.getCurrentProjectName();
const sessionId = generateSessionId(message);
const documentId = `${projectName}_${sessionId}_overview`;
// 1. Save to Chroma collection (skip for now - MCP tools only available in Claude Code context)
// TODO: Add Chroma integration when called from Claude Code with MCP server running
// 2. Append to JSONL index file
const indexPath = pathDiscovery.getIndexPath();
const indexEntry = {
type: "overview",
content: message,
session_id: sessionId,
project: projectName,
timestamp: timestamp
};
// Ensure the directory exists
pathDiscovery.ensureDirectory(pathDiscovery.getDataDirectory());
// Append to JSONL file
appendFileSync(indexPath, JSON.stringify(indexEntry) + '\n', 'utf8');
// 3. Return JSON response for hook compatibility
console.log(JSON.stringify({
success: true,
document_id: documentId,
session_id: sessionId,
project: projectName,
timestamp: timestamp,
suppressOutput: true
}));
}
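The slug portion of generateSessionId is deterministic for a given message; a sketch with the time suffix omitted:

```typescript
// Same word extraction as generateSessionId, minus the HHMMSS suffix
const message = 'Fix the login bug in the auth module';
const words = message
  .toLowerCase()
  .replace(/[^\w\s]/g, ' ')
  .split(/\s+/)
  .filter(word => word.length > 2); // drops 'in' but keeps 3-letter words
const slug = words.slice(0, 4).join('-');
console.log(slug); // fix-the-login-bug
```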
@@ -0,0 +1,176 @@
import { readFileSync, existsSync, readdirSync, statSync } from 'fs';
import { join, resolve, dirname } from 'path';
import { execSync } from 'child_process';
import { fileURLToPath } from 'url';
import { PathDiscovery } from '../services/path-discovery.js';
const __dirname = dirname(fileURLToPath(import.meta.url));
export async function status(): Promise<void> {
console.log('🔍 Claude Memory System Status Check');
console.log('=====================================\n');
console.log('📂 Installed Hook Scripts:');
const pathDiscovery = PathDiscovery.getInstance();
const claudeMemHooksDir = pathDiscovery.getHooksDirectory();
const preCompactScript = join(claudeMemHooksDir, 'pre-compact.js');
const sessionStartScript = join(claudeMemHooksDir, 'session-start.js');
const sessionEndScript = join(claudeMemHooksDir, 'session-end.js');
const checkScript = (path: string, name: string) => {
  if (existsSync(path)) {
    console.log(`  ✅ ${name}: Found at ${path}`);
  } else {
    console.log(`  ❌ ${name}: Not found at ${path}`);
  }
};
checkScript(preCompactScript, 'pre-compact.js');
checkScript(sessionStartScript, 'session-start.js');
checkScript(sessionEndScript, 'session-end.js');
console.log('');
console.log('⚙️ Settings Configuration:');
const checkSettings = (name: string, path: string) => {
if (!existsSync(path)) {
console.log(` ⏭️ ${name}: No settings file`);
return;
}
console.log(` 📋 ${name}: ${path}`);
try {
const settings = JSON.parse(readFileSync(path, 'utf8'));
const hasPreCompact = settings.hooks?.PreCompact?.some((matcher: any) =>
matcher.hooks?.some((hook: any) =>
hook.command?.includes('pre-compact.js') || hook.command?.includes('claude-mem')
)
);
const hasSessionStart = settings.hooks?.SessionStart?.some((matcher: any) =>
matcher.hooks?.some((hook: any) =>
hook.command?.includes('session-start.js') || hook.command?.includes('claude-mem')
)
);
const hasSessionEnd = settings.hooks?.SessionEnd?.some((matcher: any) =>
matcher.hooks?.some((hook: any) =>
hook.command?.includes('session-end.js') || hook.command?.includes('claude-mem')
)
);
console.log(` PreCompact: ${hasPreCompact ? '✅' : '❌'}`);
console.log(` SessionStart: ${hasSessionStart ? '✅' : '❌'}`);
console.log(` SessionEnd: ${hasSessionEnd ? '✅' : '❌'}`);
} catch (error: any) {
console.log(` ⚠️ Could not parse settings`);
}
};
checkSettings('Global', pathDiscovery.getClaudeSettingsPath());
checkSettings('Project', join(process.cwd(), '.claude', 'settings.json'));
console.log('');
console.log('📦 Compressed Transcripts:');
const claudeProjectsDir = join(pathDiscovery.getClaudeConfigDirectory(), 'projects');
if (existsSync(claudeProjectsDir)) {
try {
let compressedCount = 0;
let archiveCount = 0;
const searchDir = (dir: string, depth = 0) => {
if (depth > 3) return;
const files = readdirSync(dir);
for (const file of files) {
const fullPath = join(dir, file);
const stats = statSync(fullPath);
if (stats.isDirectory() && !file.startsWith('.')) {
searchDir(fullPath, depth + 1);
} else if (file.endsWith('.jsonl.compressed')) {
compressedCount++;
} else if (file.endsWith('.jsonl.archive')) {
archiveCount++;
}
}
};
searchDir(claudeProjectsDir);
console.log(` Compressed files: ${compressedCount}`);
console.log(` Archive files: ${archiveCount}`);
} catch (error) {
console.log(` ⚠️ Could not scan projects directory`);
}
} else {
console.log(`  ⚠️ No Claude projects directory found`);
}
console.log('');
console.log('🔧 Runtime Environment:');
const checkCommand = (cmd: string, name: string) => {
try {
const version = execSync(`${cmd} --version`, { encoding: 'utf8' }).trim();
console.log(`${name}: ${version}`);
} catch {
console.log(`${name}: Not found`);
}
};
checkCommand('node', 'Node.js');
checkCommand('bun', 'Bun');
console.log('');
console.log('🧠 Chroma Storage Status:');
console.log(' ✅ Storage backend: Chroma MCP');
console.log(` 📍 Data location: ${pathDiscovery.getChromaDirectory()}`);
console.log(' 🔍 Features: Vector search, semantic similarity, document storage');
console.log('');
console.log('📊 Summary:');
const globalPath = pathDiscovery.getClaudeSettingsPath();
const projectPath = join(process.cwd(), '.claude', 'settings.json');
let isInstalled = false;
let installLocation = 'Not installed';
try {
if (existsSync(globalPath)) {
const settings = JSON.parse(readFileSync(globalPath, 'utf8'));
if (settings.hooks?.PreCompact || settings.hooks?.SessionStart || settings.hooks?.SessionEnd) {
isInstalled = true;
installLocation = 'Global';
}
}
if (existsSync(projectPath)) {
const settings = JSON.parse(readFileSync(projectPath, 'utf8'));
if (settings.hooks?.PreCompact || settings.hooks?.SessionStart || settings.hooks?.SessionEnd) {
isInstalled = true;
installLocation = installLocation === 'Global' ? 'Global + Project' : 'Project';
}
}
} catch {}
if (isInstalled) {
console.log(` ✅ Claude Memory System is installed (${installLocation})`);
console.log('');
console.log('💡 To test: Use /compact in Claude Code');
} else {
console.log(` ❌ Claude Memory System is not installed`);
console.log('');
console.log('💡 To install: claude-mem install');
}
}
@@ -0,0 +1,66 @@
import { rmSync, readdirSync, existsSync, statSync } from 'fs';
import { join } from 'path';
import * as p from '@clack/prompts';
import { PathDiscovery } from '../services/path-discovery.js';
export async function emptyTrash(options: { force?: boolean } = {}): Promise<void> {
const trashDir = PathDiscovery.getInstance().getTrashDirectory();
// Check if trash directory exists
if (!existsSync(trashDir)) {
p.log.info('🗑️ Trash is already empty');
return;
}
try {
const files = readdirSync(trashDir);
if (files.length === 0) {
p.log.info('🗑️ Trash is already empty');
return;
}
// Count items
let folderCount = 0;
let fileCount = 0;
for (const file of files) {
const filePath = join(trashDir, file);
const stats = statSync(filePath);
if (stats.isDirectory()) {
folderCount++;
} else {
fileCount++;
}
}
// Confirm deletion unless --force flag is used
if (!options.force) {
const confirm = await p.confirm({
message: `Permanently delete ${folderCount} folders and ${fileCount} files from trash?`,
initialValue: false
});
if (p.isCancel(confirm) || !confirm) {
p.log.info('Cancelled - trash not emptied');
return;
}
}
// Delete all files in trash
const s = p.spinner();
s.start('Emptying trash...');
for (const file of files) {
const filePath = join(trashDir, file);
rmSync(filePath, { recursive: true, force: true });
}
s.stop(`🗑️ Trash emptied - permanently deleted ${folderCount} folders and ${fileCount} files`);
} catch (error) {
p.log.error('Failed to empty trash');
console.error(error);
process.exit(1);
}
}
@@ -0,0 +1,124 @@
import { readdirSync, statSync } from 'fs';
import { join, basename } from 'path';
import * as p from '@clack/prompts';
import { PathDiscovery } from '../services/path-discovery.js';
interface TrashItem {
originalName: string;
trashedName: string;
size: number;
trashedAt: Date;
isDirectory: boolean;
}
function parseTrashName(filename: string): { name: string; timestamp: number } {
const lastDotIndex = filename.lastIndexOf('.');
if (lastDotIndex === -1) return { name: filename, timestamp: 0 };
const timestamp = parseInt(filename.substring(lastDotIndex + 1), 10);
if (isNaN(timestamp)) return { name: filename, timestamp: 0 };
return {
name: filename.substring(0, lastDotIndex),
timestamp
};
}
function formatSize(bytes: number): string {
if (bytes === 0) return '0 B';
const k = 1024;
const sizes = ['B', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
}
function getDirectorySize(dirPath: string): number {
let size = 0;
const files = readdirSync(dirPath);
for (const file of files) {
const filePath = join(dirPath, file);
const stats = statSync(filePath);
if (stats.isDirectory()) {
size += getDirectorySize(filePath);
} else {
size += stats.size;
}
}
return size;
}
export async function viewTrash(): Promise<void> {
const trashDir = PathDiscovery.getInstance().getTrashDirectory();
try {
const files = readdirSync(trashDir);
if (files.length === 0) {
p.log.info('🗑️ Trash is empty');
return;
}
const items: TrashItem[] = files.map(file => {
const filePath = join(trashDir, file);
const stats = statSync(filePath);
const { name, timestamp } = parseTrashName(file);
const size = stats.isDirectory() ? getDirectorySize(filePath) : stats.size;
return {
originalName: name,
trashedName: file,
size,
trashedAt: new Date(timestamp),
isDirectory: stats.isDirectory()
};
});
// Sort by date, newest first
items.sort((a, b) => b.trashedAt.getTime() - a.trashedAt.getTime());
// Display header
console.log('\n🗑️ Trash Contents\n');
console.log('─'.repeat(80));
// Display items
let totalSize = 0;
let folderCount = 0;
let fileCount = 0;
for (const item of items) {
totalSize += item.size;
if (item.isDirectory) {
folderCount++;
} else {
fileCount++;
}
const type = item.isDirectory ? '📁' : '📄';
const date = item.trashedAt.toLocaleString();
const size = formatSize(item.size);
console.log(`${type} ${item.originalName}`);
console.log(` Size: ${size} | Trashed: ${date}`);
console.log(` ID: ${item.trashedName}`);
console.log();
}
// Display summary
console.log('─'.repeat(80));
console.log(`Total: ${folderCount} folders, ${fileCount} files (${formatSize(totalSize)})`);
console.log('\nTo restore files: claude-mem restore');
console.log('To empty trash: claude-mem trash empty');
} catch (error) {
if ((error as any).code === 'ENOENT') {
p.log.info('🗑️ Trash is empty');
} else {
p.log.error('Failed to read trash directory');
console.error(error);
}
}
}
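formatSize picks its unit from the base-1024 logarithm of the byte count; a self-contained copy of the helper above illustrates the behavior:

```typescript
// Standalone copy of formatSize for illustration
function formatSize(bytes: number): string {
  if (bytes === 0) return '0 B';
  const k = 1024;
  const sizes = ['B', 'KB', 'MB', 'GB'];
  // i selects the unit: 0 for B, 1 for KB, 2 for MB, ...
  const i = Math.floor(Math.log(bytes) / Math.log(k));
  return parseFloat((bytes / Math.pow(k, i)).toFixed(2)) + ' ' + sizes[i];
}
console.log(formatSize(1536));    // 1.5 KB
console.log(formatSize(1048576)); // 1 MB
```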
@@ -0,0 +1,60 @@
import { renameSync, existsSync, mkdirSync, statSync } from 'fs';
import { join, basename } from 'path';
import { glob } from 'glob';
import { PathDiscovery } from '../services/path-discovery.js';
interface TrashOptions {
force?: boolean;
recursive?: boolean;
}
export async function trash(filePaths: string | string[], options: TrashOptions = {}): Promise<void> {
const trashDir = PathDiscovery.getInstance().getTrashDirectory();
if (!existsSync(trashDir)) mkdirSync(trashDir, { recursive: true });
// Handle single string or array of paths
const paths = Array.isArray(filePaths) ? filePaths : [filePaths];
for (const filePath of paths) {
// Handle glob patterns
const expandedPaths = await glob(filePath);
const actualPaths = expandedPaths.length > 0 ? expandedPaths : [filePath];
for (const actualPath of actualPaths) {
try {
// Check if file exists
if (!existsSync(actualPath)) {
if (!options.force) {
console.error(`trash: ${actualPath}: No such file or directory`);
continue;
}
// With -f, silently skip missing files
continue;
}
      // Directories always require -r; with -f the error is silenced but the
      // directory is still skipped, matching rm semantics
      const stats = statSync(actualPath);
      if (stats.isDirectory() && !options.recursive) {
        if (!options.force) {
          console.error(`trash: ${actualPath}: is a directory`);
        }
        continue;
      }
// Generate unique destination name to avoid conflicts
const fileName = basename(actualPath);
const timestamp = Date.now();
const destination = join(trashDir, `${fileName}.${timestamp}`);
renameSync(actualPath, destination);
console.log(`Moved ${fileName} to trash`);
} catch (error) {
if (!options.force) {
const errorMessage = error instanceof Error ? error.message : String(error);
console.error(`trash: ${actualPath}: ${errorMessage}`);
}
}
}
}
}
@@ -0,0 +1,133 @@
import { OptionValues } from 'commander';
import { readFileSync, writeFileSync, existsSync } from 'fs';
import { join } from 'path';
import { PathDiscovery } from '../services/path-discovery.js';
export async function uninstall(options: OptionValues = {}): Promise<void> {
console.log('🔄 Uninstalling Claude Memory System hooks...');
const locations = [];
if (options.all) {
locations.push({
name: 'User',
path: PathDiscovery.getInstance().getClaudeSettingsPath()
});
locations.push({
name: 'Project',
path: join(process.cwd(), '.claude', 'settings.json')
});
} else {
const isProject = options.project;
const pathDiscovery = PathDiscovery.getInstance();
locations.push({
name: isProject ? 'Project' : 'User',
path: isProject ? join(process.cwd(), '.claude', 'settings.json') : pathDiscovery.getClaudeSettingsPath()
});
}
const pathDiscovery = PathDiscovery.getInstance();
const claudeMemHooksDir = pathDiscovery.getHooksDirectory();
const preCompactScript = join(claudeMemHooksDir, 'pre-compact.js');
const sessionStartScript = join(claudeMemHooksDir, 'session-start.js');
const sessionEndScript = join(claudeMemHooksDir, 'session-end.js');
let removedCount = 0;
for (const location of locations) {
if (!existsSync(location.path)) {
console.log(`⏭️ No settings found at ${location.name} location`);
continue;
}
try {
const content = readFileSync(location.path, 'utf8');
const settings = JSON.parse(content);
if (!settings.hooks) {
console.log(`⏭️ No hooks configured in ${location.name} settings`);
continue;
}
let modified = false;
if (settings.hooks.PreCompact) {
const filteredPreCompact = settings.hooks.PreCompact.filter((matcher: any) =>
!matcher.hooks?.some((hook: any) =>
hook.command === preCompactScript ||
hook.command?.includes('pre-compact.js') ||
hook.command?.includes('claude-mem')
)
);
if (filteredPreCompact.length !== settings.hooks.PreCompact.length) {
settings.hooks.PreCompact = filteredPreCompact.length ? filteredPreCompact : undefined;
modified = true;
console.log(`✅ Removed PreCompact hook from ${location.name} settings`);
}
}
if (settings.hooks.SessionStart) {
const filteredSessionStart = settings.hooks.SessionStart.filter((matcher: any) =>
!matcher.hooks?.some((hook: any) =>
hook.command === sessionStartScript ||
hook.command?.includes('session-start.js') ||
hook.command?.includes('claude-mem')
)
);
if (filteredSessionStart.length !== settings.hooks.SessionStart.length) {
settings.hooks.SessionStart = filteredSessionStart.length ? filteredSessionStart : undefined;
modified = true;
console.log(`✅ Removed SessionStart hook from ${location.name} settings`);
}
}
if (settings.hooks.SessionEnd) {
const filteredSessionEnd = settings.hooks.SessionEnd.filter((matcher: any) =>
!matcher.hooks?.some((hook: any) =>
hook.command === sessionEndScript ||
hook.command?.includes('session-end.js') ||
hook.command?.includes('claude-mem')
)
);
if (filteredSessionEnd.length !== settings.hooks.SessionEnd.length) {
settings.hooks.SessionEnd = filteredSessionEnd.length ? filteredSessionEnd : undefined;
modified = true;
console.log(`✅ Removed SessionEnd hook from ${location.name} settings`);
}
}
if (settings.hooks.PreCompact === undefined) delete settings.hooks.PreCompact;
if (settings.hooks.SessionStart === undefined) delete settings.hooks.SessionStart;
if (settings.hooks.SessionEnd === undefined) delete settings.hooks.SessionEnd;
if (!Object.keys(settings.hooks).length) delete settings.hooks;
if (modified) {
const backupPath = location.path + '.backup.' + Date.now();
writeFileSync(backupPath, content);
console.log(`📋 Created backup: ${backupPath}`);
writeFileSync(location.path, JSON.stringify(settings, null, 2));
removedCount++;
console.log(`✅ Updated ${location.name} settings: ${location.path}`);
} else {
console.log(`ℹ️ No Claude Memory System hooks found in ${location.name} settings`);
}
} catch (error: any) {
console.log(`⚠️ Could not process ${location.name} settings: ${error.message}`);
}
}
console.log('');
if (removedCount > 0) {
console.log('✨ Uninstallation complete!');
console.log('The Claude Memory System hooks have been removed from your settings.');
console.log('');
console.log('Note: Your compressed transcripts and archives are preserved.');
console.log('To reinstall: claude-mem install');
} else {
console.log('ℹ️ No Claude Memory System hooks were found to remove.');
}
}
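The uninstall logic above keeps only matchers with no claude-mem hook; a minimal sketch with hypothetical settings entries:

```typescript
// Drop any matcher whose hooks reference claude-mem; keep the rest untouched
const matchers = [
  { hooks: [{ command: '/home/user/.claude-mem/hooks/pre-compact.js' }] },
  { hooks: [{ command: 'some-unrelated-tool --flag' }] }
];
const kept = matchers.filter(m => !m.hooks?.some(h => h.command?.includes('claude-mem')));
console.log(kept.length); // 1
```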
@@ -0,0 +1,206 @@
/**
* Claude Memory System - Core Constants
*
* This file contains core application constants, CLI messages,
* configuration templates, and infrastructure-related constants.
*/
// =============================================================================
// CONFIGURATION TEMPLATES
// =============================================================================
/**
* Hook configuration templates for Claude settings
*/
export const HOOK_CONFIG_TEMPLATES = {
PRE_COMPACT: (scriptPath: string) => ({
pattern: "*",
hooks: [{
type: "command",
command: scriptPath,
timeout: 180
}]
}),
SESSION_START: (scriptPath: string) => ({
pattern: "*",
hooks: [{
type: "command",
command: scriptPath,
timeout: 30
}]
}),
SESSION_END: (scriptPath: string) => ({
pattern: "*",
hooks: [{
type: "command",
command: scriptPath,
timeout: 180
}]
})
} as const;
// =============================================================================
// CLI MESSAGES AND STATUS TEMPLATES
// =============================================================================
/**
* Command-line interface messages
*/
export const CLI_MESSAGES = {
INSTALLATION: {
STARTING: '🚀 Installing Claude Memory System with Chroma...',
SUCCESS: '🎉 Installation complete! Vector database ready.',
HOOKS_INSTALLED: '✅ Installed hooks to ~/.claude-mem/hooks/',
MCP_CONFIGURED: (path: string) => `✅ Configured MCP memory server in ${path}`,
EMBEDDED_READY: '🧠 Chroma initialized for persistent semantic memory',
ALREADY_INSTALLED: '⚠️ Claude Memory hooks are already installed.',
USE_FORCE: ' Use --force to overwrite existing installation.',
SETTINGS_WRITTEN: (type: string, path: string) =>
`✅ Installed hooks in ${type} settings\n Settings file: ${path}`
},
NEXT_STEPS: [
'1. Restart Claude Code to load the new hooks',
'2. Use `/clear` and `/compact` in Claude Code to save and compress session memories',
'3. New sessions will automatically load relevant context'
],
ERRORS: {
HOOKS_NOT_FOUND: '❌ Hook source files not found',
SETTINGS_WRITE_FAILED: (path: string, error: string) =>
`❌ Failed to write settings file: ${error}\n Path: ${path}`,
MCP_CONFIG_PARSE_FAILED: (error: string) =>
`⚠️ Warning: Could not parse existing MCP config: ${error}`,
MCP_CONFIG_WRITE_FAILED: (error: string) =>
`⚠️ Warning: Could not write MCP config: ${error}`,
COMPRESSION_FAILED: (error: string) => `❌ Compression failed: ${error}`,
CONTEXT_LOAD_FAILED: (error: string) => `❌ Failed to load context: ${error}`
},
STATUS: {
NO_INDEX: '📚 No memory index found. Starting fresh session.',
RECENT_MEMORIES: '🧠 Recent memories from previous sessions:',
MEMORY_COUNT: (count: number) => `📚 Showing ${count} most recent memories`,
FULL_CONTEXT_AVAILABLE: '💡 Full context available via MCP memory tools'
}
} as const;
// =============================================================================
// DEBUG AND LOGGING TEMPLATES
// =============================================================================
/**
* Debug logging message templates
*/
export const DEBUG_MESSAGES = {
COMPRESSION_STARTED: '🚀 COMPRESSION STARTED',
TRANSCRIPT_PATH: (path: string) => `📁 Transcript Path: ${path}`,
SESSION_ID: (id: string) => `🔍 Session ID: ${id}`,
PROJECT_NAME: (name: string) => `📝 PROJECT NAME: ${name}`,
CLAUDE_SDK_CALL: '🤖 Calling Claude SDK to analyze and populate memory database...',
TRANSCRIPT_STATS: (size: number, count: number) =>
`📊 Transcript size: ${size} characters, ${count} messages`,
COMPRESSION_COMPLETE: (count: number) => `✅ COMPRESSION COMPLETE\n Total summaries extracted: ${count}`,
CLAUDE_PATH_FOUND: (path: string) => `🎯 Found Claude Code at: ${path}`,
MCP_CONFIG_USED: (path: string) => `📋 Using MCP config: ${path}`
} as const;
// =============================================================================
// SEARCH AND QUERY TEMPLATES
// =============================================================================
/**
* Memory database search templates
*/
export const SEARCH_TEMPLATES = {
SEARCH_SCRIPT: (query: string) => `
import { query } from "@anthropic-ai/claude-code";
const searchQuery = process.env.SEARCH_QUERY || '';
const result = await query({
prompt: "Search for: " + searchQuery,
options: {
mcpConfig: "~/.claude/.mcp.json",
allowedTools: ["mcp__claude-mem__chroma_query_documents"],
outputFormat: "json"
}
});
`,
SEARCH_PREFIX: "Search for: "
} as const;
// =============================================================================
// CHROMA INTEGRATION CONSTANTS
// =============================================================================
/**
* Chroma collection names for documents
*/
export const CHROMA_COLLECTIONS = {
DOCUMENTS: 'claude_mem_documents',
MEMORIES: 'claude_mem_memories'
} as const;
/**
* Default Chroma configuration values
*/
export const CHROMA_DEFAULTS = {
HOST: 'localhost:8000',
COLLECTION: 'claude_mem_documents'
} as const;
/**
* Chroma-specific CLI messages
*/
export const CHROMA_MESSAGES = {
CONNECTION: {
CONNECTING: '🔗 Connecting to Chroma server...',
CONNECTED: '✅ Connected to Chroma successfully',
FAILED: (error: string) => `❌ Failed to connect to Chroma: ${error}`,
DISCONNECTED: '👋 Disconnected from Chroma'
},
SEARCH: {
SEMANTIC_SEARCH: '🧠 Using semantic search with Chroma...',
KEYWORD_SEARCH: '🔍 Using keyword search with Chroma...',
HYBRID_SEARCH: '🔬 Using hybrid search with Chroma...',
RESULTS_FOUND: (count: number) => `📊 Found ${count} results in Chroma`
},
SETUP: {
STARTING_CHROMA: '🚀 Starting Chroma instance...',
CHROMA_READY: '✅ Chroma is ready and accepting connections',
INITIALIZING_COLLECTIONS: '📋 Initializing document collections...'
}
} as const;
/**
* Chroma error messages
*/
export const CHROMA_ERRORS = {
CONNECTION_FAILED: 'Could not establish connection to Chroma server',
MCP_SERVER_NOT_FOUND: 'Chroma MCP server not found',
INVALID_COLLECTION: (collection: string) => `Invalid Chroma collection: ${collection}`,
QUERY_FAILED: (query: string, error: string) => `Query failed for '${query}': ${error}`,
DOCUMENT_CREATION_FAILED: (id: string) => `Failed to create document '${id}' in Chroma`,
COLLECTION_CREATION_FAILED: (name: string) => `Failed to create collection '${name}' in Chroma`
} as const;
/**
* Export all core constants for easy importing
*/
export const CONSTANTS = {
HOOK_CONFIG_TEMPLATES,
CLI_MESSAGES,
DEBUG_MESSAGES,
SEARCH_TEMPLATES,
// Chroma constants
CHROMA_COLLECTIONS,
CHROMA_DEFAULTS,
CHROMA_MESSAGES,
CHROMA_ERRORS
} as const;
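Each hook template above expands to one matcher entry for Claude settings; a standalone sketch of the PRE_COMPACT shape (the script path is hypothetical):

```typescript
// Standalone copy of the PRE_COMPACT template for illustration
const PRE_COMPACT = (scriptPath: string) => ({
  pattern: '*',
  hooks: [{ type: 'command', command: scriptPath, timeout: 180 }]
});
const entry = PRE_COMPACT('/home/user/.claude-mem/hooks/pre-compact.js');
console.log(entry.hooks[0].command);
```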
@@ -0,0 +1,238 @@
/**
* ChunkManager - Handles intelligent chunking of large transcripts
*
* This class manages the splitting of large filtered transcripts into chunks
* that fit within Claude's 32k token limit while preserving conversation context
* and maintaining message integrity.
*/
export interface ChunkMetadata {
chunkNumber: number;
totalChunks: number;
startIndex: number;
endIndex: number;
messageCount: number;
estimatedTokens: number;
sizeBytes: number;
hasOverlap: boolean;
overlapMessages?: number;
firstTimestamp?: string;
lastTimestamp?: string;
}
export interface ChunkingOptions {
maxTokensPerChunk?: number; // default: 28000 (leaving 4k buffer)
maxBytesPerChunk?: number; // default: 98000 (98KB)
preserveContext?: boolean; // keep context overlap between chunks
contextOverlap?: number; // messages to repeat (default: 2)
parallel?: boolean; // process chunks in parallel
}
export interface ChunkedMessage {
content: string;
estimatedTokens: number;
}
export class ChunkManager {
private static readonly DEFAULT_MAX_TOKENS = 28000;
private static readonly DEFAULT_MAX_BYTES = 98000;
private static readonly DEFAULT_CONTEXT_OVERLAP = 2;
private static readonly CHARS_PER_TOKEN_ESTIMATE = 3.5;
private options: Required<ChunkingOptions>;
constructor(options: ChunkingOptions = {}) {
this.options = {
maxTokensPerChunk: options.maxTokensPerChunk ?? ChunkManager.DEFAULT_MAX_TOKENS,
maxBytesPerChunk: options.maxBytesPerChunk ?? ChunkManager.DEFAULT_MAX_BYTES,
preserveContext: options.preserveContext ?? true,
contextOverlap: options.contextOverlap ?? ChunkManager.DEFAULT_CONTEXT_OVERLAP,
parallel: options.parallel ?? false
};
}
/**
* Estimates token count for a given text
* Uses rough approximation of 3.5 characters per token
*/
public estimateTokenCount(text: string): number {
return Math.ceil(text.length / ChunkManager.CHARS_PER_TOKEN_ESTIMATE);
}
/**
* Parses the filtered output format into structured messages
* Format: "- content"
*/
public parseFilteredOutput(filteredContent: string): ChunkedMessage[] {
const lines = filteredContent.split('\n').filter(line => line.trim());
const messages: ChunkedMessage[] = [];
for (const line of lines) {
// Parse format: "- content"
if (line.startsWith('- ')) {
const content = line.substring(2); // Remove "- " prefix
messages.push({
content,
estimatedTokens: this.estimateTokenCount(content)
});
}
}
return messages;
}
/**
* Chunks the filtered transcript into manageable pieces
*/
public chunkTranscript(filteredContent: string): Array<{ content: string; metadata: ChunkMetadata }> {
const messages = this.parseFilteredOutput(filteredContent);
const chunks: Array<{ content: string; metadata: ChunkMetadata }> = [];
let currentChunk: ChunkedMessage[] = [];
let currentTokens = 0;
let currentBytes = 0;
let chunkStartIndex = 0;
for (let i = 0; i < messages.length; i++) {
const message = messages[i];
const messageText = this.formatMessage(message);
const messageBytes = Buffer.byteLength(messageText, 'utf8');
const messageTokens = message.estimatedTokens;
// Check if adding this message would exceed limits
if (currentChunk.length > 0 &&
(currentTokens + messageTokens > this.options.maxTokensPerChunk ||
currentBytes + messageBytes > this.options.maxBytesPerChunk)) {
// Save current chunk
const chunkContent = this.formatChunk(currentChunk);
chunks.push({
content: chunkContent,
metadata: {
chunkNumber: chunks.length + 1,
totalChunks: 0, // Will be updated after all chunks are created
startIndex: chunkStartIndex,
endIndex: i - 1,
messageCount: currentChunk.length,
estimatedTokens: currentTokens,
sizeBytes: currentBytes,
hasOverlap: this.options.preserveContext && chunks.length > 0
}
});
// Start new chunk with optional context overlap
currentChunk = [];
currentTokens = 0;
currentBytes = 0;
chunkStartIndex = i;
// Add overlap messages from previous chunk if enabled
if (this.options.preserveContext && chunks.length > 0) {
const overlapStart = Math.max(0, i - this.options.contextOverlap);
for (let j = overlapStart; j < i; j++) {
const overlapMessage = messages[j];
const overlapText = this.formatMessage(overlapMessage);
currentChunk.push(overlapMessage);
currentTokens += overlapMessage.estimatedTokens;
currentBytes += Buffer.byteLength(overlapText, 'utf8');
}
if (currentChunk.length > 0) {
// Mark that this chunk has overlap
chunkStartIndex = overlapStart;
}
}
}
// Add message to current chunk
currentChunk.push(message);
currentTokens += messageTokens;
currentBytes += messageBytes;
}
// Save final chunk if it has content
if (currentChunk.length > 0) {
const chunkContent = this.formatChunk(currentChunk);
chunks.push({
content: chunkContent,
metadata: {
chunkNumber: chunks.length + 1,
totalChunks: 0,
startIndex: chunkStartIndex,
endIndex: messages.length - 1,
messageCount: currentChunk.length,
estimatedTokens: currentTokens,
sizeBytes: currentBytes,
hasOverlap: this.options.preserveContext && chunks.length > 0
}
});
}
// Update total chunks count in metadata
chunks.forEach(chunk => {
chunk.metadata.totalChunks = chunks.length;
});
return chunks;
}
/**
* Formats a single message back to the filtered output format
*/
private formatMessage(message: ChunkedMessage): string {
return `- ${message.content}`;
}
/**
* Formats a chunk of messages
*/
private formatChunk(messages: ChunkedMessage[]): string {
return messages.map(m => this.formatMessage(m)).join('\n');
}
/**
* Creates a header for a chunk file with metadata
*/
public createChunkHeader(metadata: ChunkMetadata): string {
const lines = [];
// Add timestamp range if available, otherwise chunk number
if (metadata.firstTimestamp && metadata.lastTimestamp) {
lines.push(`# ${metadata.firstTimestamp} to ${metadata.lastTimestamp} (chunk ${metadata.chunkNumber}/${metadata.totalChunks})`);
} else {
lines.push(`# Chunk ${metadata.chunkNumber} of ${metadata.totalChunks}`);
}
return lines.join('\n') + '\n';
}
/**
* Checks if content needs chunking based on size
*/
public needsChunking(content: string): boolean {
const estimatedTokens = this.estimateTokenCount(content);
const sizeBytes = Buffer.byteLength(content, 'utf8');
return estimatedTokens > this.options.maxTokensPerChunk ||
sizeBytes > this.options.maxBytesPerChunk;
}
/**
* Gets chunking statistics for logging
*/
public getChunkingStats(chunks: Array<{ metadata: ChunkMetadata }>): string {
const totalMessages = chunks.reduce((sum, c) => sum + c.metadata.messageCount, 0);
const totalTokens = chunks.reduce((sum, c) => sum + c.metadata.estimatedTokens, 0);
const totalBytes = chunks.reduce((sum, c) => sum + c.metadata.sizeBytes, 0);
return [
`📊 Chunking Statistics:`,
` • Total chunks: ${chunks.length}`,
` • Total messages: ${totalMessages}`,
` • Total estimated tokens: ${totalTokens.toLocaleString()}`,
` • Total size: ${(totalBytes / 1024).toFixed(1)} KB`,
` • Average tokens per chunk: ${Math.round(totalTokens / chunks.length).toLocaleString()}`,
` • Average size per chunk: ${(totalBytes / chunks.length / 1024).toFixed(1)} KB`
].join('\n');
}
}
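The size heuristics above (roughly 3.5 characters per token, with token and byte budgets) can be exercised in isolation. This is a minimal sketch: the constants are mirrored from the class defaults rather than imported, and `estimateTokens`/`needsChunking` are standalone stand-ins for the corresponding methods.

```typescript
// Mirror of ChunkManager's default limits (assumed from the constants above).
const CHARS_PER_TOKEN = 3.5;
const MAX_TOKENS_PER_CHUNK = 28_000;
const MAX_BYTES_PER_CHUNK = 98_000;

// Rough token estimate: ceil(characters / 3.5).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// A transcript needs chunking when either budget is exceeded.
function needsChunking(content: string): boolean {
  return (
    estimateTokens(content) > MAX_TOKENS_PER_CHUNK ||
    Buffer.byteLength(content, 'utf8') > MAX_BYTES_PER_CHUNK
  );
}

const transcript = 'x'.repeat(100_000);
console.log(estimateTokens(transcript)); // 28572
console.log(needsChunking(transcript)); // true
```

A 100,000-character ASCII transcript estimates to 28,572 tokens and exceeds both budgets, so `chunkTranscript` would split it into multiple chunks.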
@@ -0,0 +1,366 @@
/**
* PromptOrchestrator - Single source of truth for all prompt generation
*
* This class serves as the central orchestrator for generating different types of prompts
* used throughout the claude-mem system. It provides clear, well-typed interfaces and
* methods for creating prompts for LLM analysis, human context, and system integration.
*/
import { createAnalysisPrompt } from '../../prompts/templates/analysis/AnalysisTemplates.js';
// =============================================================================
// CORE INTERFACES
// =============================================================================
/**
* Context data for LLM analysis prompts
*/
export interface AnalysisContext {
/** The transcript content to analyze */
transcriptContent: string;
/** Session identifier */
sessionId: string;
/** Project name for context */
projectName?: string;
/** Custom analysis instructions */
customInstructions?: string;
/** Compression trigger type */
trigger?: 'manual' | 'auto';
/** Original token count */
originalTokens?: number;
/** Target compression ratio */
targetCompressionRatio?: number;
}
/**
* Context data for human-facing session prompts
*/
export interface SessionContext {
/** Session identifier */
sessionId: string;
/** Source of the session start */
source: 'startup' | 'compact' | 'vscode' | 'web';
/** Project name */
projectName?: string;
/** Additional context to provide to the human */
additionalContext?: string;
/** Path to the transcript file */
transcriptPath?: string;
/** Working directory */
cwd?: string;
}
/**
* Context data for hook response generation
*/
export interface HookContext {
/** The hook event name */
hookEventName: string;
/** Session identifier */
sessionId: string;
/** Success status */
success: boolean;
/** Optional message */
message?: string;
/** Additional data specific to the hook */
data?: Record<string, unknown>;
/** Whether to continue processing */
shouldContinue?: boolean;
/** Reason for stopping if applicable */
stopReason?: string;
}
/**
* Generated analysis prompt for LLM consumption
*/
export interface AnalysisPrompt {
/** The formatted prompt text */
prompt: string;
/** Context used to generate the prompt */
context: AnalysisContext;
/** Prompt type identifier */
type: 'analysis';
/** Generated timestamp */
timestamp: string;
}
/**
* Generated session prompt for human context
*/
export interface SessionPrompt {
/** The formatted message text */
message: string;
/** Context used to generate the prompt */
context: SessionContext;
/** Prompt type identifier */
type: 'session';
/** Generated timestamp */
timestamp: string;
}
/**
* Generated hook response
*/
export interface HookResponse {
/** Whether to continue processing */
continue: boolean;
/** Reason for stopping if continue is false */
stopReason?: string;
/** Whether to suppress output */
suppressOutput?: boolean;
/** Hook-specific output data */
hookSpecificOutput?: Record<string, unknown>;
/** Context used to generate the response */
context: HookContext;
/** Response type identifier */
type: 'hook';
/** Generated timestamp */
timestamp: string;
}
// =============================================================================
// PROMPT ORCHESTRATOR CLASS
// =============================================================================
/**
* Central orchestrator for all prompt generation in the claude-mem system
*/
export class PromptOrchestrator {
private projectName: string;
constructor(projectName = 'claude-mem') {
this.projectName = projectName;
}
/**
* Creates an analysis prompt for LLM processing of transcript content
*/
public createAnalysisPrompt(context: AnalysisContext): AnalysisPrompt {
const timestamp = new Date().toISOString();
const prompt = this.buildAnalysisPrompt(context);
return {
prompt,
context,
type: 'analysis',
timestamp,
};
}
/**
* Creates a session start prompt for human context
*/
public createSessionStartPrompt(context: SessionContext): SessionPrompt {
const timestamp = new Date().toISOString();
const message = this.buildSessionStartMessage(context);
return {
message,
context,
type: 'session',
timestamp,
};
}
/**
* Creates a hook response for system integration
*/
public createHookResponse(context: HookContext): HookResponse {
const timestamp = new Date().toISOString();
const response = this.buildHookResponse(context);
return {
...response,
context,
type: 'hook',
timestamp,
};
}
// =============================================================================
// PRIVATE PROMPT BUILDERS
// =============================================================================
private buildAnalysisPrompt(context: AnalysisContext): string {
const {
transcriptContent,
sessionId,
projectName = this.projectName,
} = context;
// Extract project prefix from project name (convert to snake_case)
const projectPrefix = projectName.replace(/[-\s]/g, '_').toLowerCase();
// Use the simple prompt with the transcript included
return createAnalysisPrompt(
transcriptContent,
sessionId,
projectPrefix
);
}
private buildSessionStartMessage(context: SessionContext): string {
const {
sessionId,
source,
projectName = this.projectName,
additionalContext,
transcriptPath,
cwd,
} = context;
let message = `## Session Started (${source})
**Project**: ${projectName}
**Session ID**: ${sessionId} `;
if (transcriptPath) {
message += `**Transcript**: ${transcriptPath} `;
}
if (cwd) {
message += `**Working Directory**: ${cwd} `;
}
if (additionalContext) {
message += `\n### Additional Context\n${additionalContext}`;
}
message += `\n\nMemory system is active and ready to preserve context across sessions.`;
return message;
}
private buildHookResponse(context: HookContext): Omit<HookResponse, 'context' | 'type' | 'timestamp'> {
const {
hookEventName,
success,
message,
data,
shouldContinue = success,
stopReason,
} = context;
const response: Omit<HookResponse, 'context' | 'type' | 'timestamp'> = {
continue: shouldContinue,
suppressOutput: false,
};
if (!shouldContinue && stopReason) {
response.stopReason = stopReason;
}
// Add hook-specific output based on event type
if (hookEventName === 'SessionStart') {
response.hookSpecificOutput = {
hookEventName: 'SessionStart',
additionalContext: message,
...data,
};
} else if (data) {
response.hookSpecificOutput = data;
}
return response;
}
// =============================================================================
// UTILITY METHODS
// =============================================================================
/**
* Validates that an AnalysisContext has required fields
*/
public validateAnalysisContext(context: Partial<AnalysisContext>): context is AnalysisContext {
return !!(context.transcriptContent && context.sessionId);
}
/**
* Validates that a SessionContext has required fields
*/
public validateSessionContext(context: Partial<SessionContext>): context is SessionContext {
return !!(context.sessionId && context.source);
}
/**
* Validates that a HookContext has required fields
*/
public validateHookContext(context: Partial<HookContext>): context is HookContext {
return !!(context.hookEventName && context.sessionId && typeof context.success === 'boolean');
}
/**
* Gets the project name for this orchestrator instance
*/
public getProjectName(): string {
return this.projectName;
}
/**
* Sets a new project name for this orchestrator instance
*/
public setProjectName(projectName: string): void {
this.projectName = projectName;
}
}
// =============================================================================
// FACTORY FUNCTIONS
// =============================================================================
/**
* Creates a new PromptOrchestrator instance
*/
export function createPromptOrchestrator(projectName?: string): PromptOrchestrator {
return new PromptOrchestrator(projectName);
}
/**
* Creates an analysis context from basic parameters
*/
export function createAnalysisContext(
transcriptContent: string,
sessionId: string,
options: Partial<Omit<AnalysisContext, 'transcriptContent' | 'sessionId'>> = {}
): AnalysisContext {
return {
transcriptContent,
sessionId,
...options,
};
}
/**
* Creates a session context from basic parameters
*/
export function createSessionContext(
sessionId: string,
source: SessionContext['source'],
options: Partial<Omit<SessionContext, 'sessionId' | 'source'>> = {}
): SessionContext {
return {
sessionId,
source,
...options,
};
}
/**
* Creates a hook context from basic parameters
*/
export function createHookContext(
hookEventName: string,
sessionId: string,
success: boolean,
options: Partial<Omit<HookContext, 'hookEventName' | 'sessionId' | 'success'>> = {}
): HookContext {
return {
hookEventName,
sessionId,
success,
...options,
};
}
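The factory functions pair naturally with the orchestrator's validation methods: build a context from the required fields plus optional extras, then gate on the type-guard before use. This condensed sketch mirrors `createSessionContext` and `validateSessionContext` locally (types copied from the interfaces above, not imported) to show the spread-merge and narrowing behavior:

```typescript
// Local mirror of SessionContext (assumed from the interface above).
interface SessionContext {
  sessionId: string;
  source: 'startup' | 'compact' | 'vscode' | 'web';
  projectName?: string;
}

// Required fields are positional; everything else merges in via spread.
function createSessionContext(
  sessionId: string,
  source: SessionContext['source'],
  options: Partial<Omit<SessionContext, 'sessionId' | 'source'>> = {}
): SessionContext {
  return { sessionId, source, ...options };
}

// Type guard: narrows a Partial to the full context when required fields exist.
function validateSessionContext(ctx: Partial<SessionContext>): ctx is SessionContext {
  return !!(ctx.sessionId && ctx.source);
}

const ctx = createSessionContext('session123', 'startup', { projectName: 'claude-mem' });
console.log(validateSessionContext(ctx)); // true
console.log(validateSessionContext({ sessionId: 'session123' })); // false (no source)
```

After a successful `validateSessionContext(ctx)` check, TypeScript treats `ctx` as a full `SessionContext`, so `createSessionStartPrompt` can be called without further casts.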
@@ -0,0 +1,128 @@
import { query } from '@anthropic-ai/claude-code';
import fs from 'fs';
import path from 'path';
import os from 'os';
import { getClaudePath } from '../../shared/settings.js';
export interface TitleGenerationRequest {
sessionId: string;
projectName: string;
firstMessage: string;
}
export interface GeneratedTitle {
session_id: string;
generated_title: string;
timestamp: string;
project_name: string;
}
export class TitleGenerator {
private titlesIndexPath: string;
constructor() {
this.titlesIndexPath = path.join(os.homedir(), '.claude-mem', 'conversation-titles.jsonl');
this.ensureTitlesIndex();
}
private ensureTitlesIndex(): void {
const dir = path.dirname(this.titlesIndexPath);
if (!fs.existsSync(dir)) {
fs.mkdirSync(dir, { recursive: true });
}
if (!fs.existsSync(this.titlesIndexPath)) {
fs.writeFileSync(this.titlesIndexPath, '', 'utf-8');
}
}
async generateTitle(firstMessage: string): Promise<string> {
const prompt = `Generate a 3-7 word descriptive title for this conversation based on the first message.
The title should:
- Capture the main topic or intent
- Be concise and descriptive
- Use proper capitalization
- Not include "Help with" or "Question about" prefixes
First message: "${firstMessage.substring(0, 500)}"
Respond with just the title, nothing else.`;
const response = await query({
prompt,
options: {
model: 'claude-3-5-haiku-20241022',
pathToClaudeCodeExecutable: getClaudePath(),
},
});
let title = '';
if (response && typeof response === 'object' && Symbol.asyncIterator in response) {
for await (const message of response) {
if (message?.content) title += message.content;
if (message?.text) title += message.text;
}
} else if (typeof response === 'string') {
title = response;
}
return title.trim().replace(/^["']|["']$/g, '');
}
async batchGenerateTitles(requests: TitleGenerationRequest[]): Promise<GeneratedTitle[]> {
const results: GeneratedTitle[] = [];
for (const request of requests) {
try {
const title = await this.generateTitle(request.firstMessage);
const generatedTitle: GeneratedTitle = {
session_id: request.sessionId,
generated_title: title,
timestamp: new Date().toISOString(),
project_name: request.projectName
};
results.push(generatedTitle);
this.storeTitleInIndex(generatedTitle);
} catch (error) {
console.error(`Failed to generate title for ${request.sessionId}:`, error);
}
}
return results;
}
private storeTitleInIndex(title: GeneratedTitle): void {
const line = JSON.stringify(title) + '\n';
fs.appendFileSync(this.titlesIndexPath, line, 'utf-8');
}
getExistingTitles(): Map<string, GeneratedTitle> {
const titles = new Map<string, GeneratedTitle>();
if (!fs.existsSync(this.titlesIndexPath)) {
return titles;
}
const content = fs.readFileSync(this.titlesIndexPath, 'utf-8');
const lines = content.trim().split('\n').filter(Boolean);
for (const line of lines) {
try {
const title = JSON.parse(line) as GeneratedTitle;
titles.set(title.session_id, title);
} catch (error) {
// Skip invalid lines
}
}
return titles;
}
getTitleForSession(sessionId: string): string | null {
const titles = this.getExistingTitles();
const title = titles.get(sessionId);
return title ? title.generated_title : null;
}
}
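The titles index is append-only JSONL: one `GeneratedTitle` object per line, with invalid lines silently skipped and later entries for the same `session_id` winning. A self-contained sketch of that round-trip (the interface is mirrored from above, and the sample data is illustrative only):

```typescript
// Local mirror of GeneratedTitle (assumed from the interface above).
interface GeneratedTitle {
  session_id: string;
  generated_title: string;
  timestamp: string;
  project_name: string;
}

// Parse JSONL content into a session_id -> title map, skipping bad lines.
function parseTitlesIndex(content: string): Map<string, GeneratedTitle> {
  const titles = new Map<string, GeneratedTitle>();
  for (const line of content.trim().split('\n').filter(Boolean)) {
    try {
      const title = JSON.parse(line) as GeneratedTitle;
      titles.set(title.session_id, title);
    } catch {
      // Skip invalid lines, matching getExistingTitles above
    }
  }
  return titles;
}

const jsonl = [
  JSON.stringify({
    session_id: 'a1',
    generated_title: 'Fix Chroma Timestamp Filtering',
    timestamp: '2025-09-11T00:00:00Z',
    project_name: 'claude-mem',
  }),
  'not valid json',
].join('\n');

console.log(parseTitlesIndex(jsonl).get('a1')?.generated_title); // Fix Chroma Timestamp Filtering
console.log(parseTitlesIndex(jsonl).size); // 1
```

Because the map is keyed by `session_id`, re-running `batchGenerateTitles` and appending a fresh entry effectively supersedes the old one on the next read.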
@@ -0,0 +1,64 @@
/**
* Time utilities for formatting relative timestamps
*/
export function formatRelativeTime(timestamp: string | Date): string {
try {
const date = timestamp instanceof Date ? timestamp : new Date(timestamp);
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffSeconds = Math.floor(diffMs / 1000);
const diffMinutes = Math.floor(diffSeconds / 60);
const diffHours = Math.floor(diffMinutes / 60);
const diffDays = Math.floor(diffHours / 24);
const diffWeeks = Math.floor(diffDays / 7);
const diffMonths = Math.floor(diffDays / 30);
if (diffSeconds < 60) {
return 'Just now';
} else if (diffMinutes < 60) {
return diffMinutes === 1 ? '1 minute ago' : `${diffMinutes} minutes ago`;
} else if (diffHours < 24) {
return diffHours === 1 ? '1 hour ago' : `${diffHours} hours ago`;
} else if (diffDays === 1) {
return 'Yesterday';
} else if (diffDays < 7) {
return `${diffDays} days ago`;
} else if (diffWeeks === 1) {
return '1 week ago';
} else if (diffWeeks < 4) {
return `${diffWeeks} weeks ago`;
} else if (diffMonths === 1) {
return '1 month ago';
} else if (diffMonths < 12) {
return `${diffMonths} months ago`;
} else {
const diffYears = Math.floor(diffMonths / 12);
return diffYears === 1 ? '1 year ago' : `${diffYears} years ago`;
}
} catch (error) {
// Return a fallback for invalid timestamps
return 'Recently';
}
}
export function parseTimestamp(entry: any): Date | null {
// Try multiple timestamp fields that might exist
const possibleFields = ['timestamp', 'created_at', 'date', 'time'];
for (const field of possibleFields) {
if (entry[field]) {
try {
const date = new Date(entry[field]);
if (!isNaN(date.getTime())) {
return date;
}
} catch {
continue;
}
}
}
// If no valid timestamp found, return null
return null;
}
@@ -0,0 +1,191 @@
/**
* Claude Memory System - Prompt-Related Constants and Templates
*
* This file contains all prompts, instructions, and output templates
* for the analysis and context priming system.
*/
import * as HookTemplates from './templates/hooks/HookTemplates.js';
// =============================================================================
// ANALYSIS PROMPTS AND TEMPLATES
// =============================================================================
/**
* Entity naming patterns for the knowledge graph
*/
export const ENTITY_NAMING_PATTERNS = {
component: "Component_Name",
decision: "Decision_Name",
pattern: "Pattern_Name",
tool: "Tool_Name",
fix: "Fix_Name",
workflow: "Workflow_Name"
} as const;
/**
* Available entity types for classification
*/
export const ENTITY_TYPES = {
component: "component", // UI components, modules, services
pattern: "pattern", // Architectural or design patterns
workflow: "workflow", // Processes, pipelines, sequences
integration: "integration", // APIs, external services, data sources
concept: "concept", // Abstract ideas, methodologies, principles
decision: "decision", // Design choices, trade-offs, solutions
tool: "tool", // Utilities, libraries, development tools
fix: "fix" // Bug fixes, patches, workarounds
} as const;
/**
* Standard observation fields for entities
*/
export const OBSERVATION_FIELDS = [
"Core purpose: [what it fundamentally does]",
"Brief description: [one-line summary for session-start display]",
"Implementation: [key technical details, code patterns]",
"Dependencies: [what it requires or builds upon]",
"Usage context: [when/why it's used]",
"Performance characteristics: [speed, reliability, constraints]",
"Integration points: [how it connects to other systems]",
"Keywords: [searchable terms for this concept]",
"Decision rationale: [why this approach was chosen]",
"Next steps: [what needs to be done next with this component]",
"Files modified: [list of files changed]",
"Tools used: [development tools/commands used]"
] as const;
/**
* Relationship types for creating meaningful entity connections
*/
export const RELATIONSHIP_TYPES = [
"executes_via", "orchestrates_through", "validates_using",
"provides_auth_to", "manages_state_for", "processes_events_from",
"caches_data_from", "routes_requests_to", "transforms_data_for",
"extends", "enhances_performance_of", "builds_upon",
"fixes_issue_in", "replaces", "optimizes",
"triggers_tool", "receives_result_from"
] as const;
// =============================================================================
// CONTEXT PRIMING TEMPLATES
// =============================================================================
/**
* System message templates for context priming
*/
export const CONTEXT_TEMPLATES = {
PRIMARY_CONTEXT: (projectName: string) =>
`Context primed for project: ${projectName}. Access memories with chroma_query_documents(["${projectName}*"]) or chroma_get_documents(["document_id"]).`,
RECENT_SESSIONS: (sessionList: string) =>
`Recent sessions available: ${sessionList}`,
AVAILABLE_ENTITIES: (type: string, entities: string[], hasMore: boolean, moreCount: number) =>
`Available ${type} entities: ${entities.join(', ')}${hasMore ? ` (+${moreCount} more)` : ''}`,
SESSION_START_HEADER: '🧠 Active Working Context from Previous Sessions:',
SESSION_START_SEPARATOR: '═'.repeat(70),
RESUME_INSTRUCTIONS: `💡 TO RESUME: Load active components with chroma_get_documents(["<exact_document_ids>"])
📊 TO EXPLORE: Search related work with chroma_query_documents(["<keywords>"])`
} as const;
// =============================================================================
// SESSION START OUTPUT TEMPLATES
// =============================================================================
/**
* Session start formatting templates
*/
export const SESSION_START_TEMPLATES = {
FOCUS_LINE: (focus: string) => `📌 CURRENT FOCUS: ${focus}`,
LAST_WORKED: (timeAgo: string, projectName: string) => `Last worked: ${timeAgo} | Project: ${projectName}`,
SECTIONS: {
COMPONENTS: '🎯 ACTIVE COMPONENTS (load these for context):',
DECISIONS: '🔄 RECENT DECISIONS & PATTERNS:',
TOOLS: '🛠️ TOOLS & INFRASTRUCTURE:',
FIXES: '🐛 RECENT FIXES:',
ACTIONS: '⚡ NEXT ACTIONS:'
},
ACTION_PREFIX: '□ ',
ENTITY_BULLET: '• '
} as const;
/**
* Time formatting for "time ago" displays
*/
export const TIME_FORMATS = {
JUST_NOW: 'just now',
HOURS_AGO: (hours: number) => `${hours} hour${hours > 1 ? 's' : ''} ago`,
DAYS_AGO: (days: number) => `${days} day${days > 1 ? 's' : ''} ago`,
RECENTLY: 'recently'
} as const;
// =============================================================================
// HOOK RESPONSE TEMPLATES
// =============================================================================
/**
* Standard hook response structures for Claude Code integration
*/
export const HOOK_RESPONSES = {
SUCCESS: (hookEventName: string, message: string) => ({
hookSpecificOutput: {
hookEventName,
status: "success",
message
},
suppressOutput: true
}),
SKIPPED: (hookEventName: string, message: string) => ({
hookSpecificOutput: {
hookEventName,
status: "skipped",
message
},
suppressOutput: true
}),
BLOCKED: (reason: string) => ({
decision: "block",
reason
}),
CONTINUE: (hookEventName: string, additionalContext?: string) => ({
continue: true,
...(additionalContext && {
hookSpecificOutput: {
hookEventName,
additionalContext
}
})
}),
ERROR: (reason: string) => ({
decision: "block",
reason
})
} as const;
/**
* Pre-defined hook messages
*/
export const HOOK_MESSAGES = {
COMPRESSION_SUCCESS: "Memory compression completed successfully",
COMPRESSION_FAILED: (stderr: string) => `Compression failed: ${stderr}`,
CONTEXT_LOADED: "Project context loaded successfully",
CONTEXT_SKIPPED: "Continuing session - context loading skipped",
NO_TRANSCRIPT: "No transcript path provided",
HOOK_ERROR: (error: string) => `Hook error: ${error}`
} as const;
/**
* Export hook templates for direct usage
*/
export { HookTemplates };
@@ -0,0 +1,30 @@
/**
* Prompts Module - Single source of truth for all prompt generation
*
* This module provides a centralized system for generating prompts across
* the claude-mem system. It includes the core PromptOrchestrator class
* and all related TypeScript interfaces.
*/
// Export all interfaces
export type {
AnalysisContext,
SessionContext,
HookContext,
AnalysisPrompt,
SessionPrompt,
HookResponse,
} from '../core/orchestration/PromptOrchestrator.js';
// Export the main class
export {
PromptOrchestrator,
} from '../core/orchestration/PromptOrchestrator.js';
// Export factory functions
export {
createPromptOrchestrator,
createAnalysisContext,
createSessionContext,
createHookContext,
} from '../core/orchestration/PromptOrchestrator.js';
@@ -0,0 +1,190 @@
# Claude Memory Templates
This directory contains modular templates for the Claude Memory System, including LLM analysis prompts and system integration responses.
## Files
### AnalysisTemplates.ts
The main template system for LLM analysis prompts. Contains clean, separated template functions for:
- **Entity extraction instructions** - Guidelines for identifying and categorizing technical entities
- **Relationship mapping instructions** - Rules for creating meaningful connections between entities
- **Output format specifications** - Exact format requirements for pipe-separated summaries
- **Example outputs** - Sample outputs to guide the LLM
- **MCP tool usage instructions** - Step-by-step MCP tool usage workflow
- **Dynamic content injection helpers** - Functions for injecting project/session context
### HookTemplates.ts
System integration templates for Claude Code hook responses. Provides standardized templates for:
- **Pre-compact hook responses** - Approve/block compression operations with proper formatting
- **Session-start hook responses** - Load and format context with rich memory information
- **Pre-tool use hook responses** - Security policies and permission controls
- **Error handling templates** - User-friendly error messages with troubleshooting guidance
- **Progress indicators** - Status updates for long-running operations
- **Response validation** - Ensures compliance with Claude Code hook specifications
### ContextTemplates.ts
Human-readable formatting templates for user-facing messages during memory operations.
### Legacy Templates
- `analysis-template.txt` - Legacy mustache-style template (deprecated)
- `session-start-template.txt` - Legacy mustache-style template (deprecated)
## Architecture
The new template system follows these principles:
1. **Pure Functions** - Each template function takes context and returns formatted strings
2. **Modular Design** - Complex prompts are broken into focused, reusable components
3. **Type Safety** - Full TypeScript support with proper interfaces
4. **Context Injection** - Dynamic content injection through helper functions
5. **Composable Templates** - Build complex prompts by combining template sections
## Usage
### Hook Templates Usage
```typescript
import {
createPreCompactSuccessResponse,
createSessionStartMemoryResponse,
createPreToolUseAllowResponse,
validateHookResponse
} from './HookTemplates.js';
// Pre-compact hook: approve compression
const preCompactResponse = createPreCompactSuccessResponse();
console.log(JSON.stringify(preCompactResponse));
// Output: {"continue": true, "suppressOutput": true}
// Session start hook: load context with memories
const sessionResponse = createSessionStartMemoryResponse({
projectName: 'claude-mem',
memoryCount: 15,
lastSessionTime: '2 hours ago',
recentComponents: ['HookTemplates', 'PromptOrchestrator'],
recentDecisions: ['Use TypeScript for type safety']
});
console.log(JSON.stringify(sessionResponse));
// Pre-tool use: allow memory tools
const toolResponse = createPreToolUseAllowResponse('Memory operations are always permitted');
console.log(JSON.stringify(toolResponse));
// Validate responses before sending
const validation = validateHookResponse(preCompactResponse, 'PreCompact');
if (!validation.isValid) {
console.error('Invalid response:', validation.errors);
}
```
### Analysis Templates Usage
```typescript
import { buildCompleteAnalysisPrompt } from './AnalysisTemplates.js';
const prompt = buildCompleteAnalysisPrompt(
'myproject', // projectPrefix
'session123', // sessionId
[], // toolUseChains
'2024-01-01', // timestamp (optional)
'archive.jsonl' // archiveFilename (optional)
);
```
### Individual Template Components
```typescript
import {
createEntityExtractionInstructions,
createOutputFormatSpecification,
createExampleOutput
} from './AnalysisTemplates.js';
// Get just the entity extraction guidelines
const entityInstructions = createEntityExtractionInstructions('myproject');
// Get output format specification
const outputFormat = createOutputFormatSpecification('2024-01-01', 'archive.jsonl');
// Get example output
const examples = createExampleOutput('myproject', 'session123');
```
### Context Injection
```typescript
import {
injectProjectContext,
injectSessionContext,
validateTemplateContext
} from './AnalysisTemplates.js';
// Validate context before using templates
const context = { projectPrefix: 'myproject', sessionId: 'session123' };
const errors = validateTemplateContext(context);
if (errors.length > 0) {
console.error('Invalid context:', errors);
}
// Inject dynamic content into template strings
let template = "Working on {{projectPrefix}} session {{sessionId}}";
template = injectProjectContext(template, 'myproject');
template = injectSessionContext(template, 'session123');
```
## Template Sections
### Entity Extraction Instructions
- Categories of entities to extract (components, patterns, decisions, etc.)
- Naming conventions with project prefixes
- Entity type classifications
- Observation field templates
### Relationship Mapping
- Available relationship types
- Active-voice relationship guidelines
- Graph connection strategies
### Output Format
- Pipe-separated format specification
- Required fields and exact values
- Summary writing guidelines
### MCP Tool Usage
- Step-by-step MCP tool workflow
- Entity creation instructions
- Relationship creation guidelines
### Critical Requirements
- Entity count requirements (3-15 entities)
- Relationship count requirements (5-20 relationships)
- Output line requirements (3-10 summaries)
- Format validation rules
## Benefits Over Legacy System
1. **Maintainability** - Separated concerns make individual sections easy to update
2. **Testability** - Pure functions can be unit tested independently
3. **Reusability** - Template components can be reused across different contexts
4. **Debugging** - Easy to isolate issues to specific template sections
5. **Type Safety** - Full TypeScript support prevents runtime template errors
6. **Performance** - No string parsing overhead, direct function composition
## Migration from constants.ts
The massive `createAnalysisPrompt` function in `constants.ts` has been refactored into this modular system:
**Before** (130+ lines in a single function):
```typescript
export function createAnalysisPrompt(...) {
// Massive template string with embedded logic
return `You are analyzing...${incrementalSection}${toolChains}...`;
}
```
**After** (clean delegation):
```typescript
export function createAnalysisPrompt(...) {
  return buildComprehensiveAnalysisPrompt(...);
}
```
This maintains backward compatibility while providing a much cleaner, more maintainable internal structure.
@@ -0,0 +1,118 @@
/**
* Analysis Templates for LLM Instructions
*
* Generates prompts for extracting memories from conversations and storing in Chroma
*/
import Handlebars from 'handlebars';
// =============================================================================
// MAIN ANALYSIS PROMPT TEMPLATE
// =============================================================================
const ANALYSIS_PROMPT = `You are analyzing a Claude Code conversation transcript to create memories using the Chroma MCP memory system.
YOUR TASK:
1. Extract key learnings and accomplishments as natural language memories
2. Store memories using mcp__claude-mem__chroma_add_documents
3. Return a structured JSON response with the extracted summaries
WHAT TO EXTRACT:
- Technical implementations (functions, classes, APIs, databases)
- Design patterns and architectural decisions
- Bug fixes and problem solutions
- Workflows, processes, and integrations
- Performance optimizations and improvements
STORAGE INSTRUCTIONS:
Call mcp__claude-mem__chroma_add_documents with:
- collection_name: "claude_memories"
- documents: Array of natural language descriptions
- ids: ["{{projectPrefix}}_{{sessionId}}_1", "{{projectPrefix}}_{{sessionId}}_2", ...]
- metadatas: Array with fields:
* type: component/pattern/workflow/integration/concept/decision/tool/fix
* keywords: Comma-separated search terms
* context: Brief situation description
* timestamp: "{{timestamp}}"
* session_id: "{{sessionId}}"
ERROR HANDLING:
If you get "IDs already exist" errors, use mcp__claude-mem__chroma_update_documents instead.
If any tool calls fail, continue and return the JSON response anyway.
Project: {{projectPrefix}}
Session ID: {{sessionId}}
Conversation to compress:`;
// Compile template once
const compiledAnalysisPrompt = Handlebars.compile(ANALYSIS_PROMPT, { noEscape: true });
// =============================================================================
// MAIN API FUNCTIONS
// =============================================================================
/**
* Creates the comprehensive analysis prompt for memory extraction
*/
export function buildComprehensiveAnalysisPrompt(
projectPrefix: string,
sessionId: string,
timestamp?: string,
archiveFilename?: string
): string {
const context = {
projectPrefix,
sessionId,
timestamp: timestamp || new Date().toISOString(),
archiveFilename: archiveFilename || `${sessionId}.jsonl.archive`
};
return compiledAnalysisPrompt(context);
}
/**
* Creates the analysis prompt
*/
export function createAnalysisPrompt(
transcript: string,
sessionId: string,
projectPrefix: string,
timestamp?: string
): string {
const prompt = buildComprehensiveAnalysisPrompt(
projectPrefix,
sessionId,
timestamp
);
const responseFormat = `
RESPONSE FORMAT:
After storing memories in Chroma, return EXACTLY this JSON structure wrapped in tags:
<JSONResponse>
{
"overview": "2-3 sentence summary of session themes and accomplishments. Write for any developer to understand by organically defining jargon.",
"summaries": [
{
"text": "What was accomplished (start with action verb)",
"document_id": "${projectPrefix}_${sessionId}_1",
"keywords": "comma, separated, terms",
"timestamp": "${timestamp || new Date().toISOString()}",
"archive": "${sessionId}.jsonl.archive"
}
]
}
</JSONResponse>
IMPORTANT:
- Return 3-10 summaries based on conversation complexity
- Each summary should correspond to a memory you attempted to store
- If tool calls fail, still return the JSON response with summaries
- The JSON must be valid and complete
- Place NOTHING outside the <JSONResponse> tags
- Do not include any explanatory text before or after the JSON`;
return prompt + '\n\n' + transcript + responseFormat;
}
@@ -0,0 +1,644 @@
/**
* Context Templates for Human-Readable Formatting
*
* Essential templates for user-facing messages in the memory system.
* Focused on session start messages, error handling, and operation feedback.
* Previously included Handlebars templates for session start formatting; current
* version renders directly via console for clarity and performance.
*/
import { formatRelativeTime, parseTimestamp } from '../../../lib/time-utils.js';
// =============================================================================
// TERMINAL WIDTH & WORD WRAPPING
// =============================================================================
/**
* Determines target wrap width based on:
* 1) CLAUDE_MEM_WRAP_WIDTH env override
* 2) TTY columns (capped at 120)
* 3) Fallback default of 80
*/
function getWrapWidth(): number {
  const env = process.env.CLAUDE_MEM_WRAP_WIDTH;
  if (env) {
    const n = parseInt(env, 10);
    if (!Number.isNaN(n) && n > 40 && n <= 200) return n;
  }
  // Use the terminal width when available, capped at 120 to keep lines readable
  if (process.stdout.isTTY && process.stdout.columns) {
    return Math.min(process.stdout.columns, 120);
  }
  // Fall back to classic 80 columns
  return 80;
}
/**
* Wrap a single logical line to the given width, preserving leading indentation.
* Also avoids wrapping pure separator lines (====, ----, etc.).
*/
function wrapSingleLine(line: string, width: number): string {
if (!line) return '';
// Don't wrap long separator lines
if (/^[\-=\u2014_\u2500\u2550]{5,}$/.test(line.trim())) return line;
// If already short enough, return as-is
if (line.length <= width) return line;
const indentMatch = line.match(/^\s*/);
const indent = indentMatch ? indentMatch[0] : '';
const content = line.slice(indent.length);
const avail = Math.max(10, width - indent.length); // keep some minimum
const words = content.split(/(\s+)/); // keep whitespace tokens
const out: string[] = [];
let current = '';
const pushLine = () => {
out.push(indent + current.trimEnd());
current = '';
};
for (const token of words) {
if (token === '') continue;
// If token itself is longer than available width, hard-break it
if (!/\s/.test(token) && token.length > avail) {
if (current.trim().length > 0) pushLine();
let start = 0;
while (start < token.length) {
const chunk = token.slice(start, start + avail);
out.push(indent + chunk);
start += avail;
}
current = '';
continue;
}
if (indent.length + current.length + token.length > width) {
pushLine();
}
current += token;
}
if (current.trim().length > 0 || out.length === 0) pushLine();
return out.join('\n');
}
/**
* Wrap a block of text (possibly multi-line) to the given width.
* Preserves blank lines and wraps each line independently.
*/
function wrapText(text: string, width: number): string {
if (!text) return '';
return text
.split('\n')
.map((line) => wrapSingleLine(line, width))
.join('\n');
}
/** Create a full-width horizontal line with the given character */
function makeLine(char: string = '─', width: number = getWrapWidth()): string {
if (!char || char.length === 0) char = '-';
// Repeat and slice to exact width to avoid multi-column surprises
return char.repeat(width).slice(0, width);
}
// =============================================================================
// SESSION START MESSAGES
// =============================================================================
/**
* Creates a welcoming session start message explaining what memories were loaded
*/
export function createSessionStartMessage(
projectName: string,
memoryCount: number,
lastSessionTime?: string
): string {
const width = getWrapWidth();
const timeInfo = lastSessionTime ? ` (last worked: ${lastSessionTime})` : '';
if (memoryCount === 0) {
return wrapText(
`🧠 Loading memories from previous sessions for ${projectName}${timeInfo}
No relevant memories found - this appears to be your first session or a new project area.
💡 Getting Started:
• Start working and memories will be automatically created
• At the end of your session, ask to compress and store the conversation
• Next time you return, relevant context will be loaded automatically`,
width
);
}
const memoryText =
memoryCount === 1 ? 'relevant memory' : 'relevant memories';
return wrapText(
`🧠 Loading memories from previous sessions for ${projectName}${timeInfo}
Found ${memoryCount} ${memoryText} to help continue your work.`,
width
);
}
// =============================================================================
// OPERATION MESSAGES
// =============================================================================
/**
* Creates a loading message during context retrieval
*/
export function createLoadingMessage(operation: string): string {
const operations: Record<string, string> = {
searching: '🔍 Searching previous memories...',
loading: '📚 Loading relevant context...',
formatting: '✨ Organizing memories for display...',
compressing: '🗜️ Compressing session transcript...',
archiving: '📦 Archiving conversation...',
};
const width = getWrapWidth();
return wrapText(operations[operation] || `${operation}...`, width);
}
/**
* Creates a completion message after context operations
*/
export function createCompletionMessage(
operation: string,
count?: number,
details?: string
): string {
const countInfo = count !== undefined ? ` (${count} items)` : '';
const detailInfo = details ? `\n${details}` : '';
const width = getWrapWidth();
return wrapText(
`${operation} completed successfully${countInfo}${detailInfo}`,
width
);
}
// =============================================================================
// ERROR MESSAGES (USER-FRIENDLY)
// =============================================================================
/**
* Creates user-friendly error messages with helpful suggestions
*/
export function createUserFriendlyError(
operation: string,
error: string,
suggestion?: string
): string {
const suggestionText = suggestion ? `\n\n💡 ${suggestion}` : '';
const width = getWrapWidth();
return wrapText(
`${operation} encountered an issue: ${error}${suggestionText}`,
width
);
}
/**
* Common error scenarios with built-in suggestions
*/
export const ERROR_SCENARIOS = {
NO_MEMORIES: (projectName: string) => ({
message: `No previous memories found for ${projectName}`,
suggestion:
'This appears to be your first session. Memories will be created as you work.',
}),
CONNECTION_FAILED: () => ({
message: 'Could not connect to memory system',
suggestion:
'Try restarting Claude Code or check if the MCP server is properly configured.',
}),
SEARCH_FAILED: (query: string) => ({
message: `Search for "${query}" didn't return any results`,
suggestion:
'Try using different keywords or check if memories exist for this project.',
}),
LOAD_TIMEOUT: () => ({
message: 'Memory loading timed out',
suggestion:
'The operation is taking longer than expected. You can continue without loaded context.',
}),
};
/**
* Creates contextual error messages based on common scenarios
*/
export function createContextualError(
scenario: keyof typeof ERROR_SCENARIOS,
...args: string[]
): string {
const errorInfo = (ERROR_SCENARIOS[scenario] as any)(...args);
return createUserFriendlyError(
'Memory system',
errorInfo.message,
errorInfo.suggestion
);
}
// =============================================================================
// TIME AND DATE FORMATTING
// =============================================================================
/**
* Formats timestamps into human-readable "time ago" format
*/
export function formatTimeAgo(timestamp: string | Date): string {
const date = typeof timestamp === 'string' ? new Date(timestamp) : timestamp;
const now = new Date();
const diff = now.getTime() - date.getTime();
const minutes = Math.floor(diff / 60000);
const hours = Math.floor(diff / 3600000);
const days = Math.floor(diff / 86400000);
if (minutes < 1) return 'just now';
if (minutes < 60) return `${minutes} minute${minutes > 1 ? 's' : ''} ago`;
if (hours < 24) return `${hours} hour${hours > 1 ? 's' : ''} ago`;
if (days < 7) return `${days} day${days > 1 ? 's' : ''} ago`;
return date.toLocaleDateString();
}
/**
* Creates summary text for memory operations
*/
export function createOperationSummary(
operation: 'compress' | 'load' | 'search' | 'archive',
results: { count: number; duration?: number; details?: string }
): string {
const { count, duration, details } = results;
const durationText = duration ? ` in ${duration}ms` : '';
const detailsText = details ? ` - ${details}` : '';
const templates = {
compress: `Compressed ${count} conversation turns${durationText}${detailsText}`,
load: `Loaded ${count} relevant memories${durationText}${detailsText}`,
search: `Found ${count} matching memories${durationText}${detailsText}`,
archive: `Archived ${count} conversation segments${durationText}${detailsText}`,
};
const width = getWrapWidth();
return wrapText(`📊 ${templates[operation]}`, width);
}
// =============================================================================
// SESSION START TEMPLATE SYSTEM (data processing only)
// =============================================================================
/**
* Interface for memory entry data structure
*/
interface MemoryEntry {
summary: string;
keywords?: string;
location?: string;
sessionId?: string;
number?: number;
}
/**
* Interface for grouped session memories
*/
interface SessionGroup {
sessionId: string;
memories: MemoryEntry[];
}
/**
* Formats current date and time for session start
*/
export function formatCurrentDateTime(): string {
const now = new Date();
const currentDateTime = now.toLocaleString('en-US', {
weekday: 'long',
year: 'numeric',
month: 'long',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
timeZoneName: 'short',
});
return `Current Date / Time: ${currentDateTime}\n`;
}
/**
* Extracts overview section from JSON objects
* Looks for objects with type "overview" and matching project
*/
export function extractOverview(
recentObjects: any[],
projectName?: string
): string | null {
// Find overview objects
const overviewObjects = recentObjects.filter(
(obj) => obj.type === 'overview'
);
if (overviewObjects.length === 0) {
return null;
}
// If project is specified, find overview for that project
if (projectName) {
const projectOverview = overviewObjects.find(
(obj) => obj.project === projectName
);
if (projectOverview) {
return projectOverview.content;
}
}
// Return the most recent overview if no project match
return overviewObjects[overviewObjects.length - 1]?.content || null;
}
/**
* Interface for overview with timestamp
*/
interface OverviewEntry {
content: string;
timestamp?: Date;
timeAgo?: string;
sessionId?: string;
}
/**
* Interface for session-grouped overviews
*/
interface SessionOverviewGroup {
sessionId: string;
overviews: OverviewEntry[];
earliestTimestamp?: Date;
timeAgo?: string;
}
/**
* Extracts multiple overviews with timestamps
* Returns up to 'count' most recent overviews
*/
export function extractOverviews(
recentObjects: any[],
count: number = 3,
projectName?: string
): OverviewEntry[] {
// Find overview objects
const overviewObjects = recentObjects.filter(
(obj) => obj.type === 'overview'
);
if (overviewObjects.length === 0) {
return [];
}
// Filter by project if specified
let filteredOverviews = overviewObjects;
if (projectName) {
filteredOverviews = overviewObjects.filter(
(obj) => obj.project === projectName
);
// Fall back to all overviews if no project match
if (filteredOverviews.length === 0) {
filteredOverviews = overviewObjects;
}
}
// Take the last 'count' overviews
const recentOverviews = filteredOverviews.slice(-count);
// Process each overview with timestamp and session ID
return recentOverviews.map((obj) => {
const entry: OverviewEntry = {
content: obj.content || '',
sessionId: obj.sessionId || obj.session_id || 'unknown',
};
// Try to parse timestamp
const timestamp = parseTimestamp(obj);
if (timestamp) {
entry.timestamp = timestamp;
entry.timeAgo = formatRelativeTime(timestamp);
} else {
// Fallback if no timestamp
entry.timeAgo = 'Recently';
}
return entry;
}); // Show in original order (oldest to newest)
}
/**
* Pure data processing function - converts JSON objects into structured memory entries
* No formatting is done here, only data parsing and cleaning
*/
export function processMemoryEntries(recentObjects: any[]): MemoryEntry[] {
if (recentObjects.length === 0) {
return [];
}
// Filter only memory type objects and convert to MemoryEntry format
return recentObjects
.filter((obj) => obj.type === 'memory')
.map((obj) => {
const entry: MemoryEntry = {
summary: obj.text || '',
sessionId: obj.session_id || '',
};
// Add optional fields if present
if (obj.keywords) {
entry.keywords = obj.keywords;
}
if (obj.document_id && !obj.document_id.includes('Session:')) {
entry.location = obj.document_id;
}
return entry;
})
.filter((entry) => entry.summary.length > 0);
}
/**
* Groups memories by session ID and adds numbering
*/
function groupMemoriesBySession(memories: MemoryEntry[]): SessionGroup[] {
const sessionMap = new Map<string, MemoryEntry[]>();
// Group memories by session ID
memories.forEach((memory) => {
const sessionId = memory.sessionId;
if (sessionId) {
if (!sessionMap.has(sessionId)) {
sessionMap.set(sessionId, []);
}
sessionMap.get(sessionId)!.push(memory);
}
});
// Convert to session groups with numbering
return Array.from(sessionMap.entries()).map(
([sessionId, sessionMemories]) => {
const numberedMemories = sessionMemories.map((memory, index) => ({
...memory,
number: index + 1,
}));
return {
sessionId,
memories: numberedMemories,
};
}
);
}
/**
* Groups overviews by session ID and calculates session timestamps
*/
function groupOverviewsBySession(
overviews: OverviewEntry[]
): SessionOverviewGroup[] {
const sessionMap = new Map<string, OverviewEntry[]>();
// Group overviews by session ID
overviews.forEach((overview) => {
const sessionId = overview.sessionId || 'unknown';
if (!sessionMap.has(sessionId)) {
sessionMap.set(sessionId, []);
}
sessionMap.get(sessionId)!.push(overview);
});
// Convert to session groups with timestamps
return Array.from(sessionMap.entries()).map(
([sessionId, sessionOverviews]) => {
// Find the earliest timestamp in this session's overviews
const timestamps = sessionOverviews
.map((o) => o.timestamp)
.filter((t): t is Date => t !== undefined)
.sort((a, b) => a.getTime() - b.getTime());
const group: SessionOverviewGroup = {
sessionId,
overviews: sessionOverviews,
};
// Add session-level timestamp if available
if (timestamps.length > 0) {
group.earliestTimestamp = timestamps[0];
group.timeAgo = formatRelativeTime(timestamps[0]);
}
return group;
}
);
}
/**
 * The Handlebars-based session start renderer was intentionally removed;
 * console output is handled by outputSessionStartContent() below.
 */
/**
* Outputs session start content using dual streams:
* - stdout (console.log) -> Claude's context only (granular memories)
* - stderr (console.error) -> User visible (clean overviews)
*/
export function outputSessionStartContent(params: {
projectName: string;
memoryCount: number;
lastSessionTime?: string;
recentObjects: any[];
}): void {
const { projectName, memoryCount, lastSessionTime, recentObjects } = params;
const width = getWrapWidth();
// Extract overviews for user display - get more to show session grouping
const overviews = extractOverviews(recentObjects, 10, projectName);
// Process memory entries for Claude context
const memories = processMemoryEntries(recentObjects);
// Helper to split and normalize keywords into a map (lowercased -> original)
const splitKeywordsInto = (kw: string, dest: Map<string, string>) => {
const tokens =
kw.includes(',') || kw.includes('\n') ? kw.split(/[\n,]+/) : [kw];
for (const t of tokens) {
const trimmed = t.trim();
if (!trimmed) continue;
const key = trimmed.toLowerCase();
if (!dest.has(key)) dest.set(key, trimmed);
}
};
// Output memories first, then overviews at bottom, all sorted oldest to newest
if (memories.length > 0) {
const sessionGroups = groupMemoriesBySession(memories);
console.log('');
console.log('');
console.log(wrapText('📚 Memories', width));
sessionGroups.forEach((group) => {
console.log(makeLine('─', width));
console.log('');
console.log(wrapText(`🔍 ${group.sessionId}`, width));
// Collect keywords for this session as we iterate its memories
const groupKeywordMap = new Map<string, string>();
group.memories.forEach((memory) => {
console.log('');
console.log(wrapText(`${memory.number}. ${memory.summary}`, width));
if (memory.keywords)
splitKeywordsInto(memory.keywords, groupKeywordMap);
});
// Print this session's aggregated keywords under the session block
const groupKeywords = Array.from(groupKeywordMap.values());
if (groupKeywords.length > 0) {
console.log('');
console.log(wrapText(`🏷️ ${groupKeywords.join(', ')}`, width));
}
console.log('');
});
}
// Overview section at bottom with session grouping
if (overviews.length > 0) {
const sessionGroups = groupOverviewsBySession(overviews);
console.log('');
console.log(wrapText('🧠 Overviews', width));
console.log(makeLine('─', width));
// Match the memories section layout: session header, numbered items, per-session separator
sessionGroups.forEach((group) => {
console.log('');
console.log(wrapText(`🔍 ${group.sessionId}`, width));
group.overviews.forEach((overview, index) => {
console.log('');
console.log(wrapText(`${index + 1}. ${overview.content}`, width));
console.log('');
if (overview.timeAgo) {
console.log(wrapText(`📅 ${overview.timeAgo}`, width));
}
});
console.log('');
console.log(makeLine('─', width));
});
} else if (memories.length === 0) {
console.log(
wrapText(`🧠 No recent context found for ${projectName}`, width)
);
}
}
@@ -0,0 +1,185 @@
/**
* Hook Templates Test
*
* Basic validation tests for hook response templates to ensure they
* generate valid responses that conform to Claude Code's hook system.
*/
import {
createPreCompactSuccessResponse,
createPreCompactBlockedResponse,
createPreCompactApprovalResponse,
createSessionStartSuccessResponse,
createSessionStartEmptyResponse,
createSessionStartErrorResponse,
createSessionStartMemoryResponse,
createPreToolUseAllowResponse,
createPreToolUseDenyResponse,
createPreToolUseAskResponse,
createHookSuccessResponse,
createHookErrorResponse,
validateHookResponse,
createContextualHookResponse,
formatDuration,
createOperationSummary,
OPERATION_STATUS_TEMPLATES,
ERROR_RESPONSE_TEMPLATES
} from './HookTemplates.js';
// =============================================================================
// PRE-COMPACT HOOK TESTS
// =============================================================================
console.log('Testing Pre-Compact Hook Templates...');
// Test successful pre-compact response
const preCompactSuccess = createPreCompactSuccessResponse();
console.log('✓ Pre-compact success:', JSON.stringify(preCompactSuccess, null, 2));
// Test blocked pre-compact response
const preCompactBlocked = createPreCompactBlockedResponse('User requested to skip compression');
console.log('✓ Pre-compact blocked:', JSON.stringify(preCompactBlocked, null, 2));
// Test approval response
const preCompactApproval = createPreCompactApprovalResponse('approve', 'Compression approved by policy');
console.log('✓ Pre-compact approval:', JSON.stringify(preCompactApproval, null, 2));
// =============================================================================
// SESSION START HOOK TESTS
// =============================================================================
console.log('\nTesting Session Start Hook Templates...');
// Test successful session start with context
const sessionStartSuccess = createSessionStartSuccessResponse('Loaded 5 memories from previous sessions');
console.log('✓ Session start success:', JSON.stringify(sessionStartSuccess, null, 2));
// Test empty session start
const sessionStartEmpty = createSessionStartEmptyResponse();
console.log('✓ Session start empty:', JSON.stringify(sessionStartEmpty, null, 2));
// Test error session start
const sessionStartError = createSessionStartErrorResponse('Memory index corrupted');
console.log('✓ Session start error:', JSON.stringify(sessionStartError, null, 2));
// Test rich memory response
const sessionStartMemory = createSessionStartMemoryResponse({
projectName: 'claude-mem',
memoryCount: 12,
lastSessionTime: '2 hours ago',
recentComponents: ['PromptOrchestrator', 'HookTemplates', 'MCPClient'],
recentDecisions: ['Use TypeScript for type safety', 'Implement embedded Weaviate']
});
console.log('✓ Session start memory:', JSON.stringify(sessionStartMemory, null, 2));
// =============================================================================
// PRE-TOOL USE HOOK TESTS
// =============================================================================
console.log('\nTesting Pre-Tool Use Hook Templates...');
// Test allow response
const preToolAllow = createPreToolUseAllowResponse('Tool execution approved by security policy');
console.log('✓ Pre-tool allow:', JSON.stringify(preToolAllow, null, 2));
// Test deny response
const preToolDeny = createPreToolUseDenyResponse('Bash commands disabled in restricted mode');
console.log('✓ Pre-tool deny:', JSON.stringify(preToolDeny, null, 2));
// Test ask response
const preToolAsk = createPreToolUseAskResponse('File operation requires user confirmation');
console.log('✓ Pre-tool ask:', JSON.stringify(preToolAsk, null, 2));
// =============================================================================
// GENERIC HOOK TESTS
// =============================================================================
console.log('\nTesting Generic Hook Templates...');
// Test basic success
const genericSuccess = createHookSuccessResponse(false);
console.log('✓ Generic success:', JSON.stringify(genericSuccess, null, 2));
// Test basic error
const genericError = createHookErrorResponse('Operation failed due to network timeout', true);
console.log('✓ Generic error:', JSON.stringify(genericError, null, 2));
// =============================================================================
// VALIDATION TESTS
// =============================================================================
console.log('\nTesting Hook Response Validation...');
// Test valid PreCompact response
const preCompactValidation = validateHookResponse(preCompactSuccess, 'PreCompact');
console.log('✓ PreCompact validation:', preCompactValidation);
// Test invalid PreCompact response (with hookSpecificOutput)
const invalidPreCompact = {
continue: true,
hookSpecificOutput: { hookEventName: 'PreCompact' }
};
const preCompactInvalidValidation = validateHookResponse(invalidPreCompact, 'PreCompact');
console.log('✓ PreCompact invalid validation:', preCompactInvalidValidation);
// Test valid SessionStart response
const sessionStartValidation = validateHookResponse(sessionStartSuccess, 'SessionStart');
console.log('✓ SessionStart validation:', sessionStartValidation);
// =============================================================================
// CONTEXTUAL HOOK RESPONSE TESTS
// =============================================================================
console.log('\nTesting Contextual Hook Responses...');
// Test successful session start context
const contextualSessionStart = createContextualHookResponse({
hookEventName: 'SessionStart',
sessionId: 'test-123',
success: true,
message: 'Successfully loaded 8 memories from previous claude-mem sessions'
});
console.log('✓ Contextual SessionStart:', JSON.stringify(contextualSessionStart, null, 2));
// Test failed PreCompact context
const contextualPreCompactFail = createContextualHookResponse({
hookEventName: 'PreCompact',
sessionId: 'test-123',
success: false,
message: 'Compression blocked: insufficient disk space'
});
console.log('✓ Contextual PreCompact fail:', JSON.stringify(contextualPreCompactFail, null, 2));
// =============================================================================
// UTILITY FUNCTION TESTS
// =============================================================================
console.log('\nTesting Utility Functions...');
// Test duration formatting
console.log('✓ Duration 500ms:', formatDuration(500));
console.log('✓ Duration 5s:', formatDuration(5000));
console.log('✓ Duration 90s:', formatDuration(90000));
console.log('✓ Duration 2m30s:', formatDuration(150000));
// Test operation summary
console.log('✓ Operation summary success:', createOperationSummary('Memory compression', true, 5000, 15, 'entities extracted'));
console.log('✓ Operation summary failure:', createOperationSummary('Context loading', false, 2000, 0, 'connection timeout'));
// =============================================================================
// TEMPLATE CONSTANT TESTS
// =============================================================================
console.log('\nTesting Template Constants...');
// Test operation status templates
console.log('✓ Compression complete:', OPERATION_STATUS_TEMPLATES.COMPRESSION_COMPLETE(25, 5000));
console.log('✓ Context loaded:', OPERATION_STATUS_TEMPLATES.CONTEXT_LOADED(8));
console.log('✓ Tool allowed:', OPERATION_STATUS_TEMPLATES.TOOL_ALLOWED('Bash'));
// Test error response templates
console.log('✓ File not found:', ERROR_RESPONSE_TEMPLATES.FILE_NOT_FOUND('/path/to/transcript.txt'));
console.log('✓ Connection failed:', ERROR_RESPONSE_TEMPLATES.CONNECTION_FAILED('MCP memory server'));
console.log('✓ Operation timeout:', ERROR_RESPONSE_TEMPLATES.OPERATION_TIMEOUT('compression', 30000));
console.log('\n✅ All hook template tests completed successfully!');
@@ -0,0 +1,546 @@
/**
* Hook Templates for System Integration
*
* This module provides standardized templates for hook responses that integrate
* with Claude Code's hook system. These templates ensure consistent formatting
* and proper JSON structure for different hook events.
*
* Based on Claude Code Hook Documentation v2025
*/
import {
BaseHookResponse,
PreCompactResponse,
SessionStartResponse,
PreToolUseResponse,
HookPayload,
PreCompactPayload,
SessionStartPayload
} from '../../../shared/types.js';
// =============================================================================
// HOOK RESPONSE INTERFACES
// =============================================================================
/**
* Context data for generating hook responses
*/
export interface HookResponseContext {
/** The hook event name */
hookEventName: string;
/** Session identifier */
sessionId: string;
/** Whether the operation was successful */
success: boolean;
/** Optional message for the response */
message?: string;
/** Additional data specific to the hook type */
additionalData?: Record<string, unknown>;
/** Duration of the operation in milliseconds */
duration?: number;
/** Number of items processed */
itemCount?: number;
}
/**
* Progress information for long-running operations
*/
export interface OperationProgress {
/** Current step number */
current: number;
/** Total number of steps */
total: number;
/** Description of current step */
currentStep?: string;
/** Estimated time remaining in milliseconds */
estimatedRemaining?: number;
}
// =============================================================================
// PRE-COMPACT HOOK TEMPLATES
// =============================================================================
/**
* Creates a successful pre-compact response that allows compression to proceed
* PreCompact hooks do NOT support hookSpecificOutput according to documentation
*/
export function createPreCompactSuccessResponse(): PreCompactResponse {
return {
continue: true,
suppressOutput: true
};
}
/**
* Creates a blocked pre-compact response that prevents compression
*/
export function createPreCompactBlockedResponse(reason: string): PreCompactResponse {
return {
continue: false,
stopReason: reason,
suppressOutput: true
};
}
/**
* Creates a pre-compact response with approval decision
*/
export function createPreCompactApprovalResponse(
decision: 'approve' | 'block',
reason?: string
): PreCompactResponse {
return {
decision,
reason,
continue: decision === 'approve',
suppressOutput: true
};
}
// =============================================================================
// SESSION START HOOK TEMPLATES
// =============================================================================
/**
* Creates a successful session start response with loaded context
* SessionStart hooks DO support hookSpecificOutput
*/
export function createSessionStartSuccessResponse(
additionalContext?: string
): SessionStartResponse {
return {
continue: true,
suppressOutput: true,
hookSpecificOutput: {
hookEventName: 'SessionStart',
additionalContext
}
};
}
/**
* Creates a session start response when no context is available
*/
export function createSessionStartEmptyResponse(): SessionStartResponse {
return {
continue: true,
suppressOutput: true,
hookSpecificOutput: {
hookEventName: 'SessionStart',
additionalContext: 'Starting fresh session - no previous context available'
}
};
}
/**
* Creates a session start response with error information
*/
export function createSessionStartErrorResponse(error: string): SessionStartResponse {
return {
continue: true, // Continue even if context loading fails
suppressOutput: true,
hookSpecificOutput: {
hookEventName: 'SessionStart',
additionalContext: `Context loading encountered an issue: ${error}. Starting without previous context.`
}
};
}
/**
* Creates a rich session start response with memory summary
*/
export function createSessionStartMemoryResponse(memoryData: {
projectName: string;
memoryCount: number;
lastSessionTime?: string;
recentComponents?: string[];
recentDecisions?: string[];
}): SessionStartResponse {
const { projectName, memoryCount, lastSessionTime, recentComponents = [], recentDecisions = [] } = memoryData;
const timeInfo = lastSessionTime ? ` (last worked: ${lastSessionTime})` : '';
const contextParts: string[] = [];
contextParts.push(`🧠 Loaded ${memoryCount} memories from previous sessions for ${projectName}${timeInfo}`);
if (recentComponents.length > 0) {
contextParts.push(`\n🎯 Recent components: ${recentComponents.slice(0, 3).join(', ')}`);
}
if (recentDecisions.length > 0) {
contextParts.push(`\n🔄 Recent decisions: ${recentDecisions.slice(0, 2).join(', ')}`);
}
contextParts.push('\n💡 Use chroma_query_documents(["keywords"]) to find related work or chroma_get_documents(["document_id"]) to load specific content');
return {
continue: true,
suppressOutput: true,
hookSpecificOutput: {
hookEventName: 'SessionStart',
additionalContext: contextParts.join('')
}
};
}
// =============================================================================
// PRE-TOOL USE HOOK TEMPLATES
// =============================================================================
/**
* Creates a pre-tool use response that allows the tool to execute
*/
export function createPreToolUseAllowResponse(reason?: string): PreToolUseResponse {
return {
continue: true,
suppressOutput: true,
permissionDecision: 'allow',
permissionDecisionReason: reason
};
}
/**
* Creates a pre-tool use response that blocks the tool execution
*/
export function createPreToolUseDenyResponse(reason: string): PreToolUseResponse {
return {
continue: false,
stopReason: reason,
suppressOutput: true,
permissionDecision: 'deny',
permissionDecisionReason: reason
};
}
/**
* Creates a pre-tool use response that asks for user confirmation
*/
export function createPreToolUseAskResponse(reason: string): PreToolUseResponse {
return {
continue: true,
suppressOutput: false, // Show output so user can see the question
permissionDecision: 'ask',
permissionDecisionReason: reason
};
}
// =============================================================================
// GENERIC HOOK RESPONSE TEMPLATES
// =============================================================================
/**
* Creates a basic success response for any hook type
*/
export function createHookSuccessResponse(suppressOutput = true): BaseHookResponse {
return {
continue: true,
suppressOutput
};
}
/**
* Creates a basic error response for any hook type
*/
export function createHookErrorResponse(
reason: string,
suppressOutput = true
): BaseHookResponse {
return {
continue: false,
stopReason: reason,
suppressOutput
};
}
/**
* Creates a response with system message (warning/info for user)
*/
export function createHookSystemMessageResponse(
message: string,
continueProcessing = true
): BaseHookResponse & { systemMessage: string } {
return {
continue: continueProcessing,
suppressOutput: true,
systemMessage: message
};
}
// =============================================================================
// OPERATION STATUS TEMPLATES
// =============================================================================
/**
* Templates for different types of operation status messages
*/
export const OPERATION_STATUS_TEMPLATES = {
// Compression operations
COMPRESSION_STARTED: 'Starting memory compression...',
COMPRESSION_ANALYZING: 'Analyzing transcript content...',
COMPRESSION_EXTRACTING: 'Extracting memories and connections...',
COMPRESSION_SAVING: 'Saving compressed memories...',
COMPRESSION_COMPLETE: (count: number, duration?: number) =>
`Memory compression complete. Extracted ${count} memories${duration ? ` in ${Math.round(duration/1000)}s` : ''}`,
// Context loading operations
CONTEXT_LOADING: 'Loading previous session context...',
CONTEXT_SEARCHING: 'Searching for relevant memories...',
CONTEXT_FORMATTING: 'Organizing context for display...',
CONTEXT_LOADED: (count: number) => `Context loaded successfully. Found ${count} relevant memories`,
CONTEXT_EMPTY: 'No previous context found. Starting fresh session',
// Tool operations
TOOL_CHECKING: (toolName: string) => `Checking permissions for ${toolName}...`,
TOOL_ALLOWED: (toolName: string) => `${toolName} execution approved`,
TOOL_BLOCKED: (toolName: string, reason: string) => `${toolName} blocked: ${reason}`,
// General operations
OPERATION_STARTING: (operation: string) => `Starting ${operation}...`,
OPERATION_PROGRESS: (operation: string, current: number, total: number) =>
`${operation}: ${current}/${total} (${Math.round((current/total)*100)}%)`,
OPERATION_COMPLETE: (operation: string) => `${operation} completed successfully`,
OPERATION_FAILED: (operation: string, error: string) => `${operation} failed: ${error}`
} as const;
/**
* Creates a progress message for long-running operations
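*
* A sketch of the expected output (format derived from the template below):
* @example
* createProgressMessage('Indexing', { current: 3, total: 12 })
* // => "Indexing: 3/12 (25%)"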
*/
export function createProgressMessage(
operation: string,
progress: OperationProgress
): string {
const { current, total, currentStep, estimatedRemaining } = progress;
const percentage = Math.round((current / total) * 100);
let message = `${operation}: ${current}/${total} (${percentage}%)`;
if (currentStep) {
message += ` - ${currentStep}`;
}
if (estimatedRemaining && estimatedRemaining > 1000) {
const seconds = Math.round(estimatedRemaining / 1000);
message += ` (${seconds}s remaining)`;
}
return message;
}
// =============================================================================
// ERROR RESPONSE TEMPLATES
// =============================================================================
/**
* Standard error messages for different failure scenarios
*/
export const ERROR_RESPONSE_TEMPLATES = {
// File system errors
FILE_NOT_FOUND: (path: string) => `File not found: ${path}`,
FILE_READ_ERROR: (path: string, error: string) => `Failed to read ${path}: ${error}`,
FILE_WRITE_ERROR: (path: string, error: string) => `Failed to write ${path}: ${error}`,
// Network/connection errors
CONNECTION_FAILED: (service: string) => `Failed to connect to ${service}`,
CONNECTION_TIMEOUT: (service: string) => `Connection to ${service} timed out`,
// Validation errors
INVALID_PAYLOAD: (field: string) => `Invalid or missing field: ${field}`,
INVALID_FORMAT: (expected: string, received: string) => `Expected ${expected}, received ${received}`,
// Operation errors
OPERATION_TIMEOUT: (operation: string, timeout: number) =>
`${operation} timed out after ${timeout}ms`,
OPERATION_CANCELLED: (operation: string) => `${operation} was cancelled`,
INSUFFICIENT_PERMISSIONS: (operation: string) =>
`Insufficient permissions for ${operation}`,
// Memory system errors
MEMORY_SYSTEM_UNAVAILABLE: 'Memory system is not available',
MEMORY_CORRUPTION: 'Memory index appears to be corrupted',
MEMORY_SEARCH_FAILED: (query: string) => `Memory search failed for query: "${query}"`,
// Compression errors
COMPRESSION_FAILED: (stage: string) => `Compression failed during ${stage}`,
INVALID_TRANSCRIPT: 'Transcript file is invalid or corrupted',
// General errors
UNKNOWN_ERROR: (context: string) => `An unexpected error occurred during ${context}`,
SYSTEM_ERROR: (error: string) => `System error: ${error}`
} as const;
/**
* Creates a standardized error response with troubleshooting guidance
*/
export function createDetailedErrorResponse(
operation: string,
error: string,
troubleshootingSteps: string[] = []
): BaseHookResponse {
const baseMessage = `${operation} failed: ${error}`;
const fullMessage = troubleshootingSteps.length > 0
? `${baseMessage}\n\nTroubleshooting steps:\n${troubleshootingSteps.join('\n')}`
: baseMessage;
return {
continue: false,
stopReason: fullMessage,
suppressOutput: false // Show error details to user
};
}
// =============================================================================
// HOOK RESPONSE VALIDATION
// =============================================================================
/**
* Validates that a hook response conforms to Claude Code expectations
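*
* A minimal usage sketch — a bare `continue: true` response passes validation:
* @example
* const { isValid, errors } = validateHookResponse({ continue: true }, 'SessionStart');
* // isValid === true, errors.length === 0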
*/
export function validateHookResponse(
response: any,
hookType: string
): { isValid: boolean; errors: string[] } {
const errors: string[] = [];
// Check required fields
if (typeof response !== 'object' || response === null) {
errors.push('Response must be a valid JSON object');
return { isValid: false, errors };
}
// Validate continue field
if (response.continue !== undefined && typeof response.continue !== 'boolean') {
errors.push('continue field must be a boolean');
}
// Validate suppressOutput field
if (response.suppressOutput !== undefined && typeof response.suppressOutput !== 'boolean') {
errors.push('suppressOutput field must be a boolean');
}
// Validate stopReason field
if (response.stopReason !== undefined && typeof response.stopReason !== 'string') {
errors.push('stopReason field must be a string');
}
// Hook-specific validations
if (hookType === 'PreCompact') {
// PreCompact should not have hookSpecificOutput
if (response.hookSpecificOutput !== undefined) {
errors.push('PreCompact hooks do not support hookSpecificOutput');
}
// Validate decision field if present
if (response.decision !== undefined && !['approve', 'block'].includes(response.decision)) {
errors.push('decision field must be "approve" or "block"');
}
}
if (hookType === 'SessionStart') {
// Validate hookSpecificOutput structure
if (response.hookSpecificOutput) {
const hso = response.hookSpecificOutput;
if (hso.hookEventName !== 'SessionStart') {
errors.push('hookSpecificOutput.hookEventName must be "SessionStart"');
}
if (hso.additionalContext !== undefined && typeof hso.additionalContext !== 'string') {
errors.push('hookSpecificOutput.additionalContext must be a string');
}
}
}
if (hookType === 'PreToolUse') {
// Validate permissionDecision field
if (response.permissionDecision !== undefined) {
if (!['allow', 'deny', 'ask'].includes(response.permissionDecision)) {
errors.push('permissionDecision must be "allow", "deny", or "ask"');
}
}
}
return {
isValid: errors.length === 0,
errors
};
}
// =============================================================================
// UTILITY FUNCTIONS
// =============================================================================
/**
* Creates a hook response based on context and automatically handles hook-specific formatting
*/
export function createContextualHookResponse(context: HookResponseContext): BaseHookResponse {
const { hookEventName, success, message } = context;
// Base response
const response: BaseHookResponse = {
continue: success,
suppressOutput: true
};
// Add failure reason if not successful
if (!success && message) {
response.stopReason = message;
response.suppressOutput = false; // Show error to user
}
// Handle hook-specific output
if (success && hookEventName === 'SessionStart' && message) {
return {
...response,
hookSpecificOutput: {
hookEventName: 'SessionStart',
additionalContext: message
}
} as SessionStartResponse;
}
// Handle PreCompact approval
if (hookEventName === 'PreCompact') {
return {
...response,
decision: success ? 'approve' : 'block',
reason: message
} as PreCompactResponse;
}
return response;
}
/**
* Formats duration in milliseconds to human-readable format
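*
* Expected output for a couple of representative inputs:
* @example
* formatDuration(500)    // => "500ms"
* formatDuration(65000)  // => "1m 5s"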
*/
export function formatDuration(milliseconds: number): string {
if (milliseconds < 1000) {
return `${milliseconds}ms`;
}
const seconds = Math.round(milliseconds / 1000);
if (seconds < 60) {
return `${seconds}s`;
}
const minutes = Math.floor(seconds / 60);
const remainingSeconds = seconds % 60;
return remainingSeconds > 0 ? `${minutes}m ${remainingSeconds}s` : `${minutes}m`;
}
/**
* Creates a summary line for operation completion
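*
* Expected output for a successful run (values illustrative):
* @example
* createOperationSummary('Compression', true, 2500, 12)
* // => "✅ Compression (12 items) in 3s"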
*/
export function createOperationSummary(
operation: string,
success: boolean,
duration?: number,
itemCount?: number,
details?: string
): string {
const status = success ? '✅' : '❌';
const durationText = duration !== undefined ? ` in ${formatDuration(duration)}` : '';
const itemText = itemCount !== undefined ? ` (${itemCount} items)` : '';
const detailText = details ? ` - ${details}` : '';
return `${status} ${operation}${itemText}${durationText}${detailText}`;
}
@@ -0,0 +1,364 @@
import * as p from '@clack/prompts';
import { TranscriptParser } from './transcript-parser.js';
import path from 'path';
import fs from 'fs';
/**
* Conversation item for selection UI
*/
export interface ConversationItem {
filePath: string;
sessionId: string;
timestamp: string;
messageCount: number;
gitBranch?: string;
cwd: string;
fileSize: number;
displayName: string;
projectName: string;
parsedDate: Date;
relativeDate: string;
dateGroup: string;
}
/**
* Selection result
*/
export interface SelectionResult {
selectedFiles: string[];
cancelled: boolean;
}
/**
* Interactive conversation selector service
*/
export class ConversationSelector {
private parser: TranscriptParser;
constructor() {
this.parser = new TranscriptParser();
}
/**
* Show interactive selection UI for conversations with improved flow
*/
async selectConversations(): Promise<SelectionResult> {
p.intro('🧠 Claude History Import');
const s = p.spinner();
s.start('Scanning for conversation files...');
const conversationFiles = await this.parser.scanConversationFiles();
if (conversationFiles.length === 0) {
s.stop('❌ No conversation files found');
p.outro('No conversation files found in Claude projects directory');
return { selectedFiles: [], cancelled: true };
}
// Get metadata for each file
const conversations: ConversationItem[] = [];
for (const filePath of conversationFiles) {
try {
const metadata = await this.parser.getConversationMetadata(filePath);
const projectName = this.extractProjectName(filePath);
const parsedDate = this.parseTimestamp(metadata.timestamp, filePath);
const relativeDate = this.formatRelativeDate(parsedDate);
const dateGroup = this.getDateGroup(parsedDate);
conversations.push({
filePath,
...metadata,
projectName,
parsedDate,
relativeDate,
dateGroup,
displayName: this.createDisplayName(filePath, metadata)
});
} catch (e) {
// Skip invalid files silently
}
}
if (conversations.length === 0) {
s.stop('❌ No valid conversation files found');
p.outro('No valid conversation files found');
return { selectedFiles: [], cancelled: true };
}
s.stop(`Found ${conversations.length} conversation files`);
// Sort by timestamp (newest first)
conversations.sort((a, b) => b.parsedDate.getTime() - a.parsedDate.getTime());
// If there are too many conversations, offer filtering options first
let filteredConversations = conversations;
if (conversations.length > 100) {
const filterChoice = await p.select({
message: `Found ${conversations.length} conversations. How would you like to proceed?`,
options: [
{ value: 'recent', label: 'Show recent (last 50)', hint: 'Most recent conversations' },
{ value: 'project', label: 'Filter by project', hint: 'Select specific project first' },
{ value: 'all', label: 'Show all', hint: `Display all ${conversations.length} conversations` }
]
});
if (p.isCancel(filterChoice)) {
p.cancel('Selection cancelled');
return { selectedFiles: [], cancelled: true };
}
if (filterChoice === 'recent') {
filteredConversations = conversations.slice(0, 50);
} else if (filterChoice === 'project') {
const projectNames = [...new Set(conversations.map(c => c.projectName))].sort();
const selectedProject = await p.select({
message: 'Select project:',
options: projectNames.map(project => {
const count = conversations.filter(c => c.projectName === project).length;
return {
value: project,
label: project,
hint: `${count} conversation${count === 1 ? '' : 's'}`
};
})
});
if (p.isCancel(selectedProject)) {
p.cancel('Selection cancelled');
return { selectedFiles: [], cancelled: true };
}
filteredConversations = conversations.filter(c => c.projectName === selectedProject);
}
}
// Conversation selection
const selectedConversations = await this.selectConversationsFromList(filteredConversations);
if (!selectedConversations || selectedConversations.length === 0) {
p.cancel('No conversations selected');
return { selectedFiles: [], cancelled: true };
}
// Confirmation
const confirmed = await this.confirmSelection(selectedConversations);
if (!confirmed) {
p.cancel('Import cancelled');
return { selectedFiles: [], cancelled: true };
}
p.outro(`Ready to import ${selectedConversations.length} conversations`);
return { selectedFiles: selectedConversations.map(c => c.filePath), cancelled: false };
}
/**
* Extract project name from file path
*/
private extractProjectName(filePath: string): string {
return path.basename(path.dirname(filePath));
}
/**
* Safely parse timestamp with fallback to file modification time
*/
private parseTimestamp(timestamp: string | undefined, filePath: string): Date {
// Try parsing the provided timestamp
if (timestamp) {
const date = new Date(timestamp);
if (!isNaN(date.getTime())) {
return date;
}
}
// Fallback to file modification time
try {
const stats = fs.statSync(filePath);
return stats.mtime;
} catch (e) {
// Last resort: current time
return new Date();
}
}
/**
* Format date as relative time (e.g., "2 days ago", "3 weeks ago")
*/
private formatRelativeDate(date: Date): string {
const now = new Date();
const diffMs = now.getTime() - date.getTime();
const diffMinutes = Math.floor(diffMs / (1000 * 60));
const diffHours = Math.floor(diffMs / (1000 * 60 * 60));
const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
const diffWeeks = Math.floor(diffDays / 7);
const diffMonths = Math.floor(diffDays / 30);
if (diffMinutes < 1) return 'just now';
if (diffMinutes < 60) return `${diffMinutes}m ago`;
if (diffHours < 24) return `${diffHours}h ago`;
if (diffDays < 7) return `${diffDays}d ago`;
if (diffWeeks < 4) return `${diffWeeks}w ago`;
if (diffMonths < 12) return `${diffMonths}mo ago`;
const diffYears = Math.floor(diffMonths / 12);
return `${diffYears}y ago`;
}
/**
* Get date group for grouping conversations
*/
private getDateGroup(date: Date): string {
const now = new Date();
const today = new Date(now.getFullYear(), now.getMonth(), now.getDate());
const yesterday = new Date(today.getTime() - 24 * 60 * 60 * 1000);
const thisWeekStart = new Date(today.getTime() - today.getDay() * 24 * 60 * 60 * 1000);
const lastWeekStart = new Date(thisWeekStart.getTime() - 7 * 24 * 60 * 60 * 1000);
const thisMonthStart = new Date(now.getFullYear(), now.getMonth(), 1);
const conversationDate = new Date(date.getFullYear(), date.getMonth(), date.getDate());
if (conversationDate.getTime() >= today.getTime()) {
return 'Today';
} else if (conversationDate.getTime() >= yesterday.getTime()) {
return 'Yesterday';
} else if (conversationDate.getTime() >= thisWeekStart.getTime()) {
return 'This Week';
} else if (conversationDate.getTime() >= lastWeekStart.getTime()) {
return 'Last Week';
} else if (conversationDate.getTime() >= thisMonthStart.getTime()) {
return 'This Month';
} else {
return 'Older';
}
}
/**
* Create display name for conversation
*/
private createDisplayName(filePath: string, metadata: any): string {
const parsedDate = this.parseTimestamp(metadata.timestamp, filePath);
const relativeDate = this.formatRelativeDate(parsedDate);
const sizeKB = Math.round(metadata.fileSize / 1024);
const branchInfo = metadata.gitBranch ? ` • ${metadata.gitBranch}` : '';
return `${relativeDate} • ${metadata.messageCount} msgs • ${sizeKB}KB${branchInfo}`;
}
/**
* Select specific conversations from list
*/
private async selectConversationsFromList(
conversations: ConversationItem[]
): Promise<ConversationItem[] | null> {
// Group conversations by date for better organization
const groupedConversations = this.groupConversationsByDate(conversations);
const options = this.createGroupedOptions(groupedConversations, conversations);
// Multi-select with select all/none shortcuts
const selectedIndices = await p.multiselect({
message: `Select conversations to import (${conversations.length} available, Space=toggle, Enter=confirm):`,
options,
required: false
});
if (p.isCancel(selectedIndices)) return null;
// Header/separator options carry string values; only numeric
// indices map back to conversations
const selected = (selectedIndices as Array<number | string>).filter(
(v): v is number => typeof v === 'number'
);
if (selected.length === 0) {
return [];
}
return selected.map(i => conversations[i]);
}
/**
* Confirm selection before processing
*/
private async confirmSelection(conversations: ConversationItem[]): Promise<boolean> {
const totalSize = conversations.reduce((sum, c) => sum + c.fileSize, 0);
const sizeKB = Math.round(totalSize / 1024);
const projects = [...new Set(conversations.map(c => c.projectName))];
const details = [
`${conversations.length} conversation${conversations.length === 1 ? '' : 's'}`,
`${projects.length} project${projects.length === 1 ? '' : 's'}: ${projects.join(', ')}`,
`Total size: ${sizeKB}KB`
].join('\n');
const confirmed = await p.confirm({
message: `Ready to import:\n\n${details}\n\nContinue?`,
initialValue: true
});
return !p.isCancel(confirmed) && confirmed;
}
/**
* Group conversations by date sections
*/
private groupConversationsByDate(conversations: ConversationItem[]): Map<string, ConversationItem[]> {
const groups = new Map<string, ConversationItem[]>();
for (const conv of conversations) {
const group = conv.dateGroup;
if (!groups.has(group)) {
groups.set(group, []);
}
groups.get(group)!.push(conv);
}
return groups;
}
/**
* Create options with date group headers
*/
private createGroupedOptions(groupedConversations: Map<string, ConversationItem[]>, allConversations: ConversationItem[]) {
const options: any[] = [];
// Add hint at top about selecting all/none
options.push({
value: 'hint',
label: '💡 Use Space to toggle, A to select all, I to invert',
disabled: true
});
options.push({ value: 'separator-hint', label: '─'.repeat(60), disabled: true });
// Define order of groups
const groupOrder = ['Today', 'Yesterday', 'This Week', 'Last Week', 'This Month', 'Older'];
for (const groupName of groupOrder) {
const conversations = groupedConversations.get(groupName);
if (!conversations || conversations.length === 0) continue;
// Add group header (disabled option for visual separation)
if (options.length > 2) { // Account for hint and separator
options.push({ value: `separator-${groupName}`, label: '─'.repeat(50), disabled: true });
}
options.push({
value: `header-${groupName}`,
label: `${groupName} (${conversations.length})`,
disabled: true
});
// Add conversations in this group
for (const conv of conversations) {
const index = allConversations.indexOf(conv);
const projectInfo = conv.projectName ? `[${conv.projectName}]` : '';
const workingDir = conv.cwd && conv.cwd !== 'undefined' ? path.basename(conv.cwd) : '';
const hint = `${projectInfo} ${workingDir}`.trim() || (conv.gitBranch ? `Branch: ${conv.gitBranch}` : '');
options.push({
value: index,
label: ` ${conv.displayName}`,
hint: hint
});
}
}
return options;
}
}
@@ -0,0 +1,414 @@
import { join, dirname, sep } from 'path';
import { homedir } from 'os';
import { existsSync, statSync } from 'fs';
import { execSync } from 'child_process';
import { fileURLToPath } from 'url';
import { createRequire } from 'module';
// This module reads import.meta.url, so it runs as ESM, where no global
// `require` exists; create one for the CommonJS-style lookups below.
const require = createRequire(import.meta.url);
/**
* PathDiscovery Service - Central path resolution for claude-mem
*
* Handles dynamic discovery of all required paths across different installation scenarios:
* - npm global installs, local installs, and development environments
* - Cross-platform path resolution (Windows, macOS, Linux)
* - Environment variable overrides for customization
* - Package resource discovery (hooks, commands)
*/
export class PathDiscovery {
private static instance: PathDiscovery | null = null;
// Cached paths for performance
private _dataDirectory: string | null = null;
private _packageRoot: string | null = null;
private _claudeConfigDirectory: string | null = null;
/**
* Get singleton instance
*/
static getInstance(): PathDiscovery {
if (!PathDiscovery.instance) {
PathDiscovery.instance = new PathDiscovery();
}
return PathDiscovery.instance;
}
// =============================================================================
// DATA DIRECTORIES - Where claude-mem stores its data
// =============================================================================
/**
* Base data directory for claude-mem
* Environment override: CLAUDE_MEM_DATA_DIR
*/
getDataDirectory(): string {
if (this._dataDirectory) return this._dataDirectory;
this._dataDirectory = process.env.CLAUDE_MEM_DATA_DIR || join(homedir(), '.claude-mem');
return this._dataDirectory;
}
/**
* Archives directory for compressed sessions
*/
getArchivesDirectory(): string {
return join(this.getDataDirectory(), 'archives');
}
/**
* Hooks directory where claude-mem hooks are installed
*/
getHooksDirectory(): string {
return join(this.getDataDirectory(), 'hooks');
}
/**
* Logs directory for claude-mem operation logs
*/
getLogsDirectory(): string {
return join(this.getDataDirectory(), 'logs');
}
/**
* Index directory for memory indexing
*/
getIndexDirectory(): string {
return this.getDataDirectory();
}
/**
* Index file path for memory indexing
*/
getIndexPath(): string {
return join(this.getIndexDirectory(), 'claude-mem-index.jsonl');
}
/**
* Trash directory for smart trash feature
*/
getTrashDirectory(): string {
return join(this.getDataDirectory(), 'trash');
}
/**
* Backups directory for configuration backups
*/
getBackupsDirectory(): string {
return join(this.getDataDirectory(), 'backups');
}
/**
* Chroma database directory
*/
getChromaDirectory(): string {
return join(this.getDataDirectory(), 'chroma');
}
/**
* Project-specific archive directory
*/
getProjectArchiveDirectory(projectName: string): string {
return join(this.getArchivesDirectory(), projectName);
}
/**
* User settings file path
*/
getUserSettingsPath(): string {
return join(this.getDataDirectory(), 'settings.json');
}
// =============================================================================
// CLAUDE INTEGRATION PATHS - Where Claude Code expects configuration
// =============================================================================
/**
* Claude configuration directory
* Environment override: CLAUDE_CONFIG_DIR
*/
getClaudeConfigDirectory(): string {
if (this._claudeConfigDirectory) return this._claudeConfigDirectory;
this._claudeConfigDirectory = process.env.CLAUDE_CONFIG_DIR || join(homedir(), '.claude');
return this._claudeConfigDirectory;
}
/**
* Claude settings file path
*/
getClaudeSettingsPath(): string {
return join(this.getClaudeConfigDirectory(), 'settings.json');
}
/**
* Claude commands directory where custom commands are installed
*/
getClaudeCommandsDirectory(): string {
return join(this.getClaudeConfigDirectory(), 'commands');
}
/**
* CLAUDE.md instructions file path
*/
getClaudeMdPath(): string {
return join(this.getClaudeConfigDirectory(), 'CLAUDE.md');
}
/**
* MCP configuration file path (user-level)
*/
getMcpConfigPath(): string {
return join(homedir(), '.claude.json');
}
/**
* MCP configuration file path (project-level)
*/
getProjectMcpConfigPath(): string {
return join(process.cwd(), '.mcp.json');
}
// =============================================================================
// PACKAGE DISCOVERY - Find claude-mem package resources
// =============================================================================
/**
* Discover the claude-mem package root directory
*/
getPackageRoot(): string {
if (this._packageRoot) return this._packageRoot;
// Method 1: Try require.resolve for package.json
try {
const packageJsonPath = require.resolve('claude-mem/package.json');
this._packageRoot = dirname(packageJsonPath);
return this._packageRoot;
} catch {}
// Method 2: Walk up from current module location
const currentFile = fileURLToPath(import.meta.url);
let currentDir = dirname(currentFile);
for (let i = 0; i < 10; i++) {
const packageJsonPath = join(currentDir, 'package.json');
if (existsSync(packageJsonPath)) {
try {
const packageJson = require(packageJsonPath);
if (packageJson.name === 'claude-mem') {
this._packageRoot = currentDir;
return this._packageRoot;
}
} catch {}
}
const parentDir = dirname(currentDir);
if (parentDir === currentDir) break;
currentDir = parentDir;
}
// Method 3: Try npm list command
try {
const npmOutput = execSync('npm list -g claude-mem --json 2>/dev/null || npm list claude-mem --json 2>/dev/null', {
encoding: 'utf8'
});
const npmData = JSON.parse(npmOutput);
if (npmData.dependencies?.['claude-mem']?.resolved) {
this._packageRoot = dirname(npmData.dependencies['claude-mem'].resolved);
return this._packageRoot;
}
} catch {}
throw new Error('Cannot locate claude-mem package root. Ensure claude-mem is properly installed.');
}
/**
* Find hooks directory in the installed package
*/
findPackageHooksDirectory(): string {
const packageRoot = this.getPackageRoot();
const hooksDir = join(packageRoot, 'hooks');
// Verify it contains expected hook files
const requiredHooks = ['pre-compact.js', 'session-start.js'];
for (const hookFile of requiredHooks) {
if (!existsSync(join(hooksDir, hookFile))) {
throw new Error(`Package hooks directory missing required file: ${hookFile}`);
}
}
return hooksDir;
}
/**
* Find commands directory in the installed package
*/
findPackageCommandsDirectory(): string {
const packageRoot = this.getPackageRoot();
const commandsDir = join(packageRoot, 'commands');
// Verify it contains expected command files
const requiredCommands = ['save.md'];
for (const commandFile of requiredCommands) {
if (!existsSync(join(commandsDir, commandFile))) {
throw new Error(`Package commands directory missing required file: ${commandFile}`);
}
}
return commandsDir;
}
// =============================================================================
// UTILITY METHODS
// =============================================================================
/**
* Ensure a directory exists, creating it if necessary
*/
ensureDirectory(dirPath: string): void {
if (!existsSync(dirPath)) {
require('fs').mkdirSync(dirPath, { recursive: true });
}
}
/**
* Ensure multiple directories exist
*/
ensureDirectories(dirPaths: string[]): void {
dirPaths.forEach(dirPath => this.ensureDirectory(dirPath));
}
/**
* Create all claude-mem data directories
*/
ensureAllDataDirectories(): void {
this.ensureDirectories([
this.getDataDirectory(),
this.getArchivesDirectory(),
this.getHooksDirectory(),
this.getLogsDirectory(),
this.getTrashDirectory(),
this.getBackupsDirectory(),
this.getChromaDirectory()
]);
}
/**
* Create all Claude integration directories
*/
ensureAllClaudeDirectories(): void {
this.ensureDirectories([
this.getClaudeConfigDirectory(),
this.getClaudeCommandsDirectory()
]);
}
/**
* Extract project name from a file path (improved from PathResolver)
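*
* Illustrative POSIX-path example: "src" is a project indicator, so its
* parent directory is taken as the project name.
* @example
* PathDiscovery.extractProjectName('/repos/my-app/src/index.ts') // => "my-app"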
*/
static extractProjectName(filePath: string): string {
const pathParts = filePath.split(sep);
// Look for common project indicators
const projectIndicators = ['src', 'lib', 'app', 'project', 'workspace'];
for (let i = pathParts.length - 1; i >= 0; i--) {
if (projectIndicators.includes(pathParts[i]) && i > 0) {
return pathParts[i - 1];
}
}
// Fallback to directory containing the file
if (pathParts.length > 1) {
return pathParts[pathParts.length - 2];
}
return 'unknown-project';
}
/**
* Get current project directory name
*/
static getCurrentProjectName(): string {
return require('path').basename(process.cwd());
}
/**
* Create a timestamped backup filename
*/
static createBackupFilename(originalPath: string): string {
const timestamp = new Date()
.toISOString()
.replace(/[:.]/g, '-')
.replace('T', '_')
.slice(0, 19);
return `${originalPath}.backup.${timestamp}`;
}
/**
* Check if a path exists and is an accessible directory
*/
static isPathAccessible(path: string): boolean {
try {
return existsSync(path) && statSync(path).isDirectory();
} catch {
return false;
}
}
// =============================================================================
// STATIC CONVENIENCE METHODS
// =============================================================================
/**
* Quick access to singleton instance methods
*/
static getDataDirectory(): string {
return PathDiscovery.getInstance().getDataDirectory();
}
static getArchivesDirectory(): string {
return PathDiscovery.getInstance().getArchivesDirectory();
}
static getHooksDirectory(): string {
return PathDiscovery.getInstance().getHooksDirectory();
}
static getLogsDirectory(): string {
return PathDiscovery.getInstance().getLogsDirectory();
}
static getClaudeSettingsPath(): string {
return PathDiscovery.getInstance().getClaudeSettingsPath();
}
static getClaudeMdPath(): string {
return PathDiscovery.getInstance().getClaudeMdPath();
}
static findPackageHooksDirectory(): string {
return PathDiscovery.getInstance().findPackageHooksDirectory();
}
static findPackageCommandsDirectory(): string {
return PathDiscovery.getInstance().findPackageCommandsDirectory();
}
}
// Export singleton instance for immediate use
export const pathDiscovery = PathDiscovery.getInstance();
// Export static methods for convenience
export const {
getDataDirectory,
getArchivesDirectory,
getHooksDirectory,
getLogsDirectory,
getClaudeSettingsPath,
getClaudeMdPath,
findPackageHooksDirectory,
findPackageCommandsDirectory,
extractProjectName,
getCurrentProjectName,
createBackupFilename,
isPathAccessible
} = PathDiscovery;
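As an aside, the timestamp scheme used by `createBackupFilename` can be sketched standalone (the `backupName` helper below is illustrative, not part of the package):

```typescript
// Standalone recreation (for illustration) of the backup-name scheme above.
function backupName(originalPath: string, now: Date): string {
  const timestamp = now
    .toISOString()           // e.g. "2025-09-11T17:15:50.123Z"
    .replace(/[:.]/g, '-')   // strip characters that are awkward in filenames
    .replace('T', '_')
    .slice(0, 19);           // keep "YYYY-MM-DD_HH-MM-SS"
  return `${originalPath}.backup.${timestamp}`;
}

console.log(backupName('/tmp/settings.json', new Date('2025-09-11T17:15:50.123Z')));
// → /tmp/settings.json.backup.2025-09-11_17-15-50
```

Truncating to 19 characters drops the millisecond fragment, so two backups of the same file within one second would collide; that trade-off keeps the name short and sortable.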
@@ -0,0 +1,218 @@
import fs from 'fs';
import path from 'path';
import { log } from '../shared/logger.js';
import { PathDiscovery } from './path-discovery.js';
/**
* Interface for Claude Code JSONL conversation entries
*/
export interface ClaudeCodeMessage {
sessionId: string;
timestamp: string;
gitBranch?: string;
cwd: string;
type: 'user' | 'assistant' | 'system' | 'result';
message: {
role: string;
content: Array<{
type: string;
text?: string;
thinking?: string;
}> | string;
};
uuid: string;
version?: string;
isSidechain?: boolean;
userType?: string;
parentUuid?: string;
subtype?: string;
model?: string;
stop_reason?: string;
usage?: any;
}
/**
* Interface matching TranscriptCompressor's expected format
*/
export interface TranscriptMessage {
type: string;
message?: {
content?: string | Array<{
text?: string;
content?: string;
}>;
role?: string;
timestamp?: string;
created_at?: string;
};
content?: string | Array<{
text?: string;
content?: string;
}>;
role?: string;
uuid?: string;
session_id?: string;
timestamp?: string;
created_at?: string;
subtype?: string;
}
/**
* Parsed conversation with metadata
*/
export interface ParsedConversation {
sessionId: string;
filePath: string;
messageCount: number;
timestamp: string;
gitBranch?: string;
cwd: string;
messages: TranscriptMessage[];
}
/**
* Service for parsing Claude Code JSONL conversation files
*/
export class TranscriptParser {
/**
* Parse a single JSONL conversation file
*/
async parseConversation(filePath: string): Promise<ParsedConversation> {
const content = fs.readFileSync(filePath, 'utf-8');
const lines = content.trim().split('\n').filter(line => line.trim());
const claudeMessages: ClaudeCodeMessage[] = [];
let parseErrors = 0;
for (let i = 0; i < lines.length; i++) {
try {
const parsed = JSON.parse(lines[i]);
claudeMessages.push(parsed);
} catch (e) {
parseErrors++;
log.debug(`Parse error on line ${i + 1}: ${(e as Error).message}`);
}
}
if (claudeMessages.length === 0) {
throw new Error(`No valid messages found in ${filePath}`);
}
// Get metadata from first message
const firstMessage = claudeMessages[0];
const sessionId = firstMessage.sessionId;
const timestamp = firstMessage.timestamp;
const gitBranch = firstMessage.gitBranch;
const cwd = firstMessage.cwd;
// Convert to TranscriptMessage format
const messages = claudeMessages.map(msg => this.convertMessage(msg));
log.debug(`Parsed ${filePath}: ${messages.length} messages, ${parseErrors} errors`);
return {
sessionId,
filePath,
messageCount: messages.length,
timestamp,
gitBranch,
cwd,
messages
};
}
/**
* Convert ClaudeCodeMessage to TranscriptMessage format
*/
private convertMessage(claudeMsg: ClaudeCodeMessage): TranscriptMessage {
const converted: TranscriptMessage = {
type: claudeMsg.type,
uuid: claudeMsg.uuid,
session_id: claudeMsg.sessionId,
timestamp: claudeMsg.timestamp,
subtype: claudeMsg.subtype
};
// Handle message content
if (claudeMsg.message) {
converted.message = {
role: claudeMsg.message.role,
timestamp: claudeMsg.timestamp
};
if (Array.isArray(claudeMsg.message.content)) {
// Convert content array to expected format
converted.message.content = claudeMsg.message.content.map(item => ({
text: item.text || item.thinking || '',
content: item.text || item.thinking || ''
}));
} else if (typeof claudeMsg.message.content === 'string') {
converted.message.content = claudeMsg.message.content;
}
}
return converted;
}
/**
* Scan Claude projects directory for conversation files
*/
async scanConversationFiles(): Promise<string[]> {
const pathDiscovery = PathDiscovery.getInstance();
const claudeDir = path.join(pathDiscovery.getClaudeConfigDirectory(), 'projects');
if (!fs.existsSync(claudeDir)) {
return [];
}
const projectDirs = fs.readdirSync(claudeDir);
const conversationFiles: string[] = [];
for (const projectDir of projectDirs) {
const projectPath = path.join(claudeDir, projectDir);
if (!fs.statSync(projectPath).isDirectory()) continue;
const files = fs.readdirSync(projectPath);
for (const file of files) {
if (file.endsWith('.jsonl')) {
conversationFiles.push(path.join(projectPath, file));
}
}
}
return conversationFiles;
}
/**
* Get conversation metadata without fully parsing
*/
async getConversationMetadata(filePath: string): Promise<{
sessionId: string;
timestamp: string;
messageCount: number;
gitBranch?: string;
cwd: string;
fileSize: number;
}> {
const stats = fs.statSync(filePath);
const content = fs.readFileSync(filePath, 'utf-8');
const lines = content.trim().split('\n').filter(line => line.trim());
if (lines.length === 0) {
throw new Error(`Empty transcript file: ${filePath}`);
}
let firstMessage;
try {
firstMessage = JSON.parse(lines[0]);
} catch (e) {
throw new Error(`Invalid JSONL format in ${filePath}`);
}
return {
sessionId: firstMessage.sessionId,
timestamp: firstMessage.timestamp,
messageCount: lines.length,
gitBranch: firstMessage.gitBranch,
cwd: firstMessage.cwd,
fileSize: stats.size
};
}
}
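The tolerant line-by-line loop in `parseConversation` can be reduced to a self-contained sketch (names here are hypothetical): malformed lines are counted and skipped instead of aborting the whole file.

```typescript
// Illustrative mini version of the tolerant JSONL loop used by parseConversation.
function parseJsonl(content: string): { records: unknown[]; errors: number } {
  // Split on newlines and drop blank lines, as the parser above does.
  const lines = content.trim().split('\n').filter(line => line.trim());
  const records: unknown[] = [];
  let errors = 0;
  for (const line of lines) {
    try {
      records.push(JSON.parse(line));
    } catch {
      errors++; // count and skip bad lines rather than failing the file
    }
  }
  return { records, errors };
}

const sample = '{"type":"user"}\nnot json\n{"type":"assistant"}';
console.log(parseJsonl(sample)); // two records, one error
```

This matters for real transcripts, where a truncated final line (e.g. from an interrupted write) should not make the rest of the conversation unreadable.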
@@ -0,0 +1,51 @@
import { readFileSync, existsSync } from 'fs';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
// <Block> 5.1 ====================================
// Default values
const DEFAULT_PACKAGE_NAME = 'claude-mem';
// __DEFAULT_PACKAGE_VERSION__ is injected by the build process via a --define flag;
// in development it is undefined, so fall back to a dev version string.
// @ts-ignore - the build-time define has no ambient declaration
const DEFAULT_PACKAGE_VERSION = typeof __DEFAULT_PACKAGE_VERSION__ !== 'undefined'
? __DEFAULT_PACKAGE_VERSION__
: '3.5.6-dev';
const DEFAULT_PACKAGE_DESCRIPTION = 'Memory compression system for Claude Code - persist context across sessions';
let packageName = DEFAULT_PACKAGE_NAME;
let packageVersion = DEFAULT_PACKAGE_VERSION;
let packageDescription = DEFAULT_PACKAGE_DESCRIPTION;
// </Block> =======================================
// Try to read package.json if it exists (for development)
// <Block> 5.2 ====================================
try {
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const packageJsonPath = join(__dirname, '..', '..', 'package.json');
// <Block> 5.2a ====================================
if (existsSync(packageJsonPath)) {
const packageJson = JSON.parse(readFileSync(packageJsonPath, 'utf-8'));
// <Block> 5.2b ====================================
packageName = packageJson.name || DEFAULT_PACKAGE_NAME;
packageVersion = packageJson.version || DEFAULT_PACKAGE_VERSION;
packageDescription = packageJson.description || DEFAULT_PACKAGE_DESCRIPTION;
// </Block> =======================================
}
// </Block> =======================================
} catch {
// Use defaults if package.json can't be read
}
// </Block> =======================================
// <Block> 5.3 ====================================
// Export package configuration
export const PACKAGE_NAME = packageName;
export const PACKAGE_VERSION = packageVersion;
export const PACKAGE_DESCRIPTION = packageDescription;
// Export commonly used names
export const CLI_NAME = PACKAGE_NAME; // The CLI command name
// </Block> =======================================
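The layered-default pattern above (build-time define, then package.json, then a hard-coded fallback) can be sketched as a pure function; `resolveVersion` below is illustrative only:

```typescript
// Sketch of the layered-default resolution above. A build-time define wins
// over the hard-coded fallback, and a non-empty package.json value wins over both.
function resolveVersion(defineValue: string | undefined, pkg: { version?: string }): string {
  const fallback = defineValue ?? '3.5.6-dev';
  return pkg.version || fallback; // `||` so an empty string also falls through
}

console.log(resolveVersion(undefined, {}));            // "3.5.6-dev"
console.log(resolveVersion('3.6.3', { version: '' })); // "3.6.3"
```

Using `||` rather than `??` on the package.json value is deliberate in this sketch: a present-but-empty `version` field should still fall back.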
@@ -0,0 +1,200 @@
import { existsSync, mkdirSync } from 'fs';
import { join, dirname } from 'path';
import { fileURLToPath } from 'url';
import { HookError, CompressionError, Logger, FileLogger } from './types.js';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
export class ErrorHandler {
private logger: Logger;
private logDir: string;
// <Block> 7.1 ====================================
constructor(enableDebug = false) {
this.logDir = join(__dirname, '..', 'logs');
this.ensureLogDirectory();
const logFile = join(
this.logDir,
`claude-mem-${new Date().toISOString().slice(0, 10)}.log`
);
this.logger = new FileLogger(logFile, enableDebug);
}
// </Block> =======================================
// <Block> 7.2 ====================================
private ensureLogDirectory(): void {
if (!existsSync(this.logDir)) {
mkdirSync(this.logDir, { recursive: true });
}
}
// </Block> =======================================
// <Block> 7.3 ====================================
handleHookError(error: Error, hookType: string, payload?: unknown): never {
// <Block> 7.3a ====================================
const hookError =
error instanceof HookError
? error
: new HookError(
error.message,
hookType,
payload as any,
'HOOK_EXECUTION_ERROR'
);
// </Block> =======================================
this.logger.error(`Hook execution failed in ${hookType}`, hookError, {
hookType,
payload: payload ? JSON.stringify(payload) : undefined,
});
console.log(
JSON.stringify({
continue: false,
stopReason: `Hook error: ${hookError.message}`,
error: {
type: hookError.name,
message: hookError.message,
code: hookError.code,
},
})
);
process.exit(1);
}
// </Block> =======================================
// <Block> 7.4 ====================================
handleCompressionError(
error: Error,
transcriptPath: string,
stage: string
): never {
// <Block> 7.4a ====================================
const compressionError =
error instanceof CompressionError
? error
: new CompressionError(error.message, transcriptPath, stage as any);
// </Block> =======================================
this.logger.error(`Compression failed during ${stage}`, compressionError, {
transcriptPath,
stage,
});
console.error(`Compression error: ${compressionError.message}`);
console.error(`Stage: ${stage}`);
console.error(`Transcript: ${transcriptPath}`);
process.exit(1);
}
// </Block> =======================================
// <Block> 7.5 ====================================
handleValidationError(
message: string,
context?: Record<string, unknown>
): never {
this.logger.error('Validation error', undefined, { message, context });
console.error(`Validation error: ${message}`);
// <Block> 7.5a ====================================
if (context) {
console.error('Context:', JSON.stringify(context, null, 2));
}
// </Block> =======================================
process.exit(1);
}
// </Block> =======================================
// <Block> 7.6 ====================================
logSuccess(operation: string, details?: Record<string, unknown>): void {
this.logger.info(`Operation successful: ${operation}`, details);
}
// </Block> =======================================
// <Block> 7.7 ====================================
logWarning(message: string, details?: Record<string, unknown>): void {
this.logger.warn(message, details);
}
// </Block> =======================================
// <Block> 7.8 ====================================
logDebug(message: string, details?: Record<string, unknown>): void {
this.logger.debug(message, details);
}
// </Block> =======================================
}
// <Block> 7.9 ====================================
export function parseStdinJson<T = unknown>(input: string): T {
try {
return JSON.parse(input) as T;
} catch (error) {
throw new Error(
`Failed to parse JSON input: ${error instanceof Error ? error.message : 'Unknown error'}`
);
}
}
// </Block> =======================================
// <Block> 7.10 ===================================
export async function safeExecute<T>(
operation: () => Promise<T>,
errorHandler: ErrorHandler,
context: string
): Promise<T> {
try {
return await operation();
} catch (error) {
const message = `Safe execution failed in ${context}: ${error instanceof Error ? error.message : String(error)}`;
errorHandler.handleValidationError(message, { context, error });
throw error;
}
}
// </Block> =======================================
// <Block> 7.11 ===================================
export function validateFileExists(
filePath: string,
errorHandler: ErrorHandler
): void {
if (!existsSync(filePath)) {
errorHandler.handleValidationError(`File not found: ${filePath}`, {
filePath,
});
}
}
// </Block> =======================================
// <Block> 7.12 ===================================
/**
* Creates a standardized hook response using HookTemplates
* @deprecated Use HookTemplates.createHookSuccessResponse or createHookErrorResponse instead
* This function is maintained for backward compatibility but should be replaced with HookTemplates.
*/
export function createHookResponse(
success: boolean,
data?: Record<string, unknown>
): string {
// Log deprecation warning in development mode
if (process.env.NODE_ENV === 'development') {
console.warn('createHookResponse in error-handler.ts is deprecated. Use HookTemplates.createHookSuccessResponse or createHookErrorResponse instead.');
}
const response = {
continue: success,
suppressOutput: true, // Add standard suppressOutput field for Claude Code compatibility
...data,
};
return JSON.stringify(response);
}
// </Block> =======================================
export const globalErrorHandler = new ErrorHandler(
process.env.DEBUG === 'true'
);
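The response envelope these handlers print to stdout is a small JSON contract: `continue`, `suppressOutput`, and an optional `stopReason`. A standalone sketch (the `hookResponse` helper is hypothetical):

```typescript
// Illustrative builder for the hook response envelope used above.
function hookResponse(success: boolean, stopReason?: string): string {
  return JSON.stringify({
    continue: success,
    suppressOutput: true, // hide raw hook output from the Claude Code transcript
    ...(stopReason ? { stopReason } : {}), // only include stopReason when stopping
  });
}

console.log(hookResponse(true));
console.log(hookResponse(false, 'compression failed'));
```

Keeping the envelope a single stringified object is what lets the host process treat stdout as machine-readable even when the hook fails.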
@@ -0,0 +1,67 @@
/**
* Simple logging utility for claude-mem
*/
export interface LogLevel {
DEBUG: number;
INFO: number;
WARN: number;
ERROR: number;
}
const LOG_LEVELS: LogLevel = {
DEBUG: 0,
INFO: 1,
WARN: 2,
ERROR: 3,
};
class Logger {
// <Block> 2.1 ====================================
private level: number = LOG_LEVELS.INFO;
setLevel(level: keyof LogLevel): void {
this.level = LOG_LEVELS[level];
}
// </Block> =======================================
// <Block> 2.2 ====================================
debug(message: string, ...args: any[]): void {
if (this.level <= LOG_LEVELS.DEBUG) {
console.debug(`[DEBUG] ${message}`, ...args);
}
}
// </Block> =======================================
// <Block> 2.3 ====================================
info(message: string, ...args: any[]): void {
if (this.level <= LOG_LEVELS.INFO) {
console.info(`[INFO] ${message}`, ...args);
}
}
// </Block> =======================================
// <Block> 2.4 ====================================
warn(message: string, ...args: any[]): void {
if (this.level <= LOG_LEVELS.WARN) {
console.warn(`[WARN] ${message}`, ...args);
}
}
// </Block> =======================================
// <Block> 2.5 ====================================
error(message: string, error?: any, context?: any): void {
if (this.level <= LOG_LEVELS.ERROR) {
console.error(`[ERROR] ${message}`);
if (error) {
console.error(error);
}
if (context) {
console.error('Context:', context);
}
}
}
// </Block> =======================================
}
export const log = new Logger();
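The numeric level gate above (`this.level <= LOG_LEVELS.X`) reduces to a one-line predicate; `shouldLog` below is a standalone illustration, not part of the module:

```typescript
// Minimal recreation of the numeric level gate used by Logger above:
// a message is emitted only when its level is at or above the threshold.
const LEVELS = { DEBUG: 0, INFO: 1, WARN: 2, ERROR: 3 } as const;
type Level = keyof typeof LEVELS;

function shouldLog(threshold: Level, message: Level): boolean {
  return LEVELS[threshold] <= LEVELS[message];
}

console.log(shouldLog('INFO', 'DEBUG')); // false: DEBUG is below the INFO threshold
console.log(shouldLog('INFO', 'ERROR')); // true
```

Encoding levels as ascending integers means raising the threshold silences everything below it with a single comparison, with no per-level flags to keep in sync.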
@@ -0,0 +1,91 @@
import { sep, basename } from 'path';
import { PathDiscovery } from '../services/path-discovery.js';
/**
* PathResolver utility for managing claude-mem file system paths
* Now delegates to PathDiscovery service for centralized path management
*/
export class PathResolver {
private pathDiscovery: PathDiscovery;
// <Block> 1.1 ====================================
constructor() {
this.pathDiscovery = PathDiscovery.getInstance();
}
// </Block> =======================================
// <Block> 1.2 ====================================
getConfigDir(): string {
return this.pathDiscovery.getDataDirectory();
}
// </Block> =======================================
// <Block> 1.3 ====================================
getIndexDir(): string {
return this.pathDiscovery.getIndexDirectory();
}
// </Block> =======================================
// <Block> 1.4 ====================================
getIndexPath(): string {
return this.pathDiscovery.getIndexPath();
}
// </Block> =======================================
// <Block> 1.5 ====================================
getArchiveDir(): string {
return this.pathDiscovery.getArchivesDirectory();
}
// </Block> =======================================
// <Block> 1.6 ====================================
getProjectArchiveDir(projectName: string): string {
return this.pathDiscovery.getProjectArchiveDirectory(projectName);
}
// </Block> =======================================
// <Block> 1.7 ====================================
getLogsDir(): string {
return this.pathDiscovery.getLogsDirectory();
}
// </Block> =======================================
// <Block> 1.8 ====================================
static ensureDirectory(dirPath: string): void {
PathDiscovery.getInstance().ensureDirectory(dirPath);
}
// </Block> =======================================
// <Block> 1.9 ====================================
static ensureDirectories(dirPaths: string[]): void {
PathDiscovery.getInstance().ensureDirectories(dirPaths);
}
// </Block> =======================================
// <Block> 1.10 ===================================
static extractProjectName(transcriptPath: string): string {
return PathDiscovery.extractProjectName(transcriptPath);
}
// </Block> =======================================
// <Block> 1.11 ===================================
/**
* DRY utility function: Canonical source for getting the current project prefix
* Replaces all instances of path.basename(process.cwd()) across the codebase
* @returns The current project directory name, sanitized for use as a prefix
*/
static getCurrentProjectPrefix(): string {
return PathDiscovery.getCurrentProjectName();
}
// </Block> =======================================
// <Block> 1.12 ===================================
/**
* DRY utility function: Gets raw project name without sanitization
* For use in contexts where original directory name is needed (e.g., display)
* @returns The current project directory name as-is
*/
static getCurrentProjectName(): string {
return PathDiscovery.getCurrentProjectName();
}
// </Block> =======================================
}
@@ -0,0 +1,98 @@
import { readFileSync, existsSync } from 'fs';
import { join } from 'path';
import { PathResolver } from './paths.js';
import type { Settings } from './types.js';
/**
* Settings utilities for managing ~/.claude-mem/settings.json
*/
export class SettingsManager {
private static settingsPath: string;
private static cachedSettings: Settings | null = null;
static {
const pathResolver = new PathResolver();
this.settingsPath = join(pathResolver.getConfigDir(), 'settings.json');
}
/**
* Safely read settings.json with error handling
* Returns empty object if file doesn't exist or is malformed
*/
static readSettings(): Settings {
// Return cached settings if available
if (this.cachedSettings !== null) {
return this.cachedSettings;
}
try {
if (existsSync(this.settingsPath)) {
const content = readFileSync(this.settingsPath, 'utf-8');
const settings = JSON.parse(content) as Settings;
this.cachedSettings = settings;
return settings;
}
} catch {
// File is malformed or unreadable - return empty settings
}
// File doesn't exist or failed to read
const emptySettings: Settings = {};
this.cachedSettings = emptySettings;
return emptySettings;
}
/**
* Get a specific setting value with optional fallback
*/
static getSetting<K extends keyof Settings>(
key: K,
fallback?: Settings[K]
): Settings[K] | undefined {
const settings = this.readSettings();
return settings[key] ?? fallback;
}
/**
* Get the Claude binary path from settings
* Falls back to 'claude' if not found or settings don't exist
*/
static getClaudePath(): string {
const claudePath = this.getSetting('claudePath', 'claude');
return claudePath as string;
}
/**
* Clear cached settings (useful for testing or after settings changes)
*/
static clearCache(): void {
this.cachedSettings = null;
}
}
/**
* Convenience function to get Claude binary path
* Can be imported directly for simple use cases
*/
export function getClaudePath(): string {
return SettingsManager.getClaudePath();
}
/**
* Convenience function to read all settings
* Can be imported directly for simple use cases
*/
export function readSettings(): Settings {
return SettingsManager.readSettings();
}
/**
* Convenience function to get a specific setting
* Can be imported directly for simple use cases
*/
export function getSetting<K extends keyof Settings>(
key: K,
fallback?: Settings[K]
): Settings[K] | undefined {
return SettingsManager.getSetting(key, fallback);
}
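The `??` in `getSetting` is the load-bearing detail: it substitutes the fallback only for missing or `undefined` keys, never for falsy-but-present values. A self-contained sketch (`SettingsShape` and the local `getSetting` are illustrative):

```typescript
// Sketch of the getSetting fallback semantics above: `??` keeps
// falsy-but-present values like `false` instead of replacing them.
type SettingsShape = { autoCompress?: boolean; claudePath?: string };

function getSetting<K extends keyof SettingsShape>(
  s: SettingsShape,
  key: K,
  fallback?: SettingsShape[K]
): SettingsShape[K] | undefined {
  return s[key] ?? fallback;
}

const s: SettingsShape = { autoCompress: false };
console.log(getSetting(s, 'autoCompress', true));   // false - present, so kept
console.log(getSetting(s, 'claudePath', 'claude')); // "claude" - missing, so fallback
```

Had `||` been used instead, a user who explicitly set `autoCompress: false` would silently get the `true` fallback.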
@@ -0,0 +1,263 @@
export interface HookPayload {
session_id: string;
transcript_path: string;
hook_event_name: string;
}
export interface PreCompactPayload extends HookPayload {
hook_event_name: 'PreCompact';
trigger: 'manual' | 'auto';
custom_instructions?: string;
}
export interface SessionStartPayload extends HookPayload {
hook_event_name: 'SessionStart';
source: 'startup' | 'compact' | 'vscode' | 'web';
}
export interface UserPromptSubmitPayload extends HookPayload {
hook_event_name: 'UserPromptSubmit';
prompt: string;
cwd: string;
}
export interface PreToolUsePayload extends HookPayload {
hook_event_name: 'PreToolUse';
tool_name: string;
tool_input: Record<string, unknown>;
}
export interface PostToolUsePayload extends HookPayload {
hook_event_name: 'PostToolUse';
tool_name: string;
tool_input: Record<string, unknown>;
tool_response: Record<string, unknown> & {
success?: boolean;
};
}
export interface NotificationPayload extends HookPayload {
hook_event_name: 'Notification';
message: string;
title?: string;
}
export interface StopPayload extends HookPayload {
hook_event_name: 'Stop';
stop_hook_active: boolean;
}
export interface BaseHookResponse {
continue?: boolean;
stopReason?: string;
suppressOutput?: boolean;
}
export interface PreCompactResponse extends BaseHookResponse {
decision?: 'approve' | 'block';
reason?: string;
}
export interface SessionStartResponse extends BaseHookResponse {
hookSpecificOutput?: {
hookEventName: 'SessionStart';
additionalContext?: string;
};
}
export interface PreToolUseResponse extends BaseHookResponse {
permissionDecision?: 'allow' | 'deny' | 'ask';
permissionDecisionReason?: string;
}
export interface CompressionResult {
compressedLines: string[];
originalTokens: number;
compressedTokens: number;
compressionRatio: number;
memoryNodes: string[];
}
export interface MemoryNode {
id: string;
type: 'document';
content: string;
timestamp: string;
metadata?: Record<string, unknown>;
}
export class HookError extends Error {
constructor(
message: string,
public hookType: string,
public payload?: HookPayload,
public code?: string
) {
super(message);
this.name = 'HookError';
}
}
export class CompressionError extends Error {
constructor(
message: string,
public transcriptPath: string,
public stage: 'reading' | 'analyzing' | 'compressing' | 'writing'
) {
super(message);
this.name = 'CompressionError';
}
}
export interface Logger {
info(message: string, meta?: Record<string, unknown>): void;
warn(message: string, meta?: Record<string, unknown>): void;
error(message: string, error?: Error, meta?: Record<string, unknown>): void;
debug(message: string, meta?: Record<string, unknown>): void;
}
export class FileLogger implements Logger {
constructor(
private logFile: string,
private enableDebug = false
) {}
info(message: string, meta?: Record<string, unknown>): void {
this.log('INFO', message, meta);
}
warn(message: string, meta?: Record<string, unknown>): void {
this.log('WARN', message, meta);
}
error(message: string, error?: Error, meta?: Record<string, unknown>): void {
const errorMeta = error ? { error: error.message, stack: error.stack } : {};
this.log('ERROR', message, { ...meta, ...errorMeta });
}
debug(message: string, meta?: Record<string, unknown>): void {
if (this.enableDebug) {
this.log('DEBUG', message, meta);
}
}
private log(
level: string,
message: string,
meta?: Record<string, unknown>
): void {
const timestamp = new Date().toISOString();
const metaStr = meta ? ` ${JSON.stringify(meta)}` : '';
const logLine = `[${timestamp}] ${level}: ${message}${metaStr}\n`;
try {
// Append to the configured log file so FileLogger actually uses this.logFile
require('fs').appendFileSync(this.logFile, logLine);
} catch {
// Fall back to stderr if the log file is not writable
console.error(logLine.trimEnd());
}
}
}
export function validateHookPayload(
payload: unknown,
expectedType: string
): HookPayload {
if (!payload || typeof payload !== 'object') {
throw new HookError(
`Invalid payload: expected object, got ${typeof payload}`,
expectedType
);
}
const hookPayload = payload as Record<string, unknown>;
if (!hookPayload.session_id || typeof hookPayload.session_id !== 'string') {
throw new HookError(
'Missing or invalid session_id',
expectedType,
hookPayload as unknown as HookPayload
);
}
if (
!hookPayload.transcript_path ||
typeof hookPayload.transcript_path !== 'string'
) {
throw new HookError(
'Missing or invalid transcript_path',
expectedType,
hookPayload as unknown as HookPayload
);
}
return hookPayload as unknown as HookPayload;
}
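The shape checks in `validateHookPayload` amount to a type guard over two required string fields. A standalone illustration (the `isHookPayload` predicate is hypothetical, shown as a boolean guard rather than a throwing validator):

```typescript
// Illustrative standalone version of the payload checks above:
// require a non-null object with non-empty string session_id and transcript_path.
function isHookPayload(p: unknown): p is { session_id: string; transcript_path: string } {
  if (!p || typeof p !== 'object') return false;
  const o = p as Record<string, unknown>;
  return typeof o.session_id === 'string' && o.session_id.length > 0
    && typeof o.transcript_path === 'string' && o.transcript_path.length > 0;
}

console.log(isHookPayload({ session_id: 'abc', transcript_path: '/tmp/t.jsonl' })); // true
console.log(isHookPayload({ session_id: 'abc' }));                                  // false
console.log(isHookPayload(null));                                                   // false
```

The `typeof p !== 'object'` check must come before any property access, mirroring the order of the guards above, since `null` and primitives would otherwise slip through the cast.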
export function createSuccessResponse(
additionalData?: Record<string, unknown>
): BaseHookResponse {
return {
continue: true,
...additionalData,
};
}
export function createErrorResponse(
reason: string,
additionalData?: Record<string, unknown>
): BaseHookResponse {
return {
continue: false,
stopReason: reason,
...additionalData,
};
}
// =============================================================================
// SETTINGS AND CONFIGURATION TYPES
// =============================================================================
/**
* Main settings interface for claude-mem configuration
*/
export interface Settings {
autoCompress?: boolean;
projectName?: string;
installed?: boolean;
backend?: string;
embedded?: boolean;
saveMemoriesOnClear?: boolean;
claudePath?: string;
[key: string]: unknown; // Allow additional properties
}
// =============================================================================
// MCP CLIENT INTERFACE TYPES
// =============================================================================
/**
* Document structure for MCP operations
*/
export interface MCPDocument {
id: string;
content: string;
metadata?: Record<string, unknown>;
}
/**
* Search result structure from MCP operations
*/
export interface MCPSearchResult {
documents?: MCPDocument[];
ids?: string[];
metadatas?: Record<string, unknown>[];
distances?: number[];
[key: string]: unknown;
}
/**
* Interface for MCP client implementations (Chroma-based)
*/
export interface IMCPClient {
connect(): Promise<void>;
disconnect(): Promise<void>;
addDocuments(documents: MCPDocument[]): Promise<void>;
queryDocuments(query: string, limit?: number): Promise<MCPSearchResult>;
getDocuments(ids?: string[]): Promise<MCPSearchResult>;
}