feat(commit-prompter): Add new configuration option in config.json
- Introduce a new `model` field in the `config.json` file
- Parse the `model` configuration in the `gpt-commit-prompter` script
- Adapt the `config` section in the README.md file to reflect the changes
- Move the `config` section from the README.md to the config.json file
- Perform code cleanup and refactorings
5n00py committed Dec 13, 2023
1 parent 4307ada commit c100550
Showing 3 changed files with 42 additions and 49 deletions.
52 changes: 31 additions & 21 deletions README.md
@@ -36,7 +36,7 @@ possible options, run `gc-smart --help`.
- Staged changes in the current repo to be committed
- `tmp_commit_msg.txt` has to be configured as commit template (see below)

## Installation and Setup
## Installation

1. Clone this repository to your local machine using git:

@@ -119,6 +119,36 @@ possible options, run `gc-smart --help`.
```
(Don't forget to replace `/path/to/gc-smart` with the actual path to the script.)
## Configuration
The `gpt-commit-prompter` used in `gc-smart` allows configuration through the
[config.json](config.json) file. This flexibility ensures that the tool can be
easily adjusted to specific needs for generating commit messages.
### Commit Message Style
By default, the script uses the `imperative` style for commit messages. You can
choose a different style by using the `-s` or `--style` option when running the
script. The available styles, as defined in the `config.json` file, are:
- `imperative`: Generates a message in the imperative mood with a conventional
title and bullet points.
- `simple`: Produces a concise, one-line commit message.
- `detailed`: Creates a verbose commit message, elaborating on the changes.
- `conventional`: Follows the Conventional Commits specification for the commit
message format.
You can view and customize the styles by editing their corresponding entries in
the `config.json` file.
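As an illustration only (not part of this commit), the sketch below mirrors the lookup the script performs for a style's `system_prompt`, assuming `config.json` is in the current working directory:

```python
import json

# Illustrative sketch: resolve a style's system prompt from config.json,
# mirroring the lookup done in gpt-commit-prompter. Assumes config.json
# is in the current working directory.
with open("config.json") as f:
    config = json.load(f)

style = "imperative"  # also: "simple", "detailed", "conventional"
if style not in config["style"]:
    raise ValueError(f"Style '{style}' not found in configuration.")

system_prompt = config["style"][style]["system_prompt"]
print(system_prompt)
```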
### GPT Model Configuration
In addition to style customization, the `config.json` file allows you to
configure the GPT model used by the script. This feature enables you to select
the most appropriate model version for your needs, ensuring optimal performance
and relevance of the generated commit messages. To change the model, simply
update the `model` section in the `config.json` file.
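For reference, the model lookup matches what the updated `gpt-commit-prompter` does (see the script diff below); a minimal standalone sketch, again assuming `config.json` is in the current working directory:

```python
import json

# Minimal sketch (not part of this commit): read the configured model name
# the same way gpt-commit-prompter does.
with open("config.json") as f:
    config = json.load(f)

model_config = config.get("model")
if not model_config or "name" not in model_config:
    raise ValueError("Model configuration not found in config.json")

model_name = model_config["name"]
print(model_name)  # e.g. "gpt-3.5-turbo"
```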
## Note on Commit Template Handling from gc-smart Version 0.4.0
Starting from version 0.4.0 of `gc-smart`, there is no longer a need to
@@ -193,26 +223,6 @@ message will appear as a template in your default Git editor. You can then
finalize the message or make any necessary modifications before committing the
changes to your repository.
## Commit Message Style
The `gpt-commit-prompter` script uses the `imperative` style as its default for
the commit messages it generates. However, you have the flexibility to
configure the style through the `-s` or `--style` option.
Available styles are:
- `imperative`: Generates a message in the imperative mood and uses
conventional title with bullets.
- `simple`: Generates a concise, straightforward commit message in one line.
- `detailed`: Generates a more verbose commit message, detailing the changes.
- `conventional`: Generates a commit message following the conventional commits
specification.
These styles can also be used in combination with the `gc-smart` script.
You can see how the styles are configured and further customize them by
checking the `config.json` file.
## License
This project is licensed under the MIT License - see the LICENSE.md file for
3 changes: 3 additions & 0 deletions config.json
@@ -1,4 +1,7 @@
{
"model": {
"name": "gpt-3.5-turbo"
},
"style": {
"imperative": {
"system_prompt": "You are a helpful assistant that generates meaningful commit messages based on a diff file. The commit message should include a title in the conventional commits style with possible types feat, fix, chore, docs, style, refactor, test or build and a scope and description. The details should be summarized using bullets '-' and imperatives. Each change should be on a new line in an imperative style. Please ensure that no line exceeds 80 characters in length."
36 changes: 8 additions & 28 deletions python/gpt-commit-prompter
@@ -91,33 +91,6 @@ def get_changes(args):
else:
return args.changes

# def generate_commit_message(changes, instruction, style="imperative"):
# config = load_config()
#
# # Ensure the provided style is in the configuration
# if style not in config["style"]:
# raise ValueError(f"Style '{style}' not found in configuration.")
#
# system_prompt = config["style"][style]["system_prompt"]
#
# # Constructing the prompt for the AI
# prompt = f"{system_prompt}\n\nChanges:\n{changes}\n\n"
# if instruction:
# prompt += f"Instruction: {instruction}\n"
#
# # Call the OpenAI API with the constructed prompt
# response = openai.Completion.create(
# engine="text-davinci-003", # or another model of your choice
# prompt=prompt,
# max_tokens=150, # Adjust as needed
# n=1, # Number of completions to generate
# stop=None # String where the API will stop generating further content
# )
#
# # Extracting the commit message from the response
# commit_message = response.choices[0].text.strip()
# return commit_message

def generate_commit_message(changes, instruction, style="imperative"):
config = load_config()

@@ -135,13 +108,20 @@ def generate_commit_message(changes, instruction, style="imperative"):
# Instantiate the OpenAI client
client = OpenAI()

# Get the model name from the configuration
model_config = config.get("model")
if not model_config or "name" not in model_config:
raise ValueError("Model configuration not found in config.json")

model_name = model_config["name"]

# Call the OpenAI API with the constructed prompt
response = client.chat.completions.create(
messages=[
{"role": "system", "content": system_prompt},
{"role": "user", "content": prompt}
],
model="gpt-3.5-turbo",
model=model_name,
)

# Extracting the commit message from the response
