Issue Description
When fine-tuning language models in Keras 3, there are inconsistencies in how validation data should be provided. The documentation suggests validation_data should be in (x, y) format, but the actual requirements are unclear and the behavior differs between training and validation phases.
Current Behavior & Problems
Issue 1: Raw text arrays are not accepted for validation
train_texts = ["text1", "text2", ...]
val_texts = ["val1", "val2", ...]

# This fails with ValueError:
model.fit(
    train_texts,
    validation_data=val_texts
)
# Error: ValueError: Data is expected to be in format `x`, `(x,)`, `(x, y)`, or `(x, y, sample_weight)`, found: ("text1", "text2", ...)
Issue 2: Pre-tokenized validation fails
The error suggests the tokenizer is being applied again to already tokenized data. I understand there is the preprocessor=None parameter, but I don't want to preprocess the training data manually.
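For context, the manual route with preprocessor=None looks roughly like the sketch below, assuming a KerasNLP Gemma preset (the preset name and sequence_length are placeholders, and train_texts / val_texts are the raw string lists from above):

import keras_nlp

# Run the preprocessing by hand instead of letting the model do it.
preprocessor = keras_nlp.models.GemmaCausalLMPreprocessor.from_preset(
    "gemma_2b_en", sequence_length=512
)
model = keras_nlp.models.GemmaCausalLM.from_preset(
    "gemma_2b_en", preprocessor=None
)

# The causal LM preprocessor returns (x, y, sample_weight), where y is the
# token sequence shifted by one for next-token prediction.
x_train, y_train, sw_train = preprocessor(train_texts)
x_val, y_val, sw_val = preprocessor(val_texts)

model.fit(
    x_train, y_train, sample_weight=sw_train,
    validation_data=(x_val, y_val, sw_val),
)

This is exactly the extra step I would prefer to avoid for the training data.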
Working Solution (But Needs Documentation)
The working approach is to provide prompt-completion pairs:
# Prepare validation data as prompts and expected outputs
val_inputs = [format_prompt(text) for text in val_input_texts]
val_outputs = [format_output(text) for text in val_output_texts]
val_inputs = np.array(val_inputs)
val_outputs = np.array(val_outputs)

model.fit(
    train_texts,
    validation_data=(val_inputs, val_outputs)
)
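format_prompt and format_output here are my own helpers; as one hypothetical example for an instruction-tuned Gemma preset, they could wrap the text in Gemma's turn markers (adapt to whatever template your data uses):

def format_prompt(text):
    # Prompt side: open a user turn and hand over to the model.
    return f"<start_of_turn>user\n{text}<end_of_turn>\n<start_of_turn>model\n"

def format_output(text):
    # Completion side: the expected answer, closing the model turn.
    return f"{text}<end_of_turn>"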
Expected Behavior
The documentation should clearly state that validation data for language models should be provided as prompt-completion pairs
The validation data handling should be consistent with how training data is processed
It should be clear whether token shifting is handled internally or needs to be done manually
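To make the last point concrete, here is a minimal illustration of the shifting convention in question (made-up token ids; this is what a causal LM preprocessing step is generally expected to produce):

import numpy as np

token_ids = np.array([[12, 7, 31, 5, 0, 0]])  # one tokenized sequence, 0 = padding
padding_mask = token_ids != 0

# Next-token prediction: the labels are the inputs shifted left by one position.
x = {"token_ids": token_ids[:, :-1], "padding_mask": padding_mask[:, :-1]}
y = token_ids[:, 1:]                  # what the model should predict at each step
sample_weight = padding_mask[:, 1:]   # no loss on padding positions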
Environment
Keras Version: 3.x
Python Version: 3.10
Model: Gemma LLM (but likely affects other LLMs too)
Additional Context
While there is a working solution using prompt-completion pairs, this differs from traditional language model training where each token predicts the next token. The documentation should clarify this architectural choice and explain the proper way to provide validation data.
Hello @che-shr-cat!
Thank you for pointing out these issues! I've reproduced Issue 1 and made a fix that makes the error message more descriptive while maintaining backward compatibility. I'm ready to raise a PR for this if needed (the broader concern is clearly the documentation, so I won't insist on the PR), but I'd need some guidance on where and how the documentation should be updated to bridge the gap. I'd be happy to contribute.
Previous Error:
ValueError: Data is expected to be in format `x`, `(x,)`, `(x, y)`, or `(x, y, sample_weight)`, found: ('val1', 'val2', 'val3', 'val4')
Updated Error (with changes):
ValueError: Raw text data detected. Text data must be preprocessed before training. Please use a text preprocessing pipeline such as:
1. Tokenizer to convert text to sequences:
tokenizer = keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
2. Pad sequences to uniform length:
padded = keras.preprocessing.sequence.pad_sequences(sequences)
Received raw text data: ['val1', 'val2', 'val3']... (showing first 3 items)
This should make the need for preprocessing clear when raw text data is provided.
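Roughly, the check boils down to something like the sketch below (a simplified illustration, not the actual diff; the function name and structure are mine):

def _raise_if_raw_text(data):
    # Treat a bare string or a list/tuple of strings as unpreprocessed text.
    flat = data if isinstance(data, (list, tuple)) else [data]
    if any(isinstance(item, str) for item in flat):
        preview = list(flat)[:3]
        raise ValueError(
            "Raw text data detected. Text data must be preprocessed before "
            f"training. Received raw text data: {preview}... (showing first 3 items)"
        )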
Interestingly, Issue 2 might actually apply to other models besides Gemma: when I tried the double tokenization you mentioned on much simpler models with the same underlying problem, they raise what is essentially a TypeError surfaced as a ValueError.