
Commit

add flair adapter, update cli for flare and IDE for default distribution
rodrigopivi committed Jun 26, 2019
1 parent 885bcac commit 25f95be
Showing 12 changed files with 343 additions and 99 deletions.
6 changes: 3 additions & 3 deletions examples/citySearch_medium.chatito
@@ -31,15 +31,15 @@
places to eat
where to eat

~[newYork]
~[newYork]('synonym': 'true')
new york ~[city?]
ny ~[city?]

~[sanFrancisco]
~[sanFrancisco]('synonym': 'true')
san francisco
san francisco city

~[atlanta]
~[atlanta]('synonym': 'true')
atlanta
atlanta city

54 changes: 42 additions & 12 deletions package-lock.json


6 changes: 4 additions & 2 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "chatito",
"version": "2.2.2",
"version": "2.3.0",
"description": "Generate training datasets for NLU chatbots using a simple DSL",
"bin": {
"chatito": "./dist/bin.js"
@@ -49,7 +49,8 @@
"homepage": "https://github.com/rodrigopivi/Chatito",
"dependencies": {
"chance": "1.0.18",
"minimist": "1.2.0"
"minimist": "1.2.0",
"wink-tokenizer": "5.2.1"
},
"jest": {
"transform": {
@@ -82,6 +83,7 @@
"@types/react-dom": "16.8.4",
"@types/react-helmet": "5.0.8",
"@types/react-router-dom": "4.3.3",
"@types/wink-tokenizer": "4.0.0",
"babel-loader": "8.0.5",
"babel-plugin-import": "1.11.0",
"babel-plugin-styled-components": "1.10.0",
26 changes: 20 additions & 6 deletions readme.md
@@ -28,7 +28,9 @@ This project contains the:
For the full language specification and documentation, please refer to the [DSL spec document](https://github.com/rodrigopivi/Chatito/blob/master/spec.md).

### Adapters
The language is independent from the generated output format and because each model can receive different parameters and settings, there are 3 data format adapters provided. This section describes the adapters, their specific behaviors and use cases:
The language is independent from the generated output format, and because each model can receive different parameters and settings, these are the currently implemented data formats. If your provider is not listed, the Tools and resources section has more information on how to support additional formats.

NOTE: Samples may not be shuffled between intents, to allow easier review.

#### Default format
Use the default format if you plan to train a custom model or if you are writing a custom adapter. This is the most flexible format because you can annotate `Slots` and `Intents` with custom entity arguments, and they all will be present at the generated output, so for example, you could also include dialog/response generation logic with the DSL. E.g.:
@@ -46,7 +48,7 @@ Custom entities like 'context', 'required' and 'type' will be available at the o

#### [Rasa NLU](https://rasa.com/docs/nlu/)
[Rasa NLU](https://rasa.com/docs/nlu/) is a great open source framework for training NLU models.
One particular behavior of the Rasa adapter is that when a slot definition sentence only contains one alias, the generated Rasa dataset will map the alias as a synonym. e.g.:
One particular behavior of the Rasa adapter is that when a slot definition sentence only contains one alias, and that alias defines the 'synonym' argument as 'true', the generated Rasa dataset will map the alias as a synonym. e.g.:

```
%[some intent]('training': '1')
@@ -55,13 +57,20 @@ One particular behavior of the Rasa adapter is that when a slot definition sente
@[some slot]
~[some slot synonyms]
~[some slot synonyms]
~[some slot synonyms]('synonym': 'true')
synonym 1
synonym 2
```

In this example, the generated Rasa dataset will contain the `entity_synonyms` of `synonym 1` and `synonym 2` mapping to `some slot synonyms`.
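As an abbreviated sketch (not the adapter's full output), the synonym mapping would appear in the generated Rasa JSON roughly as follows, using Rasa's `entity_synonyms` structure; the intent name and surrounding keys are elided:

```
{
  "rasa_nlu_data": {
    "entity_synonyms": [
      {
        "value": "some slot synonyms",
        "synonyms": ["synonym 1", "synonym 2"]
      }
    ]
  }
}
```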

#### [Flair](https://github.com/zalandoresearch/flair)
[Flair](https://github.com/zalandoresearch/flair) is a very simple framework for state-of-the-art NLP, developed by Zalando Research. It provides state-of-the-art pre-trained models and embeddings (GPT, BERT, ELMo, etc.) for many languages that work out of the box. This adapter supports the `text classification` dataset in FastText format and the `named entity recognition` dataset as two-column [BIO](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging))-annotated words, as documented in the [flair corpus documentation](https://github.com/zalandoresearch/flair/blob/master/resources/docs/TUTORIAL_6_CORPUS.md). These two data formats are very common and compatible with many other providers and models.

The NER dataset requires word tokenization, which is currently done using the [wink-tokenizer](https://github.com/winkjs/wink-tokenizer) npm package. Extending the adapter to add PoS tagging could be explored in the future, but it is not currently implemented.

NOTE: The Flair adapter is only available for the NodeJS NPM CLI package, not for the IDE.
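For illustration, take a hypothetical utterance "play some jazz" where "some jazz" fills a `music genre` slot (this example is not from an actual generated dataset). The classification file gets one FastText-style line per example, and the NER file gets one `word tag` pair per line, with a blank line separating sentences; slot tags are the slot name uppercased with spaces removed. The `#` lines below are annotations, not part of the files:

```
# classification file (FastText format)
__label__playMusic play some jazz

# NER file (two-column BIO format)
play O
some B-MUSICGENRE
jazz I-MUSICGENRE
```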

#### [LUIS](https://www.luis.ai/)
[LUIS](https://www.luis.ai/) is part of Microsoft's Cognitive services. Chatito supports training a LUIS NLU model through its [batch add labeled utterances endpoint](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c09), and its [batch testing api](https://docs.microsoft.com/en-us/azure/cognitive-services/LUIS/luis-how-to-batch-test).

@@ -108,7 +117,7 @@ npx chatito <pathToFileOrDirectory> --format=<format> --formatOptions=<formatOpt
```

- `<pathToFileOrDirectory>` path to a `.chatito` file or a directory that contains chatito files. If it is a directory, it will recursively search for all `*.chatito` files inside and use them to generate the dataset. e.g.: `lightsChange.chatito` or `./chatitoFilesFolder`
- `<format>` Optional. `default`, `rasa` or `snips`
- `<format>` Optional. `default`, `rasa`, `luis`, `flair` or `snips`.
- `<formatOptions>` Optional. Path to a .json file that each adapter can optionally use
- `<outputPath>` Optional. The directory where to save the generated datasets. Uses the current directory as default.
- `<trainingFileName>` Optional. The name of the generated training dataset file. Do not forget to add a .json extension at the end. Uses `<format>`_dataset_training.json as default file name.
@@ -118,10 +127,15 @@ npx chatito <pathToFileOrDirectory> --format=<format> --formatOptions=<formatOpt

[Overfitting](https://en.wikipedia.org/wiki/Overfitting) is a problem that can be prevented if we use Chatito correctly. The idea behind this tool is to find an intersection between data augmentation and a probabilistic description of possible sentence combinations. It is not intended to generate deterministic datasets; you should avoid generating all possible combinations.

### Visual Studio Code support
### Tools and resources

- [Visual Studio Code syntax highlighting plugin](https://marketplace.visualstudio.com/items?itemName=nimfin.chatito) Thanks to [Yuri Golobokov](https://github.com/nimf) for his [work on this](https://github.com/nimf/chatito-vscode).

- [AI Blueprints: How to build and deploy AI business projects](https://books.google.com.pe/books?id=sR2CDwAAQBAJ) implements practical full chatbot examples using chatito at chapter 7.

- [3 steps to convert chatbot training data between different NLP Providers](https://medium.com/@benoit.alvarez/3-steps-to-convert-chatbot-training-data-between-different-nlp-providers-fa235f67617c) details a simple way to convert the generated data for providers without an implemented adapter. You can use a generated dataset with providers like DialogFlow, Wit.ai and Watson.

- [Aida-nlp](https://github.com/rodrigopivi/aida) is a tiny experimental NLP deep learning library for text classification and NER. Built with Tensorflow.js, Keras and Chatito. Implemented in JS and Python.

### Author and maintainer
Rodrigo Pimentel
48 changes: 48 additions & 0 deletions src/adapters/flair.ts
@@ -0,0 +1,48 @@
import { WriteStream } from 'fs';
import * as Tokenizer from 'wink-tokenizer';
import * as gen from '../main';
import { ISentenceTokens } from '../types';

const tokenizer = new Tokenizer();

export interface IDefaultDataset {
[intent: string]: ISentenceTokens[][];
}
export interface IFlairWriteStreams {
trainClassification: WriteStream;
testClassification: WriteStream;
trainNER: WriteStream;
testNER: WriteStream;
}

// NOTE: Flair adapter uses write streams to text files and requires two different formats
// reference https://github.com/zalandoresearch/flair/blob/master/resources/docs/TUTORIAL_6_CORPUS.md
// E.G:
// npm run generate -- ./examples --format=flair --outputPath=./output --trainingFileName=training.txt --testingFileName=testing.txt
export async function streamAdapter(dsl: string, ws: IFlairWriteStreams, imp?: gen.IFileImporter, currPath?: string) {
// NOTE: the utteranceWriter is called with each sentence with aliases already replaced,
// so the sentence tokens can only be Text or Slot types.
const utteranceWriter = (utterance: ISentenceTokens[], intentKey: string, isTrainingExample: boolean) => {
// classification dataset in FastText format
const classificationText = utterance.map(v => v.value).join('');
const classificationLabel = intentKey.replace(/\s+/g, '');
const writeStreamClassif = isTrainingExample ? ws.trainClassification : ws.testClassification;
writeStreamClassif.write(`__label__${classificationLabel} ${classificationText}` + '\n');
// named entity recognition dataset in two column with BIO-annotated NER tags (requires tokenization)
const writeStreamNER = isTrainingExample ? ws.trainNER : ws.testNER;
utterance.forEach(v => {
const wordTokens = tokenizer.tokenize(v.value);
if (v.type === 'Slot') {
wordTokens.forEach((wt, idx) => {
const slotBorI = idx === 0 ? 'B' : 'I';
const slotTag = v.slot!.toLocaleUpperCase().replace(/\s+/g, '');
writeStreamNER.write(`${wt.value} ${slotBorI}-${slotTag}` + '\n');
});
} else {
wordTokens.forEach(wt => writeStreamNER.write(`${wt.value} O` + '\n'));
}
});
writeStreamNER.write('\n'); // always write an extra EOL at the end of sentences
};
await gen.datasetFromString(dsl, utteranceWriter, imp, currPath);
}
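As a side note, the adapter's BIO tag construction can be sketched in isolation. This is a self-contained approximation, not the adapter itself: it swaps wink-tokenizer for naive whitespace splitting (an assumption for brevity) but keeps the same B/I/O scheme and the same slot-tag normalization (uppercased, spaces removed):

```typescript
// Minimal sketch of the Flair NER tagging logic, using naive whitespace
// tokenization instead of wink-tokenizer (assumption for brevity).
interface Token {
    value: string;
    type: 'Text' | 'Slot';
    slot?: string;
}

function toBIOLines(utterance: Token[]): string[] {
    const lines: string[] = [];
    for (const t of utterance) {
        const words = t.value.trim().split(/\s+/).filter(Boolean);
        words.forEach((w, idx) => {
            if (t.type === 'Slot' && t.slot) {
                // first word of a slot gets B-, the following words get I-
                const tag = t.slot.toLocaleUpperCase().replace(/\s+/g, '');
                lines.push(`${w} ${idx === 0 ? 'B' : 'I'}-${tag}`);
            } else {
                // non-slot words are tagged O (outside)
                lines.push(`${w} O`);
            }
        });
    }
    return lines;
}

// hypothetical utterance: "please play some jazz", with slot "music genre"
const sample: Token[] = [
    { value: 'please play ', type: 'Text' },
    { value: 'some jazz', type: 'Slot', slot: 'music genre' }
];
console.log(toBIOLines(sample).join('\n'));
// please O
// play O
// some B-MUSICGENRE
// jazz I-MUSICGENRE
```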
