Language Interpretability Tool (LIT)
The Language Interpretability Tool (LIT) is a visual, interactive model-understanding tool for NLP models.
LIT is built to answer questions such as:
- What kind of examples does my model perform poorly on?
- Why did my model make this prediction? Can this prediction be attributed to adversarial behavior, or to undesirable priors in the training set?
- Does my model behave consistently if I change things like textual style, verb tense, or pronoun gender?
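The last question is often probed with simple counterfactual perturbations: edit one attribute of the input (here, pronoun gender) and check whether the prediction changes. A minimal sketch of the idea, using no LIT APIs (`model_predict` is a hypothetical stand-in for any text classifier, and case handling is omitted for brevity):

```python
# Hypothetical sketch: probe model consistency under pronoun swaps.
# `model_predict` stands in for any callable text-classification model.

PRONOUN_SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him"}

def swap_pronouns(text):
    """Return a counterfactual copy of `text` with pronoun genders swapped."""
    tokens = text.split()
    return " ".join(PRONOUN_SWAPS.get(t.lower(), t) for t in tokens)

def is_consistent(model_predict, text):
    """True if the prediction is unchanged on the pronoun-swapped input."""
    return model_predict(text) == model_predict(swap_pronouns(text))
```

A model whose output flips under such a swap may be relying on an undesirable gender prior rather than the task-relevant content.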