    from transformers import pipeline

    # Load a pipeline backed by a model already trained on question answering
    extractor = pipeline("question-answering",
                         model="distilbert-base-cased-distilled-squad")

    def get_emotion_cause(text, emotion):
        question = f"What is the reason the text conveys {emotion}?"
        # The model extracts the 'cause' span from the text
        result = extractor(question=question, context=text)
        return result['answer']

    # Example:
    text = "I am so anxious because my final exam is tomorrow and I haven't studied."
    print(get_emotion_cause(text, "anxiety"))

Recently I've been exploring ready-to-go models that can do question answering without any training data, and I came across the pipeline API, which loads a pre-trained model capable of doing question answering on the spot. I read through its documentation and followed the instructions, which led to the code above. However, pipeline seems to have moved away from the "question-answering" task.
Instead it shows this list of available tasks: "Unknown task question-answering, available tasks are ['any-to-any', 'audio-classification', 'automatic-speech-recognition', 'depth-estimation', 'document-question-answering', 'feature-extraction', 'fill-mask', 'image-classification', 'image-feature-extraction', 'image-segmentation', 'image-text-to-text', 'keypoint-matching', 'mask-generation', 'ner', 'object-detection', 'sentiment-analysis', 'table-question-answering', 'text-classification', 'text-generation', 'text-to-audio', 'text-to-speech', 'token-classification', 'video-classification', 'zero-shot-audio-classification', 'zero-shot-classification', 'zero-shot-image-classification', 'zero-shot-object-detection']"
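One workaround I'm considering is picking a task name from that list at runtime instead of hard-coding "question-answering" (I believe transformers exposes `get_supported_tasks` in `transformers.pipelines` for fetching the real list, but I haven't confirmed that on my version). A minimal, library-free sketch of the lookup, where the helper name is mine and the task list is just a slice of the error message above:

    def pick_qa_task(available, preferred=("question-answering",
                                           "table-question-answering")):
        """Return the first preferred task name present in `available`, else None."""
        for name in preferred:
            if name in available:
                return name
        return None

    # Subset of the task names from the error message
    available_tasks = ['document-question-answering',
                       'table-question-answering',
                       'text-generation']
    print(pick_qa_task(available_tasks))  # -> table-question-answering

That at least fails gracefully, but it doesn't tell me which of the available tasks is actually the right replacement for plain-text QA.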
Can anyone recommend what to do about this? Should I use one of the other available tasks that does question answering, or can anyone recommend another module that can do the same job?
P.S. I looked at document-question-answering, but it requires an image of the document, and I only work with text.