API for Amazon SageMaker ML Sentiment Analysis

Assume you manage a support department and want to automate some of the workload that comes from users requesting support through Twitter. You are probably already using a chatbot to send replies to users, but this is not enough: some support requests must be handled with special care by humans. How do you know when a tweet should be escalated and when it should not? The book Machine Learning for Business has an answer. I recommend reading this book; today's post is based on Chapter 4.

You can download the source code for Chapter 4 from the book's website. The model is trained on a sample dataset from Kaggle, Customer Support on Twitter, using a subset of the available data of around 500,000 Twitter messages. The book's authors converted and prepared the dataset so it can be fed into Amazon SageMaker (the prepared dataset can be downloaded together with the source code).

The model is trained in such a way that it doesn't simply check whether a tweet is positive or negative. The sentiment analysis is based on whether a tweet should be escalated or not; even a positive tweet may need escalation.

I followed the instructions from the book and was able to train and host the model. I created an AWS Lambda function and an API Gateway to be able to call the model from the outside (this part is not described in the book, but you can check my previous post for more details: Amazon SageMaker Model Endpoint Access from Oracle JET).

To test the trained model, I took two random tweets addressed to the Lufthansa account and passed them to the predict function. Since the model is exposed through an AWS Lambda function behind API Gateway, REST requests can be initiated from a tool such as Postman. A response of __label__1 means the tweet needs escalation; __label__0 means it doesn't. The second tweet is more direct and asks for immediate feedback, so it was labeled for escalation by our sentiment analysis model. The first tweet is a bit more abstract, and no escalation was suggested for it:
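Instead of Postman, the same REST call can be made from plain Python. A minimal sketch, assuming a JSON body with a `tweet` field; the URL is a placeholder, not the real invoke URL:

```python
import json
import urllib.request

# Placeholder URL; substitute your own API Gateway invoke URL.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/predict"

def classify_tweet(tweet):
    """POST a tweet to the API Gateway endpoint and return the parsed prediction."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"tweet": tweet}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The returned JSON would contain the predicted label (__label__0 or __label__1) together with its probability.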

This is the AWS Lambda function; it gets data from the request, calls the model endpoint, and returns the prediction:
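A minimal sketch of such a handler; the endpoint name and request shape are my assumptions, not the book's exact code:

```python
import json

# Assumed endpoint name; use the name of your deployed SageMaker endpoint.
ENDPOINT_NAME = "twitter-escalation-endpoint"

def lambda_handler(event, context):
    """Parse the API Gateway request, call the SageMaker endpoint, return the prediction."""
    # boto3 is imported lazily so the sketch can be read and tested without AWS credentials;
    # in the Lambda runtime it is available by default.
    import boto3

    body = json.loads(event["body"])
    payload = {"instances": [body["tweet"]]}  # assumed BlazingText inference format

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    prediction = json.loads(response["Body"].read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(prediction),
    }
```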

Let’s have a quick look at the training dataset. Around 20% of the tweets are marked for escalation. This shows there is no need for a 50/50 split in the training dataset. In real life, the number of escalations is probably less than half of all requests, and this realistic scenario is represented in the dataset:
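You can check the class balance yourself once the prepared data is loaded; a small self-contained sketch (the __label__ prefixes match the BlazingText training format used by the book's prepared dataset):

```python
from collections import Counter

def escalation_ratio(labels):
    """Return the fraction of tweets marked for escalation (__label__1)."""
    counts = Counter(labels)
    return counts["__label__1"] / sum(counts.values())

# Toy example: 1 escalation out of 5 tweets
sample = ["__label__0"] * 4 + ["__label__1"]
print(escalation_ratio(sample))  # → 0.2
```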

The ML model is built using the Amazon SageMaker BlazingText algorithm:
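A hedged sketch of what launching a BlazingText training job looks like with the SageMaker Python SDK (v2); the instance type, hyperparameter values, and S3 locations are my placeholders, not the book's exact settings:

```python
def train_blazingtext(role_arn, train_s3_uri, validation_s3_uri):
    """Launch a BlazingText supervised-classification training job on SageMaker."""
    # Imported lazily so the sketch can be read and tested without the SageMaker SDK installed.
    import sagemaker
    from sagemaker import image_uris
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    # Resolve the BlazingText container image for the current region
    container = image_uris.retrieve("blazingtext", session.boto_region_name)

    estimator = Estimator(
        container,
        role=role_arn,
        instance_count=1,
        instance_type="ml.c4.4xlarge",
        sagemaker_session=session,
    )
    # "supervised" mode turns BlazingText into a text classifier
    estimator.set_hyperparameters(mode="supervised", epochs=10, min_count=2)
    estimator.fit({"train": train_s3_uri, "validation": validation_s3_uri})
    return estimator
```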

Once the ML model is built, we deploy it to an endpoint. The predict function is invoked through the endpoint:
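Deployment and prediction can be sketched roughly like this (SageMaker SDK v2 style; the instance type and the {"instances": [...]} payload shape are assumptions on my side):

```python
def deploy_and_predict(estimator, tweets):
    """Deploy the trained estimator to a real-time endpoint and classify a list of tweets."""
    # Imported lazily so the sketch can be read and tested without the SageMaker SDK installed.
    from sagemaker.serializers import JSONSerializer
    from sagemaker.deserializers import JSONDeserializer

    predictor = estimator.deploy(
        initial_instance_count=1,
        instance_type="ml.t2.medium",
        serializer=JSONSerializer(),
        deserializer=JSONDeserializer(),
    )
    # BlazingText returns a label (__label__0 / __label__1) plus a probability per tweet
    return predictor.predict({"instances": tweets})
```

Remember to delete the endpoint when you are done testing, since a running endpoint is billed per hour.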

Originally published at andrejusb.blogspot.com on December 6, 2018.

TensorFlow Certified Developer | Machine Learning Expert | Oracle Wizard | Founder katanaml.io
