
Creating a Conversational AI Chatbot in 4 Easy Steps


Introduction to Conversational AI

Conversational AI chatbots represent the forefront of chatbot technology. These advanced bots utilize a combination of Natural Language Processing (NLP) and Artificial Intelligence (AI) to decipher user intent and deliver tailored responses. By training such models with proprietary datasets, businesses can streamline customer interactions, enabling tasks like filing insurance claims, upgrading service plans, and modifying flight details directly through chat interfaces.

In this guide, we will explore:

  • Utilizing a pre-trained PyTorch model for chatbot creation
  • Interacting with the model via FastAPI and Jinja
  • Deploying the custom model using Docker

Step 1: Utilize a Pre-trained Model

We’ll begin by installing essential Python packages to develop and test our chatbot. Run the following command in your terminal:

pip install --no-cache-dir transformers[torch] uvicorn fastapi jinja2 python-multipart

For our chatbot, we will leverage the pre-trained DialoGPT-large model from Microsoft, accessible through Hugging Face. To load this model, we can utilize the from_pretrained method for both the AutoTokenizer and AutoModelForCausalLM classes available in the transformers library.

To interact with the model, we must:

  1. Encode the user's message with the tokenizer.
  2. Generate the bot's response using the model.
  3. Decode the response with the tokenizer.

Copy the snippet below to your terminal or notebook cell for testing. If all the libraries were installed correctly, the output should be:

I'm good, you?
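The three steps above can be sketched with the standard transformers API as follows. This is a minimal version: the generation settings (`max_length=100`, greedy decoding) are illustrative defaults, not necessarily the article's exact choices.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-trained DialoGPT-large model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# 1. Encode the user's message, terminated by the end-of-sequence token
input_ids = tokenizer.encode("Hi. How are you?" + tokenizer.eos_token, return_tensors="pt")

# 2. Generate the bot's response; the output repeats the input as a prefix
output_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# 3. Decode only the newly generated tokens
reply = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(reply)
```

If you just want to experiment quickly, swapping in `microsoft/DialoGPT-small` downloads far less data and follows the same API.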

Now that we have confirmed the model's functionality, we will encapsulate the two code snippets into a reusable class. This class will feature methods for loading the model (load_model) and generating a reply based on user input (get_reply). The get_reply method has also been adjusted to maintain chat history and enhance conversational coherence by tweaking parameters like top_k, top_p, and temperature.
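A sketch of what such a class can look like is shown below. The class and method names come from the text; the specific `top_k`, `top_p`, and `temperature` values are illustrative, not the article's exact settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


class ChatBot:
    """Wraps DialoGPT with a rolling chat history for multi-turn conversations."""

    def __init__(self, model_name: str = "microsoft/DialoGPT-large"):
        self.model_name = model_name
        self.tokenizer = None
        self.model = None
        self.chat_history_ids = None  # accumulated conversation tokens

    def load_model(self):
        self.tokenizer = AutoTokenizer.from_pretrained(self.model_name)
        self.model = AutoModelForCausalLM.from_pretrained(self.model_name)

    def get_reply(self, message: str) -> str:
        # Encode the new user message and append it to the chat history
        new_ids = self.tokenizer.encode(message + self.tokenizer.eos_token, return_tensors="pt")
        if self.chat_history_ids is not None:
            input_ids = torch.cat([self.chat_history_ids, new_ids], dim=-1)
        else:
            input_ids = new_ids

        # Sample a reply; the sampling values here are illustrative
        self.chat_history_ids = self.model.generate(
            input_ids,
            max_length=1000,
            do_sample=True,
            top_k=50,
            top_p=0.95,
            temperature=0.8,
            pad_token_id=self.tokenizer.eos_token_id,
        )
        # Decode only the tokens generated after the input
        return self.tokenizer.decode(
            self.chat_history_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True
        )
```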

You can access the complete file here: chatbot/app/model.py.

Step 2: Construct the Backend

After finalizing the chatbot model, the next step is to expose it through standard HTTP methods by wrapping it in a FastAPI application. We will implement a single endpoint (/) to retrieve replies from the chatbot.

Note that we used the Form syntax in the FastAPI endpoint. This will be particularly useful when we develop the frontend. To test this snippet, execute the following code:

If everything is functioning properly, you should receive the following JSON response:

{"message": "I'm good, you?"}

You can find the complete file here: chatbot/app/main.py.

Step 3: Develop the Frontend

Our frontend will feature a straightforward dialog interface between the user and the bot, akin to popular messaging platforms like WhatsApp or Messenger. Rather than building this dialog from scratch, we will adapt an existing design created by pablocorezzola, which is available on bootsnipp.com under the MIT license. The user and bot avatar icons used in this project can be freely sourced from flaticon.com, with proper credit provided in the code and at the conclusion of this article.

To implement this, ensure you download the CSS file, as the HTML and JS files are not necessary. Structure your project directory as follows:

static/
    styles.css
    avatar_user.png
    avatar_bot.png
templates/
    index.html

Open index.html in your editor and insert the template code. Pay attention to these crucial elements:

  • The <link> tag in the <head>, which connects the styles.css file served by the FastAPI app.
  • The {{ chat|safe }} placeholder, which Jinja uses to inject the user and bot dialog dynamically.
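A skeleton of what index.html can look like is below. The chat-bubble markup itself comes from the bootsnipp design; the element and class names here are illustrative, and the form field is named `message` to match the FastAPI `Form` parameter.

```html
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>ChatBot</title>
    <!-- Stylesheet served by the FastAPI app -->
    <link rel="stylesheet" href="/static/styles.css">
</head>
<body>
    <ul class="chat">
        <!-- Jinja injects the user/bot dialog here -->
        {{ chat|safe }}
    </ul>
    <form action="/" method="post">
        <input type="text" name="message" placeholder="Type your message...">
        <button type="submit">Send</button>
    </form>
</body>
</html>
```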

Additionally, you will need to include the following snippet in your FastAPI app:

  • A StaticFiles mount that enables the app to serve files from the /static directory.
  • A Jinja2Templates instance that informs Jinja where to find the index.html template.

Finally, update your FastAPI endpoint to render the HTML dialog instead of sending a JSON response. This dialog will replace the {{ chat|safe }} placeholder in templates/index.html.

The build_html_chat() function, which you will need to define, will take three parameters:

  • is_me: A Boolean indicating whether the message came from the user (True) or the bot (False).
  • text: The content of the user or bot message.
  • time: The timestamp of when the message was processed.

This function consists of standard HTML code, which can be found at the following link (along with the complete frontend script): chatbot/app/html_utils.py, chatbot/app/templates/index.html.
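As an illustration, a minimal version of build_html_chat might look like this; the real class names and markup follow the bootsnipp template linked above.

```python
def build_html_chat(is_me: bool, text: str, time: str) -> str:
    """Render a single chat message as an HTML list item."""
    if is_me:
        avatar, side = "/static/avatar_user.png", "right"
    else:
        avatar, side = "/static/avatar_bot.png", "left"
    return (
        f'<li class="{side} clearfix">'
        f'<img class="avatar" src="{avatar}" alt="avatar">'
        f'<div class="chat-body clearfix"><p>{text}</p>'
        f'<span class="chat-time">{time}</span></div>'
        f"</li>"
    )
```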

Step 4: Package the Application with Docker

The final step involves containerizing our application using Docker, enabling deployment locally, on a dedicated server, or in the cloud with minimal additional configuration.

Define the final structure of your app as shown below. Create a blank Dockerfile at this stage:

app/
    static/
        styles.css
        avatar_user.png
        avatar_bot.png
    templates/
        index.html
    main.py
    model.py
    html_utils.py
Dockerfile

Your Dockerfile should include:

  • The Python libraries installed in Step 1.
  • The backend and frontend application files listed above.
  • An entrypoint file (main.py) that will execute when the container starts.
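A minimal Dockerfile satisfying those three points might look like this; the base image, Python version, and working directory are assumptions.

```dockerfile
FROM python:3.10-slim

WORKDIR /code

# Python libraries from Step 1
RUN pip install --no-cache-dir "transformers[torch]" uvicorn fastapi jinja2 python-multipart

# Backend and frontend files (main.py, model.py, html_utils.py, static/, templates/)
COPY app/ .

# Entrypoint: run the FastAPI app in main.py when the container starts
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```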

To build and run the container, enter the following commands in your terminal:

docker build . -t chatbot && docker run -p 8000:8000 chatbot

If the Docker build and run commands complete successfully, you should see:

INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

To test the Docker container, open a new tab in your preferred browser and navigate to http://localhost:8000/. Enter 'Hi. How are you?' in the text box and observe the bot's response.

Example of the finalized ChatBot app after the user inputs 'Hi. How are you?'

Conclusion

The era of intelligent chatbots is here. These sophisticated models empower various companies to offer a broad spectrum of services efficiently and at scale.

In this guide, I introduced the foundational concepts for developing your own chatbot, covering everything from model creation to backend and frontend development. This serves merely as a basic illustration of how to implement a simple conversational bot.

For further insights on deploying applications like this in a serverless manner, check out my previous articles "Build a serverless API with Amazon Lambda and API Gateway" and "Deploy a 'RemindMe' Reddit Bot Using AWS Lambda and EventBridge".

Join my mailing list to receive updates on new content as soon as it is published!

References

[1] Yizhe Zhang et al., "DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation" (2020), arXiv

[2] Hugging Face Team, "A State-of-the-Art Large-Scale Pretrained Response Generation Model (DialoGPT)"

[3] Freepik, Bot free icon

[4] Freepik, Man free icon

[5] A. Ribeiro, "Build a serverless API with Amazon Lambda and API Gateway"

[6] A. Ribeiro, "Deploy a 'RemindMe' Reddit Bot Using AWS Lambda and EventBridge"
