How to Connect GPT-3 to Your Chatbot

With SendPulse, you can connect GPT-3 to your chatbot to provide your users with even more proficient automated replies and help them solve additional tasks.

Let’s learn how to create an OpenAI account, connect it to your chatbot, find out which AI models you can use, and see how to train your bot to handle your business tasks.

Getting Started

GPT-3 (Generative Pre-trained Transformer 3) is a third-generation AI model developed by OpenAI. It is a large-scale neural network that you can use to generate text and code.

There are four primary models that can perform different tasks: analyzing text of various difficulty levels, answering questions, optimizing text for SEO and SMM tasks, categorizing text in tables, helping with brainstorming, editing and translating text, working with code and mathematical tasks, and supporting conversations on general or specific topics.

To set up GPT-3 to handle your business tasks in a chatbot, you need to choose a model and write prompts. For example, you can set the tone of replies, limit the list of questions or topics, and add information about your business or an example of what you want to receive in response.

Create an Account

Go to OpenAI, and create an account. Click Sign up, enter your email address, and click Continue, or continue with your Google or Microsoft account.

If you entered your email address, enter a password in the next window. You will receive a confirmation email in your inbox. Click Verify in the email, and enter your name and the name of your organization.

Enter your phone number, and a confirmation code will be sent to it via SMS. Enter the code, and log into your account.

Before choosing a phone number to use, check OpenAI’s list of supported countries and territories.

Copy Your API Key

In the upper-right corner, click your avatar, and select “Manage account.”

Go to the “API Keys” tab, click Create new secret key, and copy your key.

Save the key on your device because you will not be able to view or copy it on this page again. If you lose the key, you will need to generate a new one.
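
If you want to check that your new key works before adding it to SendPulse, you can send a quick test request to the OpenAI API. Below is a minimal sketch that assumes the legacy openai Python library (version 0.x) and the Completions endpoint; the model name and prompt are placeholders you can change.

    import openai  # pip install "openai<1.0" for the legacy 0.x interface

    openai.api_key = "sk-..."  # paste the secret key you copied; keep it private

    # Request a short completion to confirm the key is valid.
    response = openai.Completion.create(
        model="text-davinci-003",  # any GPT-3 model available to your account
        prompt="Say hello to a new chatbot user.",
        max_tokens=30,
    )

    print(response.choices[0].text.strip())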

Set Up the Integration

Enter Your API Key

Choose a bot, and go to “Bot settings” > “Integrations.” Next to “OpenAI,” click Enable.

Enter your key.

Choose a Model

Choose what AI model to use to generate bot replies.

Davinci

The most comprehensive but also the most expensive and slowest model, as it works with a large amount of data. It can perform the same tasks as the other models but requires fewer prompts in the “Bot instructions” field.

Use it for tasks that require deeper context analysis and more complex text or code generation. You can also use it to solve logic problems involving cause and effect.

Recommended use cases: analyzing complex intent and cause-and-effect problems, summarization, and explaining and generating code.

Curie

The model can analyze text, answer direct questions, and provide key points.

Use it for Q&A in chatbots. For example, in the “Bot instructions” field, you can enter the questions and answers you want the bot to use.

Recommended use cases: translation, complex classification, text sentiment analysis, and summarization.

Babbage

The model is good at picking up obvious text patterns and using them as references to generate new text.

Use it to classify information and assign categories. For creative applications, Babbage understands structure just well enough to create simple plots and titles.

Recommended use cases: moderate classification and semantic search classification.

Ada

The fastest and cheapest model. Use it when you need to parse text quickly without too much nuance.

Recommended use cases: parsing text, simple classification, address correction, and keywords.

You can see how to use these models in “Examples” and “OpenAI Cookbook,” and experiment with models in “Playground.”
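
Besides the Playground, you can also compare the models programmatically. The sketch below is a rough illustration that assumes the same legacy openai 0.x Python library: it sends one prompt to each base GPT-3 model so you can compare the replies, speed, and token usage.

    import openai

    openai.api_key = "sk-..."  # your secret key

    prompt = "Summarize in one sentence why a chatbot is useful for a small business."

    # Base GPT-3 models, from the most capable to the fastest and cheapest.
    for model in ["davinci", "curie", "babbage", "ada"]:
        response = openai.Completion.create(model=model, prompt=prompt, max_tokens=60)
        print(f"--- {model} ---")
        print(response.choices[0].text.strip())
        print("Tokens used:", response.usage.total_tokens)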

Add a Prompt to the Bot

GPT-3 models can perform various tasks, from complex text analysis to generating replies on various topics. To limit topics you don’t want your bot to discuss, set the reply tone, or add information about your company, you need to add prompts.

When creating a prompt, keep the following recommendations from OpenAI in mind:

  1. Show what you want to receive using examples. For example, if you want your model to sort a list of items in alphabetical order or classify paragraphs by sentiment, show an example request and the result format you expect. If you need the bot to answer in a certain way, provide examples of questions and answers (see the sample prompt after this list).
  2. Provide high-quality and accurate data. Check your examples: the model is usually smart enough to see through basic spelling mistakes and still reply, but it may also assume the mistakes are intentional, which can affect the reply. If you want your model to answer in a certain language, specify it. Also, try to use words instead of numerals. Remember that AI takes your prompts literally.
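
A sample prompt that follows these recommendations might look like the text below. It is an invented example for the “Bot instructions” field that shows the model the expected reply format for sentiment classification; replace the messages and labels with your own.

    Classify the sentiment of each customer message as Positive, Negative, or Neutral.

    Message: "I love how fast your support team replied!"
    Sentiment: Positive

    Message: "My order arrived two weeks late and the box was damaged."
    Sentiment: Negative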

Read more in the “Prompt design” and “Prompt Optimization” sections. Note that OpenAI has moderation rules; read more about them in the “Usage policies” and “Moderation” sections.

You can test models with different bot prompts on the Prompt Compare page.

In the “Bot instructions” field, provide your free-form prompts, following the recommendations.

AI analyzes text in all languages and can answer in a language you specify, but it interacts better in English. If you do not specify a language, the bot will answer in English by default.

If you have any questions about how to create bot prompts or possible scenarios, you can check existing discussions or start a new one in the OpenAI community.

Add a Token Number

A token is a piece of a word used in natural language processing. For English text, 1 token equals approximately 4 characters or 0.75 words.
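
If you want a more precise count than the rough rule of thumb, you can count tokens locally. The sketch below assumes OpenAI’s tiktoken library is installed; the p50k_base encoding matches the newer GPT-3 Davinci models, so counts for other models may differ slightly.

    # pip install tiktoken
    import tiktoken

    text = "How do I connect GPT-3 to my chatbot?"

    # Rough estimate using the ~4 characters per token rule for English text.
    print("Estimated tokens:", round(len(text) / 4))

    # Exact count for a specific encoding.
    encoding = tiktoken.get_encoding("p50k_base")
    print("Exact tokens:", len(encoding.encode(text)))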

For each request, the token count includes the tokens in the following places:

  • the “Bot instructions” field;
  • the last few messages in the chat with the bot;
  • the current question a user asks the bot;
  • the current answer the bot gives the user.

Read more about tokens in OpenAI’s “What are tokens and how to count them” article and about its pricing plans in the “Pricing” section.

When you first register, OpenAI gives you an $18 credit that is valid for 3 months. The credit is deducted as you use tokens. Token fees vary depending on the model: for example, the Davinci model in a live environment costs $0.1200 per 1,000 tokens, while the Ada model in a test environment costs $0.0004 per 1,000 tokens.
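
As a rough illustration of how these rates translate into cost, here is a short calculation sketch using the per-1,000-token prices quoted above; check OpenAI’s Pricing page for the current figures.

    # Cost = (tokens used / 1,000) * price per 1,000 tokens
    tokens_used = 700  # for example, one user question plus the bot's reply
    price_per_1k = {"davinci": 0.1200, "ada": 0.0004}  # USD, rates quoted above

    for model, price in price_per_1k.items():
        print(f"{model}: ${tokens_used / 1000 * price:.4f}")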

In the “Maximum number of tokens in response” field, specify a number. For the Davinci model, you can enter up to 2,048 tokens, and for all other models, up to 1,024 tokens.

Click Save, and you can test your bot.

To see how many tokens you have left, log in to your OpenAI account, and go to the “Usage” tab.

To check your token usage history, scroll the page down to the “Daily usage breakdown (UTC)” section. You can see the whole history or filter it by specific date or team member.

Usage Features

When you integrate with OpenAI, your “Standard reply” flow will be disabled for your chatbot. Therefore, you need to make sure users know that your bot can reply to them. For example, add your bot communication guidelines to a welcome or trigger flow that you add to your menu.

When using OpenAI with chatbots, note that the AI uses its internal knowledge base: it processes users’ requests and returns results directly in the chat with a client.

The AI does not have long-term memory: when processing a request, it only takes the last few user messages into account. We recommend monitoring your bot’s conversations with customers so you can adjust its prompts.

The AI does not integrate with additional applications and does not process client data from your bot’s audience. For these features, add a menu, or create commands to run flows where you can add the “API request,” “User input,” and “Action” elements.

Use Cases

Let's see various examples of how you can use a chatbot with an OpenAI integration. You can see more examples on the Examples page.

Business Q&A

If you have a bot for a feature-loaded service and have collected an FAQ database, you can teach your bot to give answers when requested.

For our first example, we used the Davinci model with 2,048 tokens. We added brief information about the company, its business, and its contact details. The bot can develop a dialog based on the data it receives.

For the second example, we used the Davinci model with 700 tokens. In the “Bot instructions” field, we added basic questions and their answers. Users don’t have to ask these questions verbatim; the AI will know enough about your business to answer naturally.
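
A “Bot instructions” prompt for this kind of Q&A bot might look something like the sample below. The company name, contact address, and answers are invented placeholders; replace them with your own data.

    You are a support bot for Acme Coffee Roasters, an online coffee shop.
    Answer politely and only discuss the company and its products. If you do not
    know the answer, suggest writing to support@acme.example.

    Q: How long does delivery take?
    A: We deliver within 2-3 business days across the country.

    Q: Can I return an order?
    A: Yes, you can return unopened packages within 14 days of delivery.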

Encyclopedia Q&A

Let’s say your bot has an entertaining and informational nature. It does not need to answer business-specific questions but simply provides facts about, for example, history or any other topic from the Internet.

In this example, we used the Davinci model with 700 tokens. We only specified the language we want the bot to answer in and the bot’s name. If you don’t limit the bot to specific topics, it will answer questions about any topic.

Solving Life Issues

If you have a helper bot, users can describe their life situation and ask for advice on what to do.

In this example, we used the Davinci model with 700 tokens. In the “Bot instructions” field, we added specific prompts only for critical situations. If such a situation comes up, the bot tells users who to contact.

Solving Math Tasks

If you have a student helper bot, users can submit their problem and indicate what needs to be solved.

In this example, we used the Davinci model with 700 tokens. In the “Bot instructions” field, we added prompts that prevent the bot from answering non-math-related questions.

Text Optimization

If you have an SMM or SEO tool, your bot can help users choose keywords or proofread and optimize the entered text for SEO.

In this example, we used the Ada model with 1,024 tokens. In the “Bot instructions” field, we added prompts that allow the bot to optimize text for SEO and search for keywords. If users do not run a command, the bot reminds them how to work with it.

Code Explanation

If you teach programming, your bot can help users make sense of code snippets or errors and explain how a particular element or function works. Users can also ask your bot to generate code from natural language descriptions.

In this example, we used the Davinci model with 700 tokens. In the “Bot instructions” field, we added prompts that allow the bot to explain code or parts of code. We also set a tone so that the bot responds in a sarcastic manner using simple words.
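
For this use case, the “Bot instructions” field might contain something like the invented sample below, which restricts the bot to code-related questions and sets the tone.

    You are a programming tutor bot. Only answer questions about code, errors,
    and programming concepts; politely refuse anything else.
    Explain code in simple words, step by step, and add a light touch of sarcasm.
    If the user sends a code snippet, describe what each part does.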

Text Generation

If you have a creative marketing agency, your bot can offer to generate text for an advertising campaign or come up with a brand name, and so on.

In this example, we used the Davinci model with 1,024 tokens. In the “Bot instructions” field, we added the bot’s name and wrote that it works at a marketing agency, helping clients create advertising slogans and plans.
