How to connect ChatGPT from OpenAI to your chatbot
With SendPulse, you can connect the GPT model from OpenAI to your chatbot to provide your users with even more proficient automated replies and help them solve additional tasks.
Let’s learn how to create an OpenAI account and connect it to your chatbot, which AI models you can use, and how to train your bot to solve your business tasks.
To use ChatGPT models, top up your OpenAI account. The pricing varies based on your selected model and the number of tokens used. Read more: Add a token number.
Getting started
GPT (Generative Pre-trained Transformer) is an AI model developed by the OpenAI company. It is a large-scale neural network you can use to generate text and code.
The primary models can perform different tasks: analyze text materials of various difficulty levels, provide answers to questions, optimize text for SEO and SMM tasks, categorize text in tables, help with brainstorming, edit and translate text, work with code and mathematical tasks, and support conversations on any or specific topics.
To set up GPT to perform your business tasks via a chatbot, you need to choose a model and write prompts — for example, you can add reply sentiment, limit the list of questions or topics, and add additional information about your business or an example of what you want to receive in response.
Create an account
Go to OpenAI > Products > Documentation, and create an account. Click Sign up in the upper right corner, enter your email address, and click Continue, or continue with your Google or Microsoft account.
If you entered your email address, enter a password in the next window. You will receive a confirmation email in your inbox. Click Verify in the email, and enter your name and the name of your organization.
Enter your phone number, and a confirmation code will be sent to it via SMS. Enter the code, and log into your account.
Before choosing a phone number to use, check OpenAI’s list of supported countries and territories.
Copy your API key
Once you have logged in to your account, click the settings icon in the upper right corner.
In the left panel, go to the Your profile section > the User API keys tab.
You can create multiple API keys for projects in your OpenAI account. It helps manage your team and enhance data security. An account owner can generate keys in all projects. You do not need to create any separate projects — all data will be saved in the Default project automatically. Read more about project creation options in the OpenAI documentation.
Click View project secret Key.
Click Create new secret key.
In the modal window, select an owner, and configure your key parameters.
You | This API key is tied to your user and can make requests to your selected project. If you leave the project, this key will be disabled. Enter your key name, and select a project and permissions. We recommend giving full access to your project. Read more in the OpenAI documentation. |
Service account | A new bot member (service account) will be added to your project, and an API key will be created. You can implement this feature if you use multiple OpenAI tools. Enter your service ID, and select a project. |
Click Create secret key, and copy it in the next modal window.
Save the key on your device — you cannot view it on this page a second time. If you lose the key, you will need to generate a new one.
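Once the integration is enabled, SendPulse sends requests to OpenAI on your behalf, so you only need to paste the key into SendPulse. If you want to verify that your key works before connecting it, here is a minimal sketch, assuming the official openai Python package (v1+); the key value is a placeholder:

```python
# A minimal sketch for verifying an API key, assuming the official
# "openai" Python package (v1+). The key value is a placeholder.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # paste the secret key you copied

# A single test request; SendPulse performs similar calls for you
# once the integration is enabled.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```

If the key is valid and your account has credit, the script prints a short reply; otherwise, the library raises an authentication or quota error.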
Set up the integration
Enter your API key
Choose a bot, and go to Bot settings > Integrations. Next to OpenAI, click Enable.
Select a connection method.
Use the token from the account settings | If you use one OpenAI account for different SendPulse services, including the chatbot builder, you can add your token to your general account settings. To add a token, go to Account settings > the API tab. In the Integrations > OpenAI & ChatGPT section, click Connect. Enter your key, and click Save. Once you add a key, you can select the Use the token from the account settings option, and connect OpenAI to your chatbot. |
Use a separate token for this bot | If you need to use a dedicated OpenAI account for your current chatbot, select this option, and in the next field, enter your key. |
Choose a model
Select an AI model to generate bot replies.
ChatGPT (gpt-3.5-turbo) | A model trained on human conversation data. Able to generate human-like responses with a more natural tone than other models and personalize its replies based on the topic and previous user messages. |
ChatGPT (gpt-3.5-turbo-16k) | A model with the same capabilities as the gpt-3.5-turbo model but with 4 times the context length. |
ChatGPT (gpt-3.5-turbo-16k-instruct) | |
Custom fine-tuned model | A base model that you can train on your own data using OpenAI’s fine-tuning tools to create your own original model. To connect a custom model to your SendPulse chatbot, specify its unique model name from the OpenAI library. |
Custom fine-tuned model (Instruct) | |
GPT-4 | A model of the ChatGPT family designed to facilitate multi-turn conversations. The model is also useful for single-turn tasks without conversations. Available only to users who have received access to the model from OpenAI. |
GPT-4 Turbo (gpt-4-1106-preview) | The GPT-4 Turbo models are fast, have information on events up until April 2023, and can handle large volumes of text. |
GPT-4 (gpt-4-32k) | This is the same model as GPT-4 but with a larger context window. |
GPT-4o | The fastest model of the GPT-4 family with the largest number of tokens. It can be used to make direct requests to the OpenAI API and perform image analysis (you will find the corresponding flow among the SendPulse templates). It also excels at processing non-English texts. OpenAI recommends using it over other GPT-4 models. |
You can see how to use these models in Examples and OpenAI Cookbook, and experiment with models in Playground.
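If you connect a custom fine-tuned model, the name you enter in SendPulse must match the model identifier in your OpenAI account. A small sketch, assuming the same openai Python package, for checking which model names your key can access:

```python
# Sketch: list the model names available to your key, so you can copy
# the exact identifier of a fine-tuned model into SendPulse.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder key

for model in client.models.list():
    # Fine-tuned models usually start with "ft:", e.g. "ft:gpt-3.5-turbo:...".
    print(model.id)
```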
Add a prompt to the bot
GPT models can perform various tasks ranging from complex text analysis to generating replies on various topics. Add prompts to exclude topics you don’t want your bot to discuss, tailor your bot to a specific character or person, or add reply sentiment or information about your company.
When creating a prompt, keep the following recommendations in mind:
- Add as much context as possible in each case. List all the bot interaction instructions: specify which users will contact your bot and when, which details should be included in bot replies, and which topics should be avoided. Give your model the task of generating several results so that you can compare them and specify the one that suits your needs best.
- Show what you want to receive using examples. For example, if you want your model to sort a list of items alphabetically or classify paragraphs by sentiment, list your example queries, expected result format, or the effect you want to achieve. If you need the bot to reply in a certain way, provide examples of questions and answers.
- Provide high-quality and accurate data. Check your examples — your model is usually smart enough to identify basic spelling mistakes, but it may also assume they are intentional, and mistakes can affect the reply. If you want your model to reply in a certain language, specify it. Also, try to use words instead of figures. Remember that AI takes your prompts literally.
- Personify the model. To help your model reply how a certain person or character would, describe what they do, what characteristics they possess, their tone of voice, lexicon, and other aspects of your virtual assistant's persona.
- Test the result, and update your prompt. After setting up the prompts, make sure to test the result, review chats with users and, if necessary, adjust the bot's instructions by adding or removing details. Train the model until you get the results you want.
Read more: the Prompt design and Prompt Optimization sections. Note that OpenAI has moderation rules — read more about them in the Usage policies and Moderation sections.
You can test models with different bot prompts on the Prompt Compare page.
If you need inspiration for context prompts, take a look at the following examples: 160 ChatGPT Prompts You Can't Miss To Try Out In 2023. You can also use and add your own prompts in the GitHub repository: Awesome ChatGPT Prompts.
In the Bot instructions field, provide your free-form prompts, following the recommendations.
Note: models have different context length limits, which refer to the number of tokens you can use. Learn more in the model overview table.
AI analyzes text in all languages and can answer in a language you specify, but it interacts better in English. If you do not specify a language, the bot will answer in English by default.
If you have any questions about how to create bot prompts or possible scenarios, you can check existing discussions or start a new one in the OpenAI community.
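Conceptually, instructions of this kind are passed to the model as a system message, followed by the subscriber’s question. The sketch below illustrates this idea with the same openai Python package; the prompt wording and key are placeholders, and SendPulse builds the actual request for you:

```python
# Sketch: bot instructions sent as a system message, the subscriber's
# question as a user message. The prompt wording is illustrative.
from openai import OpenAI

client = OpenAI(api_key="sk-...")

bot_instructions = (
    "You are a polite assistant for an online stationery store. "
    "Answer only questions about office supplies. "
    "If a question is off-topic, reply: 'Sorry, I have no information about it.'"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": bot_instructions},
        {"role": "user", "content": "Do you sell A4 paper?"},
    ],
)
print(response.choices[0].message.content)
```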
Add a token number
A token is a part of a word used for natural language processing. For English text, 1 token equals approximately 4 characters or 0.75 words. For other languages and more accurate calculations, you can use the OpenAI calculator.
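If you want to estimate token usage yourself rather than rely on the rule of thumb, you can use OpenAI’s tiktoken library (a separate Python package, installed with pip install tiktoken). A short sketch:

```python
# Sketch: estimate the token count of a prompt with the "tiktoken"
# package before pasting it into the Bot instructions field.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
prompt = "You are a bot assistant to the 'Paper and pencil' company."
tokens = encoding.encode(prompt)
print(len(tokens), "tokens")  # roughly len(prompt) / 4 for English text
```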
In the Maximum number of tokens in response field, specify a number from the last column.
Model | The max number of characters in the "Bot instructions" field* | The max number of tokens in a reply to a subscriber* |
ChatGPT (gpt-3.5-turbo) | up to 4096 | up to 2048 |
ChatGPT (gpt-3.5-turbo-16k) | up to 16384 | up to 8192 |
ChatGPT (gpt-3.5-turbo-16k-instruct) | ||
Custom fine-tuned model | up to 2048 | up to 1024 |
Custom fine-tuned model (Instruct) | ||
GPT-4 | up to 8192 | up to 4096 |
GPT-4 Turbo (gpt-4-1106-preview) | up to 256000 | up to 128000 |
GPT-4 gpt-4-32k | up to 65536 | up to 32768 |
GPT-4o | up to 256000 | up to 128000 |
*For each request, tokens are counted in the following places: the Bot instructions field, the last messages in a chat with the bot, the current question the user asks the bot, and the current answer the bot provides the user with.
If you use the maximum number of tokens in the Bot instructions field, your total token count may exceed the OpenAI limit. As a result, your request will end with an error, and your subscriber won't receive a reply.
In such cases, the error appears as a notification in the upper right corner of your SendPulse account: "OpenAI: This model's maximum context length is 4097 tokens, however you requested 4131 tokens (2083 in your prompt; 2048 for the completion). Please reduce your prompt or completion length." To resolve it, lower the number of tokens in the Bot instructions field or the number of tokens in a reply.
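In other words, the prompt tokens and the reply tokens must fit into the model’s context window together. A rough sketch of this budgeting check, reusing tiktoken; the 4,097-token limit matches the error above, and other models have larger windows:

```python
# Sketch: check that instructions + reply fit the model's context window
# before sending a request. The limit value matches the error text above.
import tiktoken

MODEL_CONTEXT_LIMIT = 4097          # gpt-3.5-turbo, per the error message
max_reply_tokens = 2048             # "Maximum number of tokens in response"

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
bot_instructions = "..."            # your Bot instructions text goes here
prompt_tokens = len(encoding.encode(bot_instructions))

if prompt_tokens + max_reply_tokens > MODEL_CONTEXT_LIMIT:
    print("Reduce the prompt or lower the reply token limit")
```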
Upon first registration, OpenAI grants $18 of free credit valid for 3 months. This credit is spent as you use tokens.
Note: token fees vary depending on the model used. For example, the gpt-3.5-turbo-16k model costs twice as much as the gpt-3.5-turbo model because it uses more context. Read more: What are tokens and how to count them, and see OpenAI’s pricing plans in the Pricing section.
To see how many tokens you have left, log in to your OpenAI account, and go to the Usage tab.
To check your token usage history, scroll the page down to the Daily usage breakdown (UTC) section. You can see the whole history or filter it by specific date or team member.
Set up the temperature
Choose a temperature indicator value from 0 to 2.
Temperature is a parameter that affects how abstract or precise the responses are. For example, if you ask a question, the output will vary according to the selected temperature. A higher temperature (closer to 2, for example, 1.3) makes the answers more random and creative. A lower temperature (closer to 0, for example, 0.2) makes them more focused and predictable while retaining the same meaning.
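To see the effect yourself, you can send the same question twice with different temperature values. This is only a sketch of the underlying request, not the exact call SendPulse builds; the question text and key are placeholders:

```python
# Sketch: the same question with a low and a high temperature.
from openai import OpenAI

client = OpenAI(api_key="sk-...")
question = "Suggest a slogan for a stationery store."

for temperature in (0.2, 1.3):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        messages=[{"role": "user", "content": question}],
    )
    # 0.2 tends to return a safe, predictable slogan; 1.3 a more unexpected one.
    print(temperature, "->", response.choices[0].message.content)
```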
Set conversation context size
In the Conversation context size field, type in the number of recent messages from your subscriber and chatbot you want to include in your AI request as conversation context.
This feature is available for all models except ChatGPT 3.5 Instruct and Fine-tuned Instruct.
Request costs increase with the number of messages specified.
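Conceptually, the context size controls how many of the latest messages are appended to each request. A simplified sketch of that idea (SendPulse assembles the actual request for you; the messages and key are placeholders):

```python
# Sketch: include only the last N messages of the conversation
# as context for the next request.
from openai import OpenAI

client = OpenAI(api_key="sk-...")

conversation = [
    {"role": "user", "content": "Do you have blue notebooks?"},
    {"role": "assistant", "content": "Let us check and get back to you."},
    {"role": "user", "content": "What about red ones?"},
]
context_size = 2  # value from the "Conversation context size" field

messages = (
    [{"role": "system", "content": "You are a stationery store assistant."}]
    + conversation[-context_size:]
)
response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```

The more messages you include, the more tokens each request consumes, which is why request costs grow with the context size.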
Limit the number of subscriber requests
To prevent subscribers from submitting an excessive number of paid requests to your chatbot, you can set limits.
In the Limiting AI bot triggering to one contact field, specify the number of requests and the number of days, hours, or minutes within which users can send them.
By default, a subscriber can send 100 requests daily.
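The idea behind this limit is simple per-contact counting within a time window. SendPulse enforces it for you; the following sketch only illustrates the logic with assumed values from the settings:

```python
# Sketch: allow at most LIMIT requests per contact within WINDOW.
from collections import defaultdict, deque
from datetime import datetime, timedelta

LIMIT = 100                      # requests per contact (default value)
WINDOW = timedelta(days=1)       # time window from the settings

requests_log = defaultdict(deque)

def is_allowed(contact_id: str, now: datetime) -> bool:
    log = requests_log[contact_id]
    # Drop requests that are older than the window.
    while log and now - log[0] > WINDOW:
        log.popleft()
    if len(log) >= LIMIT:
        return False
    log.append(now)
    return True
```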
Once you fill in the fields, click Save and test your bot.
Read also: How to set up voice recognition of subscribers’ messages in your chatbot and How to add an image generator to your chatbot using OpenAI tools.
Usage features
When you integrate with OpenAI, your Standard reply flow will be disabled for your chatbot. Therefore, you need to make sure users know that your bot can reply to them. For example, add your bot communication guidelines to a welcome or trigger flow that you add to your menu.
When using OpenAI with chatbots, note that AI uses an internal information library — it processes users’ requests and gives results directly in the chat with a client.
AI does not have long-term memory. When processing a request, only the last few user messages are taken into account. We recommend monitoring your bot's conversations with customers to correct its prompts.
AI does not integrate with additional applications and does not process client data in your bot's audience. For these features, add a menu, or create commands to run flows where you can add the API request, User input, and Action elements.
You can add an OpenAI bot to your Telegram group. You will be able to trigger this bot using @mentions, /commands, keywords, and OpenAI requests if this integration is enabled.
Make sure your bot has admin rights in your Telegram group, including permissions to assign other admins.
Read more: How to create posts in a Telegram channel or group via your SendPulse chatbot.
Use cases
Let's see various examples of how you can use a chatbot with an OpenAI integration. You can see more examples on the Examples page.
If you have a bot for a feature-loaded service, you can train it to independently manage conversations with clients and generate answers to questions about your products and working hours.
For this example, the ChatGPT model used 2048 tokens. We added short info about the company, its business, and contacts. The bot can develop a dialog based on the data received.
Prompt example: You are a bot assistant to the "Paper and pencil" company. Our company sells stationery and office supplies. A lot of items are in stock, but it is better to clarify by phone: (856) 267-5442.
Store address: 4472 Central Avenue, Newark.
Working hours: 9:00 AM – 7 PM.
Use the following text to answer questions about product availability and price: "Let us check and get back to you."
Use the following text to answer questions not related to office supplies: "Sorry, I have no information about it."
For the second example, we collected a database of frequently asked questions and answers to limit the scope of bot answers and provide accurate information about our services.
For this example, the model used 700 tokens. In the Bot instructions field, we added basic questions and answers to them. Users don’t have to ask these questions verbatim, and the AI will know enough about your business to be able to answer naturally.
Prompt example: The bot analyzes and provides information only from the given Q&A list.
Question: What is a bio link page?
Answer: A bio link page is a one-page site that can help you promote your brand on social media. Create a SendPulse account and build a bio link page using the landing page builder. Read more: https://sendpulse.com/en/features/landing-page-builder
Question: What elements can be added to my bio link page?
Answer: Text, Cover, Gallery, Button, Subscription form, Payments. Read more: https://sendpulse.com/knowledge-base/landing-page/builder/create-landing-page#adding-elements
Last Updated: 04.07.2024