Open AI Chat Module Parameter Breakdown

Walkthrough of the different fields in the OpenAI Chat Completion Module

Dissecting The OpenAI Chat Completion Module

Yesterday, I gave you a video walkthrough of my SMM Automation with Make.com.

Today, I thought it would be beneficial to go deeper into the settings I am using on an individual OpenAI module: the Chat Completion that creates the Twitter tweets in the automation.

  1. My OpenAI preconfigured API Connection

  2. The completion method. Currently, the options are Create a Prompt Completion or Create a Chat Completion. Prompt Completions only support GPT-3.5, while Chat Completions support GPT-3.5, GPT-3.5 Turbo, and GPT-4. Chat Completions also seem to be more creative.

  3. The ChatGPT model. In this case, it is set to gpt-4-0314 (the GPT-4 March 14th release). Note: GPT-4 is the most expensive model at $0.03 per 1K tokens. A token is about 4 characters on average.
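The "about 4 characters per token" rule of thumb above is enough for a quick back-of-the-envelope cost check. Here is a minimal sketch of that arithmetic; the $0.03 per 1K tokens figure is the GPT-4 price mentioned above, so check OpenAI's pricing page before relying on it.

```python
def estimate_cost(text: str, price_per_1k_tokens: float = 0.03) -> float:
    """Approximate the cost of a piece of text in USD.

    Uses the rough rule of thumb that one token is about 4 characters.
    For exact counts you would use OpenAI's tokenizer instead.
    """
    approx_tokens = len(text) / 4
    return approx_tokens / 1000 * price_per_1k_tokens

# 4,000 characters is roughly 1,000 tokens:
print(estimate_cost("a" * 4000))  # -> 0.03
```

Note this only estimates one side of the request; GPT-4 bills prompt and completion tokens separately.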

The Messages Section

The next section is the Messages section, which is where we add the commands we want to send to OpenAI.

  1. The Role: This can be System, User, or Assistant. Here we use System to set the role ChatGPT should play based on what it is creating.

  2. The Message: Here we actually describe the role being played, based on what we want generated. I have also tested adding the formatting instructions in this section, but they worked much better in the next section.
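Under the hood, the Make.com module builds these fields into the messages array the Chat Completion API expects. A minimal sketch of a system message, with placeholder wording rather than the exact prompt from the automation:

```python
# The first message uses the "system" role to tell the model
# what role to play. The content here is illustrative only.
messages = [
    {
        "role": "system",
        "content": (
            "You are a social media manager who writes short, "
            "engaging tweets for a technology newsletter."
        ),
    },
]

print(messages[0]["role"])  # -> system
```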

Message Section Continued

  1. In the second message, we use the User role to send the more detailed prompt of what we want created, along with any instructions on how it should be formatted, which improves our ability to work with the output later in the automation.

  2. The actual prompt text. We start with the prompt that produces the desired output, followed by the formatting instructions. In this case: separating the tweets with a "~" so they can be parsed and added as individual rows in Google Sheets later on; omitting any kind of numbering or text prefix so the tweets can be used as-is without a lot of manual text formatting; and lastly, giving it an example of what the output should look like.
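The steps above can be sketched in code: a user message with formatting instructions, then the parsing step that splits the "~"-separated output into one string per Google Sheets row. The message wording and the model output below are illustrative stand-ins, not the automation's actual prompt or a real API response.

```python
# Hypothetical user message with formatting instructions:
user_message = {
    "role": "user",
    "content": (
        "Write 3 tweets about smart home automation. "
        "Separate each tweet with a ~ character. "
        "Do not number or prefix the tweets."
    ),
}

# Hypothetical model output that follows those instructions:
raw_output = (
    "Smart plugs are the easiest way to start automating your home.~"
    "Your thermostat already knows your schedule. Let it act on it.~"
    "Automation isn't lazy. It's buying back your time."
)

# Splitting on "~" and stripping whitespace yields clean,
# prefix-free tweets ready to insert as individual rows.
tweets = [t.strip() for t in raw_output.split("~") if t.strip()]
print(len(tweets))  # -> 3
```

This is why omitting numbering matters: a "1." prefix would survive the split and need manual cleanup in every row.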

Request Parameters

  1. Maximum Number of Tokens: This is the maximum number of tokens the completion output can use. If you request several versions of output but set this too low, the model will either not generate the requested amount or the request will fail.

  2. Temperature: A 0-1 value that determines how creative the output should be: 0 is not creative, 1 is very creative.

  3. Top P: An alternative to Temperature that uses a numeric probability threshold to decide whether something should be included in the output. I have not tested this option.

  4. I have not used this parameter yet.
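For reference, here is how these parameters fit together in the underlying API request that the Make.com module builds for you. The field names match the OpenAI Chat Completion API; the values are the kind of settings discussed above, not the author's exact configuration.

```python
# Sketch of a Chat Completion request body with the parameters
# covered in this section. Values are illustrative.
payload = {
    "model": "gpt-4-0314",
    "messages": [
        {"role": "user", "content": "Write 3 tweets about coffee."},
    ],
    "max_tokens": 500,    # cap on completion length; too low truncates output
    "temperature": 0.8,   # 0 = predictable, 1 = very creative
    # "top_p": 0.9,       # alternative to temperature; adjust one or the other
}

print(payload["model"])  # -> gpt-4-0314
```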

Final Section

  1. Echo: This determines whether or not to include the prompt in the response. You would not normally want this, and it adds to your token usage.

  2. Other Input Parameters: I have not looked into this parameter yet or how it may be used. If you have, please send me an email; I would love to know how you've used it.

That’s it, easy peasy right?

Let me know if you have any questions as you set up your own automations. Besides the prompt, the most important parts are the temperature and the token limit for the response.

Until tomorrow,
Kevin Davis
760-835-8347