The ChatGPT API allows developers to integrate the capabilities of ChatGPT into their applications easily. OpenAI provides the API, which is accessed with an API key: you send a prompt to an endpoint and receive a generated response. With its ability to understand and generate human-like text, the ChatGPT API is a valuable resource for any developer working with natural language data.

This guide will walk you through the simple steps to interact with the ChatGPT Chat API using Postman and PowerShell.

Prerequisites

You can use any programming language you want; in the end, it all comes down to JSON and a few HTTPS calls. For this post, I will use Postman and PowerShell.

  • API Key: You can obtain an API key from OpenAI for free at beta.openai.com.
  • Network Access: Neither the OpenAI endpoints nor ChatGPT work offline, so you must be online to use the service.
  • API documentation: Contains information about the structure of requests and responses. Check it out on the OpenAI website.
  • Familiarity with JSON: If you don’t know JSON, that’s still OK; you can follow along, but a basic knowledge of the JSON structure is recommended.
  • PowerShell 5.1 or 7: Available on almost all computers. If not, download it from the Microsoft site.
  • Postman: This is optional, but we will use it to test the request and see the returned structure. The application is free and easy to use. Download it from the Postman website.

You can read more about ChatGPT v3 and how ChatGPT can write your PowerShell script in this post.

On the 1st of March 2023, OpenAI released a new model, gpt-3.5-turbo, which is optimized for chat but works well for traditional completion tasks as well.

Understanding the ChatGPT API Model And Options

Before using the ChatGPT API, we need to understand its options and what must be sent in the request.

You can play with all the options without any tool: go to the OpenAI Playground and start exploring.

– Model [Required]

ChatGPT offers multiple models. Each model has its own features, strengths, and use cases, and you need to select one when building the request. The models are:

  • gpt-3.5-turbo (released 1 March 2023): Similar in capability to the other models but much cheaper, at $0.002 per 1K tokens. It is also well suited for non-chat use cases.
  • text-davinci-003: The most capable GPT-3 model. It can do any task the other models can do, often with higher quality, longer output, and better instruction-following. It also supports inserting completions within text.
  • text-curie-001: Very capable, but faster and lower cost than Davinci.
  • text-babbage-001: Capable of straightforward tasks, very fast, and lower cost.
  • text-ada-001: Capable of very simple tasks, usually the fastest model in the GPT-3 series, and the lowest cost.

Usually, text-davinci-003 is the most commonly used model.
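Note that gpt-3.5-turbo is served by a different endpoint (https://api.openai.com/v1/chat/completions) and expects a messages array instead of a prompt; the rest of this post uses the completions endpoint. As a rough sketch only, a chat-endpoint call in PowerShell could look like this (assuming the API key is stored in $env:OpenAIKey, as we do later in this post):

# Minimal, illustrative sketch of a chat-endpoint request for gpt-3.5-turbo
$ChatBody = @{
    model    = "gpt-3.5-turbo"
    messages = @(
        @{ role = "user"; content = "What is the capital of the moon?" }
    )
} | ConvertTo-Json -Depth 5   # -Depth keeps the nested messages array intact

$ChatResponse = Invoke-RestMethod -Method Post -Uri "https://api.openai.com/v1/chat/completions" `
    -Headers @{ Authorization = "Bearer $($env:OpenAIKey)" } `
    -ContentType "application/json" -Body $ChatBody

# The chat endpoint returns the answer under message.content instead of text
$ChatResponse.choices[0].message.content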

– User Input (prompt) [Required]

The Prompt is the input text the model will use to generate the response. This text is what the user will pass to the GPT service.

The type is a string.

– Control the Creativity (temperature) [Optional]

The Temperature parameter controls the creativity of the generated text. The temperature value is used to adjust the probability distribution of the predicted words. A higher temperature value results in more creative output, and a lower temperature value results in more predictable output.

The value ranges from 0 to 1, and a value between roughly 0.45 and 1 is a typical starting point. It’s not recommended to adjust both temperature and the top_p parameter at the same time.
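For example, with temperature set to 0 the model returns nearly the same, most-likely wording every time, which suits factual questions, while a value around 0.9 produces noticeably more varied phrasing.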

– Maximum Length (max_tokens) [Optional]

This parameter controls the maximum number of tokens that can be generated in a single output sequence. The value can go up to 4096 tokens. Keep in mind that each token represents around four English characters. This parameter controls the returned text length.

The value ranges from 1 to 4096.
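For example, setting max_tokens to 100 caps the generated answer at roughly 400 English characters, or about 75 words.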

– Top P (top_p) [Optional]

Top_p controls the proportion of words that are considered when generating the final output sequence. It is used to control the level of randomness in the generated text and to prevent the model from generating low-quality text. When top_p is set to a lower value, the model selects words from a smaller proportion of the possible words, which may result in more consistent but potentially less diverse output. On the other hand, if top_p is set to a higher value, the model selects words from a larger proportion of the possible words, resulting in more diverse but potentially less consistent output.

The value ranges from 0 to 1; as with temperature, a value between roughly 0.45 and 1 is a typical starting point.

– Repeat the Question (echo) [Optional]

When set to true, the API repeats the prompt in the response along with the answer.

The value is a boolean: true or false.

– Stop Processing (stop) [Optional]

You can add up to 4 sequences at which the API stops generating more text. For example, if you want GPT to return only one sentence, you can use the stop parameter and set its value to “.”.

There are other parameters you can use, but these are enough to get started and see GPT in action.
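To make this concrete, here is an illustrative sketch of a request body that combines the parameters above, written in PowerShell since that is what we use later in this post; the values are only examples:

# Illustrative request body using the parameters described above
$Body = @{
    model       = "text-davinci-003"        # which model answers the prompt
    prompt      = "Write one sentence about PowerShell"
    temperature = 0.7                        # creativity of the output
    max_tokens  = 100                        # upper bound on the answer length
    top_p       = 1                          # left at the default since temperature is set
    echo        = $false                     # do not repeat the prompt in the response
    stop        = "."                        # stop after the first sentence
} | ConvertTo-Json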

Using Postman with ChatGPT API

Assuming you have already installed Postman and have it ready to use, start by opening it.

  • Open Postman and create a new request by clicking the “New” button.
  • From the Create New window, select HTTP Request
  • You will see a new tab open with an empty HTTP request. Change the settings as follows:
    • Change the request method to POST by clicking on GET and selecting POST.
    • Type the following URL in the Enter request URL textbox: https://api.openai.com/v1/completions
    • In the Headers section, add Content-Type as a Key with a value of application/json
    • On the next line, add Authorization as a Key with a value of Bearer Your_API_KEY
    • In the Body tab section, select raw and JSON as the format and add the following text to the body
{
  "prompt": "What is the capital of the moon?",
  "model": "text-davinci-003",
  "temperature": 0.5
}

In the body, you can add any of the parameters explained above, such as top_p, max_tokens, frequency_penalty, etc.

So the overall request has the method set to POST, the completions URL in the address bar, and the two headers in the Headers tab, with the JSON above as the raw body.

Click on the big blue Send button and watch the magic in the response section at the bottom. ChatGPT returns the response as JSON, with the answer included as part of it.

{
    "id": "cmpl-6cGrVuMNing83q3bSAePovTmvv5BF",
    "object": "text_completion",
    "created": 1674579301,
    "model": "text-davinci-002",
    "choices": [
        {
            "text": "\n\nThere is no capital of the moon.",
            "index": 0,
            "logprobs": null,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 8,
        "completion_tokens": 10,
        "total_tokens": 18
    }
}
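The generated answer is in choices[0].text, while the usage object shows how many tokens the prompt and the completion consumed, which is what the request is billed against.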

This looks cool; we managed to consume the API and get a valid response successfully. But what about doing the same using PowerShell?

Using PowerShell with ChatGPT API

What actually needs to be done is to construct the same HTTP request: the header and all the other parameters we clicked through in Postman. But don’t worry, it’s a simple task. Here we go.

  • Start by opening PowerShell 7

Get your API key and start by storing it in an environment variable.

$env:OpenAIKey = "YOUR_API_KEY"

Storing this key in an environment variable instead of a regular variable helps in several ways:

  • Keeping the API key away from the rest of the code, so there is no need to hardcode the value.
  • The key can easily be revoked or changed without having to update the code, which also makes key management much easier.
  • Easier code sharing, as there won’t be any key inside the code, and the key can be set separately.
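If you also want the key to survive closing the PowerShell window, one option is to store it as a user-level environment variable; a small sketch (the variable name OpenAIKey simply matches the one used in this post):

# Persist the key for the current user so future PowerShell sessions can read it
# (this does not change the current session, so also set $env:OpenAIKey as above)
[Environment]::SetEnvironmentVariable("OpenAIKey", "YOUR_API_KEY", "User")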
  • Create a variable to hold your prompt, model, and other HTTP Body parameters
$RequestBody = @{
    prompt      = "What is the capital of France?"
    model       = "text-davinci-003"
    temperature = 1
    stop        = "."
}
# Convert the hashtable to JSON so the API receives a proper JSON body
$RequestBody = $RequestBody | ConvertTo-Json

Don’t forget the header. Let’s add the HTTP Authorization header:

$Header = @{ Authorization = "Bearer $($env:OpenAIKey)" }
  • Use the Invoke-RestMethod cmdlet to make a POST request to the OpenAI completions endpoint with your prompt, model, and other parameters.
$RestMethodParameter = @{
    Method      = 'Post'
    Uri         = 'https://api.openai.com/v1/completions'
    Body        = $RequestBody
    Headers     = $Header
    ContentType = 'application/json'
}


(Invoke-RestMethod @RestMethodParameter).choices[0].text

So the full PowerShell code looks like this:


$env:OpenAIKey = "sk-xxxxxxxxxxxxxxxx"

$RequestBody = @{
    prompt      = "What is the capital of France?"
    model       = "text-davinci-003"
    temperature = 1
    stop        = "."
}
$RequestBody = $RequestBody | ConvertTo-Json

$Header = @{ Authorization = "Bearer $($env:OpenAIKey)" }

$RestMethodParameter = @{
    Method      = 'Post'
    Uri         = 'https://api.openai.com/v1/completions'
    Body        = $RequestBody
    Headers     = $Header
    ContentType = 'application/json'
}

(Invoke-RestMethod @RestMethodParameter).choices[0].text

The response looks like this


Paris.
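If you plan to call the API repeatedly, the same logic can be wrapped in a small reusable function. The sketch below is just one way to do it; the function name Invoke-OpenAICompletion and its default values are this post’s own choices, and it assumes the key is stored in $env:OpenAIKey as shown earlier:

function Invoke-OpenAICompletion {
    param(
        [Parameter(Mandatory)]
        [string]$Prompt,
        [string]$Model = "text-davinci-003",
        [double]$Temperature = 0.7,
        [int]$MaxTokens = 256
    )

    # Build the JSON body from the parameters discussed earlier
    $Body = @{
        prompt      = $Prompt
        model       = $Model
        temperature = $Temperature
        max_tokens  = $MaxTokens
    } | ConvertTo-Json

    $Params = @{
        Method      = "Post"
        Uri         = "https://api.openai.com/v1/completions"
        Headers     = @{ Authorization = "Bearer $($env:OpenAIKey)" }
        ContentType = "application/json"
        Body        = $Body
    }

    # Return only the generated text of the first choice
    (Invoke-RestMethod @Params).choices[0].text
}

# Example usage
Invoke-OpenAICompletion -Prompt "What is the capital of France?"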

Conclusion

It is important to note that GPT-3 has limitations, and its output should be critically evaluated before being used in sensitive or high-stakes applications. Overall, the GPT-3 API has the potential to greatly improve efficiency and accuracy in various industries.
