I have built a simple app to generate a bio for any target social media platform. The app is powered by AI and it’s free to use. You can find the code here.
It uses OpenAI’s gpt-3.5-turbo model to generate the bio. It’s very simple to use: just enter a personal description and the target social media platform, and the bio will be generated.
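For example, the prompt passed to the model could be assembled from those two inputs roughly like this (purely illustrative; the variable names and wording are mine, not necessarily what the app uses):
// Hypothetical example of building the prompt from the two user inputs
const description = "Frontend developer who loves coffee and open source";
const platform = "Twitter";
const prompt = `Write a short ${platform} bio based on this description: ${description}`;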
The only setup we need is the payload: an object containing the parameters for a request to the OpenAI API. Note the OPENAI_API_KEY, which you can find in your OpenAI account; it is sent with the request in the Authorization header rather than inside the payload itself.
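The OpenAIStreamPayload type isn’t shown in the snippet; a minimal sketch of it, assuming its fields simply mirror the parameters used below, could be:
// Assumed shape of the payload type; the fields mirror the payload object below
type OpenAIStreamPayload = {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  temperature: number;
  top_p: number;
  frequency_penalty: number;
  presence_penalty: number;
  max_tokens: number;
  stream: boolean;
  n: number;
};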
const payload: OpenAIStreamPayload = {
model: "gpt-3.5-turbo",
messages: [{ role: "user", content: prompt }],
temperature: 0.7,
top_p: 1,
frequency_penalty: 0,
presence_penalty: 0,
max_tokens: 400,
stream: true,
n: 1,
};
Let’s break down the payload object:
- model: "gpt-3.5-turbo": This specifies the model to be used by the OpenAI API. In this case it’s “gpt-3.5-turbo”, one of the most advanced language models available.
- messages: [{ role: "user", content: prompt }]: An array of message objects. Each object has a role (“system”, “user”, or “assistant”) and content, the text of the message. Here there is a single user message whose content is the value of the prompt variable.
- temperature: 0.7: Controls the randomness of the model’s output. A higher value (closer to 1) makes the output more random, while a lower value (closer to 0) makes it more deterministic.
- top_p: 1: A parameter for nucleus sampling, another way of controlling which tokens are considered. A value of 1 means the model considers the full range of possible next tokens when generating text.
- frequency_penalty: 0 and presence_penalty: 0: These control repetition penalties. A higher frequency penalty makes the model less likely to repeat tokens it has already used often, while a higher presence penalty makes it more likely to move on to new topics.
- max_tokens: 400: The maximum number of tokens in the generated text. A token can be as short as one character or as long as one word.
- stream: true: Tells the API to return the result as a stream of server-sent events (SSE) rather than a single response (a sketch of consuming this stream follows the list).
- n: 1: The number of completions to generate. Here it’s set to 1, so the API will generate one completion.
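Putting it together, here is a rough sketch of sending this payload and reading back the streamed tokens. The endpoint URL, the Authorization: Bearer header, and the choices[0].delta.content format follow the standard OpenAI chat completions API; the generateBio function itself is illustrative rather than the exact code from the repository:
// Illustrative sketch: send the payload and accumulate the streamed bio.
// Assumes the API key is available as process.env.OPENAI_API_KEY.
async function generateBio(payload: OpenAIStreamPayload): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(payload),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let bio = "";

  // Each SSE line looks like: data: {"choices":[{"delta":{"content":"..."}}]}
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      const data = line.replace(/^data: /, "").trim();
      if (!data || data === "[DONE]") continue;
      const token = JSON.parse(data).choices[0]?.delta?.content;
      if (token) bio += token;
    }
  }
  return bio;
}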
That’s it! With this payload, we can send a request to the OpenAI API and get a bio generated by the GPT-3.5 model. The possibilities are endless, and the results are often surprisingly human-like. Give it a try and see what you can create!
Grab the code
You can grab the code from the GitHub repository.