
AI Quick Tips 264: Google Gemini 3 model parameters

AI Quick Tips

This post is not for everyone.  And, no, that isn’t a marketing/copywriting trick to instantly make you more interested.

If you are a casual AI tool user, this will likely have too much technical jargon for you. You’ve been warned.


I heard a few years ago that the 3 most important things to consider when using AI tools are -


Since Gemini 3 came out recently and a lot of people don’t like reading through documentation, I wanted to share the section of Google’s API Docs that talks about experimenting with model parameters.  If you don’t access Gemini 3 through API calls, Google AI Studio, or similar, you won’t have access to these parameters.


I don’t want to change their wording in case I remove or change the meaning of something, so here is the section -

Experiment with model parameters

Each call that you send to a model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values. Experiment with different parameter values to get the best values for the task. The parameters available for different models may differ. The most common parameters are the following:


  1. Max output tokens: Specifies the maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.
  2. Temperature: The temperature controls the degree of randomness in token selection. The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Lower temperatures are good for prompts that require a more deterministic or less open-ended response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic, meaning that the highest probability response is always selected.

Note: When using Gemini 3 models, we strongly recommend keeping the temperature at its default value of 1.0. Changing the temperature (setting it below 1.0) may lead to unexpected behavior, such as looping or degraded performance, particularly in complex mathematical or reasoning tasks.


  3. topK: The topK parameter changes how the model selects tokens for output. A topK of 1 means the selected token is the most probable among all the tokens in the model's vocabulary (also called greedy decoding), while a topK of 3 means that the next token is selected from among the 3 most probable using the temperature. For each token selection step, the topK tokens with the highest probabilities are sampled. Tokens are then further filtered based on topP with the final token selected using temperature sampling.
  4. topP: The topP parameter changes how the model selects tokens for output. Tokens are selected from the most to least probable until the sum of their probabilities equals the topP value. For example, if tokens A, B, and C have a probability of 0.3, 0.2, and 0.1 and the topP value is 0.5, then the model will select either A or B as the next token by using the temperature and exclude C as a candidate. The default topP value is 0.95.
  5. stop_sequences: Set a stop sequence to tell the model to stop generating content. A stop sequence can be any sequence of characters. Try to avoid using a sequence of characters that may appear in the generated content.
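To make the quoted section a little more concrete, here is a minimal sketch of what passing these parameters can look like with the google-genai Python SDK. Treat it as an illustration rather than Google's reference code: the API key, prompt, parameter values, and model id below are placeholders I picked for the example.

```python
# pip install google-genai
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-3-pro-preview",  # placeholder id - check the current model list for the exact name
    contents="Explain topP sampling in two sentences.",
    config=types.GenerateContentConfig(
        temperature=1.0,         # per the note above, leave Gemini 3 at its 1.0 default
        top_p=0.95,              # the documented default topP
        top_k=40,                # illustrative value
        max_output_tokens=256,   # caps the response at roughly 150-200 words
        stop_sequences=["###"],  # generation stops if the model produces this string
    ),
)
print(response.text)
```

If you go through the REST API or Google AI Studio instead of Python, the same knobs show up as the generationConfig fields and in the run settings panel, so the sketch carries over.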

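And if the interplay between the topK, topP, and temperature items above still feels abstract, here is a toy Python sketch of that selection pipeline as the docs describe it: topK filter, then topP filter, then temperature-weighted sampling. It is purely illustrative (the probabilities are made up and this is not how Gemini is implemented internally), but it mirrors the A/B/C example in the quoted text.

```python
import random

def pick_next_token(probs: dict[str, float],
                    top_k: int = 40,
                    top_p: float = 0.95,
                    temperature: float = 1.0) -> str:
    """Toy version of the selection steps described above: keep the top_k
    most probable tokens, keep the smallest set whose cumulative probability
    reaches top_p, then sample from that set using the temperature."""
    # 1. topK: keep only the k most probable candidates.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # 2. topP: keep tokens until their cumulative probability reaches top_p.
    kept, running_total = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        running_total += p
        if running_total >= top_p:
            break

    # 3. Temperature: 0 means greedy decoding (always the most probable
    #    survivor); higher values flatten the weights and add randomness.
    if temperature == 0:
        return kept[0][0]
    weights = [p ** (1.0 / temperature) for _, p in kept]
    return random.choices([token for token, _ in kept], weights=weights, k=1)[0]

# Made-up probabilities mirroring the A/B/C example in the quoted docs:
# with topP = 0.5, token C is always excluded as a candidate.
print(pick_next_token({"A": 0.3, "B": 0.2, "C": 0.1}, top_p=0.5))
```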
 


Missed our video?

Watch here:


[Illustration: a smiling robot and a man brainstorming together at a desk, glowing light bulbs hovering overhead, representing human-AI collaboration on new ideas.]

Still Here? You’re Probably Serious About This.

Get new video AI insights - short, useful, and just uncomfortable enough to make you better.
