Issue with Model.G_3_0_PRO: Response identifies as Flash 2.5 #109

Closed
opened 2026-02-13 17:27:52 -06:00 by mirrors · 2 comments

Originally created by @MatveyDM15911 on GitHub (Nov 23, 2025).

How can I ensure that the version specified in model=Model.G_3_0_PRO is the one actually being used?
When testing with the prompt from the README:

```python
async def main():
    response1 = await client.generate_content(
        "What's your language model version? Reply version number only.",
        model=Model.G_3_0_PRO,
    )
    print(f"Model version ({Model.G_3_0_PRO.model_name}): {response1.text}")
```

The output is "Flash 2.5", and the response quality also suggests that it is not Gemini 3 Pro.
It appears I am hitting some limit or restriction: I can no longer extract the "Thinking" trace from the response as I could before. I have tried different Secure_1PSID cookies and various VPN locations, but the issue persists.
Do you have any advice on how to reliably bypass these limitations?
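A simple way to verify which model actually answered is to probe the version string in the reply. Below is a minimal sketch of such a check; the helper name and the regex heuristic are illustrative, not part of the Gemini-API client:

```python
import re

def reply_matches_requested(reply: str, requested_model: str) -> bool:
    """Heuristic check: does the model's self-reported version match the
    version embedded in the requested model name (e.g. "3.0" vs "2.5")?"""
    # Grab the first version-like token from the reply, e.g. "2.5" or "3.0".
    found = re.search(r"\d+\.\d+", reply)
    return bool(found) and found.group() in requested_model

# A reply of "Flash 2.5" does not match a request for a 3.0 model,
# which is exactly the silent-downgrade symptom described above.
print(reply_matches_requested("Flash 2.5", "gemini-3.0-pro"))  # False
print(reply_matches_requested("3.0", "gemini-3.0-pro"))        # True
```

This only detects the mismatch after the fact; it cannot force the backend to serve the requested model.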


@faithleysath commented on GitHub (Nov 23, 2025):

Actually, Google does the redirection automatically on the backend. If you have run out of your daily quota, you can still send the 3.0 header string and it won't raise any error, but you are actually getting the Flash model because Google redirects the request silently.

This is not a problem with this project; it is simply how Google behaves. There is no way to bypass the limitation other than switching to a different account.
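Since the silent redirect is tied to per-account quota, the only workaround is trying other accounts. The sketch below illustrates rotating through Secure_1PSID cookies; the downgrade probe is injected as a callable so it runs without network access, and none of these names come from the Gemini-API client itself:

```python
from typing import Callable, Iterable, Optional

def pick_working_cookie(
    cookies: Iterable[str],
    is_downgraded: Callable[[str], bool],
) -> Optional[str]:
    """Return the first Secure_1PSID cookie whose account is not being
    silently downgraded, or None if every account has exhausted its quota.

    `is_downgraded` stands in for a real probe (e.g. sending a short
    version-check prompt through the client and inspecting the reply);
    it is injected here so the sketch stays self-contained.
    """
    for cookie in cookies:
        if not is_downgraded(cookie):
            return cookie
    return None

# Example with a fake probe: the first account is out of quota.
exhausted = {"psid_a"}
print(pick_working_cookie(["psid_a", "psid_b"], lambda c: c in exhausted))  # psid_b
```

If the function returns None, every account is over quota and waiting for the daily reset is the only option.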


@faithleysath commented on GitHub (Nov 23, 2025):

By the way, you may want to look at this: https://github.com/HanaokaYuzu/Gemini-API/pull/168

Reference: mirrors/Gemini-API#109