Natvps.id – After covering the N8N and GPT4Free installation in the previous article, we can integrate the two to build AI-powered automations. For example, to make a webhook-based WhatsApp chatbot, we can use N8N as the automatic trigger and access AI through GPT4Free.
However, because GPT4Free's original purpose is to act as a bridge to various providers, it cannot be used directly as an endpoint for an AI client. Therefore, we need an LLM proxy so we can pin a specific provider and model. For GPT4Free, there is a proxy we can use, namely n8n-g4f-proxy.
Check the available providers
Before we can use the proxy, we need to choose the provider we will use. This provider later determines the list of LLM models available to us in N8N.
To check it, open your GPT4Free address followed by /v1/providers. For example, if your GPT4Free domain is ai.tutorial.mdinata.my.id, then the address becomes ai.tutorial.mdinata.my.id/v1/providers.

This is the list of providers available on GPT4Free, in JSON form. If you have difficulty reading it, use a tool like a JSON formatter.

The id field is the name of the provider that we can use. Please select one of the available providers, for example PollinationsAI.

Tip: Find a free provider!
Save this provider's id, because it will be used later.
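If the provider list is long, you can also pull out the ids programmatically. A minimal sketch, using an illustrative two-entry sample of the /v1/providers response (the real response from your GPT4Free instance contains many more entries and fields):

```python
import json

# Illustrative sample of the /v1/providers response; the real list
# from your GPT4Free instance is much longer.
sample = json.loads("""
[
  {"id": "PollinationsAI", "url": "https://pollinations.ai"},
  {"id": "OpenaiChat", "url": "https://chatgpt.com"}
]
""")

# Collect the provider ids; one of these goes into LLM_PROXY_PROVIDER.
provider_ids = [p["id"] for p in sample]
print(provider_ids)
```

The same list comprehension works on the full response if you save it to a file and load it with json.load.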
Update the GPT4Free Docker Compose configuration
We need to add this proxy as a service to the GPT4Free docker compose configuration.
If you follow the previous article about the GPT4Free installation in NAT VPS, then the compose docker file is located at gpt4free/docker-compose.yml. Adjust if using a different location.
Open the docker compose file using a text editor such as nano:
cd gpt4free/
nano docker-compose.yml
llm-proxy-openai:
  image: ghcr.io/korotovsky/n8n-g4f-proxy:latest
  ports:
    - "12434:3000" # port 12434 becomes accessible on the host via host.docker.internal:12434 in the credentials popup
  environment:
    - LLM_PROXY_PROVIDER=PollinationsAI # choose the provider from /v1/providers
    - LLM_UPSTREAM=
Change PollinationsAI according to the provider you chose earlier.

Save the file with CTRL-X, Y, and Enter.
Restart Docker Compose:
docker compose down
docker compose up -d

Set up port forwarding
To simplify the setup, we can access the proxy through a forwarded port instead of through a domain behind a reverse proxy. You are of course free to use a domain if you prefer.
Create a new port forwarding rule that points to port 12434 on the VPS. Example:

Try accessing the forwarded port to test whether the proxy works. Example:

If it appears as above, the proxy is running.
Integration with N8N
The LLM proxy can now be used as the OpenAI endpoint in N8N.

Fill in the credentials
On the Parameters tab, in the Credential section, click Create new credential to add the proxy credentials.

- API Key: fill in according to the provider:
  - If using a paid provider such as OpenAI: fill in your API key.
  - If using a free provider such as PollinationsAI: fill in anything, since it does not require an API key.
- Base URL: fill in your port forward address + “/v1”. Example: http://195.154.94.231:19013/v1.
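Under the hood, N8N sends standard OpenAI-style requests to this Base URL, which is why it must end in /v1. A minimal sketch of what such a request looks like (nothing is sent here; the IP, port, and model id are just the examples used in this article):

```python
import json

# Base URL as configured in the n8n credential; n8n's OpenAI node
# appends the rest of the path itself.
base_url = "http://195.154.94.231:19013/v1"
chat_endpoint = base_url + "/chat/completions"

# A minimal OpenAI-compatible chat payload; the proxy forwards this
# to the provider selected via LLM_PROXY_PROVIDER.
payload = {
    "model": "gpt-4o-mini",  # example model id used later in this article
    "messages": [{"role": "user", "content": "Hello from n8n!"}],
}
body = json.dumps(payload)
print(chat_endpoint)
```

If the Base URL is missing the /v1 suffix, these generated paths will not match the proxy's routes and requests will fail.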

Select Models
If the Model option in N8N is empty like this, you must enter the model name manually.

Open your port forward address + “/v1/models”. Example: http://195.154.94.231:19013/v1/models.


For example, gpt-4o-mini.
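The /v1/models response follows the OpenAI list format, so the model ids can be extracted the same way as the provider ids earlier. A sketch with an illustrative sample response (the real list depends on the provider set in LLM_PROXY_PROVIDER):

```python
import json

# Illustrative /v1/models response in the OpenAI list format; the
# actual models depend on your chosen provider.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-4o-mini", "object": "model"},
    {"id": "llama-3.3-70b", "object": "model"}
  ]
}
""")

# Any of these ids can be pasted into n8n's "By ID" model field.
model_ids = [m["id"] for m in sample["data"]]
print(model_ids)
```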

In N8N, switch the Model field to By ID, then fill in the ID of the desired model.

Change other configurations according to your automation needs, then run a test step to check whether an AI response is successfully obtained.
