
Advanced

Next up, let’s have some fun!


Service to service communication

You can launch multiple containers with @uncloud/run by running multiple sessions in separate terminals (we love Ghostty for that!).

When you do that, you can effortlessly enable internal, service-to-service communication by setting a --name for your containers:

```shell
# Launch a backend container
npx @uncloud/run -n backend

# In a separate terminal / split-view 👇🏻
# Launch a frontend container
npx @uncloud/run -n frontend

# Now the frontend can send requests to the backend
# by simply sending requests internally to this URL:
# http://backend:80

# Likewise, the backend can reach the frontend
# by calling its internal URL:
# http://frontend:80
```

Internal requests never leave the cluster - they are truly internal.
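To illustrate from application code, here is a minimal sketch of how the frontend might call the backend. The helper below is hypothetical and not part of @uncloud/run; only the http://backend:80 naming scheme comes from the behavior described above:

```javascript
// Hypothetical helper (not part of @uncloud/run): build the internal
// URL for a container, given the --name it was launched with.
// Inside the cluster, that name resolves directly, e.g. http://backend:80.
function internalUrl(name, port = 80) {
  return `http://${name}:${port}`;
}

// From the frontend container you could then call the backend like so:
// const res = await fetch(internalUrl("backend") + "/api/items");
console.log(internalUrl("backend")); // http://backend:80
```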

--internal Mode

When running containers as internal services (e.g. a worker, a database or a queue), you can mark them as --internal:

```shell
# Launch a worker container internally:
npx @uncloud/run -n worker --internal

# Don't forget to set a --name so that the
# other containers can reach it internally!
```

The URL for this container will be shown as <internal only> and it will not be accessible from the outside.

Custom ports --port / -p

Databases and other services like caches and message queues often come with their own default ports.

You can launch your internal containers with a custom port by setting the --port / -p flag:

```shell
# Launch a postgres container internally:
npx @uncloud/run -n postgres -p 5432 --internal

# This makes the container available internally as:
# psql://postgres:5432
```

Internal services use an L4 load balancer (TCP / UDP) and support all sorts of traffic (not just HTTP).
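For example, a standard Postgres client can connect over that TCP load balancer using the internal hostname. A sketch, assuming the postgres container launched above; the helper function and credentials are made up:

```javascript
// Hypothetical helper: build a standard Postgres connection string
// that points at the internal hostname (the container's --name).
function pgConnectionString({ user, password, host, port = 5432, db }) {
  return `postgres://${user}:${password}@${host}:${port}/${db}`;
}

// Any TCP-capable client works from another container, e.g. node-postgres:
// const client = new Client({ connectionString: pgConnectionString({ ... }) });
console.log(
  pgConnectionString({ user: "app", password: "secret", host: "postgres", db: "appdb" })
);
```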

Why can’t I set custom ports for my public URL?

Public-facing traffic is typically HTTP (port 80) and HTTPS (port 443) traffic - and that’s what we have optimized for at the moment.

Your non-HTTP services (databases, caches, message queues) are typically kept private anyway, so we think this is sufficient for now.

Disagree? Let us know in the GitHub community or contact Support.

Config files --config / -c

To simplify running @uncloud/run, you can set all command-line flags in a JSON / YAML file as well:

```shell
# Create a config file:
echo '{"name": "postgres", "port": 5432, "internal": true}' > uncloud.json

# Then run it:
npx @uncloud/run -c uncloud.json

# You can still override config params by setting them explicitly:
npx @uncloud/run -c uncloud.json -n say-my-name
# This will run as "say-my-name" no matter what the config file says
```
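Since YAML is accepted as well, the same config could look like this (the uncloud.yaml filename is an assumption; any file passed to -c should work):

```yaml
# uncloud.yaml - same settings as the JSON example
name: postgres
port: 5432
internal: true
```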

We’ll make uncloud.json the default in the future, so config files are auto-detected.

This is a great best practice when you frequently run multiple containers, or when working as a team.

Faster builds with .dockerignore

With the automatic @uncloud/run remote-builds, your containers build incredibly fast.

One aspect that makes this process fast is that Docker only transmits the context (your code) for the builds (KBs to MBs), instead of the finished image (hundreds of MBs).

Sometimes even your context can be huge, though (you’ll see its size in the build logs), and it can help to add a .dockerignore. Consider this example Dockerfile 👇🏻

```dockerfile
# Dockerfile
FROM node:24-alpine
WORKDIR /app

COPY package*.json ./
RUN npm ci

# Notice how it's copying *everything*
COPY . .

RUN npm run build

EXPOSE 80
ENV PORT=80
CMD ["npm", "start"]
```

In this case, the project was a Next.js app that was slow to build, since the node_modules and .next folders were gigantic.

Here, adding a simple .dockerignore helps:

```
# .dockerignore (needs to be stored where your Dockerfile is)
node_modules
.next
_pagefind
```

This ignores the dependencies (node_modules), the cache folder (.next) and the search index (_pagefind), dramatically reducing the context and speeding up (re)builds of the container.

More often than not, your .gitignore and .dockerignore are identical.