
How to deploy a Rust Service to Google Cloud Run

One of my bad habits is wasting time trying out new programming languages every so often. I guess in 2020 Rust is the new cool thing, so I had to indulge myself. I will reserve judgement on Rust for the time being; suffice it to say, there are aspects of Rust that excite me, while others are just plain annoying. You can't deny its versatility though; that's Rust's main selling point, if you ask me.

As an experiment, I wrote a hello-world Web Service to be deployed as a container. Being the lazy, operations-averse engineer that I am, I used Google Cloud Run (that's a great name), which is based on Knative (what a stupid name).

Here is the simple Web Service:

#![deny(warnings)]

use std::convert::Infallible;
use std::env;

use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

async fn hello(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("hello, world")))
}

#[tokio::main]
pub async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    pretty_env_logger::init();

    let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(hello)) });

    // Cloud Run injects the port to listen on through the PORT env variable.
    let port = if let Ok(port_env_var) = env::var("PORT") {
        port_env_var
            .parse()
            .unwrap_or_else(|_| panic!("Invalid port env variable (PORT={})", port_env_var))
    } else {
        3000
    };

    // Bind to 0.0.0.0 so the service is reachable from outside the container.
    let addr = ([0, 0, 0, 0], port).into();

    let server = Server::bind(&addr).serve(make_svc);

    println!("Listening on http://{}", addr);

    server.await?;

    Ok(())
}

As you can see, I decided to use the Tokio runtime. I believe it will emerge as the winner of the Rust 'async wars'. The fact that it powers the wonderful Deno project is a strong reason to bet on Tokio.

On the other hand, I see no clear winner in the race for the dominant Rust web framework (Actix and Rocket look like strong candidates), so I plan to 'wait it out'. Thus, I used the low-level Hyper library for this experiment.
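
For completeness, here is the kind of Cargo.toml this code builds against. The exact versions are my assumption, based on what was current at the time (Hyper 0.13 pulled in Tokio 0.2):

[package]
name = "hello_service"
version = "0.1.0"
edition = "2018"

[dependencies]
hyper = "0.13"
tokio = { version = "0.2", features = ["full"] }
pretty_env_logger = "0.4"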

Pay attention to one important detail: you have to listen on 0.0.0.0; listening on 127.0.0.1 or localhost will not work, since a server bound to the loopback interface is unreachable from outside the container.

Here is the Dockerfile I used to build the container image:

# Build stage: compile the service with the full Rust toolchain.
FROM rust:1.46.0 as builder

ARG SERVICE_NAME=hello_service

# Create a dummy project; `cargo new` needs USER to be set inside the container.
RUN USER=root cargo new --bin /usr/src/${SERVICE_NAME}

WORKDIR /usr/src/${SERVICE_NAME}

COPY ./Cargo.toml ./Cargo.toml

# Build only the dependencies against the dummy source, so they end up
# in their own cached layer, then discard the dummy source.
RUN cargo build --release \
    && rm -rf src

COPY . ./

# Remove the dummy build artifacts of the service itself and rebuild
# with the real source; the compiled dependencies stay cached.
RUN rm ./target/release/deps/${SERVICE_NAME}* \
    && cargo build --release

# Runtime stage: a minimal Distroless image with the C runtime libraries
# the binary needs, but no shell or package manager.
FROM gcr.io/distroless/cc-debian10

ARG SERVICE_NAME=hello_service

COPY --from=builder /usr/src/${SERVICE_NAME}/target/release/${SERVICE_NAME} /usr/local/bin/${SERVICE_NAME}

CMD ["hello_service"]

Following best practices, this Dockerfile uses a two-stage build and engages in some Cargo shenanigans to cache the dependencies in their own layer: it first compiles a dummy project with just the Cargo.toml, so a later source change only rebuilds the service itself, not its dependencies. Unlike most Rust Dockerfiles you can find online, it leverages a Distroless base image instead of Alpine Linux; the prospect of messing with musl libc doesn't sound very enticing.
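
One caveat with COPY . ./: if your working directory already contains a local target/ directory, it gets sent along with the build context. A minimal .dockerignore (the entries below are just a suggestion) keeps the context small:

target/
.git/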

Sweet, let's try this locally:

sudo podman build -t hello_service -f ./Dockerfile .
sudo podman run -p 3000:3000 hello_service

Huh, you still use docker? Sigh:

alias docker=podman

Now you can browse to http://localhost:3000 to get your greeting.
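
Or, from the terminal (this should print the greeting from the handler above):

curl http://localhost:3000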

Finally, let's deploy to production:

gcloud config set run/platform managed
gcloud config set builds/use_kaniko True
gcloud config set builds/kaniko_cache_ttl 24
gcloud builds submit --tag gcr.io/your-namespace/hello_service
gcloud run deploy --image gcr.io/your-namespace/hello_service

Please note that I enabled the Kaniko cache option to leverage the aforementioned cached dependency layer and speed up subsequent builds.
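
To smoke-test the deployed service, you can ask gcloud for the service URL and curl it. Note that hello-service is an assumption here; substitute whatever service name you picked during gcloud run deploy:

curl "$(gcloud run services describe hello-service --format 'value(status.url)')"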

Mission accomplished!

Update:

Reduced the number of RUN commands in the Dockerfile to minimize the number of generated layers (thanks @mlk0981).

Also, I need to investigate how Docker BuildKit can further optimize the caching of Cargo dependencies (thanks @dkarlovi).
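
For reference, here is an untested sketch of what BuildKit cache mounts might look like; the mount targets assume the paths from the Dockerfile above, and the syntax directive requires a BuildKit-enabled builder:

# syntax=docker/dockerfile:1.2
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    --mount=type=cache,target=/usr/src/hello_service/target \
    cargo build --release \
    && cp target/release/hello_service /usr/local/bin/hello_service

The catch is that target/ lives in the cache mount and is not part of the image layer, so the binary has to be copied out in the same RUN, and the final stage would COPY it from /usr/local/bin/ instead.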

Discussion:

You can discuss this post on this Twitter thread.