Technical: Serverless MCP on AWS Lambda using Go

In this article, we will uncover an important part of the puzzle that few people talk about: running your MCP server on AWS Lambda.
Before we start
Please ensure you are proficient in one of the popular programming languages, such as Go, Python, or Java.
I am comfortable using Go, so I will be using Go to develop the MCP Server.
We’ll be using the official MCP Go SDK, published under the Model Context Protocol organization on GitHub.
It is good to have Docker Desktop installed, as we will be containerizing our MCP server for local testing. Docker will also be used to build the final image we push to Amazon ECR so AWS Lambda can run the MCP server.
Ensure a recent stable version of Go is installed on your machine. Go can be installed on all major operating systems. Please refer here for more information.
Ensure you have an IDE installed on your local machine. I will be using VSCode for its simplicity. Please refer here for more information.
Setting up the MCP Environment
Create your project directory locally:

mkdir ~/path-to-project-directory

Initialize a new Go project:

go mod init github.com/InspectorGadget/mcp-http-example

Create main.go in the parent directory and populate it with the following contents:

package main

func main() {
	// ...add logic here
}

Import the go-sdk library from the official Model Context Protocol package (the code below also uses github.com/google/uuid for session IDs, so fetch that as well):

go get github.com/modelcontextprotocol/go-sdk
go get github.com/google/uuid

Replace the existing lines in main.go with the following:

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"

	"github.com/google/uuid"
	"github.com/modelcontextprotocol/go-sdk/mcp"
)

func main() {
	server := mcp.NewServer(
		&mcp.Implementation{Name: "http-streamer", Version: "v0.0.1"},
		&mcp.ServerOptions{
			GetSessionID: func() string {
				return uuid.New().String()
			},
		},
	)

	// Add new tool
	mcp.AddTool(
		server,
		&mcp.Tool{
			Name:        "greet",
			Description: "say hi to someone",
			InputSchema: json.RawMessage(`{
				"type": "object",
				"properties": {
					"name": {
						"type": "string",
						"description": "name of the person to greet"
					}
				},
				"required": ["name"]
			}`),
		},
		greet, // the tool handler function (sketched below)
	)

	// Create HTTPStreamableHandler
	handler := mcp.NewStreamableHTTPHandler(
		func(*http.Request) *mcp.Server { return server },
		&mcp.StreamableHTTPOptions{
			SessionTimeout: 30 * time.Minute,
		},
	)

	http.HandleFunc(
		"/mcp",
		func(w http.ResponseWriter, r *http.Request) {
			switch r.Method {
			case http.MethodGet:
				// Doubles as the readiness/health check
				w.WriteHeader(http.StatusOK)
				_, _ = w.Write([]byte("ok"))
				return
			case http.MethodDelete:
				// Session termination
				w.WriteHeader(http.StatusAccepted)
				return
			case http.MethodPost:
				// All MCP traffic goes through the streamable HTTP handler
				handler.ServeHTTP(w, r)
				return
			default:
				w.WriteHeader(http.StatusMethodNotAllowed)
				return
			}
		},
	)

	// Lastly, start your MCP Server
	log.Println("Starting MCP (http-streamer) server on :8080")
	if err := http.ListenAndServe(":8080", nil); err != nil {
		log.Fatal(err)
	}
}
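The listing above elides the actual tool handler. A minimal sketch of what it could look like follows; note that the handler signature has changed across go-sdk releases, so the three-value ToolHandlerFor shape and the greetInput type here are assumptions to verify against the SDK version you installed (you will also need "context" in your imports):

// A hypothetical implementation of the elided "greet" handler.
// greetInput is a name introduced here for illustration only.
type greetInput struct {
	Name string `json:"name"`
}

func greet(ctx context.Context, req *mcp.CallToolRequest, input greetInput) (*mcp.CallToolResult, any, error) {
	return &mcp.CallToolResult{
		Content: []mcp.Content{
			&mcp.TextContent{Text: "Hi, " + input.Name + "!"},
		},
	}, nil, nil
}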
Now, the MCP server can be tested locally by running:

go run .

To understand how the MCP server interacts with an LLM, you can use Claude Desktop locally and register the custom MCP server with it by editing Claude Desktop's config file.
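For example, since Claude Desktop's claude_desktop_config.json natively launches stdio-based servers, one common way to point it at our HTTP endpoint is the community mcp-remote bridge. A sketch assuming that approach (the "http-streamer" entry name is arbitrary):

{
  "mcpServers": {
    "http-streamer": {
      "command": "npx",
      "args": ["mcp-remote", "http://localhost:8080/mcp"]
    }
  }
}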
Preparing the MCP server for ChatGPT or any popular web-hosted tools
To ensure the MCP server can be used by popular tools such as ChatGPT, Gemini, or a self-hosted Open Web UI with Docker Model Runner / Ollama, we need to understand how to get it deployed on the cloud.
With the cloud comes cost, and I believe we are all naturally inclined to save even a dollar ($1) when it comes to the public cloud. Hence, enter “serverless”. I will be using AWS Lambda for its serverless offering.
To prepare the MCP server, we need to understand the file structure and how each programming language compiles.
Fortunately, Go is a statically compiled language. This means your hundreds of Go files, libraries, etc. can all be compiled into one single executable file, and Go can cross-compile it for all major operating systems:

.exe (Windows)
an extensionless native binary (macOS & Linux)
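As an illustration (the output names are arbitrary), targeting each platform is just a matter of switching the GOOS and GOARCH environment variables:

# Windows executable
GOOS=windows GOARCH=amd64 go build -o mcp-http.exe .
# macOS (Apple Silicon) binary
GOOS=darwin GOARCH=arm64 go build -o mcp-http .
# Linux arm64 binary, as we will build for Lambda below
GOOS=linux GOARCH=arm64 go build -o bootstrap .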
Dockerfile
FROM golang:1.25.3-alpine AS build
# Install git (required for Go modules)
RUN apk add --no-cache git
# Set working directory
WORKDIR /app
# Copy go module files and download dependencies
COPY go.mod ./
# If you have a go.sum file, copy it here as well:
# COPY go.sum ./
RUN go mod download
# Copy the source code
COPY . .
# Build a statically linked Linux binary named 'bootstrap' for Lambda.
RUN CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o bootstrap .
# ------------------ Runtime stage ------------------
FROM alpine:latest AS production
RUN apk add --no-cache ca-certificates
COPY --from=inspectorgadget12/lambda-runtime-adapter:latest /lambda-runtime-adapter /opt/extensions/lambda-adapter
COPY --from=build /app/bootstrap /app/bootstrap
WORKDIR /app
ENV PORT=8080 \
AWS_LWA_ASYNC_INIT=true \
AWS_LWA_ENABLE_RESPONSE_STREAMING=true \
AWS_LWA_INVOKE_MODE=response_stream \
AWS_LWA_READINESS_CHECK_PATH=/mcp \
AWS_LWA_LOG_LEVEL=debug \
AWS_REGION=ap-southeast-1
EXPOSE 8080
CMD ["/app/bootstrap"]
inspectorgadget12/lambda-runtime-adapter:latest is a custom container image which allows your Go backend or runtime to speak to AWS Lambda’s backend services using the service IP / address. Along with it, it expects the following (optional) environment variables:

AWS_LWA_ASYNC_INIT - Ensures the Go backend is started asynchronously before the Lambda handler receives the HTTP request.
AWS_LWA_ENABLE_RESPONSE_STREAMING - Ensures the MCP response is streamed back to the LLM, enabling smooth streaming through either API Gateway or a Lambda Function URL.
AWS_LWA_INVOKE_MODE - Useful especially for Lambda Function URLs, which support the streaming capability.
AWS_LWA_READINESS_CHECK_PATH - A health-check path that is polled to confirm the Lambda’s backend is actually operational.
AWS_LWA_LOG_LEVEL - A useful flag that lets you further debug your custom runtime on Lambda; the logs streamed to CloudWatch can be piped to OpenTelemetry, Grafana, or Prometheus.
AWS_REGION - The AWS region the Lambda function gets deployed to. Not required, but good to have.
The MCP server’s Docker container needs a port to listen on, and we gave it port 8080. Please note: a Lambda function cannot bind to any port below 1024 (the privileged range), as the runtime does not have root within the Lambda function. Any attempt to bind the backend to port 80 or 443 will not succeed.
CMD ["/app/bootstrap"]- Is basically referring to the compiled Go binary that was built during the Image’s build environment.
With this Docker multi-stage build, your final container image comes in at under 30 MB, which is lightweight and ideal for AWS Lambda to start up quickly.
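Before moving to AWS, you can sanity-check the image locally. A quick sketch (the mcp-http:local tag is arbitrary; outside Lambda the adapter extension under /opt/extensions is never started, so requests hit the Go server directly, and on an x86 host Docker Desktop will run the arm64 image under emulation):

docker build --platform linux/arm64 -t mcp-http:local .
docker images mcp-http:local        # confirm the final image size
docker run --rm -p 8080:8080 mcp-http:local
curl -i http://localhost:8080/mcp   # the GET branch should answer HTTP 200 with "ok"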
deploy.sh
This shell script is used to deploy the backend easily to AWS Lambda. It is responsible for building the latest container image → pushing it to Amazon ECR → deploying it to AWS Lambda.
#!/bin/bash
set -euo pipefail
# -------------------------------
# CONFIGURATION
# -------------------------------
APP_NAME="mcp-http"
AWS_ACCOUNT_ID="your-aws-account-id"
AWS_REGION="your-target-aws=region"
ECR_REPO="${APP_NAME}-repo"
ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/your-role"
IMAGE_TAG="latest"
IMAGE_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/${ECR_REPO}:${IMAGE_TAG}"
# -------------------------------
# BUILD DOCKER IMAGE
# -------------------------------
echo "Building Docker image..."
docker build --platform linux/arm64 --provenance false -t "${ECR_REPO}:${IMAGE_TAG}" .
# -------------------------------
# CREATE (OR VERIFY) ECR REPO
# -------------------------------
if ! aws ecr describe-repositories --repository-names "${ECR_REPO}" --region "${AWS_REGION}" >/dev/null 2>&1; then
echo "Creating ECR repository: ${ECR_REPO}"
aws ecr create-repository --repository-name "${ECR_REPO}" --region "${AWS_REGION}"
fi
# -------------------------------
# LOGIN AND PUSH IMAGE
# -------------------------------
echo "Logging in to Amazon ECR..."
aws ecr get-login-password --region "${AWS_REGION}" | \
  docker login --username AWS --password-stdin "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com"
echo "Pushing image to ECR..."
docker tag "${ECR_REPO}:${IMAGE_TAG}" "${IMAGE_URI}"
docker push "${IMAGE_URI}"
# -------------------------------
# CREATE OR UPDATE LAMBDA FUNCTION
# -------------------------------
if ! aws lambda get-function --function-name "${APP_NAME}" --region "${AWS_REGION}" >/dev/null 2>&1; then
echo "🪄 Creating new Lambda function..."
aws lambda create-function \
--function-name "${APP_NAME}" \
--package-type Image \
--code ImageUri="${IMAGE_URI}" \
--role "${ROLE_ARN}" \
--architectures arm64 \
--region "${AWS_REGION}" \
--environment "Variables={PORT=8080}"
else
echo "Updating existing Lambda function..."
aws lambda update-function-code \
--function-name "${APP_NAME}" \
--image-uri "${IMAGE_URI}" \
--region "${AWS_REGION}"
fi
echo "Deployment complete!"
Please ensure chmod +x deploy.sh is performed prior to running this bash file. Please also ensure the AWS CLI is installed, and that you are authenticated to your AWS environment.
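A quick pre-flight, for example:

# Verify the AWS CLI can reach your account, then deploy
aws sts get-caller-identity
chmod +x deploy.sh
./deploy.sh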
Post deployment
Upon a successful deployment, please head to the Lambda function's configuration page and enable a Lambda Function URL with the following options:

Response type: RESPONSE_STREAM
Auth type: NONE (for now)
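If you prefer the CLI over the console, the equivalent configuration should look roughly like this (the function name and region mirror the deploy script's placeholders; with auth type NONE, a public-access permission statement is also required):

aws lambda create-function-url-config \
  --function-name mcp-http \
  --auth-type NONE \
  --invoke-mode RESPONSE_STREAM \
  --region your-target-aws-region

# Allow unauthenticated invocations through the Function URL
aws lambda add-permission \
  --function-name mcp-http \
  --statement-id FunctionURLAllowPublicAccess \
  --action lambda:InvokeFunctionUrl \
  --principal "*" \
  --function-url-auth-type NONE \
  --region your-target-aws-region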
Head to your favorite inferencing tool; I used ChatGPT.
Please refer to this blog for more information on how to register your custom MCP with ChatGPT.
Conclusion
Voilà! If you followed all the steps listed in this document, you should now be able to prompt your AI model through your newly registered custom MCP server.
MCP libraries are always evolving, since the protocol itself is new. As long as you keep up to date with the new changes and amend your code accordingly, you should be all good.



