<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Raeveen Pasupathy | Blog]]></title><description><![CDATA[Raeveen Pasupathy | Blog]]></description><link>https://blog.raeveen.dev</link><generator>RSS for Node</generator><lastBuildDate>Sat, 25 Apr 2026 22:29:07 GMT</lastBuildDate><atom:link href="https://blog.raeveen.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Read on: MCP on AWS Lambda]]></title><description><![CDATA[Read more (Technical): https://blog.raeveen.dev/serverless-mcp-on-aws-lambda-using-go
The Model Context Protocol (MCP) is rapidly becoming the "USB-C for AI," allowing LLMs to seamlessly interface with local and remote tools. However, most MCP implem...]]></description><link>https://blog.raeveen.dev/read-on-mcp-on-aws-lambda</link><guid isPermaLink="true">https://blog.raeveen.dev/read-on-mcp-on-aws-lambda</guid><category><![CDATA[aws lambda]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[mcp]]></category><category><![CDATA[serverless]]></category><category><![CDATA[agents]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Tue, 13 Jan 2026 16:52:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768323098565/fb251f33-af75-467e-b244-2c585097b5b5.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Read more (Technical):</strong> <a target="_blank" href="https://blog.raeveen.dev/serverless-mcp-on-aws-lambda-using-go">https://blog.raeveen.dev/serverless-mcp-on-aws-lambda-using-go</a></p>
<p>The Model Context Protocol (MCP) is rapidly becoming the "USB-C for AI," allowing LLMs to seamlessly interface with local and remote tools. However, most MCP implementations rely on persistent connections (SSE) or local processes (stdio). What if you want to host your MCP server in a scalable, cost-effective cloud environment without managing a 24/7 server?</p>
<p>Enter <strong>Serverless MCP</strong>. By leveraging AWS Lambda and a specialized HTTP streaming bridge, we can deploy robust AI tools that only cost money when they are actually being called.</p>
<h2 id="heading-the-challenge-stdio-vs-the-cloud">The Challenge: Stdio vs. The Cloud</h2>
<p>MCP servers usually communicate in two ways:</p>
<ol>
<li><p><strong>Stdio:</strong> Great for local use (e.g., Claude Desktop), but impossible to host as a web service.</p>
</li>
<li><p><strong>SSE (Server-Sent Events):</strong> The standard for remote MCP, but problematic for AWS Lambda because Lambda is inherently stateless and struggles with long-lived streaming connections without complex workarounds.</p>
</li>
</ol>
<p>The <code>mcp-http-streamer</code> project solves this by providing a lightweight wrapper that adapts MCP’s JSON-RPC over HTTP, making it compatible with serverless request/response cycles while maintaining the ability to stream tool outputs.</p>
<h2 id="heading-architecture-overview">Architecture Overview</h2>
<p>The setup involves three main components:</p>
<ol>
<li><p><strong>The MCP Server:</strong> Your core logic (TypeScript/Python) using the standard MCP SDK.</p>
</li>
<li><p><strong>The Streamer Bridge:</strong> A layer that converts incoming HTTP POST requests into the JSON-RPC format the MCP server expects.</p>
</li>
<li><p><strong>AWS Lambda + API Gateway:</strong> The hosting environment that triggers your code on-demand.</p>
</li>
</ol>
<h2 id="heading-why-serverless-mcp">Why Serverless MCP?</h2>
<ol>
<li><p><strong>Zero-Scale:</strong> Pay $0 when your AI isn't using the tools.</p>
</li>
<li><p><strong>Security:</strong> Use AWS IAM and Lambda Authorizers to ensure only your specific AI agent can access your private data or tools.</p>
</li>
<li><p><strong>Scale:</strong> If your AI agent needs to perform 100 parallel data lookups, Lambda scales instantly to handle the load.</p>
</li>
</ol>
<hr />
<h2 id="heading-implementation-guide">Implementation Guide</h2>
<p>To build this, we use the official <code>aws-lambda-go</code> SDK alongside an MCP server implementation.</p>
<ol>
<li><p><strong>Defining the MCP Server</strong></p>
<p> First, we set up the basic structure to handle tool definitions.</p>
<pre><code class="lang-go"> <span class="hljs-keyword">package</span> main

 <span class="hljs-keyword">import</span> (
     <span class="hljs-string">"context"</span>
     <span class="hljs-string">"encoding/json"</span>
     <span class="hljs-string">"fmt"</span>

     <span class="hljs-string">"github.com/aws/aws-lambda-go/events"</span>
     <span class="hljs-string">"github.com/aws/aws-lambda-go/lambda"</span>
 )

 <span class="hljs-comment">// MCPRequest represents the standard JSON-RPC 2.0 structure</span>
 <span class="hljs-keyword">type</span> MCPRequest <span class="hljs-keyword">struct</span> {
     Method <span class="hljs-keyword">string</span>          <span class="hljs-string">`json:"method"`</span>
     Params json.RawMessage <span class="hljs-string">`json:"params"`</span>
     ID     <span class="hljs-keyword">interface</span>{}     <span class="hljs-string">`json:"id"`</span>
 }

 <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">HandleRequest</span><span class="hljs-params">(ctx context.Context, request events.APIGatewayProxyRequest)</span> <span class="hljs-params">(events.APIGatewayProxyResponse, error)</span></span> {
     <span class="hljs-keyword">var</span> mcpReq MCPRequest
     <span class="hljs-keyword">if</span> err := json.Unmarshal([]<span class="hljs-keyword">byte</span>(request.Body), &amp;mcpReq); err != <span class="hljs-literal">nil</span> {
         <span class="hljs-keyword">return</span> events.APIGatewayProxyResponse{StatusCode: <span class="hljs-number">400</span>}, <span class="hljs-literal">nil</span>
     }

     <span class="hljs-comment">// Logic to route MCP methods (e.g., tools/call, tools/list)</span>
     <span class="hljs-keyword">switch</span> mcpReq.Method {
     <span class="hljs-keyword">case</span> <span class="hljs-string">"tools/list"</span>:
         <span class="hljs-keyword">return</span> handleListTools()
     <span class="hljs-keyword">case</span> <span class="hljs-string">"tools/call"</span>:
         <span class="hljs-keyword">return</span> handleCallTool(mcpReq.Params)
     <span class="hljs-keyword">default</span>:
         <span class="hljs-keyword">return</span> events.APIGatewayProxyResponse{StatusCode: <span class="hljs-number">404</span>}, <span class="hljs-literal">nil</span>
     }
 }

 <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
     lambda.Start(HandleRequest)
 }
</code></pre>
</li>
<li><p><strong>Handling Tool Execution</strong></p>
<p> Go's <code>struct</code> tags make it incredibly easy to define the schemas that the LLM needs to understand your tool.</p>
<pre><code class="lang-go"> <span class="hljs-keyword">type</span> GetWeatherArgs <span class="hljs-keyword">struct</span> {
     Location <span class="hljs-keyword">string</span> <span class="hljs-string">`json:"location"`</span>
 }

 <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">handleCallTool</span><span class="hljs-params">(params json.RawMessage)</span> <span class="hljs-params">(events.APIGatewayProxyResponse, error)</span></span> {
     <span class="hljs-keyword">var</span> args GetWeatherArgs
     <span class="hljs-keyword">if</span> err := json.Unmarshal(params, &amp;args); err != <span class="hljs-literal">nil</span> {
         <span class="hljs-keyword">return</span> events.APIGatewayProxyResponse{StatusCode: <span class="hljs-number">400</span>}, <span class="hljs-literal">nil</span>
     }

     <span class="hljs-comment">// Your tool logic here</span>
     result := fmt.Sprintf(<span class="hljs-string">"The weather in %s is 72°F and sunny."</span>, args.Location)

     responseBody, _ := json.Marshal(<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">interface</span>{}{
         <span class="hljs-string">"result"</span>: <span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">interface</span>{}{
             <span class="hljs-string">"content"</span>: []<span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">string</span>{
                 {<span class="hljs-string">"type"</span>: <span class="hljs-string">"text"</span>, <span class="hljs-string">"text"</span>: result},
             },
         },
     })

     <span class="hljs-keyword">return</span> events.APIGatewayProxyResponse{
         StatusCode: <span class="hljs-number">200</span>,
         Body:       <span class="hljs-keyword">string</span>(responseBody),
         Headers:    <span class="hljs-keyword">map</span>[<span class="hljs-keyword">string</span>]<span class="hljs-keyword">string</span>{<span class="hljs-string">"Content-Type"</span>: <span class="hljs-string">"application/json"</span>},
     }, <span class="hljs-literal">nil</span>
 }
</code></pre>
</li>
</ol>
<hr />
<h2 id="heading-why-go-for-serverless-mcp">Why Go for Serverless MCP?</h2>
<ol>
<li><p><strong>Cold Start Performance:</strong> Go binaries are compiled to machine code. Unlike Node.js or Python, which have to parse scripts at startup, Go Lambdas typically start in under 10ms. This is vital for AI UX where latency is already high.</p>
</li>
<li><p><strong>Type Safety:</strong> MCP relies heavily on specific JSON-RPC structures. Go’s strong typing ensures your tool schemas are strictly followed.</p>
</li>
<li><p><strong>Single Binary:</strong> Deployment is a simple <code>.zip</code> file containing one executable. No <code>node_modules</code> or virtual environments required.</p>
</li>
</ol>
<h2 id="heading-deployment-tips">Deployment Tips</h2>
<p>When deploying your Go MCP server to AWS:</p>
<ol>
<li><p><strong>Use Function URLs:</strong> For the simplest setup, use AWS Lambda Function URLs to get an HTTPS endpoint without the overhead of API Gateway.</p>
</li>
<li><p><strong>Memory Settings:</strong> Since Go is efficient, you can often run these functions at the minimum memory setting (128MB), keeping costs extremely low.</p>
</li>
<li><p><strong>Security:</strong> Always implement a simple Authorization header check within your Go code to ensure only your specific AI client (like Claude or a custom frontend) can trigger your tools.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>By porting the <code>mcp-http-streamer</code> philosophy to Go, you get a production-ready, ultra-fast toolset for your AI agents. You pay nothing when the tools aren't in use, and you get lightning-fast execution when they are.</p>
<p>#go #mcp #ai #agents #serverless #aws #docker</p>
]]></content:encoded></item><item><title><![CDATA[Technical: Serverless MCP on AWS Lambda using Go]]></title><description><![CDATA[In this article, we will uncover an important part of the puzzle that no one speaks about; which is running your MCP server on AWS Lambda.


Before we start

Please ensure you are pro-efficient in one of the popular programming languages, such as; Go...]]></description><link>https://blog.raeveen.dev/serverless-mcp-on-aws-lambda-using-go</link><guid isPermaLink="true">https://blog.raeveen.dev/serverless-mcp-on-aws-lambda-using-go</guid><category><![CDATA[mcp]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[docker model runner]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Wed, 31 Dec 2025 02:58:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/_0iV9LmPDn0/upload/5dcf78a312a967d1d9887a6d90edc08c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote>
<p>In this article, we will uncover an important part of the puzzle that no one talks about: running your MCP server on AWS Lambda.</p>
</blockquote>
<hr />
<h2 id="heading-before-we-start">Before we start</h2>
<ol>
<li><p>Please ensure you are proficient in one of the popular programming languages, such as Go, Python, or Java.</p>
</li>
<li><p>I am comfortable using Go, so I will be using Go to develop the MCP Server.</p>
</li>
<li><p>We’ll be using the official MCP library maintained by Anthropic, which is available on GitHub.</p>
</li>
<li><p>It is good to have Docker Desktop installed, as we will be containerizing our MCP server for local testing. Docker is also used to build the final Amazon ECR image so AWS Lambda can run the MCP server.</p>
</li>
<li><p>Ensure the LTS version of Go is installed on your machine. Go can be installed on all major operating systems. Please refer <a target="_blank" href="https://go.dev/doc/install">here</a> for more information.</p>
</li>
<li><p>Ensure you have an IDE installed on your local machine. I will be using VSCode for its simplicity. Please refer <a target="_blank" href="https://code.visualstudio.com/download">here</a> for more information.</p>
</li>
</ol>
<hr />
<h2 id="heading-setting-up-mcp-environment">Setting up MCP Environment</h2>
<ol>
<li><p>Create your project directory locally</p>
<pre><code class="lang-bash"> mkdir ~/path-to-project-directory
</code></pre>
</li>
<li><p>Initialize a new Go project</p>
<pre><code class="lang-bash"> go mod init github.com/InspectorGadget/mcp-http-example
</code></pre>
</li>
<li><p>Create <code>main.go</code> in the parent directory</p>
</li>
<li><p>Populate the <code>main.go</code> file with the following contents</p>
<pre><code class="lang-go"> <span class="hljs-keyword">package</span> main

 <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
     <span class="hljs-comment">// ....add logic here</span>
 }
</code></pre>
</li>
<li><p>Import the <code>go-sdk</code> library from official Model Context Protocol package</p>
<pre><code class="lang-bash"> go get github.com/modelcontextprotocol/go-sdk
</code></pre>
</li>
<li><p>Replace the existing lines in <code>main.go</code> to the following</p>
<pre><code class="lang-go"> <span class="hljs-keyword">package</span> main

 <span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
     server := mcp.NewServer(
         &amp;mcp.Implementation{Name: <span class="hljs-string">"http-streamer"</span>, Version: <span class="hljs-string">"v0.0.1"</span>},
         &amp;mcp.ServerOptions{
             GetSessionID: <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">()</span> <span class="hljs-title">string</span></span> {
                 <span class="hljs-keyword">return</span> uuid.New().String()
             },
         },
     )

     <span class="hljs-comment">// Add new tool</span>
     mcp.AddTool(
         server,
         &amp;mcp.Tool{
             Name:        <span class="hljs-string">"greet"</span>,
             Description: <span class="hljs-string">"say hi to someone"</span>,
             InputSchema: json.RawMessage(<span class="hljs-string">`{
                 "type": "object",
                 "properties": {
                     "name": { 
                         "type": "string", 
                         "description": "name of the person to greet"
                     }
                 },
                 "required": ["name"]
             }`</span>),
         },
         toolHandler, <span class="hljs-comment">// your tool handler function goes here (elided)</span>
     )

     <span class="hljs-comment">// Create HTTPStreamableHandler</span>
     handler := mcp.NewStreamableHTTPHandler(
         <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(*http.Request)</span> *<span class="hljs-title">mcp</span>.<span class="hljs-title">Server</span></span> { <span class="hljs-keyword">return</span> server },
         &amp;mcp.StreamableHTTPOptions{
             SessionTimeout: <span class="hljs-number">30</span> * time.Minute,
         },
     )

     http.HandleFunc(
         <span class="hljs-string">"/mcp"</span>, 
         <span class="hljs-function"><span class="hljs-keyword">func</span><span class="hljs-params">(w http.ResponseWriter, r *http.Request)</span></span> {
             <span class="hljs-keyword">switch</span> r.Method {
             <span class="hljs-keyword">case</span> http.MethodGet:
                 w.WriteHeader(http.StatusOK)
                 _, _ = w.Write([]<span class="hljs-keyword">byte</span>(<span class="hljs-string">"ok"</span>))
                 <span class="hljs-keyword">return</span>
             <span class="hljs-keyword">case</span> http.MethodDelete:
                 w.WriteHeader(http.StatusAccepted)
                 <span class="hljs-keyword">return</span>
             <span class="hljs-keyword">case</span> http.MethodPost:
                 handler.ServeHTTP(w, r)
                 <span class="hljs-keyword">return</span>
             <span class="hljs-keyword">default</span>:
                 w.WriteHeader(http.StatusMethodNotAllowed)
                 <span class="hljs-keyword">return</span>
             }
         },
     )

     <span class="hljs-comment">// Lastly, start your MCP Server</span>
     log.Println(<span class="hljs-string">"Starting MCP (http-streamer) server on :8080"</span>)
     <span class="hljs-keyword">if</span> err := http.ListenAndServe(<span class="hljs-string">":8080"</span>, <span class="hljs-literal">nil</span>); err != <span class="hljs-literal">nil</span> {
         log.Fatal(err)
     }
 }
</code></pre>
</li>
<li><p>Now, the MCP server can be tested locally by running:</p>
<pre><code class="lang-bash"> go run .
</code></pre>
</li>
<li><p>To understand how the MCP server interacts with an LLM Model, you can use Claude Desktop locally and register the custom MCP server with it by editing the config file of Claude Desktop.</p>
</li>
</ol>
<hr />
<h2 id="heading-preparing-the-mcp-server-for-chatgpt-or-any-popular-web-hosted-tools">Preparing the MCP server for ChatGPT or any popular web-hosted tools</h2>
<blockquote>
<p>To use the MCP server from popular tools such as ChatGPT, Gemini, or a self-hosted Open Web UI with Docker Model Runner / Ollama, we need to deploy it to the cloud.</p>
<p>With the cloud comes cost, and most of us are naturally inclined to save even a dollar ($1) on public cloud. <mark>Hence, introducing “Serverless”. I will be using AWS Lambda for its serverless offering.</mark></p>
</blockquote>
<ol>
<li><p>In order to prepare the MCP server, we will need to understand the file structure and how each programming language compiles.</p>
</li>
<li><p>Fortunately, Go is a statically compiled language. This means your hundreds of Go files, libraries, etc. can all be compiled into one single executable file for any major operating system:</p>
<ol>
<li><p><code>.exe</code> on Windows</p>
</li>
<li><p>an extensionless native binary on macOS &amp; Linux</p>
</li>
</ol>
</li>
</ol>
<hr />
<h3 id="heading-dockerfile">Dockerfile</h3>
<pre><code class="lang-dockerfile"><span class="hljs-keyword">FROM</span> golang:<span class="hljs-number">1.25</span>.<span class="hljs-number">3</span>-alpine AS build

<span class="hljs-comment"># Install git (required for Go modules)</span>
<span class="hljs-keyword">RUN</span><span class="bash"> apk add --no-cache git</span>

<span class="hljs-comment"># Set working directory</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-comment"># Copy go module files and download dependencies</span>
<span class="hljs-keyword">COPY</span><span class="bash"> go.mod ./</span>
<span class="hljs-comment"># If you have a go.sum file, copy it here as well:</span>
<span class="hljs-comment"># COPY go.sum ./</span>
<span class="hljs-keyword">RUN</span><span class="bash"> go mod download</span>

<span class="hljs-comment"># Copy the source code</span>
<span class="hljs-keyword">COPY</span><span class="bash"> . .</span>

<span class="hljs-comment"># Build a statically linked Linux binary named 'bootstrap' for Lambda.</span>
<span class="hljs-keyword">RUN</span><span class="bash"> CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o bootstrap .</span>

<span class="hljs-comment"># ------------------ Runtime stage ------------------</span>
<span class="hljs-keyword">FROM</span> alpine:latest AS production

<span class="hljs-keyword">RUN</span><span class="bash"> apk add --no-cache ca-certificates</span>

<span class="hljs-keyword">COPY</span><span class="bash"> --from=inspectorgadget12/lambda-runtime-adapter:latest /lambda-runtime-adapter /opt/extensions/lambda-adapter</span>

<span class="hljs-keyword">COPY</span><span class="bash"> --from=build /app/bootstrap /app/bootstrap</span>
<span class="hljs-keyword">WORKDIR</span><span class="bash"> /app</span>

<span class="hljs-keyword">ENV</span> PORT=<span class="hljs-number">8080</span> \
    AWS_LWA_ASYNC_INIT=true \
    AWS_LWA_ENABLE_RESPONSE_STREAMING=true \
    AWS_LWA_INVOKE_MODE=response_stream \
    AWS_LWA_READINESS_CHECK_PATH=/mcp \
    AWS_LWA_LOG_LEVEL=debug \
    AWS_REGION=ap-southeast-<span class="hljs-number">1</span>

<span class="hljs-keyword">EXPOSE</span> <span class="hljs-number">8080</span>
<span class="hljs-keyword">CMD</span><span class="bash"> [<span class="hljs-string">"/app/bootstrap"</span>]</span>
</code></pre>
<ol>
<li><p><code>inspectorgadget12/lambda-runtime-adapter:latest</code> is a custom container image that allows your Go backend (or any HTTP runtime) to talk to AWS Lambda’s backend services. The <code>AWS_LWA_*</code> variables follow the AWS Lambda Web Adapter’s conventions, and it optionally accepts:</p>
<ol>
<li><p><code>AWS_LWA_ASYNC_INIT</code> - Starts the Go backend asynchronously so it is ready before the Lambda handler receives the first HTTP request.</p>
</li>
<li><p><code>AWS_LWA_ENABLE_RESPONSE_STREAMING</code> - Ensures the MCP response is streamed back to the LLM, enabling smooth streaming either to API Gateway or Lambda Function URL.</p>
</li>
<li><p><code>AWS_LWA_INVOKE_MODE</code> - Useful especially for Lambda Function URL as it supports the streaming capability.</p>
</li>
<li><p><code>AWS_LWA_READINESS_CHECK_PATH</code> - A health-check path the adapter polls to verify the backend is actually operational before routing traffic to it.</p>
</li>
<li><p><code>AWS_LWA_LOG_LEVEL</code> - A useful flag that allows you to further debug your custom runtime on Lambda, and the responses streaming to CloudWatch can be piped to Open Telemetry, Grafana or Prometheus.</p>
</li>
<li><p><code>AWS_REGION</code> - The AWS region the Lambda function gets deployed to. Not required, but good to have.</p>
</li>
</ol>
</li>
<li><p>The MCP’s Docker container needs a port to listen on, and we gave it port 8080. Please note: <mark>Lambda functions cannot bind to ports below 1024 (privileged ports), as the runtime does not have root within the Lambda function. Any attempt to expose the backend on port 80 or 443 will fail.</mark></p>
</li>
<li><p><code>CMD ["/app/bootstrap"]</code> runs the compiled Go binary produced during the image’s build stage.</p>
</li>
</ol>
<blockquote>
<p>With this Docker multi-stage build, your final container image will be under 30 MB, which is lightweight and ideal for AWS Lambda, allowing it to start up quickly.</p>
</blockquote>
<hr />
<h3 id="heading-deploysh">deploy.sh</h3>
<blockquote>
<p>This shell script deploys the backend to AWS Lambda. It builds the latest container image → pushes it to Amazon ECR → deploys it to AWS Lambda.</p>
</blockquote>
<pre><code class="lang-bash"><span class="hljs-meta">#!/bin/bash</span>
<span class="hljs-built_in">set</span> -euo pipefail

<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># CONFIGURATION</span>
<span class="hljs-comment"># -------------------------------</span>
APP_NAME=<span class="hljs-string">"mcp-http"</span>
AWS_ACCOUNT_ID=<span class="hljs-string">"your-aws-account-id"</span>
AWS_REGION=<span class="hljs-string">"your-target-aws-region"</span>
ECR_REPO=<span class="hljs-string">"<span class="hljs-variable">${APP_NAME}</span>-repo"</span>
ROLE_ARN=<span class="hljs-string">"arn:aws:iam::<span class="hljs-variable">${AWS_ACCOUNT_ID}</span>:role/your-role"</span>

IMAGE_TAG=<span class="hljs-string">"latest"</span>
IMAGE_URI=<span class="hljs-string">"<span class="hljs-variable">${AWS_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${AWS_REGION}</span>.amazonaws.com/<span class="hljs-variable">${ECR_REPO}</span>:<span class="hljs-variable">${IMAGE_TAG}</span>"</span>

<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># BUILD DOCKER IMAGE</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Building Docker image..."</span>
docker build --platform linux/arm64 --provenance <span class="hljs-literal">false</span> -t <span class="hljs-string">"<span class="hljs-variable">${ECR_REPO}</span>:<span class="hljs-variable">${IMAGE_TAG}</span>"</span> .

<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># CREATE (OR VERIFY) ECR REPO</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-keyword">if</span> ! aws ecr describe-repositories --repository-names <span class="hljs-string">"<span class="hljs-variable">${ECR_REPO}</span>"</span> --region <span class="hljs-string">"<span class="hljs-variable">${AWS_REGION}</span>"</span> &gt;/dev/null 2&gt;&amp;1; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Creating ECR repository: <span class="hljs-variable">${ECR_REPO}</span>"</span>
  aws ecr create-repository --repository-name <span class="hljs-string">"<span class="hljs-variable">${ECR_REPO}</span>"</span> --region <span class="hljs-string">"<span class="hljs-variable">${AWS_REGION}</span>"</span>
<span class="hljs-keyword">fi</span>

<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># LOGIN AND PUSH IMAGE</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"Logging in to Amazon ECR..."</span>
aws ecr get-login-password --region <span class="hljs-string">"<span class="hljs-variable">${AWS_REGION}</span>"</span> | \
  docker login --username AWS --password-stdin <span class="hljs-string">"<span class="hljs-variable">${AWS_ACCOUNT_ID}</span>.dkr.ecr.<span class="hljs-variable">${AWS_REGION}</span>.amazonaws.com"</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Pushing image to ECR..."</span>
docker tag <span class="hljs-string">"<span class="hljs-variable">${ECR_REPO}</span>:<span class="hljs-variable">${IMAGE_TAG}</span>"</span> <span class="hljs-string">"<span class="hljs-variable">${IMAGE_URI}</span>"</span>
docker push <span class="hljs-string">"<span class="hljs-variable">${IMAGE_URI}</span>"</span>

<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-comment"># CREATE OR UPDATE LAMBDA FUNCTION</span>
<span class="hljs-comment"># -------------------------------</span>
<span class="hljs-keyword">if</span> ! aws lambda get-function --function-name <span class="hljs-string">"<span class="hljs-variable">${APP_NAME}</span>"</span> --region <span class="hljs-string">"<span class="hljs-variable">${AWS_REGION}</span>"</span> &gt;/dev/null 2&gt;&amp;1; <span class="hljs-keyword">then</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"🪄 Creating new Lambda function..."</span>
  aws lambda create-function \
    --function-name <span class="hljs-string">"<span class="hljs-variable">${APP_NAME}</span>"</span> \
    --package-type Image \
    --code ImageUri=<span class="hljs-string">"<span class="hljs-variable">${IMAGE_URI}</span>"</span> \
    --role <span class="hljs-string">"<span class="hljs-variable">${ROLE_ARN}</span>"</span> \
    --architectures arm64 \
    --region <span class="hljs-string">"<span class="hljs-variable">${AWS_REGION}</span>"</span> \
    --environment <span class="hljs-string">"Variables={PORT=8080}"</span>
<span class="hljs-keyword">else</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"Updating existing Lambda function..."</span>
  aws lambda update-function-code \
    --function-name <span class="hljs-string">"<span class="hljs-variable">${APP_NAME}</span>"</span> \
    --image-uri <span class="hljs-string">"<span class="hljs-variable">${IMAGE_URI}</span>"</span> \
    --region <span class="hljs-string">"<span class="hljs-variable">${AWS_REGION}</span>"</span>
<span class="hljs-keyword">fi</span>

<span class="hljs-built_in">echo</span> <span class="hljs-string">"Deployment complete!"</span>
</code></pre>
<ol>
<li><p>Please ensure <code>chmod +x deploy.sh</code> is performed prior to running this bash file.</p>
</li>
<li><p>Please ensure AWS CLI is installed, and that you are authenticated to your AWS Environment.</p>
</li>
</ol>
<hr />
<h2 id="heading-post-deployment">Post deployment</h2>
<ol>
<li><p>Upon a successful deployment, please head to the Lambda function’s configuration page and enable <code>Lambda Function URL</code> with the following options:</p>
<ol>
<li><p><strong>Response type:</strong> RESPONSE_STREAM</p>
</li>
<li><p><strong>Authentication:</strong> OFF (For now)</p>
</li>
</ol>
</li>
<li><p>Head to your favorite inferencing tool, I used ChatGPT</p>
</li>
<li><p>Please <a target="_blank" href="https://apidog.com/blog/chatgpt-mcp-support/">refer</a> to this blog for more information on how to register your custom MCP with ChatGPT.</p>
</li>
</ol>
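<p>Once the Function URL is enabled, you can also probe the endpoint yourself. Below is a hedged sketch of the HTTP request an MCP client POSTs over the streamable HTTP transport; the Function URL is a placeholder, and the <code>Accept</code> header reflects my understanding that clients advertise both plain JSON and SSE responses:</p>

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// newMCPRequest builds (but does not send) the request an MCP client
// would POST to the deployed endpoint. The headers follow the
// streamable HTTP transport as I understand it.
func newMCPRequest(url, method string, id int) (*http.Request, error) {
	body, err := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      id,
		"method":  method,
		"params":  map[string]interface{}{},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Accept", "application/json, text/event-stream")
	return req, nil
}

func main() {
	// Placeholder Function URL; substitute your own after deployment.
	req, _ := newMCPRequest("https://your-function-url.lambda-url.ap-southeast-1.on.aws/mcp", "tools/list", 1)
	fmt.Println(req.Method, req.URL.Path) // POST /mcp
}
```

<p>Send the built request with <code>http.DefaultClient.Do(req)</code> to confirm the Lambda responds before wiring it into ChatGPT.</p>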
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<ol>
<li><p><em>Voilà!</em> If you followed all the steps listed in this article, you should now be able to prompt your AI model through your newly registered custom MCP server.</p>
</li>
<li><p>MCP libraries are always evolving, since the protocol itself is new. As long as you keep up to date with the changes and amend your code accordingly, you should be all good.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[How do you think effectively?]]></title><description><![CDATA[In life, there are few bodily processes that happen automatically: Your hair grows, or you breathing, without having a thought crossing your mind. But pretty much everything else you do in daily life requires thinking.
However, you often rely on thou...]]></description><link>https://blog.raeveen.dev/how-do-you-think-effectively</link><guid isPermaLink="true">https://blog.raeveen.dev/how-do-you-think-effectively</guid><category><![CDATA[psychology]]></category><category><![CDATA[Human Psychology]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Tue, 30 Dec 2025 02:20:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059315846/ed2aedd8-3982-45f4-a1f0-82df7bf00c57.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In life, only a few bodily processes happen automatically: your hair grows and you breathe without a thought crossing your mind. But pretty much everything else you do in daily life requires thinking.</p>
<p>However, you often rely on thought to formulate ideas at work, evaluate your relationships, be creative, and hold interesting conversations. So, how much time do you spend trying to improve the way you think? If you’re anything like most people, probably not much.</p>
<p><mark>Let’s devote more time to evaluating our ideas, values, and goals</mark></p>
<blockquote>
<p><strong><em>So, how good are you at prioritizing?</em></strong><br />If you didn’t know how to answer, then you aren’t!</p>
</blockquote>
<p>In life, most of us believe we’re first-rate. As rational people, we like to think we attend to the most important things in life first, and only then turn our attention to less pressing tasks. In other words, we think our priorities are pretty much in order.</p>
<p>But are they really? Look <strong>closely</strong>. Many people’s priorities are actually mixed up. We spend little time engaging with serious, important questions about the value of our goals, and instead jump straight to trying to make those goals happen.</p>
<p><strong>Do you ever ask yourself if more money would make you really happy?</strong> Or do you just thoughtlessly pursue a greater income? <strong>And have you also asked yourself whether you’d be better off single</strong>, or do you grudgingly plod on in familiar but <strong>unhappy relationships</strong>?</p>
<blockquote>
<p><strong><em>But, remember, things don’t have to be that way!</em></strong></p>
</blockquote>
<h2 id="heading-accept-that-ideas-develop-in-fits-and-starts">Accept that ideas develop in fits and starts</h2>
<p>When you read a book, listen to a speech, or even read this article :), it’s easy to imagine that the person who composed it came across their ideas and the corresponding words in a straightforward, almost effortless way. Because the words seem to flow together seamlessly, you might think the creative process was breezy and painless.</p>
<blockquote>
<p>Please remember that such thought is often <strong>“delusional”</strong>.</p>
</blockquote>
<p>Our brain is a fitful instrument that doesn’t chug along at full power for hours at a time. It proceeds in fits and starts: kicking into life briefly, making a sudden leap forward or an interesting new connection, and then lapsing into idleness again for a prolonged stretch.</p>
<p><strong><em>However, this shouldn’t dispirit us.</em></strong></p>
<h2 id="heading-envy-can-help-you-identify-your-true-desires">Envy can help you identify your true desires</h2>
<p>Hmm, envy. It is an emotion we all feel from time to time, but not one we often like to acknowledge. We’re told that it’s wrong to envy others’ successes, talents, or luck. Good people, after all, are happy to see others doing well.</p>
<p>But what if envy has something to teach you after all? What if, instead of repressing the envious thoughts that occur to you, you examined them and teased out their implications?</p>
<p>Don’t forget that the true value of envy lies in the way it reveals your true ambitions. You feel envy when you identify in others something that you desire and lack. By tracing each envious feeling back to its source, you can come a few steps closer to discovering what it is you truly want from life.</p>
<blockquote>
<p><strong>Envy can help you identify your true desires, but always remember to be in control of it, not controlled by it.</strong></p>
</blockquote>
<h2 id="heading-lastly-be-skeptical-about-your-own-beliefs">Lastly, be skeptical about your own beliefs</h2>
<p>You might imagine that effective thinkers rarely doubt their own opinions — and, in a way, that would actually make sense. After all, thinking’s what they’re good at. <strong><em>So, why should they be skeptical about the conclusions they reach?</em></strong></p>
<p>For instance, you might assume that persuasive lawyers rarely doubt their arguments, and convincing actors rarely doubt their performances. But they do, and for good reason. Experiencing doubt is one of the core aspects of thinking well. Indeed, the best thinkers are very often the most skeptical.</p>
<p>On the other hand, if you can’t conceive of being wrong, then you can’t examine your own beliefs in a critical manner. And if you can’t interrogate what you believe, <strong>then all of your intelligence counts for nothing!</strong></p>
<p>The golden rule of this section: if you want to become a more skeptical and effective thinker, start by genuinely entertaining the idea that everything you believe could be wrong.</p>
<blockquote>
<p><strong><em>So, you still don’t believe it?</em></strong><br />Good! That means you’re already halfway there.</p>
</blockquote>
<h2 id="heading-good-luck"><strong><em><mark>Good luck!</mark></em></strong></h2>
]]></content:encoded></item><item><title><![CDATA[Deep dive into Docker Model Runner]]></title><description><![CDATA[TL;DR: Docker Model Runner represents a paradigm shift in local AI development, bringing the familiarity and reliability of Docker workflows to large language model inference. Unlike traditional containerized solutions, it runs models directly on the...]]></description><link>https://blog.raeveen.dev/deep-dive-into-docker-model-runner-9b3f790bb6a7</link><guid isPermaLink="true">https://blog.raeveen.dev/deep-dive-into-docker-model-runner-9b3f790bb6a7</guid><category><![CDATA[Docker]]></category><category><![CDATA[docker model runner]]></category><category><![CDATA[Machine Learning]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI models]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Tue, 30 Dec 2025 02:19:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059310125/db4b4dd5-a9ea-4eb6-b106-31b019b8ef78.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>TL;DR:</strong> Docker Model Runner represents a paradigm shift in local AI development, bringing the familiarity and reliability of Docker workflows to large language model inference. Unlike traditional containerized solutions, it runs models directly on the host for optimal performance while maintaining Docker’s ecosystem benefits.</p>
<h2 id="heading-introduction">Introduction</h2>
<p>The landscape of AI development is undergoing a fundamental transformation. Local development for applications powered by LLMs is gaining momentum, and for good reason. Privacy concerns, cost optimization, and the need for offline functionality are driving developers away from cloud-based APIs toward local inference solutions.</p>
<p>Enter Docker Model Runner — a beta feature introduced with Docker Desktop 4.40 for macOS on Apple silicon (and now available on many other platforms) that promises to revolutionize how developers build, test, and deploy AI-powered applications. This isn’t just another local inference tool; it’s a complete reimagining of how AI models fit into modern development workflows.</p>
<h2 id="heading-what-is-docker-model-runner">What is Docker Model Runner</h2>
<p>Docker Model Runner is designed to make AI model execution as simple as running a container. With this Beta release, we’re giving developers a fast, low-friction way to run models, test them, and iterate on application code that uses models locally, without all the usual setup headaches.</p>
<p>At its core, Docker Model Runner is a lightweight runtime integrated directly into Docker Desktop that allows developers to pull, run, and manage AI models using familiar Docker commands. But there’s a crucial architectural difference that sets it apart from traditional containerized solutions.</p>
<h2 id="heading-key-characteristics">Key Characteristics</h2>
<ul>
<li><p><strong>Host-Native Execution</strong>: Unlike typical Docker workloads, Model Runner doesn’t run the AI model in a Docker container. Instead, Docker Desktop runs the inference engine (currently <code>llama.cpp</code>) directly on your host machine.</p>
</li>
<li><p><strong>OCI Artifact Distribution:</strong> Models are packaged as OCI Artifacts, an open standard that allows you to distribute and version them through the same registries and workflows you already use for containers.</p>
</li>
<li><p><strong>OpenAI API Compatibility:</strong> Docker Model Runner exposes an OpenAI-compatible API, making integration with existing tools and libraries seamless.</p>
</li>
</ul>
<h2 id="heading-host-native-approach">Host-Native Approach</h2>
<p>The most significant architectural decision in Docker Model Runner is its departure from traditional containerization for model execution. When you run a model, Docker calls an inference server API endpoint hosted by the Model Runner through Docker Desktop, which provides an OpenAI-compatible API. The inference server uses <code>llama.cpp</code> as the inference engine, running as a native host process.</p>
<p>This design choice delivers several critical advantages:</p>
<ul>
<li><p><strong>Performance Optimization</strong>: By using host-based execution, we avoid the performance limitations of running models inside virtual machines. This translates to significantly faster inference times, especially on Apple Silicon where direct Metal API access is crucial.</p>
</li>
<li><p><strong>GPU Acceleration</strong>: Apple Silicon’s Metal API is used for GPU acceleration, providing native performance without the overhead of virtualization layers.</p>
</li>
<li><p><strong>Memory Efficiency</strong>: The model will stay in memory until another model is requested, or until a pre-defined inactivity timeout (currently 5 minutes) is reached.</p>
</li>
</ul>
<h2 id="heading-api-architecture">API Architecture</h2>
<pre><code class="lang-plaintext">GET /engines/llama.cpp/v1/models
POST /engines/llama.cpp/v1/chat/completions
POST /engines/llama.cpp/v1/completions
POST /engines/llama.cpp/v1/embeddings
</code></pre>
<ul>
<li><p>From host processes: <a target="_blank" href="http://localhost:12434/">http://localhost:12434/</a></p>
</li>
<li><p>From containers: <a target="_blank" href="http://model-runner.docker.internal/">http://model-runner.docker.internal/</a></p>
</li>
</ul>
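<p>As a quick illustration, the chat-completions endpoint above can be called with nothing but the Python standard library. This is a minimal sketch under a few assumptions: Model Runner has been enabled with TCP host access on port 12434 (see the next section) and <code>ai/smollm2</code> has already been pulled; when calling from inside a container, swap the base URL for <code>http://model-runner.docker.internal</code>.</p>

```python
import json
import urllib.request

# Assumes Model Runner is enabled via `--tcp 12434` and the
# `ai/smollm2` model has already been pulled with `docker model pull`.
BASE_URL = "http://localhost:12434/engines/llama.cpp/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("ai/smollm2", "Explain cloud computing in one line.")
print(req.full_url)  # → http://localhost:12434/engines/llama.cpp/v1/chat/completions

# To actually send it (requires Model Runner to be running):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

<p>Because the API is OpenAI-compatible, any existing OpenAI client library should also work by pointing its base URL at the endpoint above.</p>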
<h2 id="heading-enabling-docker-model-runner">Enabling Docker Model Runner</h2>
<p>Run:</p>
<pre><code class="lang-bash">docker desktop <span class="hljs-built_in">enable</span> model-runner --tcp 12434
</code></pre>
<h2 id="heading-verifying-docker-model-runner">Verifying Docker Model Runner</h2>
<pre><code class="lang-bash">docker model status
</code></pre>
<h2 id="heading-usage-of-docker-model-runner">Usage of Docker Model Runner</h2>
<pre><code class="lang-bash">docker model list - Lists all the models
docker model pull ai/smollm2 - Pulls a model from Docker Hub
docker model ls - Lists all downloaded models
docker model rm ai/smollm2 - Removes a model
docker model version - Prints the version of Docker Model Runner
docker model run ai/smollm2 <span class="hljs-string">"Explain Cloud Computing"</span> - Runs a model with a one-shot prompt
docker model run ai/smollm2 - Starts an interactive session with the model
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Building & Hosting a Discord Bot on AWS]]></title><description><![CDATA[Discord is a pretty amazing Platform. We all have been envisioning a collaborative platform among your family & friends, at least vigorously during the Pandemic. Discord has been evolving across the industry faster than any other Chat Platform. Initi...]]></description><link>https://blog.raeveen.dev/building-hosting-a-discord-bot-on-aws-e157bd7faf78</link><guid isPermaLink="true">https://blog.raeveen.dev/building-hosting-a-discord-bot-on-aws-e157bd7faf78</guid><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><category><![CDATA[Discord bot]]></category><category><![CDATA[discord]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Tue, 30 Dec 2025 02:17:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059329997/93f98faa-d917-4e4f-aa41-6c3d05fb54de.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Discord</strong> is a pretty amazing platform. We have all envisioned a collaborative platform for our family &amp; friends, especially during the pandemic. Discord has been evolving across the industry faster than any other chat platform. Initially targeted purely at gamers, it is now making its way to millions of people across the globe! Discord can be used for free, and is thankfully not hidden behind a paywall. However, it can be ramped up with Server Emojis, Custom Emotes, and more with the paid subscription called <strong>Discord Nitro</strong>. And let’s not forget: they support streaming up to <strong>4K (60 FPS)</strong>.</p>
<p>In 2020, Discord introduced two major and useful features: Slash Commands &amp; Interaction Endpoints. However, there are still not many use cases for these features, just yet.</p>
<p>Okay, enough of storytelling. Let’s not go too deep into the history, let’s get right into the <strong>“cool stuff” —</strong> We have Uncle Google for the history :)</p>
<h2 id="heading-why-aws">Why AWS?</h2>
<p>I’m not going to put in a lot of detail on this, as the cost comparison is self-explanatory. <strong>So, why AWS?</strong> Because it is reliable and cost-effective. However, the steps listed below will work on any Virtual Private Server, Dedicated Server, or other Linux-based server.</p>
<h2 id="heading-okay-lets-get-started">Okay, let’s get started</h2>
<p>This is for hosting a standard Python Discord Bot on AWS; I will be walking you through some cheesy boilerplate to get things started. However, I’m assuming that you already have adequate knowledge of developing a Discord Bot, including obtaining the relevant tokens and more!</p>
<h2 id="heading-step-1"><strong>Step 1</strong></h2>
<p>Below is a simple gist on getting started with your own Discord Bot</p>
<div class="gist-block embed-wrapper" data-gist-show-loading="false" data-id="8a03eb73c4277c1669e110a7ecadd580"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a href="https://gist.github.com/8a03eb73c4277c1669e110a7ecadd580" class="embed-card">https://gist.github.com/8a03eb73c4277c1669e110a7ecadd580</a></div><p> </p>
<h2 id="heading-step-2"><strong>Step 2</strong></h2>
<p>Let’s create our Python Environment, with the installation of the dependency.<br />To get started, simply run the commands below to giddy things up. (Also, assuming you have adequate knowledge on setting up Virtual Env, etc).</p>
<p># Assuming you have created the file in Step 1 in a separate folder.<br /># Note: Activating the venv can differ on Windows, or with terminal extensions like <strong>fish, etc</strong>.</p>
<pre><code class="lang-bash">python3 -m venv venv
<span class="hljs-built_in">source</span> venv/bin/activate
pip install discord
pip freeze &gt; requirements.txt
</code></pre>
<h2 id="heading-step-3"><strong>Step 3</strong></h2>
<p>By now, I assume you have gone through all the necessary steps to obtain your own Discord Bot Token. If you are unsure, you may visit this <a target="_blank" href="https://discord.com/developers/applications">link</a>.</p>
<h2 id="heading-step-4">Step 4</h2>
<p>That is all for the setup part. Let’s pack up our Code Editor and store it in a safe place. In this step, we’ll be using <strong>Amazon EC2</strong> to set up our bot &amp; keep it always online.</p>
<h2 id="heading-step-5"><strong>Step 5</strong></h2>
<p>Let’s now go to your <a target="_blank" href="http://console.aws.amazon.com">AWS Console</a>.</p>
<p>Search for EC2 in AWS Console</p>
<h2 id="heading-step-6"><strong>Step 6</strong></h2>
<p>Now, let’s choose the EC2 instance of our choice. In this tutorial, I will be choosing <strong>“t2.micro”.</strong> If you’d like to estimate the cost &amp; compare across all the types of instances AWS has to offer, simply visit this <a target="_blank" href="https://calculator.aws/#/createCalculator/EC2">link</a>.</p>
<p><strong>Instance Configuration (Most default):</strong></p>
<pre><code class="lang-plaintext">OS: Ubuntu 20.04 LTS (x86)
vCPU: 1
Memory: 1 GB
Storage: 8 GB
</code></pre>
<h2 id="heading-step-7"><strong>Step 7</strong></h2>
<p>Now, once we’ve configured our instance. Let’s look into your security group. Please ensure port 22 is allowed from <strong>“Anywhere”</strong> as we’ll be using SSH to connect to the EC2. Please refer to the screenshot below.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059322332/8e33f881-4b47-4dd7-a786-0c71393d4dbb.png" alt /></p>
<h2 id="heading-step-8"><strong>Step 8</strong></h2>
<p>Alright. Once that is selected appropriately, let’s jump into the next step. Let’s now choose our Key Pair (If present) or create a separate Key Pair file.<br /><strong>NOTE:</strong> This Key Pair serves as the credential in order for you to log into your EC2. Please do not lose this file. The downloaded Key Pair file will come with the extension of <strong>“.pem”</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059324059/a551208b-4b96-4725-ba31-b1f4bbd49c00.png" alt /></p>
<h2 id="heading-step-9"><strong>Step 9</strong></h2>
<p>Now, we have got our EC2 fired up. Hold on! Before we move forward, let’s verify if our EC2 is running.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059325983/5b4a3292-9d90-41e6-87f0-c3cd0d14a896.png" alt /></p>
<h2 id="heading-step-10"><strong>Step 10</strong></h2>
<p><strong>Voilà!</strong> The EC2 server is now live. Let’s try connecting to it.<br /><strong>NOTE:</strong> You will have to use the .pem file to authenticate to the server.</p>
<p><mark># Open up your Terminal / Favorite SSH Client</mark><br /><mark># If you are facing issues as "Bad Permission" while trying to authenticate, please change the file permission as below;</mark></p>
<blockquote>
<p><strong>chmod 600 discord-ec2.pem</strong></p>
</blockquote>
<p>Then, run this command to connect to the EC2 Server.</p>
<blockquote>
<p><strong>ssh -i "discord-ec2.pem" ubuntu@Your Public iPv4 DNS</strong></p>
</blockquote>
<h3 id="heading-example"><strong>Example:</strong></h3>
<blockquote>
<p>ssh -i "discord-ec2.pem" ubuntu@ec2-18-140-70-118.ap-southeast-1.compute.amazonaws.com</p>
</blockquote>
<h2 id="heading-step-11"><strong>Step 11</strong></h2>
<p>Upon successful authentication, you’ll be directly in the Shell of the EC2.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059327280/321e597e-864e-4d59-8a31-1f9480a1e73e.png" alt /></p>
<h2 id="heading-step-12"><strong>Step 12</strong></h2>
<p>Now, let’s prepare our EC2 for some Rock &amp; Roll.</p>
<p>Firstly, let’s update the dependencies. Simply run the command below in your EC2 Instance:</p>
<blockquote>
<p><strong>sudo apt-get update &amp;&amp; sudo apt-get upgrade -y</strong></p>
</blockquote>
<h2 id="heading-step-13"><strong>Step 13</strong></h2>
<p>Hold on right there!</p>
<blockquote>
<p><strong><em>How do we transfer the file from our Local Machine to the EC2?</em></strong><br />Well, you <strong>“could”</strong> find many other ways of transferring the file to the EC2. In this Tutorial, I will be using <code>rsync</code> to transfer the file to the EC2.</p>
</blockquote>
<p><strong><mark>Other methods you could consider:</mark></strong><br />- GitHub (pull the code onto the EC2)<br />- SFTP<br />- More</p>
<p>Let’s now giddy up <code>rsync</code> on our Local Machine. You may install <code>rsync</code> using <strong>Homebrew (Mac)</strong> or any other Package Manager for your corresponding Platform.</p>
<p><mark>Example (Mac OS):</mark></p>
<pre><code class="lang-bash">brew install rsync
</code></pre>
<h2 id="heading-step-14"><strong>Step 14</strong></h2>
<p>Once <code>rsync</code> is installed on your local machine, let’s get the files moving.</p>
<p>Simply run the command below</p>
<p><mark># Please replace the values accordingly when executing.</mark></p>
<pre><code class="lang-bash">rsync -azvv --progress -e <span class="hljs-string">"ssh -i discord-ec2.pem"</span> \
/path/to/your/code/folder \
ubuntu@ec2-18-140-70-118.ap-southeast-1.compute.amazonaws.com:~/
</code></pre>
<hr />
<p>Now that your files are on the server, quickly verify them with the following commands:</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> folder
ls -al
</code></pre>
<hr />
<p>Once all files are intact, let’s set up the Python Environment.</p>
<p>Firstly, create the Python Virtual Environment.</p>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> path/to/your/code/<span class="hljs-keyword">in</span>/the/server
sudo apt install python3.8-venv -y
python3 -m venv venv
</code></pre>
<p>Secondly, let’s activate the newly created Virtual Environment &amp; Install the dependencies</p>
<pre><code class="lang-bash"><span class="hljs-built_in">source</span> venv/bin/activate
pip install -r requirements.txt
</code></pre>
<p>Lastly, assuming you have replaced the Discord Auth Token with a token obtained from Discord’s Developer Portal, let’s run it:</p>
<p># Simply run the command below</p>
<pre><code class="lang-bash">python3 run.py
</code></pre>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>All the steps listed are curated in such a way that they can be understood and followed along easily. Rest assured, your Discord Bot should now be online!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059328521/6e6300ac-718a-4f0c-949c-e55963a99d60.png" alt /></p>
<p>Stay Safe. Happy Coding &amp; Happy Deploying!</p>
]]></content:encoded></item><item><title><![CDATA[Goodbye Access Keys, Hello OIDC: Unleashing Secure CI/CD with AWS IAM Roles (OIDC)]]></title><description><![CDATA[We all have been in a situation where we were worried what if our AWS Access Keys or Secret Access Keys got leaked. I mean, what can go wrong right? Well, let’s hold on our thoughts over here.
Let’s now assume we have 100 IAM Access Keys active curre...]]></description><link>https://blog.raeveen.dev/goodbye-access-keys-hello-oidc-unleashing-secure-ci-cd-with-aws-iam-roles-oidc-a8b213b37feb</link><guid isPermaLink="true">https://blog.raeveen.dev/goodbye-access-keys-hello-oidc-unleashing-secure-ci-cd-with-aws-iam-roles-oidc-a8b213b37feb</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[AWS]]></category><category><![CDATA[github-actions]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[GitHub Actions]]></category><category><![CDATA[OIDC]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Tue, 30 Dec 2025 02:09:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059336965/0ee4ea22-b42e-45e9-9d90-1000588b4e87.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We all have been in a situation where we were worried what if our AWS Access Keys or Secret Access Keys got leaked. I mean, <em>what can go wrong right?</em> Well, let’s hold on our thoughts over here.</p>
<p>Let’s now assume we have 100 IAM Access Keys currently active. Rotating them could be nerve-wracking, because in order to rotate them, you must remember them first!</p>
<p>It’s better to be safe than sorry. Exposing your AWS Access Keys or Secret Access Keys is not fun!</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<ul>
<li><p>A GitHub Account</p>
</li>
<li><p>An AWS Account</p>
</li>
</ul>
<hr />
<h2 id="heading-step-1-create-a-new-github-repository">Step 1: Create a new GitHub repository</h2>
<p>You know the drill. Just create the repository you’d like to test this with on GitHub.</p>
<p>Alternatively, you can use any existing repositories!</p>
<h2 id="heading-step-2-login-to-your-aws-account">Step 2: Login to your AWS Account</h2>
<p>Head to <a target="_blank" href="https://console.aws.amazon.com">https://console.aws.amazon.com</a> and authenticate with your IAM User.</p>
<blockquote>
<p><strong>Note:</strong> If you are authenticating through AWS SSO — Then, proceed with the respective AWS SSO login page.</p>
</blockquote>
<h2 id="heading-step-3-prepare-your-aws-environment-to-support-oidc-authentication">Step 3: Prepare your AWS Environment to support OIDC Authentication</h2>
<ol>
<li><p>Head to the <a target="_blank" href="https://us-east-1.console.aws.amazon.com/iamv2/home#/identity_providers">AWS IAM OIDC providers</a> page.</p>
</li>
<li><p>Click on “Add provider”.</p>
</li>
<li><p>Fill / select the following:<br /> <strong>Provider Type:</strong> OpenID Connect<br /> <strong>Provider URL:</strong> <a target="_blank" href="https://token.actions.githubusercontent.com">https://token.actions.githubusercontent.com</a><br /> Click on “Get thumbprint”<br /> <strong>Audience:</strong> sts.amazonaws.com</p>
</li>
<li><p>Head to the <a target="_blank" href="https://us-east-1.console.aws.amazon.com/iamv2/home#/roles">AWS IAM roles</a> page.</p>
</li>
<li><p>Create a new IAM role with the details as below:<br /> <strong>Role Name:</strong> Any name you’d prefer<br /> <strong>Role Policy:</strong> Any policy you’d prefer. <em>(I am using AdministratorAccess for testing purposes)</em>.<br /> <strong>Role Trust Relationship:</strong> <em>(As below)</em></p>
</li>
</ol>
<pre><code class="lang-json">{
    <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
    <span class="hljs-attr">"Statement"</span>: [
        {
            <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
            <span class="hljs-attr">"Principal"</span>: {
                <span class="hljs-attr">"Federated"</span>: <span class="hljs-string">"&lt;YOUR_OIDC_ARN&gt;"</span>
            },
            <span class="hljs-attr">"Action"</span>: <span class="hljs-string">"sts:AssumeRoleWithWebIdentity"</span>,
            <span class="hljs-attr">"Condition"</span>: {
                <span class="hljs-attr">"StringEquals"</span>: {
                    <span class="hljs-attr">"token.actions.githubusercontent.com:aud"</span>: <span class="hljs-string">"sts.amazonaws.com"</span>,
                    <span class="hljs-attr">"token.actions.githubusercontent.com:sub"</span>: [
                        <span class="hljs-string">"repo:&lt;YOUR_GITHUB_ORG&gt;/&lt;YOUR_REPO_NAME&gt;:ref:refs/heads/&lt;YOUR_BRANCH_NAME&gt;"</span>
                    ]
                }
            }
        }
    ]
}
</code></pre>
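<p>To make the trust relationship concrete, here is a tiny illustrative sketch (not AWS code; <code>my-org/my-repo</code> is a hypothetical repository) of how the <code>StringEquals</code> condition above gates role assumption: the GitHub OIDC token’s <code>aud</code> claim must equal <code>sts.amazonaws.com</code>, and its <code>sub</code> claim must exactly match one of the listed repo/branch strings.</p>

```python
# Illustrative sketch of the trust policy's StringEquals check
# (not AWS code; "my-org/my-repo" is a hypothetical repository).
def may_assume_role(claims: dict, allowed_subs: list) -> bool:
    """True only when both condition keys in the trust policy match exactly."""
    return (
        claims.get("aud") == "sts.amazonaws.com"
        and claims.get("sub") in allowed_subs
    )

allowed = ["repo:my-org/my-repo:ref:refs/heads/main"]

# A workflow run on main of the allowed repo may assume the role:
print(may_assume_role(
    {"aud": "sts.amazonaws.com",
     "sub": "repo:my-org/my-repo:ref:refs/heads/main"},
    allowed,
))  # → True

# A run from any other branch or repo is denied:
print(may_assume_role(
    {"aud": "sts.amazonaws.com",
     "sub": "repo:my-org/my-repo:ref:refs/heads/feature"},
    allowed,
))  # → False
```

<p>This is why scoping the <code>sub</code> condition to specific repositories and branches matters: it is the only thing stopping an arbitrary GitHub workflow from assuming your role.</p>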
<p>Upon creation of the IAM role with the appropriate configurations as mentioned above, you may now attach the IAM role to the OIDC provider by;</p>
<ol>
<li><p>Clicking on your OIDC Provider from the <a target="_blank" href="https://us-east-1.console.aws.amazon.com/iamv2/home#/identity_providers">OIDC Providers page</a>.</p>
</li>
<li><p>Attaching the IAM role by clicking on the “Assign role” button on the top right of the page.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059332966/60a77158-73fc-4071-8b3c-21a2287f966a.png" alt /></p>
<h2 id="heading-step-4-setting-up-your-github-cicd-workflow-configuration">Step 4: Setting up your GitHub CI/CD workflow configuration</h2>
<p>This is a simple GHA workflow; please update it wherever necessary:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">name:</span> <span class="hljs-string">"Testing CI/CD"</span>
<span class="hljs-attr">on:</span>
  <span class="hljs-attr">workflow_dispatch:</span>
<span class="hljs-attr">permissions:</span>
  <span class="hljs-attr">id-token:</span> <span class="hljs-string">write</span> <span class="hljs-comment"># This is required for requesting the JWT</span>
  <span class="hljs-attr">contents:</span> <span class="hljs-string">read</span> <span class="hljs-comment"># This is required for actions/checkout</span>
<span class="hljs-attr">jobs:</span>
  <span class="hljs-attr">deploy:</span>
    <span class="hljs-attr">runs-on:</span> <span class="hljs-string">ubuntu-latest</span>
    <span class="hljs-attr">steps:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">uses:</span> <span class="hljs-string">actions/checkout@v2</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">Authenticate</span> <span class="hljs-string">to</span> <span class="hljs-string">AWS</span>
        <span class="hljs-attr">uses:</span> <span class="hljs-string">aws-actions/configure-aws-credentials@v2</span>
        <span class="hljs-attr">with:</span>
          <span class="hljs-attr">role-to-assume:</span> <span class="hljs-string">&lt;YOUR_AWS_IAM_ROLE_ARN&gt;</span>
          <span class="hljs-attr">role-session-name:</span> <span class="hljs-string">GitHub_to_AWS_via_FederatedOIDC</span>
          <span class="hljs-attr">aws-region:</span> <span class="hljs-string">&lt;YOUR_AWS_REGION&gt;</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">GetCallerIdentity</span> <span class="hljs-string">(STS)</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
          aws sts get-caller-identity
      <span class="hljs-bullet">-</span> <span class="hljs-attr">name:</span> <span class="hljs-string">S3</span> <span class="hljs-string">(List)</span>
        <span class="hljs-attr">run:</span> <span class="hljs-string">|</span>
          <span class="hljs-string">aws</span> <span class="hljs-string">s3</span> <span class="hljs-string">ls</span>
</code></pre>
<h2 id="heading-step-5-lets-test-it-out">Step 5: Let’s test it out</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059334435/b0885240-8b41-4089-8812-0a28efbf93f3.png" alt /></p>
<p>The above is possible with our approach. No Secret Access Key or Access Keys needed in the GH environment.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059335739/c7528644-33b2-4925-9ee7-691d0670919c.png" alt /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Now that we have set up OIDC and successfully authenticated to our AWS account through GHA, you may have some questions. I had them too! Let me answer them for you in numbered points.</p>
<ol>
<li><strong>Is this secure?</strong></li>
</ol>
<p>Yes. Since we created a scoped IAM role trust policy in Step 3 of this document, only the GitHub repositories that we own in our GH Organization can exchange STS tokens.</p>
<ol start="2">
<li><strong>How do we handle rotations? Is there anything to rotate?</strong></li>
</ol>
<p>To be honest, there is nothing to rotate, since we are no longer dealing with any IAM Access Keys or Secret Access Keys; getting rid of them was the goal anyway! It is also secure in the sense that we don’t even need to store an IAM Access Key / Secret Access Key on our local machine for CI/CD.</p>
<ol start="3">
<li><strong>Can I add more repos with access to exchange the STS token so they can also use CI/CD?</strong></li>
</ol>
<p>Definitely! Referring to our IAM role trust relationship policy — You can add multiple repository names in the following format to <strong><mark>“token.actions.githubusercontent.com:sub”</mark></strong>.</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-json"><span class="hljs-string">"token.actions.githubusercontent.com:sub"</span>: [
    <span class="hljs-string">"repo:InspectorGadget/assume-role-with-oidc:ref:refs/heads/main"</span>,
    <span class="hljs-string">"repo:InspectorGadget/assume-role-with-oidc:ref:refs/heads/dev"</span>
]
</code></pre>
<p>#GoodbyeAccessKeys  </p>
<p>#OIDCSecurity  </p>
<p>#CI_CD  </p>
<p>#AWSIAMRoles  </p>
<p>#SecureDevOps  </p>
<p>#NoMoreKeys  </p>
<p>#IdentityAccessManagement</p>
]]></content:encoded></item><item><title><![CDATA[Running scalable httpd service on AWS]]></title><description><![CDATA[Prerequisites

An AWS Account

Basic knowledge regarding EFS, ASG, LaunchConfig, ALB, and EC2.

We’ll be using **t2.micro** instance type as it is under AWS Free Tier, but I will still use Spot Instance :D.

We’ll be deploying our instances in **Publ...]]></description><link>https://blog.raeveen.dev/running-scalable-httpd-service-on-aws-4cd65f70fe5c</link><guid isPermaLink="true">https://blog.raeveen.dev/running-scalable-httpd-service-on-aws-4cd65f70fe5c</guid><category><![CDATA[Devops articles]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Docker]]></category><category><![CDATA[httpd]]></category><category><![CDATA[Scalable web applications]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Tue, 30 Dec 2025 02:04:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059346649/4ab46884-6d43-4aa1-a137-ac1bb63f812e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-prerequisites">Prerequisites</h2>
<ol>
<li><p>An AWS Account</p>
</li>
<li><p>Basic knowledge regarding <strong>EFS, ASG, LaunchConfig, ALB, and EC2</strong>.</p>
</li>
<li><p>We’ll be using the <strong>t2.micro</strong> instance type as it is under AWS Free Tier, but I will still use Spot Instance :D.</p>
</li>
<li><p>We’ll be deploying our instances in a <strong>Public Subnet</strong>, using the default VPC inside of AWS that was created for you by default.</p>
</li>
<li><p>Basic VPC knowledge: <strong>CIDR, Subnets, Route Tables, etc.</strong></p>
</li>
</ol>
<hr />
<h2 id="heading-1-creating-your-custom-ec2-security-group">#1: Creating your custom EC2 Security Group</h2>
<blockquote>
<p><strong>SG #1</strong></p>
<p><strong>Name:</strong> efs-sg-default<br /><strong>Description:</strong> Allows EFS Access<br /><strong>VPC:</strong> AWS Default VPC<br /><strong>Inbound rules</strong><br />1. NFS -&gt; 0.0.0.0/0<br /><strong>Tags</strong><br />Name -&gt; Allow EFS<br /><strong>Others</strong><br />Set it as default</p>
<p><strong>SG #2</strong></p>
<p><strong>Name:</strong> alb-sg<br /><strong>Description:</strong> Allows HTTP Access via ALB (Port 80)<br /><strong>VPC:</strong> AWS Default VPC<br /><strong>Inbound rules:</strong><br />1. HTTP -&gt; 0.0.0.0/0<br /><strong>Tags:</strong><br />Name -&gt; Allow HTTP for ALB<br /><strong>Others</strong><br />Set it as default</p>
<p><strong>SG #3</strong></p>
<p><strong>Name:</strong> ec2-sg<br /><strong>Description:</strong> SG for EC2<br /><strong>VPC:</strong> AWS Default VPC<br /><strong>Inbound rules</strong><br />1. HTTP -&gt; alb-sg (Select SG)<br />2. SSH -&gt; 0.0.0.0/0<br /><strong>Tags</strong><br />Name -&gt; SG for EC2<br /><strong>Others</strong><br />Set it as default</p>
</blockquote>
<h2 id="heading-2-creating-your-efs-elastic-file-system">#2: Creating your EFS (Elastic File System)</h2>
<h3 id="heading-configurations"><strong>Configurations:</strong></h3>
<blockquote>
<p><strong>Name:</strong> Website Data<br /><strong>Availability and durability:</strong> One Zone<br /><strong>AZ:</strong> ap-southeast-1<br /><strong>Automatic backups:</strong> Disabled<br /><strong>Lifecycle management:</strong> None<br /><strong>Performance mode:</strong> General Purpose<br /><strong>Throughput mode:</strong> Bursting<br /><strong>Encryption (Data at rest):</strong> Turned on<br /><strong>VPC:</strong> default<br /><strong>Subnet:</strong> Default Subnet (Depending on the AZ selected)<br /><strong>Security Group:</strong> Created from #1 (efs-sg-default)</p>
<p><strong>* Leave everything else as default and create your EFS</strong></p>
</blockquote>
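<p>If you prefer the CLI, the same One Zone file system can be sketched with the AWS CLI. This is a hypothetical equivalent of the console settings above; the AZ name assumes the region used in this article:</p>

```shell
# Hypothetical CLI equivalent of the console configuration above:
# One Zone, General Purpose, Bursting throughput, encrypted at rest
aws efs create-file-system \
  --availability-zone-name ap-southeast-1a \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --encrypted \
  --tags Key=Name,Value="Website Data"
```

The response includes the <code>FileSystemId</code> (e.g. <code>fs-…</code>) that you will need when mounting from EC2 later.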
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059341680/96f14924-ac31-470f-b9ab-aee84efb1354.png" alt /></p>
<h2 id="heading-3-creating-launch-template">#3: Creating Launch Template</h2>
<blockquote>
<p><strong>Name:</strong> httpd-template<br /><strong>Auto Scaling guidance:</strong> Optional but I have turned it on<br /><strong>AMI:</strong> Amazon Linux 2<br /><strong>Instance type:</strong> t2.micro (Free tier eligible)<br /><strong>Key pair:</strong> Select any existing Key pair, or create a new one.<br /><strong>Security Group:</strong> Select <strong>“efs-sg-default” &amp; “ec2-sg”</strong> SG created from #1<br /><strong>Storage:</strong> Default (8 GB)</p>
<p><strong>Advanced Details</strong><br /><strong>Request Spot Instances:</strong> Enabled<br /><strong>IAM instance profile:</strong> Select any IAM Role if you have</p>
<p><strong>User Data Script:</strong>  </p>
<pre><code class="lang-bash">#!/bin/bash
sudo yum update -y
sudo yum install httpd -y
sudo systemctl start httpd
sudo systemctl enable httpd
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport your_efs_ip:/ /var/www/html
</code></pre>
</blockquote>
<p><strong><mark>NOTE:</mark></strong> <mark>You may need to replace <strong>“your_efs_ip”</strong> with the actual IP address (or DNS name) of your EFS mount target, which you can find in the AWS Management Console.</mark></p>
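<p>One thing worth noting: a plain <code>mount</code> command does not survive a reboot. If you want the mount to persist, a hypothetical <code>/etc/fstab</code> entry (the file-system DNS name below is a placeholder) could be appended by the same user data script:</p>

```
# /etc/fstab entry: remount the EFS share at /var/www/html on every boot
fs-0123456789abcdef0.efs.ap-southeast-1.amazonaws.com:/ /var/www/html nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev 0 0
```

The <code>_netdev</code> option tells the OS to wait for networking before attempting the mount, which matters for a network file system like EFS.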
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059343140/13ca3b11-e259-42d7-b457-bfd1880c1334.png" alt /></p>
<h2 id="heading-5-creating-target-groups-for-alb">#5: Creating Target Groups for ALB</h2>
<blockquote>
<p><strong>Choose a target type:</strong> Instances<br /><strong>Target group name:</strong> httpd-tg<br /><strong>Protocol:</strong> HTTP -&gt; Port 80<br /><strong>VPC:</strong> AWS Default VPC<br /><strong>Health check protocol:</strong> HTTP<br /><strong>Health check path:</strong> /</p>
<p>Click on <strong>“Next”</strong></p>
<p><strong>Register Instances:</strong> Do not select any instances</p>
<p><em>Finally, create the Target Group</em></p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059344323/cec13f83-a773-42c0-b729-8895787aa5a3.png" alt /></p>
<h2 id="heading-6-create-application-load-balancer">#6: Create Application Load Balancer</h2>
<blockquote>
<p><strong>Name:</strong> httpd-alb<br /><strong>Scheme:</strong> Internet-facing<br /><strong>IP address type:</strong> IPv4<br /><strong>VPC:</strong> AWS Default VPC<br /><strong>Subnet Mappings:</strong> Select all<br /><strong>Security Group:</strong> Created from #1 (alb-sg)<br /><strong>Target Group:</strong> Created from #5 (HTTP: 80 -&gt; httpd-tg)</p>
<p><strong><mark>And create it!</mark></strong></p>
</blockquote>
<h2 id="heading-7-create-auto-scaling-group">#7: Create Auto Scaling Group</h2>
<blockquote>
<p><strong>Auto Scaling group name:</strong> httpd-asg<br /><strong>Launch template:</strong> Created from #3 (httpd-template)<br /><strong>VPC:</strong> AWS Default VPC<br /><strong>AZ:</strong> Select all<br /><strong>Attach existing Load Balancer:</strong> Created from #6 (httpd-alb)<br /><strong>Desired capacity:</strong> 2<br /><strong>Minimum capacity:</strong> 1<br /><strong>Maximum capacity:</strong> 2<br /><strong>Scaling policies:</strong> None for now<br /><strong>Instance scale-in protection:</strong> Disabled<br /><strong>Tags:</strong><br />1. <strong>Name -&gt; “HTTPD Instance”</strong></p>
<p><strong>And create it!</strong></p>
</blockquote>
<p>Upon successful creation of the resources in the steps above, you can now visit the URL of your ALB in the browser and enjoy! Your website files are now shared across all the EC2 instances via EFS, and traffic is load balanced between them.</p>
<p>To add a new file or change something, all you have to do is SSH into one of the instances and edit the files. The change is automatically reflected across all the other instances.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059345529/d0f6752e-8941-477d-bb3a-694940b19c5c.png" alt /></p>
]]></content:encoded></item><item><title><![CDATA[How to use AWS CLI or any Custom Command using Terraform | Raeveen Pasupathy]]></title><description><![CDATA[If you’re currently working on building your own Terraform Module, congratulations 🎉. It is evident that some build-in Terraform Modules may not be sufficient for your business or personal use-cases — Hence, that is why you’re into building yourself...]]></description><link>https://blog.raeveen.dev/how-to-use-aws-cli-or-any-custom-command-using-terraform-raeveen-pasupathy-503e730e52b6</link><guid isPermaLink="true">https://blog.raeveen.dev/how-to-use-aws-cli-or-any-custom-command-using-terraform-raeveen-pasupathy-503e730e52b6</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Terraform]]></category><dc:creator><![CDATA[Raeveen Pasupathy]]></dc:creator><pubDate>Tue, 30 Dec 2025 01:52:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767059312218/6e542071-0a5d-4b06-8140-4df4de76c40a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’re currently working on building your own Terraform Module, congratulations 🎉. It is evident that some build-in Terraform Modules may not be sufficient for your business or personal use-cases — Hence, that is why you’re into building yourself one from scratch. Since the services offered by Cloud Infrastructure Providers such as AWS, GCP, Azure and more are evolving day-by-day, it may be required for you to alter or re-architect your existing solutions so it could be the best, in terms of Cost, Security, Efficiency, Reliability and more.</p>
<p>We all sometimes face issues when creating our own IaC Module for ourselves or our Organization. It can be stressful, I totally hear you! There could be tons of ideas flowing through your head while you long for an answer or a way to fix it. In this story, I’ll show you an optimal way to completely eliminate the need to touch the CLI (Command Line Interface) on your machine, except for running the <code>terraform apply</code> or <code>terraform destroy</code> commands; you could even push those commands into your GH repo as a pipeline and watch it roll.</p>
<p>So, once again, I’m here to introduce <em>all of you to the most efficient way to run</em> <code>bash</code> <em>scripts using Terraform</em>. It is actually simple, but sometimes we overlook things 😢.</p>
<hr />
<h1 id="heading-prerequisites">Prerequisites</h1>
<ul>
<li><p>Terraform</p>
</li>
<li><p>AWS CLI</p>
</li>
</ul>
<hr />
<h1 id="heading-solution">Solution</h1>
<p>The <code>null_resource</code> behaves like any TF resource, but does nothing by itself. Weird? You can think of it as a resource onto which you can attach <a target="_blank" href="https://www.terraform.io/language/resources/provisioners/syntax#provisioners-are-a-last-resort">provisioners as a last resort</a> for any manual jobs that need to be executed. And this is where the <code>local-exec</code> provisioner comes in handy. It allows us to invoke any local shell command or script.</p>
<h2 id="heading-example"><strong>Example:</strong></h2>
<pre><code class="lang-plaintext">resource "null_resource" "say_hello_world" {  
  provisioner "local-exec" {  
    command     = "echo 'Hello, world'"  
    interpreter = ["/bin/bash", "-c"]  
  }  
}
</code></pre>
<p>However, it is also possible to extend the current script to a newer level.</p>
<pre><code class="lang-plaintext">resource "null_resource" "say_hello_world" {
  triggers = {
    shell_hash = sha256(file("${path.module}/hello.sh"))
  }

  provisioner "local-exec" {
    command     = "./hello.sh"
    interpreter = ["/bin/bash", "-c"]
  }
}
</code></pre>
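<p>To preview the value Terraform will compute for <code>shell_hash</code>, you can hash the file yourself. This sketch assumes GNU coreutils (<code>sha256sum</code>); on macOS, <code>shasum -a 256</code> is the equivalent:</p>

```shell
# Create a sample hello.sh and compute its SHA-256,
# mirroring what Terraform's sha256(file(...)) produces
cat > hello.sh <<'EOF'
#!/bin/bash
echo 'Hello, world'
EOF
sha256sum hello.sh | cut -d' ' -f1   # a 64-character hex digest
```

Whenever the script's contents change, this digest changes, which is what causes the <code>triggers</code> map to differ and the provisioner to run again.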
<p>Note that anything run by the <code>local-exec</code> provisioner <strong>can’t</strong> be stored in the TF state. If you need changes to be re-applied, make sure to include the <code>triggers</code> argument: adding variables to <code>triggers</code> forces the provisioner to run again whenever a variable’s value changes.</p>
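<p>And since this article promises the AWS CLI, here is a hedged sketch of invoking it through <code>local-exec</code>. The bucket name is a placeholder, and any other AWS CLI command can be substituted; this is an illustration, not a recommended way to manage S3 tags (the native <code>aws_s3_bucket</code> resource handles that in state):</p>

```hcl
# Hypothetical example: run an AWS CLI command from Terraform.
# The bucket "my-example-bucket" is a placeholder and must already exist.
resource "null_resource" "tag_bucket" {
  provisioner "local-exec" {
    command     = "aws s3api put-bucket-tagging --bucket my-example-bucket --tagging 'TagSet=[{Key=ManagedBy,Value=Terraform}]'"
    interpreter = ["/bin/bash", "-c"]
  }
}
```

The command runs with whatever AWS credentials your shell environment has, so the same profile/role configuration you use for the AWS provider applies here.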
<hr />
<h1 id="heading-conclusion">Conclusion</h1>
<p>So, what are you waiting for? Dive right into it!</p>
<p>An example can be found <a target="_blank" href="https://github.com/InspectorGadget/terraform-awscli-custom-command">here</a>.</p>
<p>#HappyDesigning  </p>
<p>#HappyCoding  </p>
<p>#HappyArchitecting</p>
]]></content:encoded></item></channel></rss>