<?xml version="1.0" encoding="utf-8"?>

<feed xmlns="http://www.w3.org/2005/Atom" >
  <generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator>
  <link href="https://jongoodall.co.uk/feed.xml" rel="self" type="application/atom+xml" />
  <link href="https://jongoodall.co.uk/" rel="alternate" type="text/html" />
  <updated>2026-05-10T09:57:26+00:00</updated>
  <id>https://jongoodall.co.uk/</id>

  
    <title type="html">Jon Goodall</title>
  

  
    <subtitle>Principal Cloud Engineer @&lt;a href=&quot;https://www.logicata.com/&quot; target=&quot;_blank&quot;&gt;Logicata&lt;/a&gt;, specializing in DevOps and AWS</subtitle>
  

  
    <author>
        <name>Jon Goodall</name>
      
        <email>jongoodall14@gmail.com</email>
      
      
    </author>
  

  
  
    <entry>
      <title type="html">Building a Jekyll Blog with AI: An Honest Take on ‘AI Slop’</title>
      
      <link href="https://jongoodall.co.uk/blog/2025/12/06/building-a-jekyll-blog-with-ai-honest-take/" rel="alternate" type="text/html" title="Building a Jekyll Blog with AI: An Honest Take on &apos;AI Slop&apos;" />
      
      <published>2025-12-06T15:00:00+00:00</published>
      <updated>2025-12-06T15:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2025/12/06/building-a-jekyll-blog-with-ai-honest-take</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2025/12/06/building-a-jekyll-blog-with-ai-honest-take/">&lt;h2 id=&quot;building-a-jekyll-blog-with-ai-an-honest-take-on-ai-slop&quot;&gt;Building a Jekyll Blog with AI: An Honest Take on “AI Slop”&lt;/h2&gt;

&lt;p&gt;Let’s get something out of the way immediately: &lt;strong&gt;I didn’t write most of the code for this website.&lt;/strong&gt; An AI did. Specifically, Claude (via Kiro IDE) wrote the Jekyll templates, the CSS, the JavaScript, the Ruby scripts, and even this blog post you’re reading right now.&lt;/p&gt;

&lt;p&gt;Is this “AI slop”? Maybe. Am I going to pretend I hand-crafted every line of HTML? Absolutely not.&lt;/p&gt;

&lt;h2 id=&quot;why-be-honest-about-it&quot;&gt;Why Be Honest About It?&lt;/h2&gt;

&lt;p&gt;I’m a Principal Cloud Engineer. I work with AWS, Terraform, Kubernetes, CI/CD pipelines - that’s my domain. I can architect a multi-region serverless application, but ask me to center a div and I’ll probably Google it like everyone else.&lt;/p&gt;

&lt;p&gt;When I decided to modernize my portfolio site and add a blog, I had two options:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Spend weeks learning modern web development best practices, Jekyll internals, Bootstrap 5, and frontend tooling&lt;/li&gt;
  &lt;li&gt;Use AI to handle the web dev parts while I focus on the content and infrastructure&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;I chose option 2, and I’m not ashamed of it.&lt;/p&gt;

&lt;h2 id=&quot;what-ai-assisted-actually-means&quot;&gt;What “AI-Assisted” Actually Means&lt;/h2&gt;

&lt;p&gt;Here’s what the process looked like:&lt;/p&gt;

&lt;h3 id=&quot;what-i-did&quot;&gt;What I Did:&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Decided what I wanted (multi-page site, blog, modern design)&lt;/li&gt;
  &lt;li&gt;Provided feedback (“the hero header is crap”, “color scheme needs better contrast”)&lt;/li&gt;
  &lt;li&gt;Made high-level decisions (GitHub Pages hosting, Jekyll, Bootstrap 5)&lt;/li&gt;
  &lt;li&gt;Said “yes” or “no” to changes&lt;/li&gt;
  &lt;li&gt;Pointed out when things were broken&lt;/li&gt;
  &lt;li&gt;That’s about it, honestly&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;what-claude-did&quot;&gt;What Claude Did:&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Wrote Jekyll layouts and includes&lt;/li&gt;
  &lt;li&gt;Created responsive CSS with proper color contrast&lt;/li&gt;
  &lt;li&gt;Built the blog listing and pagination system&lt;/li&gt;
  &lt;li&gt;Generated category and tag pages&lt;/li&gt;
  &lt;li&gt;Created Ruby scripts for automation&lt;/li&gt;
  &lt;li&gt;Wrote property-based tests&lt;/li&gt;
  &lt;li&gt;Fixed bugs and styling issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;what-we-did-together&quot;&gt;What We Did Together:&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Iterated on design (“much better”, “still crap”)&lt;/li&gt;
  &lt;li&gt;Debugged issues (blog not showing posts, 404s on category pages)&lt;/li&gt;
  &lt;li&gt;Improved the workflow (automated taxonomy generation instead of manual pages)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-interesting-part&quot;&gt;The Interesting Part&lt;/h2&gt;

&lt;p&gt;Here’s what I find genuinely interesting about this process:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. It’s Not Magic&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The AI didn’t just conjure a perfect website. It took iteration, feedback, and course correction. I had to know what I wanted and be able to articulate when something wasn’t working.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. It’s Still My Site (Sort Of)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The decisions are mine in the sense that I said “yes” or “no” to things. The content is mine (apart from most of this post; guess which bits I did for bonus points). But let’s not pretend I “architected” this - I just pointed at things and said “make it work” or “that’s ugly”.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. It Reveals What Actually Matters&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Turns out, for a personal blog, the code quality matters less than:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Does it work?&lt;/li&gt;
  &lt;li&gt;Is it maintainable?&lt;/li&gt;
  &lt;li&gt;Does it look professional?&lt;/li&gt;
  &lt;li&gt;Can I focus on writing content instead of fighting with CSS?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The answer to all of these is yes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. The Transparency Is The Point&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I could have built this with AI and never mentioned it. Many people do. But I think the interesting story here is being honest about the process.&lt;/p&gt;

&lt;p&gt;I’m not a web developer. I don’t want to be a web developer. But I wanted a modern, professional site. AI made that possible without requiring me to become an expert in a domain I don’t care about.&lt;/p&gt;

&lt;h2 id=&quot;what-this-means-for-ai-slop&quot;&gt;What This Means for “AI Slop”&lt;/h2&gt;

&lt;p&gt;The term “AI slop” usually refers to low-effort, mass-produced content that floods the internet. And yeah, that’s a real problem.&lt;/p&gt;

&lt;p&gt;But is this that?&lt;/p&gt;

&lt;p&gt;I’d argue no, for a few reasons:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;&lt;strong&gt;It’s Transparent&lt;/strong&gt;: I’m telling you exactly how it was built&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;It’s Purposeful&lt;/strong&gt;: This site serves a specific purpose for me&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;It’s Maintained&lt;/strong&gt;: I’m responsible for it and will keep it updated&lt;/li&gt;
  &lt;li&gt;&lt;strong&gt;It’s Honest&lt;/strong&gt;: I’m not claiming expertise I don’t have&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The alternative would be:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;Paying a web developer (expensive, ongoing)&lt;/li&gt;
  &lt;li&gt;Using a cookie-cutter template (limiting, generic)&lt;/li&gt;
  &lt;li&gt;Not having a blog at all (boring)&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-real-question&quot;&gt;The Real Question&lt;/h2&gt;

&lt;p&gt;The real question isn’t “Is this AI slop?” It’s “Does this add value?”&lt;/p&gt;

&lt;p&gt;For me, the answer is yes:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;I have a professional online presence&lt;/li&gt;
  &lt;li&gt;I can share technical content about DevOps and cloud engineering&lt;/li&gt;
  &lt;li&gt;I can maintain and update it myself&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For you, the reader, hopefully it’s also yes:&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;You get honest technical content about cloud engineering&lt;/li&gt;
  &lt;li&gt;You see a real example of AI-assisted development&lt;/li&gt;
  &lt;li&gt;You can judge for yourself whether the result is valuable&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-workflow&quot;&gt;The Workflow&lt;/h2&gt;

&lt;p&gt;Since this is a technical blog, here’s what the actual workflow looked like:&lt;/p&gt;

&lt;div class=&quot;language-bash highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;c&quot;&gt;## 1. Started with a forked one-page portfolio&lt;/span&gt;
git clone https://github.com/jdgoodall1/jdgoodall1.github.io.git

&lt;span class=&quot;c&quot;&gt;## 2. Used Kiro IDE to modernize it&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;## - Created spec documents (requirements, design, tasks)&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;## - Iterated on implementation&lt;/span&gt;
&lt;span class=&quot;c&quot;&gt;## - Fixed issues as they came up&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;## 3. Automated the boring parts&lt;/span&gt;
rake generate_taxonomy  &lt;span class=&quot;c&quot;&gt;# Auto-generates category/tag pages&lt;/span&gt;
rake serve             &lt;span class=&quot;c&quot;&gt;# Runs Jekyll with live reload&lt;/span&gt;

&lt;span class=&quot;c&quot;&gt;## 4. Deploy&lt;/span&gt;
git push origin main   &lt;span class=&quot;c&quot;&gt;# GitHub Pages handles the rest&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
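&lt;p&gt;The actual taxonomy generator is a Ruby Rake task, but the idea is simple enough to sketch: scan post front matter for every category and tag, then write one stub page per term for a Jekyll layout to render. This is an illustrative Python sketch of that idea, not the site’s real script:&lt;/p&gt;

```python
# Illustrative sketch of automated taxonomy generation (the real site
# uses a Rake task). Scans Jekyll posts for front-matter terms, then
# writes a stub page per term; a layout does the actual post listing.
import re
from pathlib import Path

FRONT_MATTER = re.compile(r"\A---\n(.*?)\n---", re.S)

def collect_terms(posts_dir, key):
    """Gather every value under `key` (e.g. 'tags') in post front matter.
    Handles simple inline lists like "tags: [aws, ai]" only."""
    terms = set()
    for post in Path(posts_dir).glob("*.md"):
        match = FRONT_MATTER.match(post.read_text())
        if match:
            for line in match.group(1).splitlines():
                if line.startswith(key + ":"):
                    raw = line.split(":", 1)[1]
                    terms.update(t.strip() for t in raw.strip("[] ").split(",") if t.strip())
    return sorted(terms)

def write_stub_pages(terms, out_dir, layout):
    """Emit a minimal page per term; the named layout renders the listing."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for term in terms:
        page = out / (term + ".md")
        page.write_text("---\nlayout: " + layout + "\ntitle: " + term + "\nterm: " + term + "\n---\n")
```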

&lt;p&gt;The entire modernization took a few hours of back-and-forth with the AI, versus what would have been weeks of learning and coding.&lt;/p&gt;

&lt;h2 id=&quot;what-i-actually-learned&quot;&gt;What I Actually Learned&lt;/h2&gt;

&lt;p&gt;Let’s be real: not much. I didn’t suddenly become a Jekyll expert or learn Bootstrap 5. I mostly just said “this looks crap” or “the blog isn’t showing posts” and let the AI figure it out.&lt;/p&gt;

&lt;p&gt;Could I maintain this? Eh, kinda. Could I modify it? Maybe simple stuff. Did I learn the deep internals of Jekyll templating? Absolutely not.&lt;/p&gt;

&lt;p&gt;And that’s fine. That was never the goal.&lt;/p&gt;

&lt;h2 id=&quot;the-bottom-line&quot;&gt;The Bottom Line&lt;/h2&gt;

&lt;p&gt;This site is “AI slop” in the sense that an AI generated most of the code. But it’s not slop in the sense of being low-quality, thoughtless, or deceptive.&lt;/p&gt;

&lt;p&gt;It’s a tool that let me focus on what I’m good at (cloud engineering, DevOps, technical writing) while still having a professional web presence.&lt;/p&gt;

&lt;p&gt;Is that cheating? I don’t think so. It’s just using the right tool for the job.&lt;/p&gt;

&lt;h2 id=&quot;your-turn&quot;&gt;Your Turn&lt;/h2&gt;

&lt;p&gt;If you’re reading this and thinking “I could never build a website”, maybe reconsider. With AI assistance, you probably can. You just need to:&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Know what you want&lt;/li&gt;
  &lt;li&gt;Be able to give feedback&lt;/li&gt;
  &lt;li&gt;Be willing to iterate&lt;/li&gt;
  &lt;li&gt;Be honest about the process&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And if you’re reading this thinking “This is exactly what’s wrong with AI”, I respect that. But I’d rather be honest about using AI than pretend I’m something I’m not.&lt;/p&gt;

&lt;h2 id=&quot;meta-note&quot;&gt;Meta Note&lt;/h2&gt;

&lt;p&gt;Yes, Claude wrote this blog post too. I gave it the direction: “lean into the AI slop label but be honest about it - I’m not going to claim this as my work in any meaningful way, I just think it’s interesting but can’t be bothered to write it.”&lt;/p&gt;

&lt;p&gt;And here we are.&lt;/p&gt;

&lt;p&gt;Is it slop? You decide.&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: If you want to see the actual code and process, it’s all public on &lt;a href=&quot;https://github.com/jdgoodall1/jdgoodall1.github.io&quot;&gt;GitHub&lt;/a&gt;. The README is transparent about how it was built, and you can see every commit.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Another Update&lt;/strong&gt;: The irony of using AI to write a blog post about using AI to build a blog is not lost on me - or Claude actually, it wrote that joke.&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="jekyll" />
      
        <category term="ai" />
      
        <category term="claude" />
      
        <category term="automation" />
      
        <category term="meta" />
      

      
        <summary type="html">Let&apos;s be real: I&apos;m a cloud engineer, not a web developer. This entire site was built with AI assistance. Here&apos;s what that actually means, why I&apos;m not pretending otherwise, and why it&apos;s still interesting.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jongoodall.co.uk/img/kiro.png" />
      
    </entry>
  
    <entry>
      <title type="html">AWS Database Savings Plans – Save Up to 35% – FINALLY!</title>
      
      <link href="https://www.logicata.com/blog/aws-database-savings-plans/" rel="alternate" type="text/html" title="AWS Database Savings Plans – Save Up to 35% – FINALLY!" />
      
      <published>2025-12-04T14:00:00+00:00</published>
      <updated>2025-12-04T14:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2025/12/04/aws-database-savings-plans</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2025/12/04/aws-database-savings-plans/">&lt;p&gt;It’s AWS Re:Invent right now, and one announcement has me and the rest of the AWS community very excited – AWS &lt;a href=&quot;https://aws.amazon.com/blogs/aws/introducing-database-savings-plans-for-aws-databases/&quot;&gt;Database Savings Plans.&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’ve been asking for this for as long as I can remember, probably because I’m a bit dull, and also a little stingy…&lt;/p&gt;

&lt;p&gt;You’re likely wondering why I’m so excited about this, and in no small part, it’s because it makes my life easier. It also gives you, AWS customers, another AWS cost optimisation option to save money on your AWS bill, which is always a good thing.&lt;/p&gt;

&lt;p&gt;Before we get into the details about AWS Database Savings Plans, let’s do a bit of a history lesson.&lt;/p&gt;

&lt;h2 id=&quot;a-history-of-aws-savings-plans&quot;&gt;A History of AWS Savings Plans&lt;/h2&gt;

&lt;p&gt;All the way back in 2019 AWS released “&lt;a href=&quot;https://aws.amazon.com/savingsplans/compute-pricing/&quot;&gt;Compute Savings Plans&lt;/a&gt;”, and I’ve been a fan since day 1. They make saving money on “compute” (namely, EC2, Fargate and later &lt;a href=&quot;https://dev.to/aws-builders/aws-lambda-use-cases-when-you-should-use-it-5e2e&quot;&gt;Lambda&lt;/a&gt;) much easier.&lt;/p&gt;

&lt;p&gt;Before the Compute Savings Plan was released, if you knew that you were going to keep the same server for 1-3 years, you could lock in a commitment using a Reserved Instance (RI). Savings of 20% were common, and savings of 30% or more were possible. But if you planned to move to “modern compute”, you were a bit stuck. Sure, you could do it, but you’d be paying for the RI you’d committed to for the duration of the term, even if you weren’t using it. This was a real barrier to modernisation, because nobody likes paying twice.&lt;/p&gt;

&lt;p&gt;This is an oversimplification, as convertible Reserved Instances exist, which let you trade them for other types. Smaller Reserved Instances also “roll up” to cover larger servers. This has a caveat, though – it only applies if there’s no licence fee built into the hourly spend (sorry, Windows users). But in essence, you were stuck managing servers.&lt;/p&gt;

&lt;p&gt;You could move to containers, but you had to keep using EC2 to run the containers on, which was a headache and added engineering time.&lt;/p&gt;

&lt;p&gt;Compute Savings Plans are different, though – you commit to an hourly spend and save money. Literally, that’s it.&lt;/p&gt;

&lt;p&gt;OK, it’s a spend within the three supported services (EC2, Lambda and Fargate), but so long as you’re spending money on one of those things, you’d be getting the discounted pricing. Purchase terms are the same – commit to longer and pay more up front to save more money. However, the 0% upfront 1-year plan is incredibly compelling, so I default to recommending it.&lt;/p&gt;

&lt;p&gt;Compute Savings Plans aren’t perfect, though. My biggest gripe was that they didn’t support database spend. You might argue it’s a storage service, not a compute service, but tell a developer that. The line between “storage” and “compute” is so thin you can see through it at this point. My second biggest issue was the hourly commitment rather than the daily one. With a daily commitment you can account much better for flexible workload trends, but you can’t have everything.&lt;/p&gt;
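&lt;p&gt;To make the hourly-commitment gripe concrete, here’s a deliberately simplified model with made-up numbers (real Savings Plans apply per-service discounted rates, so treat this as an illustration only). The commitment is billed every hour at a discount, and anything above it is on-demand; a spiky workload with the same total daily spend ends up paying for idle commitment at night and on-demand overflow during the day:&lt;/p&gt;

```python
# Illustrative numbers only: why an hourly commitment suits flat usage
# better than spiky usage. Rates and the discount are made up.
def blended_cost(hourly_usage, commit_per_hour, discount):
    """Commitment is billed every hour (at a discount); usage above it is on-demand."""
    total = 0.0
    for usage in hourly_usage:
        covered = min(usage, commit_per_hour)
        overflow = usage - covered
        total += commit_per_hour * (1 - discount) + overflow
    return round(total, 2)

flat = [10.0] * 24                  # steady $10/hour of eligible spend
spiky = [2.0] * 12 + [18.0] * 12    # same $240/day of usage, but bursty

# A $10/hour commitment at a 20% discount fully covers the flat profile...
print(blended_cost(flat, 10.0, 0.20))    # 192.0
# ...while the spiky profile pays for unused commitment plus on-demand overflow.
print(blended_cost(spiky, 10.0, 0.20))   # 288.0
```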

&lt;h2 id=&quot;cool-history-done-whats-new&quot;&gt;Cool, history done, what’s new?&lt;/h2&gt;

&lt;p&gt;As of the 2nd of December 2025, announced to great cheers from the audience in Matt Garman’s re:Invent 2025 keynote, AWS Database Savings Plans are a thing! This is “A Very Good Thing”. He really did save the best til last, with only 2 seconds left on the ‘shot clock’!&lt;/p&gt;

&lt;p&gt;AWS Database Savings Plans work in a very similar way to Compute Savings Plans – commit to an hourly spend in “supported usage” and save money.&lt;/p&gt;

&lt;p&gt;Purchase terms are similar, but currently you can only commit to a 1-year term (come on AWS, give us 3 years!), and we’re also only offered a no up-front payment option at launch. I’d love to see increasing discounts available for committing for longer, and paying up front, as you can with Compute Savings Plans. Despite this, there are still some serious discounts available here, and the best bit is it covers serverless too – with a discount of up to 35%! That’s massive and really cements the idea for me that you should start with serverless options until your per-hour cost outweighs the “scale to zero” benefit. AWS are also pushing ‘&lt;a href=&quot;https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-advancepay.html&quot;&gt;Advance Pay&lt;/a&gt;’ as a way to pay up front for your database services, but there’s no discount for doing this, so I’m not sure why you’d bother.&lt;/p&gt;

&lt;p&gt;They also have day 1 support in the Savings Plan Purchase Analyzer – I waxed lyrical about the Analyzer on an episode of the &lt;a href=&quot;https://www.youtube.com/@logicata&quot;&gt;Logicast AWS News Podcast&lt;/a&gt;, so it’s a really nice thing to have from the start. You’d better believe that if it were missing, I’d be complaining about it!&lt;/p&gt;

&lt;h2 id=&quot;sounds-great-whats-the-catch&quot;&gt;Sounds Great, What’s the Catch?&lt;/h2&gt;

&lt;p&gt;The AWS Database Savings Plan isn’t a perfect offering, as it still suffers from my second gripe with Compute Savings Plans – the hourly vs. daily commitment.&lt;/p&gt;

&lt;p&gt;It does also muddy the water a bit, as we now have four different types of Savings Plan:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Compute Savings Plan&lt;/li&gt;
  &lt;li&gt;EC2 Savings Plan&lt;/li&gt;
  &lt;li&gt;Database Savings Plan&lt;/li&gt;
  &lt;li&gt;SageMaker Savings Plan&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is very easy to solve, though: AWS just needs to release an overall “Savings Plan” that covers Compute &amp;amp; Database, whilst dropping the EC2 Savings Plan offer. I’ve never found EC2 Savings Plans useful given that Reserved Instances and Compute Savings Plans exist, but maybe some people do. I also don’t use SageMaker enough to have a considered opinion on SageMaker Savings Plans, so they get to stay for now.&lt;/p&gt;

&lt;p&gt;I’m sure it’s not very easy for AWS to do this, for a myriad of internal and technical reasons, but we can but dream.&lt;/p&gt;

&lt;p&gt;Now, onto the “supported services” list. This is confusing. It covers:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;RDS&lt;/li&gt;
  &lt;li&gt;Aurora&lt;/li&gt;
  &lt;li&gt;DynamoDB&lt;/li&gt;
  &lt;li&gt;ElastiCache&lt;/li&gt;
  &lt;li&gt;DocumentDB&lt;/li&gt;
  &lt;li&gt;Neptune&lt;/li&gt;
  &lt;li&gt;Keyspaces&lt;/li&gt;
  &lt;li&gt;Timestream&lt;/li&gt;
  &lt;li&gt;Database Migration Service (who’s running that for a whole year?)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is a massive list of services for day 1 – remember, Compute Savings Plans only covered EC2 &amp;amp; Fargate at launch.&lt;/p&gt;

&lt;p&gt;However, it’s not all spend within those services that counts. Got a Redis cluster? Sorry, only Valkey is supported. Using a t4g RDS instance? No discount for you. In fact, anything that uses ‘servers’ is only eligible to be included in a Database Savings Plan if it’s using the latest instance types (r7g, m7g, m7i, m8g, etc). This is very frustrating, as many people I’ve worked with need a 24/7 non-prod environment, but only need t4g instances, for example.&lt;/p&gt;

&lt;p&gt;The serverless offering somewhat redeems this, as it’s just per-CU (Capacity Unit) hour. This is a much better offering than the current option of “nothing” and goes a long way to solving for “I can’t do serverless, it’s more expensive under consistent load”. This has been a real issue for me personally, as I’m a big advocate of serverless-first, but I couldn’t honestly recommend it for production workloads. Either the warmup time was too long for transactional workloads, or the constant throughput was too expensive without being able to make a committed purchase.&lt;/p&gt;

&lt;h2 id=&quot;final-thoughts-on-aws-database-savings-plans&quot;&gt;Final Thoughts on AWS Database Savings Plans&lt;/h2&gt;

&lt;p&gt;I’m willing to forgive the complicated in-scope vs. out-of-scope spend on this one, considering the vast array of services that are covered. Also, this is a V1 offering, so I’m sure it will evolve to include more services, and more payment options, as per the other savings mechanisms.&lt;/p&gt;

&lt;p&gt;This also doesn’t solve for “do I buy a Reserved Instance or an AWS Database Savings Plan”, but the gap is closing, and I’m looking forward to seeing more things come into scope in the future. AWS have committed to including newly released instance types as they become available, so I’ll just have to upgrade my boxes, I guess.&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="aws" />
      
        <category term="reinvent" />
      
        <category term="database" />
      
        <category term="savings-plans" />
      
        <category term="cost-optimization" />
      
        <category term="rds" />
      
        <category term="aurora" />
      
        <category term="dynamodb" />
      

      
        <summary type="html">It&apos;s AWS Re:Invent right now, and one announcement has me and the rest of the AWS community very excited – AWS Database Savings Plans. I&apos;ve been asking for this for as long as I can remember.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jongoodall.co.uk/img/AWS-Database-Savings-Plans-1024x684.jpg" />
      
    </entry>
  
    <entry>
      <title type="html">Building a Serverless Podcast Workflow: Adventures with AI</title>
      
      <link href="https://www.logicata.com/blog/building-a-serverless-podcast-workflow-adventures-with-ai/" rel="alternate" type="text/html" title="Building a Serverless Podcast Workflow: Adventures with AI" />
      
      <published>2024-12-29T14:00:00+00:00</published>
      <updated>2024-12-29T14:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2024/12/29/building-serverless-podcast-workflow</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2024/12/29/building-serverless-podcast-workflow/">&lt;p&gt;As you may know, I’m a co-host &amp;amp; standing guest on the &lt;a href=&quot;https://www.logicata.com/follow/&quot;&gt;Logicast AWS News Podcast&lt;/a&gt;, where we discuss all things in the news about AWS.&lt;/p&gt;

&lt;p&gt;What you probably don’t know is that the preparation &amp;amp; production of a podcast is rather a lot of work, and that anything to speed up &amp;amp; simplify the process is absolutely necessary – especially for a weekly podcast.&lt;/p&gt;

&lt;p&gt;On the preparation side, being about recent AWS news helps because we don’t have to do as much work on the research side – we just turn up &amp;amp; record. This doesn’t help us on the production side though, which is still a lot of work.&lt;/p&gt;

&lt;p&gt;To give you a flavour, the things we have to do every week are:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Pick the articles (and ideally read them)&lt;/li&gt;
  &lt;li&gt;Share with the guest and answer any questions they might have&lt;/li&gt;
  &lt;li&gt;Record the episode&lt;/li&gt;
  &lt;li&gt;Download the files&lt;/li&gt;
  &lt;li&gt;Convert the files into the correct formats&lt;/li&gt;
  &lt;li&gt;Create a trailer&lt;/li&gt;
  &lt;li&gt;Create a summary (the “show notes”, in podcasting parlance)&lt;/li&gt;
  &lt;li&gt;Upload to the publishing platform&lt;/li&gt;
  &lt;li&gt;Social promotion&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;On top of this, because “content is king”, we want to be able to re-use the episode as much as possible. Our current wishlist is:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Create short “clips” for social posting to “drip feed” the content and drive subscribers&lt;/li&gt;
  &lt;li&gt;Create a long-form blog post from the recording, that isn’t just a transcript&lt;/li&gt;
  &lt;li&gt;Add full subtitles to each video.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;As with any problem, there were a few options to solve both the required tasks, and start on the wishlist.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 1: Outsource it.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This looks like a combination of things, from hiring a production &amp;amp; marketing person (much love to Alicja for the work she does), to using 3rd party tools to help with some of the creation (I’m not linking the tool, because they’re not paying us, but we use an AI service for clip/trailer creation).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Option 2: Automate All The Things&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Obviously I want to do this, because I’m an engineer, and in my head my time is free. I’m sure Logicata disagrees with me here, though…&lt;/p&gt;

&lt;p&gt;However, throw in the fact that we “needed” a reason to talk about AI, and we thought we’d better have a go at doing “something”.&lt;/p&gt;

&lt;h2 id=&quot;enter-the-workflow&quot;&gt;Enter the Workflow&lt;/h2&gt;

&lt;p&gt;Now, I’m a Serverless AWS Community Builder, so obviously I went straight for Lambda &amp;amp; Step Functions here. I started playing around with options, and for once, doing some research. I know! Didn’t see that coming either.&lt;/p&gt;

&lt;p&gt;Up in this rarified research-fueled air, I found &lt;a href=&quot;https://aws.amazon.com/blogs/machine-learning/create-summaries-of-recordings-using-generative-ai-with-amazon-bedrock-and-amazon-transcribe/&quot;&gt;this AWS blog post&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This was a really good foundation for what we needed/wanted to build. It even had a sample project at the time, which let me short-circuit hours of dev time.&lt;/p&gt;

&lt;p&gt;After a bit of tweaking, I came up with this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aj0h1afym9s4nlmuh3r2.png&quot; alt=&quot;Serverless AI Workflow V1&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Yes, that’s a big scary image, so let’s break it down.&lt;/p&gt;

&lt;p&gt;The process is:&lt;/p&gt;

&lt;h5 id=&quot;step-1-kick-off-with-file-upload&quot;&gt;Step 1: Kick-off with File Upload&lt;/h5&gt;

&lt;p&gt;We start by uploading an m4a file to an S3 bucket, and use the bucket notification to trigger the workflow.&lt;/p&gt;

&lt;p&gt;I have to download the files from the recording platform, which isn’t a problem, but is a bit annoying.&lt;/p&gt;
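&lt;p&gt;A minimal sketch of that trigger, assuming the standard S3 bucket-notification event shape. The Step Functions kick-off is left as a comment because it needs an AWS client, and the handler shape is illustrative rather than the workflow’s actual code:&lt;/p&gt;

```python
# Illustrative Lambda handler for the S3 bucket notification: pull the
# uploaded object's bucket/key out of the event, then (in the real
# workflow) start the Step Functions execution with them.
from urllib.parse import unquote_plus

def parse_upload_event(event):
    """Return (bucket, key) for the first record in an S3 notification event."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = unquote_plus(record["object"]["key"])   # keys arrive URL-encoded
    return bucket, key

def handler(event, context):
    bucket, key = parse_upload_event(event)
    # boto3.client("stepfunctions").start_execution(
    #     stateMachineArn=..., input=json.dumps({"bucket": bucket, "key": key}))
    return {"bucket": bucket, "key": key}
```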

&lt;h5 id=&quot;step-2-media-conversion&quot;&gt;Step 2: Media Conversion&lt;/h5&gt;

&lt;p&gt;AWS Elemental MediaConvert transforms the m4a file into an mp3 format, ready for Spotify and other platforms.&lt;/p&gt;

&lt;p&gt;We have to do this because the recording platform delivers an m4a, but most audio platforms prefer an mp3.&lt;/p&gt;

&lt;p&gt;This is a fire-and-forget approach, so I’m manually checking for the job completion and downloading the file afterwards. Again, not a problem but somewhat annoying.&lt;/p&gt;

&lt;h5 id=&quot;step-3-transcription&quot;&gt;Step 3: Transcription&lt;/h5&gt;

&lt;p&gt;Amazon Transcribe converts audio into a text-based JSON document. This is actually the most expensive part of the process, which I didn’t expect at the outset.&lt;/p&gt;
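&lt;p&gt;For flavour, here’s roughly the shape of the parameters a transcription step can pass to boto3’s start_transcription_job. The bucket layout, job name and speaker-label settings here are my illustrative guesses, not the workflow’s actual configuration:&lt;/p&gt;

```python
# Sketch of the Transcribe job's parameters; these would feed
# transcribe.start_transcription_job(**params). Speaker labels help
# downstream prompts tell the hosts apart.
def transcription_params(bucket, key, job_name):
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": "s3://" + bucket + "/" + key},
        "MediaFormat": "mp3",
        "IdentifyLanguage": True,                # let Transcribe detect the language
        "OutputBucketName": bucket,              # JSON transcript lands back in S3
        "Settings": {"ShowSpeakerLabels": True, "MaxSpeakerLabels": 4},
    }
```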

&lt;h5 id=&quot;step-4-run-the-prompts&quot;&gt;Step 4: Run the prompts&lt;/h5&gt;

&lt;p&gt;Amazon Bedrock reads the transcription and generates summaries and titles using prompts stored in DynamoDB.&lt;/p&gt;

&lt;p&gt;Since building this, Bedrock Prompt Management became a thing, because as everyone knows, the best way to get AWS to create a new feature is to build it yourself first.&lt;/p&gt;

&lt;p&gt;This is all in one Lambda, using a loop in Python. I regret this enormously but it was the quickest option.&lt;/p&gt;
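&lt;p&gt;Stripped of the AWS plumbing, that regrettable loop looks roughly like this. The “invoke” callable is injected here so the shape is testable; in the real Lambda it would wrap a Bedrock invoke_model call:&lt;/p&gt;

```python
# The sequential prompt loop, roughly: take the prompt templates (in the
# real workflow, fetched from DynamoDB), run each one over the transcript
# via the injected model-invocation function, and collect the results.
def run_prompts(prompts, transcript, invoke):
    """Run each named prompt over the transcript, one at a time."""
    results = {}
    for name, template in prompts.items():
        full_prompt = template.replace("{transcript}", transcript)
        results[name] = invoke(full_prompt)   # one model call per prompt: slow and fragile
    return results
```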

&lt;h5 id=&quot;step-5-outputs&quot;&gt;Step 5: Outputs&lt;/h5&gt;

&lt;p&gt;The final outputs are sent to an SNS topic for easy access.&lt;/p&gt;

&lt;p&gt;We have a Slack channel email subscribed to the topic, so the messages aren’t lost in inboxes.&lt;/p&gt;

&lt;p&gt;We went with SNS &amp;amp; email both because the baseline I used was already doing it, and I couldn’t be bothered to work out the schema for AWS Chatbot. I should probably do this though.&lt;/p&gt;

&lt;p&gt;Obviously this isn’t our full wishlist, or even the complete set of required tasks. However, with careful prompting, it does make the required tasks a lot faster to do. The summary is a good prompt to create the show notes &amp;amp; the LLM creates the title – sometimes we use it, sometimes not.&lt;/p&gt;

&lt;h2 id=&quot;there-must-be-some-problems-though&quot;&gt;There must be some problems though?&lt;/h2&gt;

&lt;p&gt;You would be correct there. The issue comes back to my time – it’s only free in my head. Turns out, building this sort of thing takes rather a lot of time &amp;amp; effort, so it has a number of “rough edges”.&lt;/p&gt;

&lt;p&gt;Chiefly:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. It’s really fragile.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Seriously, one dodgy prompt, or an episode that runs a touch long, bang. All falls over, nothing comes out the end. Lately we’ve been hitting rate limits too, presumably because we’re on an ancient version of Claude.&lt;/p&gt;

&lt;p&gt;This is mostly because I’m hacking it together, and not spending a proper amount of time on it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Transcription is expensive, and the workflow must restart on error.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Again, because it’s fragile, the re-runs have to start at the beginning. What’s worse, because most of the failures are prompt-based and errors in the model invocation are only checked after-the-fact in a downstream task, I can’t take the offending prompt out and re-drive from the failure, thus forcing a full re-run.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. It’s kinda slow&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nothing doing here, it’s just slow. There’s no parallelisation of the prompts (due to the aforementioned bad Python loop), and a single Lambda takes every output response and dumps it onto SNS at the same time.&lt;/p&gt;

&lt;h2 id=&quot;improvements&quot;&gt;Improvements&lt;/h2&gt;

&lt;p&gt;OK, we’ve run this for a few months (eek), and I’ve even delivered a whole talk on it (see that &lt;a href=&quot;https://youtu.be/IUSKn8YZn68?si=69rANdfwM27XWEzq&amp;amp;t=2386&quot;&gt;here&lt;/a&gt;), so I should probably do something about fixing these rough edges. This was the list:&lt;/p&gt;

&lt;h3 id=&quot;improvement-1&quot;&gt;Improvement 1:&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Update the model:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Annoyingly the interface between Claude versions has changed, so some faffing around is needed here.&lt;/p&gt;

&lt;h3 id=&quot;improvement-2&quot;&gt;Improvement 2:&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Use Prompt Manager:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;What it says on the tin. No more dodgy DynamoDB table for the prompts, use the service properly.&lt;/p&gt;

&lt;h3 id=&quot;improvement-3&quot;&gt;Improvement 3:&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Fix the bad loop:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Take the loop through the prompts out of a single Lambda, and run each prompt as its own Lambda invocation, called using a Map state.&lt;/p&gt;

&lt;p&gt;This also solves for speed, as the prompts are the second slowest part of the workflow.&lt;/p&gt;

&lt;h3 id=&quot;improvement-4&quot;&gt;Improvement 4:&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Less fragility:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Through the judicious use of “ignoring errors”, we want to be able to run all the prompts and get outputs, even if one (or most) of them fail.&lt;/p&gt;

&lt;h3 id=&quot;improvement-5&quot;&gt;Improvement 5:&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;File conversion result in Slack&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Still using SNS -&amp;gt; email, but now we’re checking for the conversion job, creating an S3 pre-signed URL and sending that to the SNS topic as soon as it’s available. The pre-signed URL lasts for a couple of hours.&lt;/p&gt;

&lt;h2 id=&quot;yes-yes-show-us-a-picture&quot;&gt;Yes yes, show us a picture:&lt;/h2&gt;

&lt;p&gt;Fine, something like this:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk3lyd0p9w51nf573g7a.png&quot; alt=&quot;Planned Serverless AI Workflow&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;did-it-work&quot;&gt;Did it work?&lt;/h2&gt;

&lt;p&gt;Well, no. Not quite.&lt;/p&gt;

&lt;p&gt;With the improvement list as a starting place, I ended up here:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zc1ek8k8htyhlkq7muu1.png&quot; alt=&quot;Serverless AI Workflow V2&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Bigger and scarier than before I know, but it can’t be helped – we’re just doing more stuff now.&lt;/p&gt;

&lt;p&gt;Let me walk you through it.&lt;/p&gt;

&lt;h3 id=&quot;steps-1-3-kick-off&quot;&gt;Steps 1-3: Kick Off:&lt;/h3&gt;

&lt;p&gt;All still the same – kick off with the upload of the m4a file, trigger the transcoding &amp;amp; transcription.&lt;/p&gt;

&lt;p&gt;The only slight tweak is that the transcoding trigger now returns the job ID, so I can use it later.&lt;/p&gt;

&lt;h3 id=&quot;step-4-parallel-state&quot;&gt;Step 4: Parallel State:&lt;/h3&gt;

&lt;p&gt;Now we actually use the parallel container I built earlier and split into two branches – one for the transcription &amp;amp; LLM invocations, and the other for the transcoding.&lt;/p&gt;

&lt;h3 id=&quot;step-5-transcoding-branch&quot;&gt;Step 5 (Transcoding Branch):&lt;/h3&gt;

&lt;p&gt;Nothing massively clever here, just a loop in the step function based on an if/else/continue premise to check for the status of the transcoding job.&lt;/p&gt;

&lt;p&gt;If it’s not done, loop around again and wait some more; if it’s completed, generate a pre-signed URL and send it to SNS; if it failed, send an error to SNS but don’t halt the step function.&lt;/p&gt;

&lt;p&gt;This last bit is important – as far as possible we’re not halting the step function for errors in the process, especially on the lower-value task.&lt;/p&gt;
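&lt;p&gt;Sketched in Python rather than the actual Amazon States Language (and with invented state names), the branch logic amounts to:&lt;/p&gt;

```python
# Sketch of the transcoding branch's Choice-state logic. The state names
# are invented; the real thing is a Step Functions Choice state, not code.
def next_state(job_status):
    if job_status == "COMPLETE":
        return "GeneratePresignedUrl"  # then on to SNS
    if job_status == "ERROR":
        return "NotifyFailure"         # tell SNS, but don't halt the branch
    return "WaitThenCheckAgain"        # still running: loop around

print(next_state("PROGRESSING"))
```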

&lt;h3 id=&quot;step-5-llm-branch&quot;&gt;Step 5 (LLM Branch):&lt;/h3&gt;

&lt;p&gt;Now we grab the prompts in their own Lambda, but they’re still from DynamoDB, because I couldn’t fathom prompt manager in the few evenings I had to spend on this.&lt;/p&gt;

&lt;p&gt;Same goes for the direct SDK integration between DDB &amp;amp; Step Functions really.&lt;/p&gt;

&lt;p&gt;I’m sure some of the Serverless DevAdvocates, AWS Heroes &amp;amp; Community Builders I know would dislike me for this, but I didn’t see the benefit of it here.&lt;/p&gt;

&lt;p&gt;Using Lambda Powertools in Python, grabbing the list is 4 lines of code. Plus I’d already written said code in v1, so I kept it.&lt;/p&gt;

&lt;h3 id=&quot;step-6-llm-branch&quot;&gt;Step 6 (LLM Branch):&lt;/h3&gt;

&lt;p&gt;Much the same as before, but with another loop – not a map state.&lt;/p&gt;

&lt;p&gt;It turns out the rate limiting wasn’t because Claude v2 is ancient. It’s because Bedrock’s rate limits are really low across the board.&lt;/p&gt;

&lt;p&gt;This means that we’re not solving for speed, but we are solving for rate limits, with an unlimited number of prompts, so that’s something.&lt;/p&gt;

&lt;p&gt;On each execution of the “Invoke Bedrock Model” lambda we’re dropping the prompt we ran from the list of them, as a quick-and-dirty for loop. With some time this could be cleaned up a bit, but for now it works.&lt;/p&gt;
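&lt;p&gt;As a rough sketch (the handler shape, event format and prompt names are all invented for illustration), the pattern looks like:&lt;/p&gt;

```python
# Sketch of the "run one prompt, return the rest" pattern used by the
# invoke-model Lambda; the event shape and names are illustrative only.
def handler(event, context=None):
    prompts = list(event["prompts"])
    current = prompts.pop(0)
    # ... invoke the Bedrock model with `current` here ...
    return {"ran": current, "prompts": prompts}  # remainder drives the iterator

result = handler({"prompts": ["summary", "title", "show notes"]})
print(result)
```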

&lt;p&gt;Also, we’re using Claude 3.5 Sonnet V1, and have designs on both V2 (or V3.5 Opus when that eventually comes out), and Amazon Nova Pro, as the outputs in the console looked encouraging.&lt;/p&gt;

&lt;h3 id=&quot;step-7-llm-branch&quot;&gt;Step 7 (LLM Branch):&lt;/h3&gt;

&lt;p&gt;You’ll notice that a couple of states have been removed, namely the direct SDK integration with SNS for sending the results, and the “end error” state.&lt;/p&gt;

&lt;p&gt;This reduces the re-run cost by allowing me to re-drive the state machine from the point of error in the case of hitting a rate limit – which was 90% of our errors in v1.&lt;/p&gt;

&lt;h3 id=&quot;step-8-llm-branch&quot;&gt;Step 8 (LLM Branch):&lt;/h3&gt;

&lt;p&gt;Back around to the iterator we go, but this time with an arbitrary 2-minute sleep.&lt;/p&gt;

&lt;p&gt;This gets us around the 1 invocation-per-minute rate we’re working with, but I could do something a bit smarter here with error code checking &amp;amp; exponential backoff.&lt;/p&gt;
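&lt;p&gt;That smarter something might look like this sketch – the function names are invented, and a plain RuntimeError stands in for a real throttling exception:&lt;/p&gt;

```python
import random
import time

# Sketch of retry-with-exponential-backoff around a throttled call.
# `invoke` is a stand-in for the Bedrock invocation; RuntimeError stands
# in for a ThrottlingException. Delays are tiny so the sketch runs fast.
def invoke_with_backoff(invoke, max_attempts=5, base_delay=0.01):
    for attempt in range(max_attempts):
        try:
            return invoke()
        except RuntimeError:
            # back off 1x, 2x, 4x... the base delay, plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.01))
    raise RuntimeError("still throttled after all retries")

calls = {"count": 0}
def flaky_invoke():
    calls["count"] += 1
    if calls["count"] == 3:
        return "model output"
    raise RuntimeError("throttled")

result = invoke_with_backoff(flaky_invoke)
print(result)
```

&lt;p&gt;In Step Functions the same idea is expressible declaratively with a Retry block on the task state, which would avoid paying for Lambda time spent sleeping.&lt;/p&gt;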

&lt;p&gt;The iterator is well-trodden at this point – just check if the prompts list still has prompts in it, and go around again. If it’s now empty, finish the branch.&lt;/p&gt;

&lt;h3 id=&quot;step-9-both-branches&quot;&gt;Step 9 (Both Branches):&lt;/h3&gt;

&lt;p&gt;End.&lt;/p&gt;

&lt;p&gt;Both branches are now done, so we close out.&lt;/p&gt;

&lt;h2 id=&quot;so-how-is-this-better&quot;&gt;So, how is this better?&lt;/h2&gt;

&lt;p&gt;Well for one I don’t have to sit and wait for the transcoding to finish. The pre-signed URL is dropped straight into Slack for me to grab, so that’s nice.&lt;/p&gt;

&lt;p&gt;Also, we can run an unlimited number of prompts, and shouldn’t get rate-limited anywhere near as often – if we do, re-drive from failure covers the restart without having to re-do the expensive transcoding &amp;amp; transcribing.&lt;/p&gt;

&lt;p&gt;The updated model performs far better, and because the interface for all the Claude v3/3.5 models is the same, I have a route to run each prompt under a different model – which I thought was the idea behind Bedrock to start with, but it seems to be harder than I expected.&lt;/p&gt;

&lt;p&gt;Also we have monitoring, sort of.&lt;/p&gt;

&lt;p&gt;I put a small CloudWatch Alarm on failures of the step function (well, I had Q Developer write it actually – can’t avoid using AI in this project), which also sends to the same SNS topic. That way I can just upload the file to S3 and get on with other things, without having to babysit the workflow.&lt;/p&gt;

&lt;p&gt;And of course I have an update on the project I can write a talk for, so I best start shopping that around local meetup groups I guess.&lt;/p&gt;

&lt;h2 id=&quot;whats-next&quot;&gt;What’s Next?&lt;/h2&gt;

&lt;h4 id=&quot;expand-the-iterator&quot;&gt;Expand the Iterator&lt;/h4&gt;

&lt;p&gt;I still want to be able to run a different model for each prompt, because models aren’t one-size-fits-all, and I’d like an easy way to test lots of different models on the same prompt.&lt;/p&gt;

&lt;h4 id=&quot;use-prompt-manager&quot;&gt;Use Prompt Manager&lt;/h4&gt;

&lt;p&gt;Still not using this, and I really should be.&lt;/p&gt;

&lt;h4 id=&quot;resiliency&quot;&gt;Resiliency&lt;/h4&gt;

&lt;p&gt;We’re in a better place than we were, but it’s still not as good as I’d like it to be. Ideally we’ll handle rate limit exceptions via a retry and exponential backoff, plus have proper alerting rather than a single alert for the whole Step Function.&lt;/p&gt;

&lt;h2 id=&quot;finally-what-did-we-learn&quot;&gt;Finally, what did we learn?&lt;/h2&gt;

&lt;p&gt;Rather a lot, as it happens.&lt;/p&gt;

&lt;h4 id=&quot;time--effort-needed&quot;&gt;Time &amp;amp; Effort Needed&lt;/h4&gt;

&lt;p&gt;Phase 1 showed the sheer amount of effort needed to get these things going, even with a big jumping-off point from AWS. This is compounded by the fact that this is a marketing/hobby project, so doesn’t get a lot of time spent on it. Phase 2 just compounded that lesson – it took the best part of a day to make the change, split across several evenings, and it’s not that different from phase 1.&lt;/p&gt;

&lt;h4 id=&quot;pace-of-change&quot;&gt;Pace of Change&lt;/h4&gt;

&lt;p&gt;The pace of change within LLMs is really high – between v1 &amp;amp; v2 there were 6 different models released just for Claude, so keeping up with the current models is a challenge all by itself. Once you start thinking about other model providers (looking at you Amazon Nova), it’s a whole different challenge.&lt;/p&gt;

&lt;h4 id=&quot;llms-are-non-deterministic&quot;&gt;LLMs are Non-Deterministic&lt;/h4&gt;

&lt;p&gt;We knew that already from the documentation, but in practice it can be really frustrating not to have a consistent output between executions, and you need to be aware of it when developing against them.&lt;/p&gt;

&lt;h4 id=&quot;skillset&quot;&gt;Skillset&lt;/h4&gt;

&lt;p&gt;By day I’m an SRE/Platform Engineer/Generalist Cloud Engineer – not a developer, and certainly not an AI/LLM expert – so this was a double challenge: dusting off my serverless developer skills whilst learning how to interface with Bedrock. Fortunately AWS have done a really good job of making it an easy service to consume, and I highly recommend you start with the chat interface in the console to test your prompts.&lt;/p&gt;

&lt;p&gt;The other recommendation I’d have for you is to dive in – after v1 I gravitated much more towards AI/LLM talks and workshops at the various AWS conferences I’ve attended this year (London Summit, London Partner Summit, Re:Invent), which I got much more out of for having a baseline level of knowledge, thanks to this project.&lt;/p&gt;

&lt;h4 id=&quot;llms-have-rate-limits&quot;&gt;LLMs Have Rate Limits&lt;/h4&gt;

&lt;p&gt;Well, yes, you might say. However I didn’t appreciate just how low they are in Bedrock.&lt;/p&gt;

&lt;p&gt;When you think about it, it makes sense, and our usage puts a very high number of tokens through the model in a short space of time. But you do need to be aware of them, and handle them appropriately in your own implementations.&lt;/p&gt;

&lt;h2 id=&quot;so-what-are-my-tips&quot;&gt;So, What are My Tips?&lt;/h2&gt;

&lt;p&gt;Hopefully you can learn from my mistakes here, but if you want to short-circuit this whole “learning by doing” thing, I’d recommend:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Go to a couple of workshops before getting going.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;They don’t have to be in-person, and could be watching something on YouTube after-the-fact, but for a good portion of phase 1 I struggled with just understanding the new terminology I needed.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Test your prompts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I said this above but it bears repeating – use the console to test your prompts and see what sort of output you’re likely to get. It’s much cheaper to do this than run a whole transcription &amp;amp; transcoding workflow for the sake of changing a single prompt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Try to do model evaluation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I didn’t do this, because the project I based mine on had already done it, and settled on Claude 2. I regret not going through the process to get a better understanding of why Claude 2 was the correct choice at the time though. You’ll also learn a lot about the various models in the process, which might be useful for another project.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Request model access up front&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The “non-AWS” models aren’t instantly approved when you request them, so save yourself some time and request them early.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Check the rate limits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;These are different between models &amp;amp; regions, so you can’t assume that the same thing will work if you port it to another region.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Be aware of time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you’re new to LLM development, this is a learning curve that you’ll need to climb, so be patient with yourself. Doubly so if you’re not a developer by day.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Learn by doing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hopefully by reading this you can go further and faster than I did, but there’s no substitute for building things when trying to learn.&lt;/p&gt;

&lt;p&gt;To wrap this up I think I’ll quote Amazon CTO Dr. Werner Vogels:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;Now, Go Build&lt;/p&gt;
&lt;/blockquote&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="aws" />
      
        <category term="serverless" />
      
        <category term="lambda" />
      
        <category term="step-functions" />
      
        <category term="bedrock" />
      
        <category term="ai" />
      
        <category term="llm" />
      
        <category term="podcast" />
      
        <category term="automation" />
      

      
        <summary type="html">I&apos;m a co-host on the Logicast AWS News Podcast, and the production is a lot of work. Here&apos;s how we built a serverless AI workflow to automate the boring bits.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jongoodall.co.uk/img/aws.png" />
      
    </entry>
  
    <entry>
      <title type="html">AWS Lambda Use Cases: When You Should Use It?</title>
      
      <link href="https://www.logicata.com/blog/aws-lambda-use-cases/" rel="alternate" type="text/html" title="AWS Lambda Use Cases: When You Should Use It?" />
      
      <published>2023-05-30T14:00:00+00:00</published>
      <updated>2023-05-30T14:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2023/05/30/aws-lambda-use-cases</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2023/05/30/aws-lambda-use-cases/">&lt;p&gt;Lambda, and Serverless in general, is rather “in” right now in the world of cloud computing. If you listened to all the marketing coming out from the big names about it (and yes, I’m guilty of this too); you’d expect that you can run your whole service on it. For next-to-nothing, with no downtime, and your deployments would be as smooth as silk.&lt;/p&gt;

&lt;p&gt;So, how much of the marketing spiel should you listen to – how do you know when to use Lambda? Well, I’m going to try and come up with a reasonable list of use cases for AWS Lambda, so that’s a good place to start.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sr05x6kgmsnqxmcrnl48.png&quot; alt=&quot;AWS Lambda Logo&quot; /&gt;&lt;/p&gt;

&lt;h2 id=&quot;what-is-lambda&quot;&gt;What Is Lambda?&lt;/h2&gt;

&lt;p&gt;Before we get into “what’s it for”, it’s worth defining “what it is”, so let’s do that.&lt;/p&gt;

&lt;p&gt;AWS Lambda is AWS’s take on “Function as a Service” (FaaS). It allows developers to run code without provisioning or managing servers. With AWS Lambda, developers can upload their code, and the service will take care of the rest – including scaling, patching, and availability.&lt;/p&gt;

&lt;p&gt;The idea behind AWS Lambda is to make it easier for developers to build scalable, event-driven applications that run on the cloud. The service is highly available and fault-tolerant, which means it can handle large amounts of traffic without crashing or experiencing downtime. One of the key benefits of using AWS Lambda is that it is fully managed, so developers don’t have to worry about managing hardware or operating systems. They can focus on building applications, while AWS Lambda takes care of the underlying infrastructure.&lt;/p&gt;

&lt;p&gt;AWS Lambda supports a variety of programming languages, including Java, Python, Node.js, C#, and Go. This makes it easy for developers to write code in the language they are most comfortable with, without having to learn a new language or platform.&lt;/p&gt;

&lt;p&gt;Another advantage of AWS Lambda is that it provides automatic scaling. This means that the service adjusts the number of functions serving requests. If there is a sudden increase in traffic, AWS Lambda will scale out the number of functions to handle the load. Conversely, if there is a decrease in traffic, the service will scale in the functions to reduce costs.&lt;/p&gt;

&lt;p&gt;AWS Lambda is also cost-effective. Developers only pay for the compute time that their code actually uses. This means that if an application isn’t in use, there are no costs associated with running it. Additionally, since AWS Lambda scales based on demand, developers can avoid over-provisioning and paying for unused resources.&lt;/p&gt;

&lt;p&gt;That’s all fine and good, but what exactly do you use it for?&lt;/p&gt;

&lt;h2 id=&quot;aws-lambda-use-cases&quot;&gt;AWS Lambda Use Cases&lt;/h2&gt;

&lt;h3 id=&quot;use-cases-for-aws-lambda-1-glue&quot;&gt;Use Cases for AWS Lambda #1: Glue&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/431fre5ap0rnw6torcmm.jpeg&quot; alt=&quot;Arts and Crafts with Glue&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Now I’m not talking about AWS Glue but rather using Lambda to “glue” (or “stitch” if you prefer) other AWS services together.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why would you use Lambda for this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A few reasons.&lt;/p&gt;

&lt;p&gt;Lambda can bridge two services that don’t talk to each other. For instance, when an API Gateway call is made to retrieve a file from an S3 bucket, Lambda can facilitate the interaction between the two.&lt;/p&gt;

&lt;p&gt;In some cases, the services do talk to each other, but filtering the results can be challenging. For instance, API Gateway and DynamoDB. Yes, API Gateway can talk to tables, but it’s not easy to work out how or to combine queries from several tables into a single result.&lt;/p&gt;

&lt;p&gt;Event-driven architectures. Say you wanted to process an image after uploading to S3: you could send a notification to a queue and have an EC2 instance handle the message processing – or do it with Lambda, which only charges you while it’s running. In the same vein, Lambda can act as a replacement for cron-triggered scripts, again saving money.&lt;/p&gt;
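&lt;p&gt;A minimal sketch of that S3-triggered pattern – the bucket, key and processing step are all placeholders, but the event shape follows the standard S3 notification format:&lt;/p&gt;

```python
# Hypothetical skeleton of an S3-triggered processing Lambda. The event
# shape follows the standard S3 notification format; the actual image
# processing is elided.
def handler(event, context=None):
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ... download and process the object here ...
        processed.append(f"{bucket}/{key}")
    return {"processed": processed}

# Trimmed-down example event, as S3 would deliver it:
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "photo.jpg"}}}]}
print(handler(event))
```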

&lt;p&gt;“Gluing” things together accounts for a lot of the Lambda work I’ve seen (and done), as Lambdas are quick to build &amp;amp; deploy and cheap to run. In most of the deployments I’ve seen, the CloudWatch bill for monitoring the Lambdas was higher than the bill for the Lambdas themselves.&lt;/p&gt;

&lt;h3 id=&quot;use-cases-for-aws-lambda-2-apis&quot;&gt;Use Cases for AWS Lambda #2: APIs&lt;/h3&gt;

&lt;p&gt;So, you can’t use Lambdas as APIs by themselves, but put them behind either API Gateway or an ALB, and you can.&lt;/p&gt;

&lt;p&gt;Most APIs are “call-and-response”, in that a client calls an endpoint for “something”. This could be data, kicking off background processing, or anything else.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why would you use Lambda for this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Once again, it’s about cost and resource utilization.&lt;/p&gt;

&lt;p&gt;Lambdas don’t need to be permanently provisioned, yet they react in very short spaces of time – so they complete the processing and respond to the user at a much lower cost than a server or container can.&lt;/p&gt;
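&lt;p&gt;A minimal sketch of such a handler, using the proxy-integration response shape (the parameter name and message are invented):&lt;/p&gt;

```python
import json

# Minimal sketch of a Lambda behind API Gateway using the proxy
# integration: take the request event, return a status code and a body.
def handler(event, context=None):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

response = handler({"queryStringParameters": {"name": "Lambda"}})
print(response["body"])
```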

&lt;h3 id=&quot;use-cases-for-aws-lambda-3-websites&quot;&gt;Use Cases for AWS Lambda #3: Websites&lt;/h3&gt;

&lt;p&gt;This one is a bit out there but go with me on it.&lt;/p&gt;

&lt;p&gt;Most “modern” websites consist of dynamically constructed pages. The pages are rendered and served in real-time, per request to the user.&lt;/p&gt;

&lt;p&gt;Most webpages don’t do complex processing on the same thread that is serving the page to the user, as this improves the user experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why would you use Lambda for this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;For exactly the same reasons as using it for the backend or an API.&lt;/p&gt;

&lt;p&gt;You don’t have to have a server, which might only get occasional use, and instead, only pay when people are using the service. You also have a lot less to worry about when it comes to scaling to meet demand, as the lambda service does this for you.&lt;/p&gt;

&lt;h3 id=&quot;use-cases-for-aws-lambda-4-data-processing--etl&quot;&gt;Use Cases for AWS Lambda #4: Data Processing &amp;amp; ETL&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mne68geccos5cfslzue.jpeg&quot; alt=&quot;Data Processing&quot; /&gt;&lt;/p&gt;

&lt;p&gt;This is similar to the “glue” use case but different enough that I thought it deserved its own section.&lt;/p&gt;

&lt;p&gt;ETL (Extract, Transform, Load) is the process of taking data from one data source, changing its format or adding content, and loading it into another data storage platform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why would you use Lambda for this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A couple of reasons, depending on your requirements:&lt;/p&gt;

&lt;p&gt;Lambda can be triggered directly from other AWS services. Meaning when data is added to one of the sources, the processing starts quickly in response.&lt;/p&gt;

&lt;p&gt;For instance, you could subscribe your Lambda to the event stream from a DynamoDB table, which allows the Lambda to start working within 1 second of the data being added.&lt;/p&gt;
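&lt;p&gt;A sketch of that stream-subscription pattern – the record shape is an abbreviated DynamoDB Streams event, and the load target is a stand-in:&lt;/p&gt;

```python
# Sketch of a Lambda subscribed to a DynamoDB stream: pick out newly
# inserted items and push them to some load step. The record shape is an
# abbreviated DynamoDB Streams event; `warehouse` stands in for the target.
warehouse = []

def handler(event, context=None):
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            warehouse.append(record["dynamodb"]["NewImage"])
    return {"loaded": len(warehouse)}

event = {"Records": [
    {"eventName": "INSERT", "dynamodb": {"NewImage": {"id": {"S": "1"}}}},
    {"eventName": "REMOVE", "dynamodb": {}},
]}
print(handler(event))
```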

&lt;p&gt;Lambda also scales in response to demand, so if you have a period of a large volume of data being added to sources, it will be able to keep up and keep feeding your data warehouse in near-real-time.&lt;/p&gt;

&lt;p&gt;Lambda can be called by AWS Step Functions, which allows for complex processing from multiple data sources, whilst being able to break the logic down into very small component parts. This can make the development easier to break up between team members and improve the testability of the system.&lt;/p&gt;

&lt;h3 id=&quot;use-cases-for-aws-lambda-5-containerized-workloads&quot;&gt;Use Cases for AWS Lambda #5: Containerized Workloads&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/igcv49y41c5qu1k2q8qe.jpeg&quot; alt=&quot;Containers&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Again, out of left field on this but bear with me.&lt;/p&gt;

&lt;p&gt;Since Docker was added as a Lambda runtime environment, you can use Lambda to run anything you’d run in Docker.&lt;/p&gt;

&lt;p&gt;The caveat is that it must complete within 15 minutes and use less than 10GB of RAM. I know I touched on this at the start of the article, but it’s definitely worth going into more detail on this.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why would you use Lambda for this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We’re back on the same benefits again – cost and complexity. I’ve done the cost thing a few times now, so I’ll skip that and go to the complexity piece instead.&lt;/p&gt;

&lt;p&gt;Running Docker-based workloads in a highly-available manner is difficult and requires some level of orchestration (e.g. Docker Swarm, ECS or Kubernetes).&lt;/p&gt;

&lt;p&gt;Managing the orchestration tools is a job in and of itself. Yes, AWS can take some of that away in their managed services; but your engineers still need to understand how to manage the tools.&lt;/p&gt;

&lt;p&gt;With Lambda, that goes away as it scales to meet demand. Additionally, deployment is as trivial as uploading a Docker image (though you really should be using a CI/CD setup).&lt;/p&gt;

&lt;h3 id=&quot;use-cases-for-aws-lambda-6-chatbots--voice-assistants&quot;&gt;Use Cases for AWS Lambda #6: ChatBots &amp;amp; Voice Assistants&lt;/h3&gt;

&lt;p&gt;&lt;img src=&quot;https://dev-to-uploads.s3.amazonaws.com/uploads/articles/97shciomgqa6toigc8uf.jpeg&quot; alt=&quot;ChatBot&quot; /&gt;&lt;/p&gt;

&lt;p&gt;ChatBots are on almost every website these days, so I don’t think I need to explain them. If you have a customer service setup of some sort, you either already have a ChatBot on your website, or are thinking/have thought about it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why would you use Lambda for this?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Because Lambda can interface with services like Lex and Polly via their APIs/SDKs, you can use it to fetch data from APIs or other areas of your infrastructure and send it back to the user via the bot.&lt;/p&gt;

&lt;p&gt;I can’t promise that your users will actually like the bot, but that’s more to do with the information it’s sending back than the technology.&lt;/p&gt;

&lt;h2 id=&quot;closing-thoughts&quot;&gt;Closing Thoughts&lt;/h2&gt;

&lt;p&gt;The hype is more or less correct, and you can use Lambda to run almost anything for a low cost. The biggest drawback is that you have to reframe your thought process. It’s a bit of a mental jump to think about serving web pages out of the same service that you’re using to shuffle data around in your backend, but it can be done.&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="aws" />
      
        <category term="lambda" />
      
        <category term="serverless" />
      
        <category term="faas" />
      
        <category term="cloud-computing" />
      
        <category term="architecture" />
      

      
        <summary type="html">Lambda and Serverless is rather &apos;in&apos; right now. But how much of the marketing spiel should you listen to? Let me help you figure out when to actually use AWS Lambda.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jongoodall.co.uk/img/AWS_Lambda_logo-1536x1536.png" />
      
    </entry>
  
    <entry>
      <title type="html">Not Another DevOps Blog</title>
      
      <link href="https://jongoodall.co.uk/blog/2021/07/19/not-another-devops-blog/" rel="alternate" type="text/html" title="Not Another DevOps Blog" />
      
      <published>2021-07-19T14:00:00+00:00</published>
      <updated>2021-07-19T14:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2021/07/19/not-another-devops-blog</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2021/07/19/not-another-devops-blog/">&lt;p&gt;Look. I know what you’re thinking (well, I don’t. But bear with me OK?); “hasn’t this guy seen any of the other blogs out there? Why would I possibly read his one?”&lt;/p&gt;

&lt;p&gt;Well…. Good point. One that I went over a few times before I wrote this if I’m honest. But, I don’t think you’ve seen one quite like this before (and if you have, I’d love to see it). Mine will (try to) be different.&lt;/p&gt;

&lt;p&gt;One of my favourite things to do is play devil’s advocate, and whilst that might not win me many friends, should make for an interesting read. I’m going to talk about the other side of DevOps, away from the glitz and glamour that start-ups exude. The side that recruiters and managers don’t want to talk about, but that all too often (at least in my case, but maybe I’m just unlucky?) ends up being reality for those of us stuck in big corporates.&lt;/p&gt;

&lt;p&gt;The weekend work. The all weekend, all night work. The 7am server upgrades because downtime of any sort isn’t acceptable — even on legacy infrastructure about as stable as a burning house of cards. Networking teams &lt;em&gt;shudder&lt;/em&gt; and their endless reasons why something isn’t their fault, until it magically fixes itself without anyone apparently doing anything — apart from them. And maybe the occasional axe grinding.&lt;/p&gt;

&lt;p&gt;This isn’t to say that it’s all bad, or even mostly bad — far from it. It’s a fantastic world to live in, work is usually varied (if occasionally really badly thought out, “because agile”). The people are (largely) fantastic, clever, brilliant people just dying on the inside from all the red tape and “corporate synergies”, whatever that means.&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="corporate" />
      
        <category term="reality-check" />
      
        <category term="weekend-work" />
      
        <category term="agile" />
      
        <category term="networking" />
      

      
        <summary type="html">Look. I know what you&apos;re thinking; hasn&apos;t this guy seen any of the other blogs out there? Why would I possibly read his one? Well, mine will try to be different. Let&apos;s talk about the other side of DevOps.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jongoodall.co.uk/img/aws.png" />
      
    </entry>
  
    <entry>
      <title type="html">It’s Always Bash in the End</title>
      
      <link href="https://jongoodall.co.uk/blog/2020/05/12/its-always-bash-in-the-end/" rel="alternate" type="text/html" title="It&apos;s Always Bash in the End" />
      
      <published>2020-05-12T19:00:00+00:00</published>
      <updated>2020-05-12T19:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2020/05/12/its-always-bash-in-the-end</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2020/05/12/its-always-bash-in-the-end/">&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1515879218367-8466d910aaa4?w=1200&quot; alt=&quot;Coding&quot; /&gt;
&lt;em&gt;Photo by &lt;a href=&quot;https://unsplash.com/photos/boy-in-front-of-computer-monitor-vJP-wZ6hGBg&quot;&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;I’ve been told by a few people that “you don’t need to know bash anymore”. No I haven’t, that’s a lie. No-one has told me that, because it isn’t true. You absolutely need to understand bash, the CLI, maybe PowerShell (useful on Windows, but lately it’s less important) and probably Python.&lt;/p&gt;

&lt;p&gt;Why? &lt;strong&gt;It’s always bash in the end.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Eventually you will be writing a bash script (or a hacky bash script, as a friend of mine aptly called them). There is just no avoiding it.&lt;/p&gt;

&lt;p&gt;What’s this I don’t hear you cry? I’ll just use a configuration management tool? Good idea, I like most of them, they make life a lot easier a lot of the time. But let’s be honest: they’re basically a wrapper around bash. Or Python.&lt;/p&gt;

&lt;h2 id=&quot;a-concrete-example&quot;&gt;A Concrete Example&lt;/h2&gt;

&lt;p&gt;Let’s use a concrete example. I had (well, inherited) a task to get regular snapshots taken, of an Amazon EC2 instance, which was hosting a MySQL DB. I also needed to age them out, because we only needed to keep a few days worth.&lt;/p&gt;

&lt;p&gt;I’ll walk you through my thought process.&lt;/p&gt;

&lt;h3 id=&quot;step-1-why-arent-we-using-rds&quot;&gt;Step 1: Why aren’t we using RDS?&lt;/h3&gt;
&lt;p&gt;This stuff is handled.&lt;/p&gt;

&lt;h3 id=&quot;step-2-ah-multiple-replication-sources-damn&quot;&gt;Step 2: Ah, multiple replication sources, damn.&lt;/h3&gt;
&lt;p&gt;Let’s use lifecycle rules, 50 lines of terraform handles the whole thing.&lt;/p&gt;

&lt;h3 id=&quot;step-3-oh-cant-do-that&quot;&gt;Step 3: Oh, can’t do that&lt;/h3&gt;
&lt;p&gt;I need to flush logs and set read locks. Right, custom script. Ansible anyone?&lt;/p&gt;

&lt;h3 id=&quot;step-4-ah-there-isnt-a-module-for-create-snapshots&quot;&gt;Step 4: Ah, there isn’t a module for “create snapshots”&lt;/h3&gt;
&lt;p&gt;(there is one for snapshotting an EBS volume, but not for doing all the volumes attached to an instance, at least not when I was doing this). I guess I’m using the AWS CLI.&lt;/p&gt;

&lt;h3 id=&quot;step-5-bash-script&quot;&gt;Step 5: Bash script&lt;/h3&gt;
&lt;p&gt;Cron to run it, with a cron monitor (lots are available, I’m not advertising any here as they’re not paying me).&lt;/p&gt;

&lt;h3 id=&quot;step-6-profit&quot;&gt;Step 6: Profit&lt;/h3&gt;
&lt;p&gt;(well, continue to draw a salary, but that’s close, right?).&lt;/p&gt;
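&lt;p&gt;For the curious, step 5 looked roughly like this. Health warning: this is a sketch, not the original script. The instance ID, description and retention period are made up, every aws call sits behind a DRY_RUN guard so it only prints what it would do, and the create-snapshots subcommand (which does all of an instance’s volumes in one go) may not have existed when I originally did this.&lt;/p&gt;

```shell
#!/usr/bin/env bash
# Sketch only: instance ID, description and retention are illustrative,
# and every aws call is behind a DRY_RUN guard so this prints what it
# would do rather than doing it.
set -euo pipefail

INSTANCE_ID="${INSTANCE_ID:-i-0123456789abcdef0}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"
DRY_RUN="${DRY_RUN:-1}"

# snapshots older than this date get aged out
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y-%m-%d)

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# snapshot every EBS volume attached to the instance in one go
run aws ec2 create-snapshots \
  --instance-specification "InstanceId=${INSTANCE_ID}" \
  --description "db-backup ${INSTANCE_ID}"

# then, for each of our snapshots whose StartTime is before the cutoff
# (found via describe-snapshots), delete it:
run aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0
```

&lt;p&gt;Point cron at it once a night and you’re done. A real version would loop over describe-snapshots output to find everything older than the cutoff, rather than hard-coding a snapshot ID.&lt;/p&gt;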

&lt;h2 id=&quot;the-point&quot;&gt;The Point&lt;/h2&gt;

&lt;p&gt;So what’s the message here? I’m a really good engineer? Well my ego would say yes, but no, that’s not the point.&lt;/p&gt;

&lt;p&gt;The point is I ran through 4 layers of “modern” options, from fully managed to IaaS, and ended up writing a script in a language from 1989.&lt;/p&gt;

&lt;p&gt;How about that?&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="bash" />
      
        <category term="cli" />
      
        <category term="automation" />
      
        <category term="aws" />
      
        <category term="scripting" />
      
        <category term="python" />
      
        <category term="powershell" />
      

      
        <summary type="html">I&apos;ve been told by a few people that &apos;you don&apos;t need to know bash anymore&apos;. No I haven&apos;t, that&apos;s a lie. No-one has told me that, because it isn&apos;t true. You absolutely need to understand bash.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://images.unsplash.com/photo-1515879218367-8466d910aaa4?w=1200" />
      
    </entry>
  
    <entry>
      <title type="html">Gitlab vs. Github</title>
      
      <link href="https://jongoodall.co.uk/blog/2020/05/12/gitlab-vs-github/" rel="alternate" type="text/html" title="Gitlab vs. Github" />
      
      <published>2020-05-12T14:00:00+00:00</published>
      <updated>2020-05-12T14:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2020/05/12/gitlab-vs-github</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2020/05/12/gitlab-vs-github/">&lt;p&gt;&lt;img src=&quot;https://images.unsplash.com/photo-1556139943-4bdca53adf1e?w=1200&quot; alt=&quot;Socks&quot; /&gt;
&lt;em&gt;Photo by &lt;a href=&quot;https://unsplash.com/photos/shallow-focus-photography-of-persons-feet-1zTetyivDYE&quot;&gt;Unsplash&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Since I posted my article on DevOps tools, which was all funny names and no real content, Gitlab has launched their #gitchallenge. This is to compare Gitlab and GitHub, so that’s what I’m going to do (and hopefully get a free t-shirt, or socks, I like socks).&lt;/p&gt;

&lt;p&gt;I didn’t talk about either tool in the last article, because the names didn’t meet my arbitrary criteria of “sounds a bit funny”. I hadn’t used them either, so wasn’t qualified to talk about it. Since then I’ve used both for work, so can talk about them at some length.&lt;/p&gt;

&lt;p&gt;I’m going to avoid going into exhaustive detail, because that’s boring, and I’ll do several tl;dr sections as I go along - like this:&lt;/p&gt;

&lt;h2 id=&quot;overall-thoughts&quot;&gt;Overall Thoughts&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; For “home” users (i.e. having an online presence) use github, just for brand-awareness outside of the tech community.&lt;/p&gt;

&lt;p&gt;For “doing actual work”, if you already have a git solution, that isn’t a “home rolled git server”, just stay with it. It’s not worth leaving unless you need a specific piece of functionality. Or you’re desperate to not use Jenkins (can’t say I blame you).&lt;/p&gt;

&lt;p&gt;If you don’t already have a solution use gitlab, for now. The built-in CI/CD is so much better than Github’s that it’s worth picking gitlab just for that. Github is improving its offering though, so who knows how long that will last?&lt;/p&gt;

&lt;p&gt;See that up there? That’s what I mean, good overview, not too long or boring. At least I think so.&lt;/p&gt;

&lt;p&gt;OK, let’s do some sections so you can pick and choose which parts of this you care about. It’ll stuff my reader stats, but oh well.&lt;/p&gt;

&lt;h3 id=&quot;sections&quot;&gt;Sections:&lt;/h3&gt;
&lt;ul&gt;
  &lt;li&gt;Git Functions&lt;/li&gt;
  &lt;li&gt;CI/CD&lt;/li&gt;
  &lt;li&gt;Free website Hosting&lt;/li&gt;
  &lt;li&gt;Things they let you do but shouldn’t&lt;/li&gt;
  &lt;li&gt;Other stuff that’s relevant, but isn’t really a feature&lt;/li&gt;
  &lt;li&gt;Closing thoughts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;git-functions&quot;&gt;Git Functions&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Doesn’t matter, they’re about even.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Version:&lt;/strong&gt; Ok, so this is the basic stuff. They both do everything that you could want at a high-level, for example:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Multiple team members&lt;/li&gt;
  &lt;li&gt;Varying level of permissions for team members&lt;/li&gt;
  &lt;li&gt;Branch protection&lt;/li&gt;
  &lt;li&gt;Merge Requests/Pull Requests&lt;/li&gt;
  &lt;li&gt;Mandatory approvals on these&lt;/li&gt;
  &lt;li&gt;CI/CD pipelines triggers&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;They both cover these in a fairly similar way, so I’m not going to dwell here. The only thing of note is that Gitlab calls requests to merge code “Merge Requests”, which is logical. Github calls the same action a “Pull Request”, which is illogical. However this has been adopted as a term by other tools as well (e.g. bitbucket), so your lexicon will be a little different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Winner:&lt;/strong&gt; It’s a draw&lt;/p&gt;

&lt;h2 id=&quot;cicd&quot;&gt;CI/CD&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Gitlab wins this by a country mile (is that longer than a regular mile? I don’t know). The interface is better, parallel build steps actually exist, there’s manual approvals built-in and the code-reuse is better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Version:&lt;/strong&gt; This is where Gitlab’s longer history of offering built-in CI/CD becomes quite obvious.&lt;/p&gt;

&lt;p&gt;Both Gitlab and Github offer built-in CI, through Gitlab pipelines and Github Actions. Github Actions has only been available since around December 2019(ish?), and it’s pretty good.&lt;/p&gt;

&lt;p&gt;There are a few critical oversights though. I’ll illustrate this with an example, that I’ve implemented in both tools before:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Task:&lt;/strong&gt; End-to-end automated deployments to production, and all lower environments, based on a merge to the master branch. The deployment to production is behind a manual approval, because you’re not Netflix. You’re rebuilding the artefact here, to ensure that you’ve captured hotfixes.&lt;/p&gt;

&lt;h3 id=&quot;the-github-way&quot;&gt;The GitHub way:&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;(that’s a bit Zen isn’t it? I kinda like it, might do that again)&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Trigger a build based on a push to the master&lt;/li&gt;
  &lt;li&gt;The build runs each step sequentially
    &lt;ul&gt;
      &lt;li&gt;Build the artefact&lt;/li&gt;
      &lt;li&gt;Deploy to the lower environments&lt;/li&gt;
      &lt;li&gt;End with a call to the API to create a draft release.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;Someone publishes the release, which triggers another workflow to run the deployment to prod.&lt;/li&gt;
&lt;/ol&gt;
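&lt;p&gt;To make that concrete, here’s a rough sketch of the two workflow files. Everything here is illustrative (the deploy.sh scripts, the tag name, even the file names), not config lifted from a real project:&lt;/p&gt;

```yaml
# .github/workflows/deploy.yml - runs on pushes to master
on:
  push:
    branches: [master]
jobs:
  build-and-deploy-lower:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: make artefact
      - run: ./deploy.sh dev
      - run: ./deploy.sh staging
      # the "approval" fudge: call the API to create a draft release
      - run: |
          curl -s -X POST \
            -H "Authorization: token ${{ secrets.GITHUB_TOKEN }}" \
            "https://api.github.com/repos/${{ github.repository }}/releases" \
            -d '{"tag_name": "v0.0.0-example", "draft": true}'

# .github/workflows/release.yml - a separate file, in a separate view,
# triggered when someone publishes the draft release
on:
  release:
    types: [published]
jobs:
  deploy-prod:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./deploy.sh prod
```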

&lt;p&gt;This seems OK, but the problem is you have to change views within Github to run the process end-to-end, so you need at least two windows open. The manual approval is also a fudge using the API to create a draft release.&lt;/p&gt;

&lt;p&gt;Compare this with:&lt;/p&gt;

&lt;h3 id=&quot;the-gitlab-way&quot;&gt;The Gitlab Way:&lt;/h3&gt;
&lt;p&gt;&lt;em&gt;(yep, sticking with this)&lt;/em&gt;&lt;/p&gt;

&lt;ol&gt;
  &lt;li&gt;Trigger a pipeline based on a push to the master&lt;/li&gt;
  &lt;li&gt;The pipeline runs which:
    &lt;ul&gt;
      &lt;li&gt;Builds the artefact,&lt;/li&gt;
      &lt;li&gt;Deploys it to lower environments in parallel&lt;/li&gt;
      &lt;li&gt;Waits for a manual approval for production&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
  &lt;li&gt;You click the “play” button on the pipeline to deploy to prod, and you’re done.&lt;/li&gt;
&lt;/ol&gt;
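&lt;p&gt;Sketched out, the whole thing fits in a single .gitlab-ci.yml. Health warning: the stage names and deploy.sh scripts below are illustrative, not real config:&lt;/p&gt;

```yaml
stages:
  - build
  - deploy-lower
  - deploy-prod

build:
  stage: build
  script:
    - make artefact

deploy-dev:
  stage: deploy-lower
  script:
    - ./deploy.sh dev

deploy-staging:
  stage: deploy-lower   # same stage as deploy-dev, so they run in parallel
  script:
    - ./deploy.sh staging

deploy-prod:
  stage: deploy-prod
  when: manual          # this is the "play" button approval gate
  script:
    - ./deploy.sh prod
```

&lt;p&gt;One file, one pipeline view, and the manual gate is a single &lt;code&gt;when: manual&lt;/code&gt; line instead of an API dance.&lt;/p&gt;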

&lt;p&gt;This is better for a few reasons:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Everything is in the same view, so no jumping around the site to get it done.&lt;/li&gt;
  &lt;li&gt;The process is more intuitive, as all the required steps are in the same pipeline file.&lt;/li&gt;
  &lt;li&gt;You don’t have to mess around with the API to implement a manual release approval&lt;/li&gt;
  &lt;li&gt;You haven’t had to write a script to call the API&lt;/li&gt;
  &lt;li&gt;Although I would highly recommend that you implement releases and tagging in your workflow as a best practice, you shouldn’t have to do it for the sake of a deployment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Winner:&lt;/strong&gt; Gitlab. By a mile.&lt;/p&gt;

&lt;h2 id=&quot;free-website-hosting&quot;&gt;Free website hosting&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Gitlab edges this for tech, but Github has this for name recognition outside of the tech community (e.g. recruiters). I’d call this a draw.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Version:&lt;/strong&gt; Both tools do this pretty well, through the use of “pages”. Each account gets one (1) free “pages” domain to host a static site on. Both offer custom DNS options. Both also offer a pre-generated one - e.g. https://jdgoodall1.github.io/ (disclaimer, my profile is hosted on Github for “sending it to recruiters” reasons).&lt;/p&gt;

&lt;p&gt;Where they start to differ is build configuration. This is optional for Jekyll sites on Github but mandatory for all sites on Gitlab, making Github the marginal winner for lazy people. Like me.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Winner:&lt;/strong&gt; Github, just barely.&lt;/p&gt;

&lt;h2 id=&quot;things-they-let-you-do-but-shouldnt&quot;&gt;Things they let you do but shouldn’t&lt;/h2&gt;

&lt;p&gt;Nice contentious title that one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; The answer to this shouldn’t matter, and if it does to you then you’re a Bad Person™. Github edges this if you do care.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Version:&lt;/strong&gt; What I’m talking about here is in-browser edits. I’m a firm believer that these should be punished by buying a case of something expensive for the team, because you really shouldn’t be doing these.&lt;/p&gt;

&lt;p&gt;Both tools let you do this, for ease of use reasons I guess, and I’ll admit to fixing the occasional typo in-browser, but that’s about it.&lt;/p&gt;

&lt;p&gt;If you’re doing any real quantity of work in-browser then for the love of whatever it is you care to love, download an IDE. There are loads of free ones that are very good.&lt;/p&gt;

&lt;p&gt;That being said, Github is better at this, because it has found a way to put VSCode in-browser, and let you use that for your edits. By doing changes in-browser though, you lose any ability to run tests on your local environment before committing to source.&lt;/p&gt;

&lt;p&gt;Doing this runs the risk of making the git history muddy, so I’m not sure we should be striving to make this a better experience.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Winner:&lt;/strong&gt; Github. But don’t do this. Seriously.&lt;/p&gt;

&lt;h2 id=&quot;other-stuff-thats-relevant-but-isnt-really-a-feature&quot;&gt;Other stuff that’s relevant, but isn’t really a feature&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt; Github. If you say to a recruiter “here’s my gitlab profile” they’ll probably say, “Oh, like github?” This is brand-awareness, which is important in a few specific cases.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Long Version:&lt;/strong&gt; In tech, you’re expected to have an online profile of some sort, and to either blog (which I do) or contribute to open source projects.&lt;/p&gt;

&lt;p&gt;The best-known home of such things is Github. Why this is, I couldn’t tell you. However the end result is application forms that have “your Github profile” as a box to fill out, and no alternative options.&lt;/p&gt;

&lt;p&gt;Yes, I’ve seen this before.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Winner:&lt;/strong&gt; Github, because people suck&lt;/p&gt;

&lt;h2 id=&quot;closing-thoughts&quot;&gt;Closing thoughts&lt;/h2&gt;

&lt;p&gt;You might’ve noticed that github “won” most of the sections there, but that the TL;DR section at the top doesn’t neatly match the sections. That’s because it’s not that simple - it never is. Also I’m using negative marking in some places just because I can.&lt;/p&gt;

&lt;p&gt;My choice of which tool to use depends very highly on what you’re trying to do.&lt;/p&gt;

&lt;p&gt;If you need an online presence, and don’t want to fight the tide, you should pick Github.&lt;/p&gt;

&lt;p&gt;For ANYTHING ELSE, use Gitlab. The CI/CD offering is SO MUCH BETTER. So much so that it’s worth using Gitlab just for that. Yes the online editor in Github is better, but STOP USING IT YOU BAD PERSON.&lt;/p&gt;

&lt;p&gt;Once you’ve used Gitlab’s pipelines you’ll never want to use anything else again. Github is catching up in this area, but it’s slow going. So do yourself a favour and choose Gitlab.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fin.&lt;/strong&gt;&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="gitlab" />
      
        <category term="github" />
      
        <category term="ci-cd" />
      
        <category term="git" />
      
        <category term="pipelines" />
      
        <category term="github-actions" />
      

      
        <summary type="html">Since I posted my article on DevOps tools, Gitlab has launched their #gitchallenge. This is to compare Gitlab and GitHub, so that&apos;s what I&apos;m going to do (and hopefully get a free t-shirt, or socks, I like socks).</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://images.unsplash.com/photo-1556139943-4bdca53adf1e?w=1200" />
      
    </entry>
  
    <entry>
      <title type="html">AWS, a translation</title>
      
      <link href="https://jongoodall.co.uk/blog/2019/07/21/aws-a-translation/" rel="alternate" type="text/html" title="AWS, a translation" />
      
      <published>2019-07-21T14:00:00+00:00</published>
      <updated>2019-07-21T14:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2019/07/21/aws-a-translation</id>
<content type="html" xml:base="https://jongoodall.co.uk/blog/2019/07/21/aws-a-translation/">&lt;p&gt;I’ve not long passed my “AWS Certified Solutions Architect — Associate” exam (that’s a mouthful), and whilst I was studying for it I noticed that a lot of the service names are “odd”. Or acronyms. Or Greek. I’ve covered this sort of topic before (see: &lt;a href=&quot;/2024/12/06/whats-in-a-name-devops-edition.html&quot;&gt;here&lt;/a&gt;), so I thought I’d do it again, with a similar level of brevity. And snark.&lt;/p&gt;

&lt;p&gt;There are a lot of services available, so for the sake of my own sanity I’m not covering them all. Also, Amazon has a habit of releasing new services quicker than the drink runs out at an open bar, making it highly likely another few will turn up whilst I’m writing this.&lt;/p&gt;

&lt;p&gt;Also worth noting, just by reading this you WON’T pass the “AWS-SA-Assoc” exam, as there aren’t any questions about what the names mean. It’s more about how you use the services.&lt;/p&gt;

&lt;h2 id=&quot;aws&quot;&gt;AWS&lt;/h2&gt;

&lt;p&gt;Yep. Starting here. AWS is an acronym (and there’s a lot of them coming up) for Amazon Web Services. But you probably already knew that. A word of warning, a lot of the names are about this creative. le sigh.&lt;/p&gt;

&lt;p&gt;Fun(?) fact, AWS made up 58% of Amazon’s profit in 2018 (source: &lt;a href=&quot;https://www.investopedia.com/how-amazon-makes-money-4587523&quot;&gt;Investopedia&lt;/a&gt;). So you can feel better about all the money you’ve spent on Amazon. At least that’s what I’m clinging to.&lt;/p&gt;

&lt;h2 id=&quot;the-starting-tools&quot;&gt;The Starting Tools&lt;/h2&gt;

&lt;p&gt;OK. Seems like a good place to start. Well, re-start. These are the tools that AWS is most known for. It’s probably where you’re going to get started with it too.&lt;/p&gt;

&lt;h3 id=&quot;ec2&quot;&gt;EC2&lt;/h3&gt;

&lt;p&gt;Elastic Compute Cloud. Yep. Gonna need to define that one a bit.&lt;/p&gt;

&lt;p&gt;In this sense “Elastic” is not that far from an elastic band. The capacity of your resources can stretch and shrink to meet demand, within limits. Compute is running apps, although in this case it refers to virtual servers. Cloud means they run on Amazon’s hardware rather than yours.&lt;/p&gt;

&lt;p&gt;Still awake?&lt;/p&gt;

&lt;h3 id=&quot;s3&quot;&gt;S3&lt;/h3&gt;

&lt;p&gt;Simple Storage Service. It’s got 3 S’s, so S3. This is an object store, rather than a file store (though Amazon does have one of those too). It’s interchangeable with file storage to an extent. But instead of using native OS commands you interact with it using the AWS CLI tool. Yes I know, more acronyms.&lt;/p&gt;

&lt;h3 id=&quot;vpc&quot;&gt;VPC&lt;/h3&gt;

&lt;p&gt;Virtual Private Cloud.&lt;/p&gt;

&lt;p&gt;To understand this you need to understand the difference between public and private cloud. The short version is “with private cloud you own it all, and are the only person (or company) on the hardware. With public cloud, none of that is true” (in most cases, but I’m not going to go into that here).&lt;/p&gt;

&lt;p&gt;A VPC allows you to treat AWS as if it’s all yours. You’re not going to see anyone else’s resources when you log in, and they won’t ever see any of yours either.&lt;/p&gt;

&lt;p&gt;For the most part in AWS you have no idea that anyone else is using the service, except for a few unique naming rules.&lt;/p&gt;

&lt;h3 id=&quot;route53&quot;&gt;Route53&lt;/h3&gt;

&lt;p&gt;DNS (I’m sorry, I’m at it again), Amazon style.&lt;/p&gt;

&lt;p&gt;Domain Name System is a translator between human-readable web addresses and an IP address. For example www.google.co.uk has an IP of 216.58.204.3, which your PC uses on the internet (yes, that is actually google.co.uk’s IP). Route53 is Amazon’s implementation.&lt;/p&gt;

&lt;p&gt;It’s named after 2 things. Route 66 is probably the most famous highway in the U.S.A., and DNS servers work on port 53. Kinda creative I guess?&lt;/p&gt;

&lt;h3 id=&quot;cloudwatch&quot;&gt;CloudWatch&lt;/h3&gt;

&lt;p&gt;You can “watch” your “cloud” resources. CloudWatch. This covers metrics and logs, but there are different charges depending on what you’re looking at.&lt;/p&gt;

&lt;p&gt;You can do some cool stuff with logs, like exporting them to other tools for analytics and graphing.&lt;/p&gt;

&lt;h3 id=&quot;cloudtrail&quot;&gt;CloudTrail&lt;/h3&gt;

&lt;p&gt;Auditing. Well, an audit “trail” on your “cloud”. Same sort of naming convention here.&lt;/p&gt;

&lt;h3 id=&quot;ebs&quot;&gt;EBS&lt;/h3&gt;

&lt;p&gt;Elastic Block Store.&lt;/p&gt;

&lt;p&gt;This is virtual disk, but it’s a type of disk suited to reading and writing in “blocks”. Databases tend to use this sort of storage type, as it has a much faster read &amp;amp; write speed.&lt;/p&gt;

&lt;h3 id=&quot;rds&quot;&gt;RDS&lt;/h3&gt;

&lt;p&gt;Relational Database Service.&lt;/p&gt;

&lt;p&gt;Amazon will set up and manage a “Highly Available” (HA) cluster of a database engine of your choice. Not all DBMSs are available (sorry Sybase users), but the common ones are there.&lt;/p&gt;

&lt;p&gt;You still get full CLI and client access to the database engine, which is nice if you need/want/like to fine-tune anything, though you don’t get SSH onto the underlying servers.&lt;/p&gt;

&lt;p&gt;You can pretty much use this as a drop-in replacement for an on-premises DB cluster, but you can’t quite do without a DBA. You will also need some EBS (see above).&lt;/p&gt;

&lt;h3 id=&quot;iam&quot;&gt;IAM&lt;/h3&gt;

&lt;p&gt;Identity and Access Management (what?)&lt;/p&gt;

&lt;p&gt;This is AWS’s “permissions” setup. It’s a way to control who gets access to what. Broken down into users, groups, roles &amp;amp; policies. Users go into groups. Roles can be assumed by users or resources. Policies, which hold the actual permissions, are attached to users, groups or roles.&lt;/p&gt;

&lt;p&gt;The upshot of this is servers/other resources can hold an “IAM Role”. This allows them access to do/see/get/change something from another service, without having to create service accounts. If you’ve ever used them in the past, you’ll understand why this is “A Good Thing TM”.&lt;/p&gt;

&lt;h3 id=&quot;efs&quot;&gt;EFS&lt;/h3&gt;

&lt;p&gt;Elastic File Store.&lt;/p&gt;

&lt;p&gt;Basically a network drive. Cool pricing model — you just use it, and pay for what you use. Unlike disk-based storage pricing, where you have to provision and pay for a whole disk. One less headache.&lt;/p&gt;

&lt;p&gt;Phew. Time for a break. Actually please don’t leave. It gets better I promise.&lt;/p&gt;

&lt;p&gt;Have a coffee. On me.&lt;/p&gt;

&lt;h2 id=&quot;the-intermediate-tools&quot;&gt;The Intermediate Tools&lt;/h2&gt;

&lt;p&gt;These are tools that you will use a lot, once you’re over the initial “what’s this cloud thing?” hurdle. If you’re lucky, you’ll skip the hurdle and crack right on with these too.&lt;/p&gt;

&lt;h3 id=&quot;ecs--ecr&quot;&gt;ECS &amp;amp; ECR&lt;/h3&gt;

&lt;p&gt;Elastic Container Service &amp;amp; Elastic Container Registry.&lt;/p&gt;

&lt;p&gt;Right, containers. They’d come up eventually.&lt;/p&gt;

&lt;p&gt;ECS is Amazon’s service for orchestrating Docker containers (sort of Amazon’s take on Docker Swarm I guess?). ECR is their version of Docker Hub, so you can store all your Docker images inside AWS. Great if your InfoSec people don’t like the idea of data leaving controlled environments.&lt;/p&gt;

&lt;h3 id=&quot;sqs&quot;&gt;SQS&lt;/h3&gt;

&lt;p&gt;Simple Queue Service&lt;/p&gt;

&lt;p&gt;It’s a queuing service. Don’t really know what else to say about it? Nothing creative about the name. It was AWS’s first available service though, way back in 2004, predating AWS itself by 2 YEARS!&lt;/p&gt;

&lt;p&gt;OK, that’s interesting.&lt;/p&gt;

&lt;h3 id=&quot;sns--ses&quot;&gt;SNS &amp;amp; SES&lt;/h3&gt;

&lt;p&gt;Simple Notification Service &amp;amp; Simple Email Service&lt;/p&gt;

&lt;p&gt;It sends notifications (think text messages), and emails. This is sort of writing itself at this point.&lt;/p&gt;

&lt;p&gt;SNS will send emails, but SES gives you more control over the email content.&lt;/p&gt;

&lt;h3 id=&quot;aurora--dynamodb&quot;&gt;Aurora &amp;amp; DynamoDB&lt;/h3&gt;

&lt;p&gt;Aurora is part of the RDS family, but is fully managed, so you don’t get access to the underlying servers. It’s both MySQL- and PostgreSQL-compatible. Either/or, not both at the same time. The name is Latin for “dawn”.&lt;/p&gt;

&lt;p&gt;Little bit of mental gymnastics here, but maybe they mean “dawn of a new database technology”?&lt;/p&gt;

&lt;p&gt;DynamoDB is the next extension of AWS’s DB offering. Dynamo is a NoSQL (Not Only SQL) database. It’s largely cheaper to run than RDS/Aurora, and is fully serverless, but doesn’t enforce referential integrity (see here for an explanation). So if you can work with that (and being honest, you probably can) go for DynamoDB.&lt;/p&gt;

&lt;p&gt;The name is a derivative of the storage system Dynamo (reference). This in turn is probably based on a physical dynamo, which turns kinetic energy (rotation) into electricity. I can’t work the link out, but it sounds cool.&lt;/p&gt;

&lt;h3 id=&quot;elasticache&quot;&gt;Elasticache&lt;/h3&gt;

&lt;p&gt;AWS has two offerings for caching services, both under the banner of Elasticache: Redis and Memcached. There are reasons why you’d use one over the other, but I’m not going to go into that here (use Redis if you value your sanity). Again a fairly traceable name. Cache because it’s a cache, elastic because it implements elasticity in the same way the EC2 service does.&lt;/p&gt;

&lt;h3 id=&quot;redshift&quot;&gt;Redshift&lt;/h3&gt;

&lt;p&gt;This is AWS’s data warehousing solution, using columnar storage (most DBs are row-based, with the notable exception of Sybase IQ. Take 10 imaginary points if you’ve heard of that before).&lt;/p&gt;

&lt;p&gt;The name is based on one of 2 things, and I can’t find anything definitive to confirm either.&lt;/p&gt;

&lt;p&gt;Option 1: Redshift is a physical phenomenon, part of the Doppler effect, where light from objects moving away from you is shifted towards the red end of the spectrum. The expansion of the universe causes it, so this could be a nod to how easily you can expand the size of your Redshift clusters.&lt;/p&gt;

&lt;p&gt;Option 2: It’s a swipe at Oracle, who have a red logo. The idea being that teams would shift away from Oracle.&lt;/p&gt;

&lt;p&gt;Take your pick which you believe. I think option 2 is more likely, but because I’m a nerd I like option 1 more.&lt;/p&gt;

&lt;h3 id=&quot;well-architected-tool&quot;&gt;Well-Architected Tool&lt;/h3&gt;

&lt;p&gt;This really dry name sits in front of a really useful tool (isn’t that always the way?). The well-architected tool is AWS’s attempt to automate some of the work their consultants were doing with their customers. Particularly around how best to set up their infrastructure against the well-architected framework:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Operational Excellence&lt;/li&gt;
  &lt;li&gt;Security&lt;/li&gt;
  &lt;li&gt;Reliability&lt;/li&gt;
  &lt;li&gt;Performance Efficiency&lt;/li&gt;
  &lt;li&gt;Cost Optimization&lt;/li&gt;
&lt;/ul&gt;

&lt;h3 id=&quot;api-gateway&quot;&gt;API Gateway&lt;/h3&gt;

&lt;p&gt;An API is an Application Programming Interface (but you knew that already, right?). API Gateway is an easy way to create and publish your APIs, so that you can use them with other services (both AWS and not).&lt;/p&gt;

&lt;p&gt;The upshot of using API Gateway, rather than self-hosting your API, is it talks natively to other AWS services, including Cloudwatch. Meaning that you can monitor your API service just like any other AWS hosted service. And it’s PaaS.&lt;/p&gt;

&lt;h3 id=&quot;cloudfront&quot;&gt;Cloudfront&lt;/h3&gt;

&lt;p&gt;Cloudfront is AWS’s take on a CDN (Content Delivery Network. Can’t get away from these definitions that use more acronyms. Symptom of the industry I guess?).&lt;/p&gt;

&lt;p&gt;Cloudfront is pretty cool, because it’s using AWS’s existing (and massive) network of servers. It talks natively to other AWS services, so you can include the setup/teardown of your CDN with your web-app deployment. Or make your S3-based static site globally available with really low effort.&lt;/p&gt;

&lt;h3 id=&quot;direct-connect&quot;&gt;Direct Connect&lt;/h3&gt;

&lt;p&gt;This creates a direct link between your data centre and the AWS backbone, so you’re not talking over a VPN. This is significantly faster than a VPN, and more consistent — no more spikes at peak times.&lt;/p&gt;

&lt;h3 id=&quot;asm&quot;&gt;ASM&lt;/h3&gt;

&lt;p&gt;AWS Secrets Manager. To be fair this is usually just called Secrets Manager.&lt;/p&gt;

&lt;p&gt;It manages secrets (key/pair usually) and allows you to refer to them via their ARN (Amazon Resource Name. I’m not defining ARNs any further because its not a tool as such). This gives you powerful options inside your resource stacks. Like not putting access keys or database passwords in source control, but referring to their ARN.&lt;/p&gt;

&lt;h3 id=&quot;acm&quot;&gt;ACM&lt;/h3&gt;

&lt;p&gt;AWS Certificate Manager&lt;/p&gt;

&lt;p&gt;It manages SSL/TLS certificates. Like ASM but for certificates. Same advantages really. It’s also one of two ways to attach SSL certificates to elastic load balancers. The other is via IAM, but ACM has a better interface &amp;amp; certificate rotation options, in my opinion.&lt;/p&gt;

&lt;h3 id=&quot;shield&quot;&gt;Shield&lt;/h3&gt;

&lt;p&gt;DDoS protection. Sort of what it says on the tin. Works at OSI layers 3 and 4 (of the OSI model), 24/7 coverage, with a human on the other end for when the heuristics fall over. Nice price protection too — stops you running up a massive scaling bill because you’ve been DDoS’d.&lt;/p&gt;

&lt;h3 id=&quot;waf&quot;&gt;WAF&lt;/h3&gt;

&lt;p&gt;Web Application Firewall. Firewall as a service at a basic level. At the advanced pricing level for Shield, this comes free. Which is nice.&lt;/p&gt;

&lt;h3 id=&quot;storage-gateway&quot;&gt;Storage Gateway&lt;/h3&gt;

&lt;p&gt;This is a piece of kit that you put in your existing datacentre. It gives you access to the storage services (EBS, EFS, S3, etc) using standard network file transfer protocols (SMB, NFS, iSCSI). Could be viewed as a stepping stone to getting into the cloud, but I think it’s more aimed at being an easier backup solution.&lt;/p&gt;

&lt;p&gt;Right. Break number two. Tea anyone?&lt;/p&gt;

&lt;h2 id=&quot;the-more-obscure-but-still-on-the-exam&quot;&gt;The more obscure (but still on the exam)&lt;/h2&gt;

&lt;p&gt;These are not common in early-phase cloud adoptions, but are on the “AWS-SA-Assoc” exam. A couple of the more interesting names here.&lt;/p&gt;

&lt;h3 id=&quot;athena&quot;&gt;Athena&lt;/h3&gt;

&lt;p&gt;Athena was (is?) the Greek goddess of wisdom, and the tool allows you to query files directly in S3, using SQL. Thus gaining wisdom?&lt;/p&gt;

&lt;h3 id=&quot;quicksight&quot;&gt;QuickSight&lt;/h3&gt;

&lt;p&gt;Quicksight is the graphing tool you can use on top of AWS Athena, to give quick (in)sight into your data.&lt;/p&gt;

&lt;h3 id=&quot;glue&quot;&gt;Glue&lt;/h3&gt;

&lt;p&gt;This is an ETL (Extract Transform Load) tool. ETL is used a lot when creating derived data sets (e.g. data aggregations). Essentially it’s “gluing” data back together.&lt;/p&gt;

&lt;h3 id=&quot;kinesis--firehose&quot;&gt;Kinesis &amp;amp; Firehose&lt;/h3&gt;

&lt;p&gt;These are two different tools, but I’ve lumped them together because they’re used together quite a lot.&lt;/p&gt;

&lt;p&gt;Kinesis is (loosely) Greek for movement, and that’s what it does. Moves data.&lt;/p&gt;

&lt;p&gt;It comes in two forms Streams and Firehose.&lt;/p&gt;

&lt;p&gt;Kinesis Streams takes streaming data and lets you do transformations on it before outputting it. You could use this to dynamically change the content of a webpage as a user is interacting with it. Pretty cool huh?&lt;/p&gt;

&lt;p&gt;Kinesis Firehose allows you to continuously stream data from disparate inputs (like IoT devices) into either analytics tools (e.g. Kinesis Streams or custom lambdas) or S3.&lt;/p&gt;

&lt;h3 id=&quot;opsworks&quot;&gt;OpsWorks&lt;/h3&gt;

&lt;p&gt;OpsWorks allows you to run your existing Chef (and Puppet) code in your AWS account. I think it’s called OpsWorks because in a traditional setup that’s the work of an Ops team.&lt;/p&gt;

&lt;p&gt;That’s it. Simple. For once.&lt;/p&gt;

&lt;h3 id=&quot;config&quot;&gt;Config&lt;/h3&gt;

&lt;p&gt;This monitors your AWS estate and gives you some control over the change management, and compliance monitoring. Handy when you have a regulator to worry about.&lt;/p&gt;

&lt;h3 id=&quot;snowball--snowmobile&quot;&gt;Snowball &amp;amp; Snowmobile&lt;/h3&gt;

&lt;p&gt;These are cool. These are seriously cool.&lt;/p&gt;

&lt;p&gt;They are variations on the theme of moving large amounts of data from an on-premises setup to AWS. The Snowball is a box with a bunch of disks in it and a shipping label. It connects to your network via Ethernet, and one type of Snowball will do basic compute operations on the data whilst in transit. The Snowmobile is this, but bigger. Much bigger.&lt;/p&gt;

&lt;p&gt;Comes with armed guards if you want them. Now bear with me whilst I pick my jaw up off the floor.&lt;/p&gt;

&lt;p&gt;Last stretch now, hopefully the caffeine from the tea and coffee is still with you.&lt;/p&gt;

&lt;h2 id=&quot;the-cool-ones-but-not-on-the-exam&quot;&gt;The “Cool” ones (but not on the exam)&lt;/h2&gt;

&lt;p&gt;These are good fun. They also have a habit of showing up at places like re:Invent with flashy demos.&lt;/p&gt;

&lt;h3 id=&quot;polly&quot;&gt;Polly&lt;/h3&gt;

&lt;p&gt;Want a cracker?&lt;/p&gt;

&lt;p&gt;Amazon’s text-to-speech service. It uses machine learning to make natural-ish sounding voices. It pops up in a few training courses for the AWS-SA-Assoc exam, because it’s cool and makes an impact.&lt;/p&gt;

&lt;h3 id=&quot;deepracer&quot;&gt;DeepRacer&lt;/h3&gt;

&lt;p&gt;Deep learning &amp;amp; racing cars. Cool.&lt;/p&gt;

&lt;p&gt;This is/was/will be the main feature of 2019’s re:Invent.&lt;/p&gt;

&lt;p&gt;It’s a cool way to get started with reinforcement learning, but I can’t sell this better than Amazon can, so if you haven’t already, take a look at it here.&lt;/p&gt;

&lt;h3 id=&quot;sumerian&quot;&gt;Sumerian&lt;/h3&gt;

&lt;p&gt;This was the language spoken in Sumer, in ancient Mesopotamia (sort of where Iraq is now). How this turns into AR &amp;amp; VR with AWS I have no idea, but it does.&lt;/p&gt;

&lt;h3 id=&quot;lumberyard&quot;&gt;Lumberyard&lt;/h3&gt;

&lt;p&gt;A lumberyard is an American term (I think, not heard it in the UK before) for somewhere you buy large amounts of wood (called lumber).&lt;/p&gt;

&lt;p&gt;AWS’s version is a free game engine that integrates with Twitch (a game-streaming platform that Amazon owns, but doesn’t much publicise).&lt;/p&gt;

&lt;p&gt;Lumberyard itself is free, which is cool. You pay for the AWS resources (S3, EC2, probably Lambdas) that you use, on their own pay-as-you-go pricing models, so you can work out your costs pretty easily.&lt;/p&gt;

&lt;p&gt;I don’t actually write games, but if I did, I’d probably start with Lumberyard. If only for the line in the licence that says:&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;57.10 Acceptable Use; Safety-Critical Systems. Your use of the Lumberyard Materials must comply with the AWS Acceptable Use Policy. The Lumberyard Materials are not intended for use with life-critical or safety-critical systems, such as use in operation of medical equipment, automated transportation systems, autonomous vehicles, aircraft or air traffic control, nuclear facilities, manned spacecraft, or military use in connection with live combat. However, this restriction will not apply in the event of the occurrence (certified by the United States Centers for Disease Control or successor body) of a widespread viral infection transmitted via bites or contact with bodily fluids that causes human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Any service that has a zombie outbreak exemption is OK in my book.&lt;/p&gt;

&lt;h2 id=&quot;end&quot;&gt;END!&lt;/h2&gt;

&lt;p&gt;Yep. Bored now. Hopefully you made it this far and found this at least remotely useful.&lt;/p&gt;

&lt;p&gt;Like I said at the start (but it’s worth saying again) reading this WILL NOT ensure you pass the AWS-SA-Assoc exam, but it might at least get a few of the tools to stay in your mind. Or maybe you’ll start playing with DeepRacer — just watch your billing though!&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="aws" />
      
        <category term="cloud" />
      
        <category term="ec2" />
      
        <category term="s3" />
      
        <category term="lambda" />
      
        <category term="kubernetes" />
      
        <category term="devops" />
      
        <category term="certification" />
      

      
        <summary type="html">I&apos;ve not long passed my AWS Certified Solutions Architect — Associate exam, and whilst studying I noticed that a lot of the service names are &apos;odd&apos;. Or acronyms. Or Greek. Let me decode them for you.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jongoodall.co.uk/img/aws.png" />
      
    </entry>
  
    <entry>
      <title type="html">What’s In a Name? DevOps Edition</title>
      
      <link href="https://jongoodall.co.uk/blog/2019/01/19/whats-in-a-name-devops-edition/" rel="alternate" type="text/html" title="What&apos;s In a Name? DevOps Edition" />
      
      <published>2019-01-19T14:00:00+00:00</published>
      <updated>2019-01-19T14:00:00+00:00</updated>
      <id>https://jongoodall.co.uk/blog/2019/01/19/whats-in-a-name-devops-edition</id>
      <content type="html" xml:base="https://jongoodall.co.uk/blog/2019/01/19/whats-in-a-name-devops-edition/">&lt;p&gt;I’ve been working in DevOps for a while now, and I’ve yet to come across a tool that didn’t have something odd about its name. It’s either got a backstory, a meaning, or it’s Greek. I don’t know why, but I’d postulate that it’s because the market is completely flooded with tools, and you need yours to stand out, so you can make money — either from the tool itself, or a support package.&lt;/p&gt;

&lt;p&gt;With that in mind, I thought I’d translate them. In case you ever have the misfortune of having to explain to someone at C-Level (‘C’ as in CEO, not the swear word), why you’re trying to install an octopus.&lt;/p&gt;

&lt;p&gt;I’ve listed them here, and linked to the explanation, so you don’t need to read the list. But please do, the stats are a great ego boost.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;#docker&quot;&gt;Docker&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#jenkins&quot;&gt;Jenkins&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#bamboo&quot;&gt;Bamboo&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#drone&quot;&gt;Drone&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#gocd&quot;&gt;GoCD&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#octopus-deploy&quot;&gt;Octopus Deploy&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#ansible&quot;&gt;Ansible&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#chef&quot;&gt;Chef&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#puppet&quot;&gt;Puppet&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#teamcity&quot;&gt;TeamCity&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#urbancodedeploy&quot;&gt;UrbanCodeDeploy&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#consul&quot;&gt;Consul&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#vagrant&quot;&gt;Vagrant&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#kafka&quot;&gt;Kafka&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#kubernetes&quot;&gt;Kubernetes&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#terraform&quot;&gt;Terraform&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#vault&quot;&gt;Vault&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;#sentinel&quot;&gt;Sentinel&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;docker&quot;&gt;Docker&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; creates, operates and manages containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Containers are in a dock at some point. A “docker” is occasional shorthand for someone who works at a dock, with containers.&lt;/p&gt;

&lt;h2 id=&quot;jenkins&quot;&gt;Jenkins&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; General purpose CI tool. CD was retconned into it with plugins that let you write pipelines (which I happen to quite like).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Stereotypical name for a butler. Butlers run households and “get stuff done” in a general purpose sort of way. Not to be confused with a valet (essentially a male equivalent of a lady’s maid). Also lots of creative definitions on UrbanDictionary, but I refuse to link to that (because no doubt someone will send me a bill for it).&lt;/p&gt;

&lt;h2 id=&quot;bamboo&quot;&gt;Bamboo&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; CI/CD tool from Atlassian. Works with the rest of the Atlassian suite (Jira, Bitbucket, etc.) much better than other tools do as a result.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Fast growing plant, not very nutritious, pandas eat a lot of it — this one doesn’t make sense to me.&lt;/p&gt;

&lt;h2 id=&quot;drone&quot;&gt;Drone&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Yet another CI/CD tool. This one runs in Docker, with pipelines written in a format resembling Docker Compose. I guess you could call it “container native”, if you like. They do.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; The proper name for most “worker” insects. And what every corporate employee feels like many times a day. Well, I do anyway.&lt;/p&gt;

&lt;h2 id=&quot;gocd&quot;&gt;GoCD&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; ANOTHER CD TOOL. The odd names are making more sense now… (This definition is a little unfair, because it’s actually a really good tool. Lots of built-in functionality, and it runs on Kubernetes really well.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; It’s written in GoLang. You could take it to mean “Go and do CD”.&lt;/p&gt;

&lt;h2 id=&quot;octopus-deploy&quot;&gt;Octopus Deploy&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; One of the few deployment specific tools (outside of a couple of DB deployment tools) that I’ve come across. The sales pitch is that it gets you away from writing massive scripts. This will do the “heavy lifting” for you. Not sure I buy that. Not sure they do either, as they have a method of writing pipelines as code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Feels like someone thought they were being clever with this one — “an octopus has tentacles, we’ll call our remote agents tentacles”. Nice Octopus graphics though.&lt;/p&gt;

&lt;h2 id=&quot;ansible&quot;&gt;Ansible&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Configuration management tool (there’s a few of these in the list, and in essence they all let you define the state of a server in code). Uses YAML (“YAML Ain’t Markup Language”) files to store its config. Steps are executed sequentially by default, so ordering is simple.&lt;/p&gt;
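
&lt;p&gt;For a flavour of it, here’s a minimal sketch of a playbook (the host group and package are invented, but &lt;code&gt;apt&lt;/code&gt; and &lt;code&gt;service&lt;/code&gt; are real Ansible modules):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# playbook.yml -- tasks run top to bottom on the 'web' hosts
- hosts: web
  become: true
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
&lt;/code&gt;&lt;/pre&gt;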

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; I think this one is quite clever, if you like science fiction.&lt;/p&gt;

&lt;blockquote&gt;
  &lt;p&gt;“The name of Ansible originally came from the book Rocannon’s World by Ursula Le Guin, published in 1966. She used the word as the name of an instantaneous communication device that would allow contact over vast interstellar distances”
— https://h2g2.com/edited_entry/A1165501&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I don’t know if that was the inspiration for the name, but I like to think it was.&lt;/p&gt;

&lt;h2 id=&quot;chef&quot;&gt;Chef&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Configuration management tool. Steps in “recipes”. Really nice interface.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Chefs read cookbooks or create recipes to achieve the same end result each time (well, nearly. Depends on the restaurant. Hopefully the same isn’t true here).&lt;/p&gt;

&lt;h2 id=&quot;puppet&quot;&gt;Puppet&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Configuration management tool (again). The IDE is called “Geppetto”, which is nice. (Geppetto made Pinocchio, in case you didn’t know. I didn’t until I looked it up.)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; You control a puppet on a set of strings from elsewhere. Puppet itself though is the other way around most of the time, as the deployment targets ask for the changes.&lt;/p&gt;

&lt;h2 id=&quot;teamcity&quot;&gt;TeamCity&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; CI/CD tool from JetBrains (who make IntelliJ and a bunch of other tools).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Erm. Right. No logic or clever backstory here that I could find. Seems like it was made to sell to large corporations — which to be honest, I can understand.&lt;/p&gt;

&lt;h2 id=&quot;urbancodedeploy&quot;&gt;UrbanCodeDeploy&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; IBM’s take on a deployment tool. The only tool I’ve found that doesn’t have a free trial or download, so I couldn’t try it out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; I couldn’t find any reason behind this one, so I think it’s just a name.&lt;/p&gt;

&lt;h2 id=&quot;consul&quot;&gt;Consul&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Key/value store from HashiCorp. Nice CLI and APIs. Also does service discovery, health-checking and DNS (via agents).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; This one makes absolutely no sense to me. A consul is an official appointed by a state to live in a foreign city and protect the state’s interests there — i.e. they work at the consulate.&lt;/p&gt;

&lt;h2 id=&quot;vagrant&quot;&gt;Vagrant&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Allows you to make quick and cheap virtual PCs on your existing physical PC. Saves you the pain of having to use VirtualBox/VMware tools directly. Although you do still have to install them.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Colloquialism for a wandering beggar. If you do a little mental gymnastics you can see where they were going with this — person of no fixed address, virtual PC with no permanent home.&lt;/p&gt;

&lt;h2 id=&quot;kafka&quot;&gt;Kafka&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Used for building real-time data streams.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Apparently it’s named after Franz Kafka.&lt;/p&gt;

&lt;h2 id=&quot;kubernetes&quot;&gt;Kubernetes&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Kubernetes is a “container orchestration tool”, which translates to: it controls large numbers of containers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Loosely translated from the Greek for a helmsman, or harbour pilot. Essentially a controller. Yes, the spelling is a bit different, but you can see the logic here.&lt;/p&gt;

&lt;h2 id=&quot;terraform&quot;&gt;Terraform&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Infrastructure as code from HashiCorp. Lets you create resources in the major cloud providers, and manages their state, so that if someone changes something by hand, Terraform can correct it.&lt;/p&gt;
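
&lt;p&gt;A minimal sketch (the bucket name is made up, and a real configuration also needs a provider block and credentials):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# main.tf -- declare an S3 bucket; terraform apply makes it real,
# and keeps it matching this definition on later runs
resource &quot;aws_s3_bucket&quot; &quot;example&quot; {
  bucket = &quot;my-hypothetical-bucket-name&quot;
}
&lt;/code&gt;&lt;/pre&gt;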

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; SciFi staple. Changing the environment to suit you. We (as in the species) might do it to Mars (the planet, not the chocolate) one day.&lt;/p&gt;

&lt;h2 id=&quot;vault&quot;&gt;Vault&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Keeps data secure; only known people have the keys. Can seal/unseal/re-key, with various access policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Another analogy. Not a Hollywood vault with a big room behind one massive door, but more a vault with lots of safety deposit boxes in it. For a film reference I’d go with “The Bank Job (2008)”.&lt;/p&gt;

&lt;h2 id=&quot;sentinel&quot;&gt;Sentinel&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The tool:&lt;/strong&gt; Policies as code. Works with other HashiCorp tools (enterprise version, you’ve got to pay for this one) to ensure that they are only used in a pre-defined manner. Lots of good examples on their website, go check it out.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The meaning:&lt;/strong&gt; Sentinels guard or watch things to ensure that people don’t do things they aren’t meant to. Typically military personnel.&lt;/p&gt;

&lt;h2 id=&quot;wrapping-up&quot;&gt;Wrapping up&lt;/h2&gt;

&lt;p&gt;Right, that’s it for now, because I’ve run out of brain. If you’ve made it to this point I’m impressed. If you’ve skimmed the list to see if there’s a witty final statement “hi, &lt;em&gt;waves&lt;/em&gt;”. If you only wanted to see what Sentinel was and saw me waving, I don’t blame you.&lt;/p&gt;

&lt;p&gt;I’ll try to post a follow up at some point as I find/try/use more tools — particularly ones with “odd” names. If there’s any you’ve come across and I’ve missed, or if you have a better reason/definition for any I do have drop me a comment.&lt;/p&gt;

&lt;p&gt;Hopefully this (very dry, quite boring) list saves you a bit of a headache, or gives you one, who knows. I promise next time I’ll write about something interesting and maybe grind an axe for a bit.&lt;/p&gt;</content>

      
      
      
      
      

      <author>
          <name>Jon Goodall</name>
        
        
      </author>

      

      
        <category term="devops" />
      
        <category term="docker" />
      
        <category term="jenkins" />
      
        <category term="kubernetes" />
      
        <category term="terraform" />
      
        <category term="ansible" />
      
        <category term="chef" />
      
        <category term="puppet" />
      
        <category term="kafka" />
      

      
        <summary type="html">I&apos;ve been working in DevOps for a while now, and I&apos;ve yet to come across a tool that didn&apos;t have something odd about its name. It&apos;s either got a backstory, a meaning, or it&apos;s Greek. Let me translate them for you.</summary>
      

      
      
        
        <media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://jongoodall.co.uk/img/aws.png" />
      
    </entry>
  
</feed>
