<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Random Thoughts</title>
    <link>https://blog.asksven.io/</link>
    <description>Recent content on Random Thoughts</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <copyright>Content licensed under &lt;a href=&#34;https://creativecommons.org/licenses/by/4.0/&#34;&gt;CC BY 4.0&lt;/a&gt;</copyright>
    <lastBuildDate>Sun, 13 Aug 2023 00:00:00 +0000</lastBuildDate><atom:link href="https://blog.asksven.io/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Watch-out for Azure SFTP costs</title>
      <link>https://blog.asksven.io/posts/watch-out-for-azure-sftp-costs/</link>
      <pubDate>Sun, 13 Aug 2023 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/watch-out-for-azure-sftp-costs/</guid>
      <description>Introduction I run backups for both my Kubernetes clusters and my Linux servers. Since I want offsite backups and have an Azure subscription, I follow these approaches:
 my Kubernetes backups use a Minio proxy connected to an Azure storage account, with a container per cluster/app my Linux backups use(d) SFTP to an Azure storage account with SFTP enabled, with a container per server  Now what is the issue with this?</description>
    </item>
    
    <item>
      <title>K8s rook-ceph benchmark</title>
      <link>https://blog.asksven.io/posts/k8s-rook-ceph-benchmark/</link>
      <pubDate>Sun, 30 Jul 2023 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/k8s-rook-ceph-benchmark/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;I have been procrastinating on this for a while, and did not post since March: shame on me!&lt;/p&gt;
&lt;p&gt;Since I just rebuilt my production cluster with proxmox/talos, I took the opportunity to run some storage benchmarks to compare rook-ceph&amp;rsquo;s performance between k8s running on proxmox and k8s running on raspberry pi (version 4 with 8GB).&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Minikube with ingress controller on the mac</title>
      <link>https://blog.asksven.io/posts/minikube-with-ingress-on-mac/</link>
      <pubDate>Sun, 19 Mar 2023 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/minikube-with-ingress-on-mac/</guid>
      <description>Introduction Update and install minikube  make sure that you have the most up-to-date version installed: minikube update-check. At the time of writing it&amp;rsquo;s v1.29.0 delete any previous minikube config: minikube delete install minikube in docker on the stable version: minikube start --driver=docker --kubernetes-version=v1.26.1. Note after the initial install you only need to run minikube start since the config is sticky check that minikube is up-and-running: kubectl get nodes  The output should look like this:</description>
    </item>
    
    <item>
      <title>Microservices and observability</title>
      <link>https://blog.asksven.io/posts/microservices-and-observability/</link>
      <pubDate>Sun, 09 May 2021 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/microservices-and-observability/</guid>
      <description>Introduction In the last months I have been dealing with Kubernetes based (micro)services that I could not change, either because they were off-the-shelf or because they had been externally developed. In terms of observability this is a challenge, especially when application metrics are only partially available.
I could have opted for a service mesh, but implementing Istio for an application composed of 20 microservices seemed quite overkill, adding a lot of complexity and cognitive load.</description>
    </item>
    
    <item>
      <title>Apple Silicon, quo vadis?</title>
      <link>https://blog.asksven.io/posts/apple-silicon-quo-vadis/</link>
      <pubDate>Sat, 16 Jan 2021 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/apple-silicon-quo-vadis/</guid>
      <description>Introduction Shame on me, I have not written anything for half a year!
What if I had been asleep for half a year, woke up and realized: well, so much time has passed, the arm macs are now mainstream, have full support of native tools making developers more efficient, right? Admittedly, I am not speaking for all developers, but I never stated I was.
My conclusion from the last post was: poor developers who work with Docker!</description>
    </item>
    
    <item>
      <title>Apple! the world is not ready for ARM</title>
      <link>https://blog.asksven.io/posts/apple-the-world-is-not-ready-for-arm/</link>
      <pubDate>Mon, 29 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/apple-the-world-is-not-ready-for-arm/</guid>
      <description>Introduction Last week Apple announced its transition to ARM, err, Apple silicon.
My first thought was: poor developers who work with Docker! They will now develop and test on an architecture that is - at most - exotic in the datacenters of the mainstream cloud-providers.
Don&amp;rsquo;t get me wrong, I am not an Apple-hater: my day-job work-horse is a 2015 Macbook Pro, and I like it better than my work Windows laptop, also because it has an Intel i7 CPU that has aged well.</description>
    </item>
    
    <item>
      <title>Manage your home-network with an Azure DNS Zone</title>
      <link>https://blog.asksven.io/posts/azzure-dns-zone-updater/</link>
      <pubDate>Sun, 28 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/azzure-dns-zone-updater/</guid>
      <description>Introduction Azure DNS Zones is an inexpensive way to manage DNS records for your domains, even if you have a dynamic IP. Back in the day, and before greedy Oracle took it over, dyndns.org used to be the (free) place to go if you had a dynamic IP and wanted to expose your home-network.
Things have changed, but fortunately offers like Azure DNS Zones are as inexpensive as a few Euro per month, and are easy to maintain with a little scripting.</description>
    </item>
    
    <item>
      <title>Connected plants</title>
      <link>https://blog.asksven.io/posts/iot-plants/</link>
      <pubDate>Sun, 21 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/iot-plants/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;A few weeks ago I had a few days off and, as I will most probably spend my summer vacations in Balconia I started a little non-IT (ahem!) project to set-up my balcony as a green oasis for the summer:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A little table and two chairs&lt;/li&gt;
&lt;li&gt;A comfortable chair to lie in the sun&lt;/li&gt;
&lt;li&gt;A rack and side-board for plants&lt;/li&gt;
&lt;li&gt;A few plants&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As I don&amp;rsquo;t have a green thumb, plants are of course somewhat risky, so I decided that I needed some indicator to help me understand when my plants need water and minerals. This post is about where that brought me.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Grafana remote image renderer</title>
      <link>https://blog.asksven.io/posts/grafana-remote-image-renderer/</link>
      <pubDate>Sat, 20 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/grafana-remote-image-renderer/</guid>
      <description>Introduction Since the Grafana Image Renderer plug-in is no longer supported as of Grafana 7.0, some changes are required to switch to the remote image renderer and run it as a docker container.
This post goes into the details of setting-up a remote image renderer for Kubernetes, on amd64, arm/v7 and arm64.
Multi-arch build The official git repo only supports linux/amd64 at this moment but there is an issue for arm-support.</description>
    </item>
    
    <item>
      <title>Gitlab CI/CD docker builds with docker 19.03 images</title>
      <link>https://blog.asksven.io/posts/gitlab-cicd-docker-build-with-docker_19_03/</link>
      <pubDate>Mon, 15 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/gitlab-cicd-docker-build-with-docker_19_03/</guid>
      <description>Introduction In this previous post I came across an issue that I wanted to write about in more detail:
 Why it is bad to rely on any kind of latest tags How docker 19.03-dind will break your gitlab-ci docker builds and what you can do about it  If you do not use latest, your pipeline is not broken yet, but this may still be interesting for you since this summary will help you update.</description>
    </item>
    
    <item>
      <title>Building docker images for multiple architectures with docker buildx</title>
      <link>https://blog.asksven.io/posts/docker-build-for-multiple-architectures-with-docker-buildx/</link>
      <pubDate>Sun, 14 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/docker-build-for-multiple-architectures-with-docker-buildx/</guid>
      <description>Introduction In this previous post we have been exploring how to build docker images for multiple architectures.
In this post we will look into streamlining this approach using docker buildx, both locally and in gitlab-ci.
Step-by-step Enable buildx In order to use docker buildx you will need:
 A recent docker version; I am running 19.03.11 on linux enable the experimental features: export DOCKER_CLI_EXPERIMENTAL=enabled  Running docker buildx should show you:</description>
    </item>
    
    <item>
      <title>Testing gitlab-ci pipelines locally</title>
      <link>https://blog.asksven.io/posts/testing-gitlab-ci-pipelines-locally/</link>
      <pubDate>Sun, 14 Jun 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/testing-gitlab-ci-pipelines-locally/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;Debugging gitlab-ci pipelines can be a tedious task, especially as the pipeline does not run in the &lt;a href=&#34;https://mitchdenny.com/the-inner-loop/&#34;&gt;inner loop&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Fortunately the gitlab-runner can be installed locally, allowing you to test many aspects of the CI/CD pipeline prior to commit.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Kubernetes RBAC explained</title>
      <link>https://blog.asksven.io/posts/kubernetes-rbac-explained/</link>
      <pubDate>Tue, 19 May 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/kubernetes-rbac-explained/</guid>
      <description>Introduction Whether it is from CI/CD or from the command-line, I often see the default kube-config with cluster-admin rights being used. This is like permanently working with root privileges and there certainly are more secure ways.
In this post we will look into demystifying Kubernetes RBAC, and setting-up more suitable permissions for two use-cases:
 a CI/CD pipeline that needs full permissions on anything located in a given Namespace a reader who needs to access resources for troubleshooting purposes  Concepts Roles and ClusterRoles define sets of permissions to objects at the namespace and cluster scope.</description>
    </item>
    
    <item>
      <title>Kubernetes policies with Gatekeeper</title>
      <link>https://blog.asksven.io/posts/gatekeeper/</link>
      <pubDate>Mon, 18 May 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/gatekeeper/</guid>
      <description>Introduction Gatekeeper is a validating webhook that enforces CRD-based policies executed by Open Policy Agent. In a previous post, we went into details about OPA: this post supersedes it. The differences between OPA and Gatekeeper are listed here.
In this post we will explore Gatekeeper and start with implementing a policy to enforce a given label to be present at the namespace level.
In upcoming posts we will implement policies as described here:</description>
    </item>
    
    <item>
      <title>SSH login with yubikey using PIV</title>
      <link>https://blog.asksven.io/posts/ssh-login-with-yubikey/</link>
      <pubDate>Mon, 11 May 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/ssh-login-with-yubikey/</guid>
      <description>Introduction This article will take you through setting-up a yubikey to hold your SSH private key. It assumes that you have a PIV-enabled yubikey:
PIV, or FIPS 201, is a US government standard. It enables RSA or ECC sign/encrypt operations using a private key stored on a smartcard (such as the YubiKey NEO), through common interfaces like PKCS#11.
PIV is primarily used for non-web applications. It has built-in support under Windows, and can be used on OS X and Linux via the OpenSC project.</description>
    </item>
    
    <item>
      <title>Building docker images for multiple architectures</title>
      <link>https://blog.asksven.io/posts/docker-build-for-multiple-architectures/</link>
      <pubDate>Sun, 03 May 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/docker-build-for-multiple-architectures/</guid>
      <description>Introduction Since Kubernetes runs on the Raspberry PI I have been investigating ways to build my blog so that it can run on my x86 (Proxmox) as well as ARM Kubernetes cluster, composed of Raspberry PIs and an Nvidia Jetson Nano.
This post will take you through my learnings of the taxonomy of architectures and platforms, as well as building docker images for multiple architectures.
Architectures Well, I already knew that rpi has a different architecture than my Intel-based hardware, so let&amp;rsquo;s get into how these are named.</description>
    </item>
    
    <item>
      <title>Understanding Kubernetes&#39; pod lifecycle: the readiness probe</title>
      <link>https://blog.asksven.io/posts/kubernetes-readiness-probes/</link>
      <pubDate>Sat, 02 May 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/kubernetes-readiness-probes/</guid>
      <description>Introduction Understanding Kubernetes&#39; concepts is key to running highly available applications.
This article will take you through the scenario of deploying a new version of a pod, and show how understanding the pod lifecycle and implementing a readiness probe will help you deploying new releases without downtime.
Without a readiness probe Kubernetes will try to guess when your pod is ready, and then schedule traffic to it. If the pod has latency between the point-in-time when the container is running and when it can handle traffic, this will cause transactions to be dropped, a.</description>
    </item>
    
    <item>
      <title>Self-Service Operations: the Why? and the How?</title>
      <link>https://blog.asksven.io/posts/self-service-operations/</link>
      <pubDate>Sun, 19 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/self-service-operations/</guid>
      <description>Disclaimer The opinions expressed in this post are my own, not those of my employer.
Introduction Self-service operations is a term coined by Damon Edwards from Rundeck to describe principles (and tools) that should guide operations in an enterprise, or any other organization that has more than one two-pizza team. Why self-service operations is so important comes from the fact that in large organizations teams depend on other teams (because there is a limit to the size of a team and to what their responsibility can encompass).</description>
    </item>
    
    <item>
      <title>Securing your Kubernetes configuration. Not so simple!</title>
      <link>https://blog.asksven.io/posts/securing-kubernetes-configuration/</link>
      <pubDate>Mon, 13 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/securing-kubernetes-configuration/</guid>
      <description>Introduction There are lots of articles explaining what is important and what you should consider when securing your Kubernetes configurations, but I have not found many guiding you through the steps of implementing these recommendations. And I am not talking about securing the code of the application (this is something that software engineers should be used to) or the containers (that is a topic for another time).
These recommendations are in the realm of:</description>
    </item>
    
    <item>
      <title>Prometheus push gateway</title>
      <link>https://blog.asksven.io/posts/prometheus-push-gateway/</link>
      <pubDate>Sun, 05 Apr 2020 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/prometheus-push-gateway/</guid>
      <description>Introduction While Prometheus&#39; default architecture is scraping there may be good reasons to want to push metrics:
 from sources that are not reachable from Prometheus from sources that are short-lived, e.g. batch jobs  For such use-cases Prometheus comes with a pushgateway. When using this architecture you should be aware of the fact that the pushgateway is a single point of failure.
In this post we will look at implementing pushing metrics to Prometheus from a backup job running on another node.</description>
    </item>
    
    <item>
      <title>kubernetes cloud disaster recovery</title>
      <link>https://blog.asksven.io/posts/kubernetes-cloud-dr/</link>
      <pubDate>Thu, 20 Jun 2019 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/kubernetes-cloud-dr/</guid>
      <description>Introduction I run my workloads (blog, different apps) on my home-lab server (Proxmox) and Kubernetes, because I can. I have been working on backup as well as automated provisioning of Azure Kubernetes Service (aks) lately so I thought why not put both together and automate a disaster recovery scenario.
Depending on conditions the Azure provisioning time may vary, but based on different tests the end-to-end process takes about 15 minutes.</description>
    </item>
    
    <item>
      <title>kubernetes backup to Azure with velero</title>
      <link>https://blog.asksven.io/posts/kubernetes-backup-to-azure-with-velero/</link>
      <pubDate>Mon, 10 Jun 2019 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/kubernetes-backup-to-azure-with-velero/</guid>
      <description>Introduction I run my workloads on a Kubernetes cluster in my home-lab and wanted to create an offsite (cloud) backup.
Velero (formerly ark) is a neat project that supports a lot of options and cloud providers so I decided to take it for a spin. My specific scenario is currently only aiming at backing up the Kubernetes objects from a selected list of namespaces; backing up state (e.g. databases) will come later, either with Velero or with another tool like stash: I have not decided yet.</description>
    </item>
    
    <item>
      <title>Protect critical Kubernetes namespaces with Open Policy Agent</title>
      <link>https://blog.asksven.io/posts/openpolicyagent/</link>
      <pubDate>Thu, 16 May 2019 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/openpolicyagent/</guid>
      <description>Introduction Update 2020-05-16: Gatekeeper supersedes OPA, so there is a new post that replaces this one
Update 2019-09-08: after finding a critical bug causing my cluster to hang and becoming unusable after a restart I did some investigation and testing and have updated the project on Github.
Open Policy Agent is an open-source, general-purpose policy engine that enables unified, context-aware policy enforcement across the entire stack. OPA provides greater flexibility and expressiveness than hard-coded service logic or ad-hoc domain-specific languages and comes with powerful tooling to help anyone get started.</description>
    </item>
    
    <item>
      <title>Locating ssh hackers</title>
      <link>https://blog.asksven.io/posts/locating-ssh-hackers/</link>
      <pubDate>Sat, 02 Mar 2019 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/locating-ssh-hackers/</guid>
      <description>Introduction Have you ever read an article and thought: I want to build this?
Well that happened to me while reading Geolocating SSH Hackers In Real-Time, so I decided to build it.
I am into Kubernetes these days so I decided that I would host the showcase on my Kubernetes lab environment:
 I run a Proxmox server with 64 cores and 256 GB of RAM, that is reachable over ssh from the internet (pub/priv-key login only).</description>
    </item>
    
    <item>
      <title>Google: give us BATTERY_STATS back!</title>
      <link>https://blog.asksven.io/posts/google-give-us-batterystats-back/</link>
      <pubDate>Sat, 23 Nov 2013 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/posts/google-give-us-batterystats-back/</guid>
      <description>Note I have saved this post from Google+ before its shutdown because I am still pissed at Google.
If you already have a device with Android 4.4 Kitkat on it you may have noticed that your favorite battery stats tool, whether it is BetterBatteryStats, GSam or Wakelock Detector, does not work.
Well, it is not uncommon that new Android versions break a few apps, and it usually takes your favorite dev a few days to fix things.</description>
    </item>
    
    <item>
      <title>About me</title>
      <link>https://blog.asksven.io/about/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://blog.asksven.io/about/</guid>
      <description>I was born in Sweden in 1967, grew up and studied in France and currently live in Germany. In a nutshell, I am European.
I love Android, Linux, containers, Kubernetes and many other topics related to technology. I am the author of Better Battery Stats (Android), that can be found on the Google Play Store. Most of what I do is open-sourced so check my Github and Gitlab links for more.</description>
    </item>
    
  </channel>
</rss>
