
EuroPython 2015 - Python and Internet of Things

I've been working in the area of IoT for quite some time, and at Azoi we use Python extensively. Looking at Python usage on microcontrollers, I realised it isn't used there as much as in other areas. My own experience with Python has been very good, and having some exposure to microcontrollers, I think it helps you rapidly build prototypes involving hardware.

I applied for a talk on the Internet of Things with Python to spread some awareness around this, and it was selected along with another talk by my colleague Bhaumik Shukla and one more by Hitul Mistry. Out of all the submissions, there were a total of 4 selections, of which 3 were from my team at Azoi. It was also my first major talk at a large conference like EuroPython. I was both nervous and excited at the same time, and since I was going to Bilbao, Spain for the conference, I thought it was a great opportunity to explore other cities as well :-)

Bhaumik's topic was "Python for Cloud Services and Infrastructure Management" and Hitul's topic was on Concurrency and Parallelism.

After a lot of preparation I was ready to go on stage to deliver my talk. One piece of advice: if you're delivering a talk at a conference, avoid changing your slides at the last moment. In my case, my talk was on the 2nd day, so I thought I'd add a demo video and modify the flow a little to make it easier to understand. It clearly wasn't a good idea, but I managed to handle it.


Here's the video of my talk; looking forward to your feedback!





