I never got used to Bash programming syntax. Whenever I have to write a more-than-trivial Bash script, the strange syntax annoys me, and I have to Google every little thing I need to do, from how to write comparisons in if statements to how to use sed, and so on.
For me, using Python as a shell scripting language seems like a better choice.
Python is a more expressive language. It is relatively concise. It has a massive standard library that lets you perform many tasks without even using shell commands, it is cross-platform, and it is preinstalled or easily installed…
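To make this concrete, here is a small sketch of the kind of shell work Python handles cleanly. The file pattern and commands below are illustrative, not from the original post:

```python
import subprocess
from pathlib import Path

# Count lines across all .txt files in the current directory --
# roughly the shell one-liner `cat *.txt | wc -l`.
total = sum(len(p.read_text().splitlines()) for p in Path(".").glob("*.txt"))
print(f"{total} lines in total")

# When you really need an external tool, subprocess.run is explicit
# about arguments and error handling, with none of Bash's fragile quoting.
result = subprocess.run(["echo", "hello from a subprocess"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())
```

No `if [ "$x" -eq ... ]` pitfalls, and the same script runs on any platform with Python installed.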
So, you have a batch-processing / ETL task that receives some data in a loop or per request and crunches it. It might even be running in production. Everything works well, but now you need to optimize it a bit in order to increase the rate of data you can handle.
In this post, I will introduce a simple yet effective approach to do so, which you can even run in production to measure the performance of real-world workloads. I will supply a short (single file, no dependencies) implementation for the profiler for both Kotlin and Python.
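As a rough illustration of the idea (this is my own minimal sketch, not the implementation supplied in the article), a single-file, dependency-free profiler can simply accumulate wall-clock time per named section across iterations and report the breakdown:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class SimpleProfiler:
    """Accumulates wall-clock time per named section across many iterations.
    Illustrative only -- names and API are assumptions, not the article's code."""

    def __init__(self):
        self.totals = defaultdict(float)
        self.counts = defaultdict(int)

    @contextmanager
    def section(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.totals[name] += time.perf_counter() - start
            self.counts[name] += 1

    def report(self):
        grand = sum(self.totals.values()) or 1.0
        for name, total in sorted(self.totals.items(), key=lambda kv: -kv[1]):
            print(f"{name:12s} {total:8.4f}s  {100 * total / grand:5.1f}%  "
                  f"({self.counts[name]} calls)")

profiler = SimpleProfiler()
for _ in range(3):          # stands in for the real processing loop
    with profiler.section("parse"):
        time.sleep(0.01)
    with profiler.section("transform"):
        time.sleep(0.02)
profiler.report()
```

Because the overhead is just two clock reads per section, a helper like this can stay enabled in production to measure real-world workloads.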
GitHub repo: https://github.com/davidohana/kofiko-kotlin
Kotlin and Python are my favorite programming languages. After publishing the Kofiko configuration library for Python, I decided to work on a port of it for Kotlin. Actually, the port to Kotlin took significantly more effort, for several reasons: I wanted to introduce a better extensibility architecture this time, I wanted the library to support more formats, and there are many conceptual differences between Kotlin and Python. For example, Kotlin annotations can contain only metadata, while Python decorators can contain logic.
Other challenges included how to discover configuration objects and how to design a fluent API for adding…
In the Code-First approach, you first define your data model in plain code. You can start working with that model immediately, and only later do you worry about schema definitions, bindings, and other necessities. The mapping between the domain model and external entities such as database tables/fields usually relies on conventions.
What I like about this approach is that it lets you focus on the most important things first. No less important, you get all the convenience of a modern IDE when defining your model: refactorings, code completion, type checks, etc.
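A tiny sketch of the convention-based mapping idea (the class and helper names here are my own examples, not from any particular library):

```python
from dataclasses import dataclass, fields

# Code-first: define the domain model in plain code and use it immediately;
# schema concerns come later, derived from conventions.
@dataclass
class UserAccount:
    user_id: int
    email: str
    is_active: bool = True

def table_name(model_cls) -> str:
    # Convention: CamelCase class name -> snake_case table name.
    name = model_cls.__name__
    return "".join("_" + c.lower() if c.isupper() else c for c in name).lstrip("_")

def column_names(model_cls) -> list:
    # Convention: every field becomes a column of the same name.
    return [f.name for f in fields(model_cls)]

print(table_name(UserAccount))    # -> user_account
print(column_names(UserAccount))  # -> ['user_id', 'email', 'is_active']
```

The model class stays plain Python, so the IDE can refactor, complete, and type-check it, while the external schema is derived from it mechanically.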
After my last attempt…
If you have successfully integrated Generic OAuth with Grafana, you might wonder, as I did: how do you allow only specific authenticated users from your organization to access Grafana? And how do you assign different access rights (Admin, Editor, Viewer) to those users?
The first thing we need to do is disable sign-up of new users and anonymous access. This can be done by editing the grafana.ini file or by setting environment variables.
[users]
allow_sign_up = false

[auth.anonymous]
enabled = false
This allows only users that are already listed in Grafana’s user database to sign in.
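For assigning roles, newer Grafana versions also support mapping OAuth claims to roles directly via a JMESPath expression in grafana.ini. The example below is an assumption about your setup: the `groups` claim and the group names are placeholders, and availability of `role_attribute_path` depends on your Grafana version:

```ini
[auth.generic_oauth]
; Evaluated against the OAuth user-info JSON; first matching branch wins.
; 'devops' and 'editors' are placeholder group names.
role_attribute_path = contains(groups[*], 'devops') && 'Admin' || contains(groups[*], 'editors') && 'Editor' || 'Viewer'
```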
(Full code and samples for this post at my GitHub Repo)
Suppose you have the following code which invokes a gRPC request and may fail due to various network conditions.
How do you retry this call until no exception is raised? Wrap the call in an inner (inline) named function and use the provided retry function.
Thanks to closures, the inner function can use any variable from its enclosing scope.
This retry function supports the following features:
Retry function code:
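The actual code is embedded in the article and in the repo; as a stand-in, here is a minimal sketch of such a retry helper. The parameter names and defaults are my own, not necessarily those of the original:

```python
import time

def retry(func, max_attempts=5, init_delay_sec=1.0, backoff=2.0,
          allowed_exceptions=(Exception,), on_exception=None):
    """Call func() until it succeeds, with exponential backoff between attempts."""
    delay = init_delay_sec
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except allowed_exceptions as ex:
            if on_exception:
                on_exception(ex, attempt)
            if attempt == max_attempts:
                raise  # out of attempts -- propagate the last error
            time.sleep(delay)
            delay *= backoff

# Usage: wrap the call in an inner named function so the closure can
# capture any variables from the enclosing scope.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("transient network error")
    return "ok"

print(retry(flaky, init_delay_sec=0.01))  # -> ok
```

Limiting `allowed_exceptions` to transient errors (e.g. a gRPC status you consider retryable) keeps genuine bugs from being silently retried.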
So, keep trying!
The latest full code and samples for this article are available under the Apache-2.0 license at my GitHub Repo.
Yes, we love logging in colors.
Yes, there are many Python libraries and sample code that show you how to colorize your stdout log messages by logging level.
But I am going to show you something better: log messages with alternating colors for each argument in the format string, in addition to colorization by log level. As a bonus, argument formatting uses the “brace-style” formatting introduced in Python 3.2, for example:
Implementation is simple, and…
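A sketch of the core idea (the article's full implementation also colors by log level; the color choices and class name here are mine):

```python
import logging

COLORS = ["\033[36m", "\033[33m", "\033[35m"]  # cyan, yellow, magenta (cycled)
RESET = "\033[0m"

class BraceMessage:
    """Defers brace-style formatting and wraps each argument in its own color."""
    def __init__(self, fmt, *args):
        self.fmt, self.args = fmt, args

    def __str__(self):
        colored = [f"{COLORS[i % len(COLORS)]}{a}{RESET}"
                   for i, a in enumerate(self.args)]
        return self.fmt.format(*colored)

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("demo")
log.info(BraceMessage("user {} fetched {} records in {} ms", "dave", 42, 17))
```

Because `__str__` runs only when the record is actually emitted, formatting is lazy, and each `{}` placeholder comes out in a different color, which makes the variable parts of a message easy to spot.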
The full code (library + example + .ini files) for the following article is available on GitHub: https://github.com/davidohana/LayConf
In every programming language I use, one of the first things I need is a decent configuration library.
My requirements are usually:
The result is a small and simple class that supports most…
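To give a flavor of what such a class might look like (a toy sketch of my own; the layering convention below is an assumption, not LayConf's actual behavior):

```python
import configparser
import os

class LayeredConfig:
    """Toy config loader: .ini values can be overridden by environment
    variables named SECTION_OPTION. Illustrative only."""

    def __init__(self, ini_text=""):
        self.parser = configparser.ConfigParser()
        self.parser.read_string(ini_text)

    def get(self, section, option, fallback=None):
        env_key = f"{section}_{option}".upper()
        if env_key in os.environ:
            return os.environ[env_key]
        return self.parser.get(section, option, fallback=fallback)

cfg = LayeredConfig("[server]\nport = 8080\n")
print(cfg.get("server", "port"))   # -> 8080
os.environ["SERVER_PORT"] = "9090"
print(cfg.get("server", "port"))   # -> 9090
```

Layering file values under environment-variable overrides is what makes the same build easy to reconfigure per deployment.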
Log-template mining is a method for extracting useful information from unstructured log files. This blog post explains how we use log-template mining to monitor for network outages. We also introduce Drain3, an open-source streaming log template miner.
Our main goal in this use-case is the early identification of incidents in the networking infrastructure of data centers that are part of the IBM Cloud. The IBM Cloud is based on many data centers in different regions and continents. At each data center (DC), thousands of network devices produce a high volume of syslog events. Why not use that information to quickly…
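To illustrate what template mining does, here is a deliberately naive sketch: mask the variable parts of each log line with a wildcard so that structurally identical messages collapse into one template. Real miners like Drain3 learn the templates from the stream instead of relying on fixed regexes:

```python
import re
from collections import Counter

def mask(line):
    """Toy template extraction: replace obvious variable parts with <*>."""
    line = re.sub(r"\b\d+\.\d+\.\d+\.\d+\b", "<*>", line)  # IPv4 addresses
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<*>", line)       # hex values
    line = re.sub(r"\b\d+\b", "<*>", line)                  # integers
    return line

logs = [
    "connected to 10.0.0.1 port 4444",
    "connected to 10.0.0.7 port 5555",
    "link down on interface eth0",
]
templates = Counter(mask(line) for line in logs)
for template, count in templates.most_common():
    print(count, template)
```

Once millions of raw syslog lines are reduced to a few hundred templates, counting occurrences per template over time makes anomalies, such as a sudden burst of "link down" events, stand out.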
Software developer & research staff member @ IBM (Haifa Research Lab)