In a previous post I went over how a “Try” type can be created in Kotlin from scratch to handle exceptions in a functional way. There is no need, however, to create such a type in Kotlin: a type called “Result” already provides the behavior of “Try”, and this post will go over how it works. I will be reusing the scenario from my previous post of retrieving content from a remote URL, with two steps that can potentially fail:

  • the URL may not be well formed, and
  • fetching from the remote URL may run into network issues

So onto the…


JSON Patch and JSON Merge Patch both do one job well: representing a change to a source JSON document.

JSON Patch expresses the change as a series of operations that transform the source document, while JSON Merge Patch represents the change as a lightweight version of the source document.

It is easier to show these with an example, and this one is straight from the JSON Merge Patch RFC (RFC 7386).

Let’s start with a source document:

{
  "title": "Goodbye!",
  "author": {
    "givenName": "John",
    "familyName": "Doe"
  },
  "tags": [
    "example",
    "sample"
  ],
  "content": "This will be unchanged"
}

and the…
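Since the excerpt cuts off before showing the patch itself, here is a minimal sketch of how a merge patch behaves. The Java standard library has no JSON parser, so this sketch operates on `Map`s standing in for parsed JSON; the recursion follows the RFC 7386 rules (a `null` value removes a key, nested objects merge recursively, anything else replaces the target). This is an illustration, not the RFC's reference implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergePatch {
    // Applies an RFC 7386 style merge patch: null values delete keys,
    // nested maps merge recursively, anything else replaces the target.
    @SuppressWarnings("unchecked")
    static Object mergePatch(Object target, Object patch) {
        if (!(patch instanceof Map)) {
            return patch; // a non-object patch replaces the target outright
        }
        Map<String, Object> result = (target instanceof Map)
                ? new LinkedHashMap<>((Map<String, Object>) target)
                : new LinkedHashMap<>();
        ((Map<String, Object>) patch).forEach((key, value) -> {
            if (value == null) {
                result.remove(key); // null in the patch removes the key
            } else {
                result.put(key, mergePatch(result.get(key), value));
            }
        });
        return result;
    }
}
```

Applied to the source document above, a patch like `{"title": "Hello!", "author": {"familyName": null}}` would replace the title and remove the author's family name while leaving everything else untouched.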


Functional programming languages like Scala often have a type called “Try” to hold the result of a computation if successful or to capture an exception on failure.

This is an incredibly useful type, allowing a caller to pointedly control how to handle an exceptional scenario. In this post I will try to create such a type from scratch.

As an example, I will be using the scenario from Daniel Westheide’s excellent introduction to the Try type in Scala.

So my objective is to call a remote URL and return the content as a string. …


If you ever need to capture the smallest or largest “n” from a stream of data, the approach more often than not will be to use a simple data structure called the Priority Queue.

A priority queue does one thing very well: once data is added, it can return the lowest value (or the highest value) in constant time.

How is this useful for answering a top or bottom “n” type question? Let’s see.

Consider this hypothetical stream of data:

And you have to report the smallest 2 at any point as this stream of data comes in…
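One common way the smallest-2 question maps onto a priority queue (a sketch, not necessarily the post's exact implementation): keep a max-heap bounded at n elements, and evict the largest whenever the heap overflows, so the heap always holds the n smallest values seen so far.

```java
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;
import java.util.stream.Collectors;

public class SmallestN {
    // Max-heap bounded at n: after each insert the current largest is
    // evicted, so the heap always holds the n smallest values seen so far.
    static List<Integer> smallestN(List<Integer> stream, int n) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(Comparator.reverseOrder());
        for (int value : stream) {
            heap.offer(value);
            if (heap.size() > n) {
                heap.poll(); // evict the current largest
            }
        }
        return heap.stream().sorted().collect(Collectors.toList());
    }
}
```

Each element costs O(log n) to insert into a heap of at most n elements, so a top or bottom "n" query over a long stream stays cheap and uses only O(n) memory.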


A Java stream represents a potentially infinite sequence of data. This is a simple post that will go into the mechanics involved in generating a simple stream of Fibonacci numbers.

The simplest way to get this stream of data is to use the generate method of Stream.

As you can imagine, to generate a specific Fibonacci number in this sequence, the previous two numbers are required, which means the state of the previous two numbers needs to be maintained somewhere. The two solutions that I will be describing here both maintain this state; however, they do it differently.
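As one sketch of carrying that state along (a variant on the excerpt's approach: `Stream.iterate` threads the previous two numbers through as a pair, instead of `Stream.generate` with external mutable state):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class Fib {
    // The seed is the pair (0, 1); each step shifts the pair forward,
    // so the required state travels inside the stream itself.
    static List<Long> fibonacci(int count) {
        return Stream.iterate(new long[]{0, 1},
                        pair -> new long[]{pair[1], pair[0] + pair[1]})
                .map(pair -> pair[0]) // emit the first element of each pair
                .limit(count)
                .collect(Collectors.toList());
    }
}
```

`Fib.fibonacci(8)` produces the sequence 0, 1, 1, 2, 3, 5, 8, 13.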


AWS DynamoDB is described as a NoSQL key-value and document database. In my work I mostly use the key-value behavior of the database and rarely the document database features. However, the document database part is growing on me, and this post highlights some ways of using the document database features of DynamoDB, along with introducing a small utility library built on top of the AWS SDK 2.x for Java that simplifies using them.

DynamoDB as a document database

So what does it mean for AWS DynamoDB to be treated as a document database? …
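As a rough illustration of the shape involved, a document-style item holds nested attributes (maps and lists) under a single key, along the lines of this hypothetical item:

```json
{
  "id": "user-1",
  "name": "John Doe",
  "address": {
    "city": "Portland",
    "state": "OR"
  },
  "phoneNumbers": [
    "111-222-3333"
  ]
}
```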


Project Reactor implements the Reactive Streams specification, which is a standard for asynchronously processing a stream of data while respecting the processing capabilities of a consumer.

At a very broad level, there are two entities involved, a Producer that produces the stream of data and a Consumer that consumes data. …
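The two entities can be sketched with the JDK's own `java.util.concurrent.Flow` API, which mirrors the Reactive Streams interfaces (this is a plain-JDK illustration of the producer/consumer contract, not Project Reactor code). The backpressure signal is `request(n)`: here the consumer asks for one item at a time, and the producer may not emit more than has been requested.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureSketch {
    // The consumer pulls one element at a time via request(1): the producer
    // may not emit more than the consumer has signalled capacity for.
    static List<Integer> consume(List<Integer> items) {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // signal capacity for exactly one item
                }
                @Override public void onNext(Integer item) {
                    received.add(item);
                    subscription.request(1); // ask for the next only when ready
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            items.forEach(publisher::submit);
        } // close() completes the subscriber once buffered items drain
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return List.copyOf(received);
    }
}
```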


It is useful to have a version attribute on any entity saved to an AWS DynamoDB database, which is simply a numeric indication of the number of times the entity has been modified. When the entity is first created, it can be set to 1 and then incremented on every update.

The benefit is immediate: an indicator of the number of times an entity has been modified, which can be used for auditing the entity. …
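The bookkeeping itself is simple. The sketch below is a hypothetical in-memory stand-in for a table, showing only the set-to-1-on-create, increment-on-update rule, not the actual DynamoDB API calls:

```java
import java.util.HashMap;
import java.util.Map;

public class Versioned {
    // Hypothetical in-memory stand-in for a table: illustrates only the
    // version bookkeeping, not the real DynamoDB update call.
    static Map<String, Object> save(Map<String, Map<String, Object>> table,
                                    String id, Map<String, Object> attributes) {
        Map<String, Object> existing = table.get(id);
        long version = (existing == null) ? 1L : (Long) existing.get("version") + 1;
        Map<String, Object> entity = new HashMap<>(attributes);
        entity.put("version", version); // 1 on create, incremented on every update
        table.put(id, entity);
        return entity;
    }
}
```

In DynamoDB itself, the same rule is typically enforced server-side with a condition expression, which also gives optimistic locking for free: an update only succeeds if the version it read is still current.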


This is a follow-up to my blog post about processing SQS messages efficiently using Spring Boot and Project Reactor.

There are a few gaps in the approach that I listed in the first part:

  1. It does not handle failures in SQS client calls.
  2. The approach processes only one message from SQS at a time; how can it be parallelized?
  3. It does not handle errors; any error in the pipeline would break the entire process and stop newer messages from being read from the queue.
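Setting the Reactor specifics aside, the gist of points 2 and 3 (processing messages concurrently while isolating per-message failures) can be sketched with plain `java.util.concurrent`. The handler here is hypothetical, and this is not the post's actual pipeline code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class ParallelProcessing {
    // Runs the handler for each message on a small pool; a failure for one
    // message is captured and does not stop the others from being processed.
    static List<String> processAll(List<String> messages, Function<String, String> handler) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String message : messages) {
                futures.add(pool.submit(() -> {
                    try {
                        return handler.apply(message);
                    } catch (Exception e) {
                        return "failed: " + message; // isolate the failure
                    }
                }));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> future : futures) {
                try {
                    results.add(future.get());
                } catch (Exception e) {
                    results.add("failed");
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }
}
```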

Recap

Just to recap, the previous post demonstrates creating a pipeline to process messages from an AWS SQS Queue…


I recently worked on a project where I had to efficiently process a large number of messages streaming in through an AWS SQS queue. In this and the following post, I will go over the approach that I took to process the messages using the excellent Project Reactor.

The following is the kind of set-up that I am aiming for:

Setting up a local AWS Environment

Before I jump into the code, let me get some preliminaries out of the way. First, how do you get a local version of SNS and SQS? One of the easiest ways is to use localstack. …
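As a sketch of that setup (the port, service names, and queue name below are localstack/CLI defaults and illustrative choices, so treat them as assumptions):

```shell
# Start localstack with SNS and SQS enabled (4566 is the default edge port)
docker run --rm -p 4566:4566 -e SERVICES=sns,sqs localstack/localstack

# Point the AWS CLI at the local endpoint to create a test queue
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name test-queue
```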

Biju Kunjummen

Lead Software Engineer with Nike
