Apache Flink Standalone Setup on Linux/macOS


To develop and run various types of applications, Apache Flink is a great choice because of the variety of features it offers. Some of its prominent features are stream and batch processing support, advanced state management, event-time processing semantics, and consistency guarantees for state. You can deploy Flink on multiple resource providers like YARN, Apache Mesos, and Kubernetes. Apache Flink is a new-generation Big Data platform, also known as 4G of Big Data, delivering high throughput and low latency. Since it supports both batch and stream processing, it enables users to analyze historical as well as real-time data.

While developing or testing any application, we need a playground to experiment with it. We need that playground to be independent, affordable, and easy to prepare. Apache Flink supports creating a standalone cluster in a few simple steps and provides a friendly Web UI for monitoring the cluster and its jobs. In this blog, I will provide a brief overview of Apache Flink.

Then I will look into the prerequisites for setting up an Apache Flink standalone cluster.

After that, I will set up a local standalone cluster. You will also learn how to start the cluster, submit an example WordCount job, and finally terminate the cluster.


What is Apache Flink?

Apache Flink is an open-source distributed computing framework for stateful computations over bounded and unbounded data streams. It is a true stream processing framework that supports Java, Scala, and Python. It offers a high-level DataSet API for batch applications, a DataStream API for processing continuous streams of data, and a Table API for running SQL queries on batch and streaming data. It also comprises Gelly for graph processing, FlinkML for machine learning applications, and FlinkCEP for complex event processing that detects intricate event patterns in data streams.


Prerequisites for Apache Flink

Apache Flink is developed in Java and Scala. Therefore, it requires a compatible JVM environment to run Flink applications. The prerequisites for Apache Flink are as mentioned below:

    • Java: version 8 or 11
    • Scala (only for writing driver programs in Scala): version 2.11 or 2.12


Local Installation

In this section, we will do a local installation of Apache Flink. Its local installation directory contains shell scripts, jar files, config files, binary files, etc. All these files are necessary to manage the cluster, monitor it, and run Flink applications.

First, we have to ensure that we have a working Java 8/11 installation.
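A quick way to verify this from a terminal (a sketch; how you install the JDK depends on your package manager or vendor):

```shell
# Check that a compatible JDK (8 or 11) is on the PATH
java -version
```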


Then, we will download the Apache Flink distribution file from the official downloads page.



We will extract the distribution file using the tar command.


Now, change into Flink’s home directory to carry out the next steps:
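The download, extract, and change-directory steps can be sketched as follows (the version 1.14.4 with Scala 2.12 is an assumption; pick the release you need from the downloads page):

```shell
# Download a Flink binary distribution (version shown is an assumption)
wget https://archive.apache.org/dist/flink/flink-1.14.4/flink-1.14.4-bin-scala_2.12.tgz

# Extract the archive and enter Flink's home directory
tar -xzf flink-1.14.4-bin-scala_2.12.tgz
cd flink-1.14.4
```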

Once the local standalone cluster is installed, let’s start it.


Start the Cluster

The standalone cluster runs independently without interacting with the outside world. Once the cluster is up, jobs can be submitted to it. A local Flink cluster can be started quickly with a single script.
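From Flink’s home directory, the script is:

```shell
# Start a local standalone cluster (a JobManager and one TaskManager)
./bin/start-cluster.sh
```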


The Flink Web UI will be available at http://localhost:8081/.

Apache Flink Dashboard

After successfully starting the cluster, the next step is to submit a job.


Submit a Job

A job is an application running in the cluster. The application is defined in a single file or a set of files known as a driver program. We write driver programs in Java or Scala, compile them, and then build their jars. These jars are submitted to the running cluster through the command-line interface or the Flink Web UI.

The jar can have multiple class files. We can explicitly provide the name of the class containing the driver program while submitting the jar, or it can be read from the jar’s manifest file.

In this example, we will submit a WordCount job through the command-line interface, using one of the example jars shipped with Flink.
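From Flink’s home directory, the submission looks like this (the example jar path is the one bundled with the binary distribution):

```shell
# Submit the streaming WordCount example bundled with the distribution
./bin/flink run examples/streaming/WordCount.jar
```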


We can check the output of the job by running the tail command on the output log.
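With no output path specified, the example writes its result to the TaskManager’s .out file, which we can tail:

```shell
# Inspect the WordCount result in the TaskManager's output log
tail log/flink-*-taskexecutor-*.out
```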



We can also monitor the running and completed jobs in the Flink Web UI.

Apache Flink web dashboard


After carrying out these steps, we will now move towards the last step of terminating the cluster.


Stop the Cluster

When the jobs running on the cluster are finished, stop the cluster and all its components using a single script.
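From Flink’s home directory:

```shell
# Stop the JobManager and all TaskManagers
./bin/stop-cluster.sh
```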



Apache Flink is faster, more flexible, and more versatile than many other Big Data frameworks. A local setup is an excellent start for learning Apache Flink. It also provides us with a nice environment to test our Flink applications before deploying them in production.

So, do try this method and share your experience with us. Happy coding!

JavaScript Workers: A Brief Overview

JavaScript plays a pivotal role in developing websites. Around 97% of websites use it for their web page behavior. It is a dynamic programming language that you can use for various purposes like web development, web applications, game development, and more. You can even implement dynamic features on your web pages that cannot be achieved with HTML and CSS alone. JavaScript is a single-threaded language with one call stack and one memory heap; here, the single thread is the main browser thread. However, it can be non-blocking through asynchronous execution. JavaScript handles asynchronous execution using the call stack, callback queue, web APIs, and the event loop, which is unrelated to workers.

What are JavaScript workers? 

In simple terms, JavaScript workers run on a thread other than the main browser thread. So, when we execute these scripts, they do not block the main thread and the browser remains responsive. In this blog, I aim to cover web workers and service workers: what they are, how they are used, and their purpose.

Web Workers 

While executing JavaScript in the browser, it uses the browser’s main (single) thread. This blocks other work until the current script’s execution is finished, leaving the browser unresponsive in the meantime. Web workers, on the other hand, run independently and do not affect the page’s performance. Web workers are supported in almost all known browsers and can be used to do expensive calculations without leaving the browser unresponsive.

Web Workers Creation 

Web workers are plain JavaScript files that are run by creating a new Worker object from the worker file. A simple worker file can look like this:
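Here is a minimal sketch of such a worker file (the file name worker.js and the doubling computation are illustrative assumptions):

```javascript
// worker.js -- runs on a separate thread, not the main browser thread.
// Inside a worker, `self` is the worker's global scope; the fallback to
// globalThis just keeps this sketch parseable outside a worker too.
const workerScope = typeof self !== 'undefined' ? self : globalThis;

// Listen for messages from the page, do some work, and reply.
workerScope.onmessage = function (event) {
  const result = event.data * 2; // stand-in for an expensive calculation
  workerScope.postMessage(result); // send the result back to the page
};
```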

A worker can be instantiated like this:
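A minimal sketch, assuming the worker file is served at worker.js (the guard just keeps the sketch safe where the Worker API is unavailable):

```javascript
// Create a worker from a worker file; returns null where the
// Worker API is unavailable (e.g., outside a browser).
function createWorker(path) {
  if (typeof Worker === 'undefined') return null;
  return new Worker(path);
}

const worker = createWorker('worker.js');
```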

Communication With Workers 

Communication between the worker and the client is achieved by listening to messages from each other. The code below is a sample that adds a listener for both client and worker by implementing the onmessage handler; to send messages, both sides use the postMessage method.
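A sketch of both sides (the handler names and logged text are illustrative):

```javascript
// Client (main page) side: listen for replies and send input.
function wireUpWorker(worker) {
  worker.onmessage = (event) => {
    console.log('Result from worker:', event.data); // the listener
  };
  worker.postMessage(21); // post a message to the worker
}

// Worker side (inside the worker file), shown as a comment for reference:
// self.onmessage = (event) => {
//   self.postMessage(event.data * 2); // post the result back
// };
```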

Terminating Workers 

The worker instance has a terminate method that can be called to stop it.
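For instance:

```javascript
// Stop a running worker immediately; any in-flight work is discarded.
function stopWorker(worker) {
  worker.terminate();
}
```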

Service Workers 

Service workers are specialized workers that run in the background, separate from a web page. They act like a proxy network and cache that helps developers build an offline experience for web pages. Like web workers, service workers also use the postMessage API to communicate with client web pages. Since they can act as a proxy, the registration of service workers is restricted to pages served over HTTPS. For development purposes, localhost can be used to register service workers.

Let’s discuss the service workers lifecycle and communication with the client. Here, I have taken an example of a web page with some static content and some dynamic forms in which users can upload data.   

Service Workers Registration 

To register a service worker, a web page should provide the path to the service worker file as given below:
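A minimal registration sketch (the path /service-worker.js is an assumption; the guard keeps it safe in environments without service worker support):

```javascript
// Register a service worker placed at the site root.
function registerServiceWorker() {
  if (typeof navigator === 'undefined' || !('serviceWorker' in navigator)) {
    return Promise.resolve(null); // service workers not supported here
  }
  return navigator.serviceWorker
    .register('/service-worker.js')
    .then((registration) => {
      console.log('Service worker registered with scope:', registration.scope);
      return registration;
    });
}
```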

Here, the service worker is placed at the root path. This way, the service worker can intercept fetch events across your domain. If the service worker is registered with the path /example/service-worker.js, then it can intercept only fetch events whose paths start with /example/, e.g., /example/abc or /example/xyz.

Service Workers Installation 

When a page tries to register a service worker, it kicks off the install listener present in the service worker file. This event listener is triggered when the service worker is first installed. The install event handler can look like this:
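A sketch of such an install handler (the cache name static-v1 is an assumption; the cached files are the ones used in this example):

```javascript
const CACHE_NAME = 'static-v1'; // assumed cache name
const APP_SHELL = ['/index.html', '/main.css', '/main.js'];

// Cache the app shell during installation; waitUntil keeps the
// service worker in the "installing" state until caching completes.
function handleInstall(event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(APP_SHELL))
  );
}

// Inside service-worker.js this would be registered as:
// self.addEventListener('install', handleInstall);
```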

In the above example, the files main.css, main.js, and index.html will be added to the cache.

Service Workers Activation 

When the service worker is installed, it enters the next phase, activation. We can clean up old caches during activation and set up IndexedDB to store dynamic data. In our example, we have a form with some text fields and some files to be uploaded. We can set up IndexedDB to store those entries in case the user goes offline while saving the form data to the server.
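Old caches can be cleaned up in the activate handler, sketched here (it keeps only an assumed current cache named static-v1):

```javascript
const CURRENT_CACHE = 'static-v1'; // assumed current cache name

// Delete every cache except the current one during activation.
function handleActivate(event) {
  event.waitUntil(
    caches.keys().then((names) =>
      Promise.all(
        names
          .filter((name) => name !== CURRENT_CACHE)
          .map((name) => caches.delete(name))
      )
    )
  );
}

// Inside service-worker.js:
// self.addEventListener('activate', handleActivate);
```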

Caching and Responding 

Service workers are meant for an offline experience, so we need to cache server responses. To do so, we can listen to fetch events and respond accordingly.
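A cache-first fetch handler can be sketched like this (falling back to the network when the request is not cached):

```javascript
// Respond from the cache when possible; otherwise go to the network.
function handleFetch(event) {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
}

// Inside service-worker.js:
// self.addEventListener('fetch', handleFetch);
```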

Communication With Service Workers 

The communication between the client and the service worker happens through the postMessage API. There are several ways a client can communicate with a service worker. Let’s take the above example, where the user has made some entries in the form and failed to upload them because the network was unavailable.

Here, we will discuss how the client page requests the service worker to retry the upload, and how the service worker communicates back. The client page listens to the online event. When the network connection is restored, the client either asks the service worker on demand to retry uploading the form data to the server, or the service worker does it automatically by checking the status of upload items present in IndexedDB.

  1. Using Broadcast Channel 

As one can guess, the broadcast channel allows us to communicate across browsing contexts. In this case, both the client and the service worker use the same channel to communicate. It is the most effortless setup for communication; however, not all browsers currently support the broadcast channel.

Note:  Check the browser support before using it. 

Let’s assume the retryFailedUpload function’s job is to retrieve data from IndexedDB, make a network request to save the data, and return a promise indicating success or failure.
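A sketch of the broadcast channel wiring ('upload-channel' is an assumed channel name, and retryFailedUpload is the hypothetical function described above):

```javascript
// Both the page and the service worker open a channel with the same name.
function openUploadChannel() {
  return new BroadcastChannel('upload-channel');
}

// Client page: listen for status updates and ask for a retry.
function requestRetry(channel) {
  channel.onmessage = (event) => {
    console.log('Upload status:', event.data);
  };
  channel.postMessage({ type: 'retry-upload' });
}

// Service worker side, for reference:
// const channel = new BroadcastChannel('upload-channel');
// channel.onmessage = (event) => {
//   if (event.data.type === 'retry-upload') {
//     retryFailedUpload().then((ok) => channel.postMessage({ uploaded: ok }));
//   }
// };
```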

  2. Using Client API 

The service worker has references to all connected clients, ordered by the most recently focused client, and can communicate with one or more of them by posting messages. In the other direction, the client can use the service worker controller to send messages to the service worker.
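A sketch of both directions (the message payloads are illustrative):

```javascript
const swScope = typeof self !== 'undefined' ? self : globalThis;

// Service worker side: post a message to every controlled client.
function notifyClients(message) {
  return swScope.clients
    .matchAll({ includeUncontrolled: true })
    .then((clientList) => {
      clientList.forEach((client) => client.postMessage(message));
    });
}

// Client page side: post a message to the controlling service worker.
function messageServiceWorker(message) {
  if (typeof navigator !== 'undefined' && navigator.serviceWorker &&
      navigator.serviceWorker.controller) {
    navigator.serviceWorker.controller.postMessage(message);
  }
}
```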

  3. Using Channel Messaging

Using channel messaging, we can listen to messages on a channel port sent by the service worker. A MessageChannel instance contains two ports, port1 and port2. The client listens for messages from the service worker, and the service worker sends messages to the listening port. In the code below, the client first sends a message so that the service worker can keep a reference to the client’s port on which it can send messages.
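A sketch of the client-side setup (the init message shape is an assumption):

```javascript
// The client creates a MessageChannel, keeps port1 for listening, and
// transfers port2 to the service worker so it can send messages back.
function connectToServiceWorker(serviceWorker) {
  const messageChannel = new MessageChannel();
  messageChannel.port1.onmessage = (event) => {
    console.log('Message from service worker:', event.data);
  };
  serviceWorker.postMessage({ type: 'init' }, [messageChannel.port2]);
  return messageChannel;
}

// Service worker side, for reference:
// self.addEventListener('message', (event) => {
//   if (event.data.type === 'init') {
//     const clientPort = event.ports[0]; // keep a reference to reply later
//     clientPort.postMessage('connected');
//   }
// });
```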

Uses and More 

Web workers are helpful when performing long-running or periodic tasks. There is no restriction on the number of workers created: a page can have multiple web workers, and a worker can itself instantiate workers.

On the other hand, service workers are meant for boosting the offline experience. So far, we have discussed mainly caching, but service workers can also do periodic sync and manage notifications. I hope this brief overview of JavaScript workers helps you understand them better. Till then, happy coding! 

Read more: 

  1. https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers 
  2. https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers 
  3. https://www.w3.org/TR/service-workers/ 

Identity, Authentication, And Access Management Using Azure Active Directory  


Identity and Access Management (IAM) enables you to manage access to services and resources securely. With IAM, you can create and manage AWS users and groups. It also gives you the right to allow or deny their access to AWS resources.

In today’s world, there are multiple IAM solutions that are available in the market. Even Microsoft offers its lightweight IAM tool over classic Active Directory that can be leveraged to authenticate users, provide identity, and control access. In this blog, I am going to compare the popular IAM solutions that are available. And, I will also highlight the pros and cons of using Active Directory B2C as an IAM solution for our application.


Microsoft IAM

Microsoft is known for its security; it has evolved a lot since its inception and is much ahead of the competition now. Microsoft’s good reputation has made it a popular choice among industry leaders. Despite its high popularity and years of market existence, Microsoft entered the SaaS-based IAM space a little late, whereas Google and AWS have an advantage over Microsoft because of their longer presence in the IAM market.

Google has integrated well with most of the modern single sign-on (SSO) and login solutions. Similarly, AWS Cognito is also quite easy to use and makes the integration process seamless. There are some new players in the market, like Okta, who have been quite successful with their simple integration processes and preconfigured adapters.

Now, Microsoft is trying to catch up with the competitors, but its system is a bit too complex for new users, and integration options are also limited. For those who cannot compromise on security, Microsoft should be their first choice. Though the setup for Azure Active Directory B2C is a bit tedious and time-consuming, it is highly secure. Microsoft cares about the data privacy and security of its users. Let’s have a look at the major features Microsoft offers in its Active Directory.


Important Features Of Microsoft’s Active Directory

Azure Active Directory B2C (Azure AD B2C) provides business-to-customer identity as a service. Your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs. It is a customer identity and access management (IAM) solution capable of supporting millions of users and billions of authentications per day. It takes care of the scaling and safety of the authentication platform and its monitoring, and automatically handles threats like denial-of-service, password spray, and brute-force attacks. Some of its key features are listed below.


    • Conditional Access (Role-based access control)
    • Identity protection
    • Reporting and monitoring
    • SAML Support

One other thing we expect from a good IAM solution is support for multiple authentication methods and multi-factor authentication. Now, let me list the main authentication methods supported by Microsoft AD B2C.


Authentication Methods

Microsoft supports almost all popular authentication methods in the market. If you are planning to integrate it with popular ERP systems, then Microsoft has built-in adapters and ready-to-use methods. Here are some of the popular methods Microsoft supports.

Microsoft also supports a bunch of other methods and covers almost everything you need for authentication. But it also has some limitations, which I have discussed in the next section.


Limitations in Microsoft

Although Microsoft supports almost all popular authentication methods, it has a few limitations and bugs. Some bugs make features almost unusable. The Microsoft SDK for JavaScript is still in development and not mature enough for production use. There are many other bugs that make life painful for the development team. Microsoft’s thought process is also a little different from the industry in general: their UI is difficult to use, and the naming conventions are a bit odd too.

Some of the key system limitations are:

    • It allows only 500 transactions per second per App Proxy application.
    • It allows only 750 transactions per second for the Azure AD organization.
    • Requires Microsoft environment.

For in-depth information, you can follow this source https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-service-limits-restrictions.

Microsoft offers almost all the modern features, but it takes a little time to understand and use them. Other than this, Microsoft has built-in integrations for ERP and SCM systems like SAP and Oracle. When you integrate with enterprise applications, Microsoft also provides support, unlike Google and AWS.

One of the most important factors while deciding on an IAM solution is the ease of integration. It is a deal-breaker for many, especially when connecting with niche software or applications that require special integration methods. Microsoft has decades of experience in integrations, and their systems are mature enough to support almost everything by now. Let’s have a look at some of the integrations Microsoft supports with Active Directory B2C.


Supported Integrations

For ready-to-configure apps, we can use built-in adapters and SSO mechanisms to connect. The process remains the same for each type of system: once we configure our system to integrate with, say, SAP or Oracle, we can easily add new SAP or Oracle instances as well.

For custom-built applications, we need to configure and build adapters to match their specifications. Custom-built applications might not have any connectors or mechanisms to connect with our system. We need to analyze it on a case-to-case basis as I have discussed below.


Comparison With Popular Systems

IAM tools have become the backbone of the technology industry. The IAM market is going through significant changes; as zero trust becomes an increasingly important part of access management products, it is important to choose the right IAM solution. There are many IAM tools out there, but we will consider the most popular ones and compare Microsoft Active Directory against them.


Active Directory: closed-source; backed by Microsoft Azure; easy integrations with LDAP; free tier has limited features; suited for enterprise applications and SSO with big ERP and SCM systems.

Firebase: open-source; backed by Google; easy to integrate and manage with all open standards; cost-effective in the free tier; suited for fast development and integrations.

Cognito: closed-source; backed by AWS; easy to integrate and manage with all open standards; free tier is very limited; suited for fast development and integrations.

Who should use Active Directory B2C?

If you are integrating with large SAP- or Oracle-like systems, Active Directory is for you. If you are looking for trusted security, you can also consider Microsoft-backed Active Directory, which is highly trusted.


Who should avoid Active Directory B2C?

If you are looking for fast-paced development with lots of customizations, then you should avoid Active Directory B2C. Most of your time will go into understanding the framework and dealing with issues in plugins that are still in beta.


Final Thoughts

Azure Active Directory B2C is a niche solution and not widely used. It has good capabilities, strong security, and the backing of Microsoft. It is definitely a good product, but it is not well suited for fast-paced development: it has some bugs in plugins, and integration is also not seamless. So, before you make a decision, analyze the pros and cons thoroughly, then decide based on your requirements. Till then, happy reading!

Things to Know Before You Select A Crypto Wallet


For quite a long time, we have been using physical wallets to carry our identity cards and money, whether as gold, silver, and other metal coins or as fiat paper currency. We also carry plastic money in our wallets. But now digital modes are gaining traction. Banks and other financial institutions have started offering digital wallets to ease peer-to-peer transactions, bill payments, and money transfers. A detailed look into this system reveals that digital wallets are changing human behavior when it comes to moving money and assets smartly. The wallets we use are basically a vendor-defined identity mechanism that enables us to maintain our cash and assets. This has eased and secured our access to valuable assets.

Crypto wallets do the same thing: they help us identify ourselves in the blockchain world and maintain our digital assets like coins, NFTs, etc. We have been developing blockchain networks and applications here at Talentica for the past few years and have gained significant market understanding through our research work and hands-on experience. You can learn more about the blockchain framework in our blog, Simple Blockchain Framework: An Introduction to Block & Transaction Structure – Talentica.

The blockchain ecosystem is inadequate without a crypto wallet. Let’s look at crypto wallets in detail.


What Is A Crypto Wallet?

A crypto wallet is a wallet that holds our identity and the information we use to connect with decentralized applications or assets. To interact with blockchain networks, DApps, or cryptocurrencies, we need a crypto wallet. Its software runs each time we interact with a blockchain application, whether receiving or sending a coin, updating or fetching an NFT asset, etc.

Types Of Crypto Wallets

There are two main categories of crypto wallets: the “cold wallet” and the “hot wallet.”

Cold Wallet 

A cold wallet is a wallet that has no connection to the internet. It is a hardware device that contains our identity and other network connection details. To interact with the wallet, we need to connect this device to the application. The details present inside the hardware device never leave it. If you want to make a transaction, you create the transaction and pass it to the wallet, and the wallet returns the signed transaction. This is the most secure form of crypto wallet implementation. But at the same time, it is costly, offers a poorer user experience, and needs a secure device with careful handling.

Hot Wallet

A hot wallet is a wallet connected to the internet. It is software that contains our identity and other network connection details; while interacting with the application, we run this piece of code. A software wallet can be built in many ways to support blockchain applications. The most common implementations of hot wallets are the “custodial wallet” and the “non-custodial wallet.” Hot wallets can be used in different ways, such as through a web browser, a desktop client, or a mobile client.

Hot wallets have their pros and cons over cold wallets. Since hot wallets are connected to the internet, they are more vulnerable to hacks than cold wallets. On the other side, hot wallets are easy to access and more user-friendly than cold wallets.

Now, we will dive deep into the hot wallet. As we saw earlier, hot wallets can be built in different ways, such as custodial and non-custodial wallets. So, let’s understand them in detail.


Custodial Wallet vs. Non-Custodial Wallet

Custodial Wallet 

A custodial wallet is a type of crypto wallet where the vendor keeps the private keys. Here, the third party has complete control over the private keys. They give users the right to transact on the application, but they transact on the user’s behalf.

In the custodial wallet, the private key is secured by the vendor, so this wallet comes with a single point of failure: if a malicious hacker gets access to the application database, they could access the information of every single user. This implementation is therefore highly prone to attack by malicious groups. The vendor maintains the mapping between the private key and the belongings of the end user, so each action can be linked back to the user. Users get login credentials in a standard format to access the application, and even if an end user loses those credentials, the vendor provides recovery functionality. The end user does not have to maintain high-security measures for their credentials. Thus, it offers a better user experience with less freedom over the data.

The custodial wallet is appropriate for crypto-exchange-like use cases: the vendor provides a better user experience and earns brokerage on each transaction, while at the same time retaining users, since it holds their wallet key pair.

Non-Custodial Wallet 

A non-custodial wallet is a type of wallet where the end user holds the private key. The third party has no control over the user’s identity; it cannot restrict the user’s actions and cannot transact on behalf of the user. Users are solely responsible for all security measures for the private key, such as storing it safely and not sharing it with anyone else.

Now, to make any malicious transaction, an attacker needs to target the user directly; there is no single point of failure. This makes it a more secure implementation than the custodial wallet. Users can transact anonymously, since they own the private-public key pair and their real-world identity is not linked to the public key. They are not dependent on the application to make transactions. This freedom comes with great responsibility: if the end user loses the private key, the vendor cannot help recover it, since the vendor has no control over private keys. Losing the private key means losing all of your assets.

The non-custodial wallet is the proper form for decentralized applications, and almost every application supports it. MetaMask, Bitski, WalletConnect, and Fortmatic are examples of live non-custodial wallets.

Some applications support non-custodial wallets or identity management but still want to know who the owner of the public key is. Most of the time, the reason behind this requirement is their use case. For example, if they work in the supply chain management domain, they would require the transactor’s real-world identity and the transactor’s role or designation in the organization.

Developers can use a digital certificate to bind user information to the public key. X.509 is a standard format for public key certificates; such a certificate binds the public key to the real-world user identity. A user can create any number of identities, but all of them are linked back to the user via certificates. These certificates build trust and ownership using a chain of digital certificates whose signers are publicly well-known trusted third parties and whose root certificate is a self-signed identity. This is beneficial for use cases where we need to build an audit system with DApps.

A typical pattern has emerged across blockchain-based enterprises. In general, enterprises adopt consortium-based private blockchain networks to improve security and hide data from the public domain. During the early stages of development, enterprises create a custodial wallet in their ecosystem and then gradually transition to non-custodial hot wallets. This process also helps them gain a better understanding of end-user preferences, which they can then use to develop the preferred wallet functionality over time.


Final Words

Each wallet type provides a different level of security and freedom to application development and the end-user experience. Selecting one specific wallet type for an application depends on the app use case. Since each wallet directly impacts the end-user, we need to consider their point of view while selecting the specific crypto wallet implementation.

In my next blog, I will talk about the custodial wallet’s implementation details. Till then, stay connected and stay safe!

Solve 3 Most Irritating Outlook Email Rendering Issues


Outlook is one of the most popular email clients for business needs, with a market share of 9.1%. But it has significant drawbacks: defects in Outlook are mostly related to its specific rules around email rendering.

Outlook email template’s impact on email rendering can be huge. While working, we often try to give our emails a distinct look to make them look more enticing. It is all the more important for email marketing professionals because that unique look could win them paying customers.

Designing and testing play a crucial role in getting the aesthetic and business parts right. However, I often notice specific display problems with Outlook email templates, which I am sure many of you might have observed too. For example, when you try to send out a newsletter, Outlook often picks up a particular email template and renders it with broken links; pictures go missing, and layouts get misaligned as well. The problem is prevalent in the 2007, 2010, and 2013 versions. The root cause is that these versions use Microsoft Word’s rendering engine for email, which limits the HTML and CSS they support.

But before we dig into those issues, let’s first discuss email templates a bit.


What Is An Email Template?

An email template is a preformatted HTML email that you can use to create your own layout by replacing the placeholder text with your own. Templates can be text-only or HTML plus text; in the latter case, it is the user’s email client that decides which version to display. A template lets you add photos and links in conjunction with Cascading Style Sheets (CSS) and design the email to match your corporate or personal taste.

There are many email clients in the market today, some with track records spanning decades. But Apple Mail, Gmail, Outlook, and Outlook.com are among the most popular ones.


3 Most Common Outlook Email Rendering Issues

Using a plain-looking email with few frills will not generally create rendering issues, as the CSS properties used in simple email templates are compatible with most email clients. However, exceptions exist.

So, in this blog, I have created a list of some of the most common Outlook email rendering issues we face on a daily basis.

    1. Email clients do not support background pictures.
    2. Email template’s background gradient does not function.
    3. Users frequently encounter spacing, padding, and margins-related issues that behave differently in different email clients.

Let’s check out the solutions to resolve these persisting problems in Outlook email templates.

  • Email clients do not support background pictures

Background images are not rendered consistently, which creates issues in desktop email clients. Only Apple Mail 10, Outlook for Mac, Gmail.com, Outlook.com, and Thunderbird have complete support at present; the rest offer only limited or no compatibility. Outlook 2007-2013 does not support background images at all.

Using the background attribute on a table cell, you can direct the email client to render a background image from the specified URL. For the background picture to fill the whole email window in Outlook, add a VML fallback like the following inside your HTML’s <body> element.

<td valign="middle" align="center" bgcolor="#ffffff"
 background="https://images.unsplash.com/photo-1588196749597-9ff075ee6b5b?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=1567&q=80"
 style="background-image:url(https://images.unsplash.com/photo-1588196749597-9ff075ee6b5b?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=1567&q=80); background-repeat: no-repeat; background-size: cover;">
  <!--[if gte mso 9]>
  <v:image xmlns:v="urn:schemas-microsoft-com:vml" fill="true" stroke="false"
   style="border: 0; display: inline-block; width: 480pt; height: 300pt;"
   src="https://images.unsplash.com/photo-1588196749597-9ff075ee6b5b?ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=1567&q=80" />
  <v:rect xmlns:v="urn:schemas-microsoft-com:vml" fill="true" stroke="false"
   style="border: 0; display: inline-block; position: absolute; width: 480pt; height: 300pt;">
   <v:fill opacity="0%" color="#ffffff" />
   <v:textbox inset="0,0,0,0">
  <![endif]-->
  <!-- your email content goes here -->
  <!--[if gte mso 9]>
   </v:textbox>
  </v:rect>
  <![endif]-->
</td>


Codepen link https://codepen.io/palak-tal/pen/eYWggYa


Email template

The sole disadvantage of the background attribute is that you cannot control the image’s size or position. Besides, the picture will be tiled if the containing <td> is larger than the image. You can avoid this by adding the background-color property, or by inlining the background-repeat and background-position CSS values.

Email code usually relies on properties that have been around for a long time: the older the HTML or CSS rule, the more likely it is to function properly across clients. This is the major driving force behind using the background attribute on a table.

  • The email template’s background gradient does not function

To build a gradient background that degrades gracefully, start in HTML with a solid fallback color on the background-color property.

Within the inline CSS styles, repeat that fallback with the background property, then add the background property again with a linear-gradient value to generate the actual gradient. Finally, create an Outlook fallback: since Outlook ignores CSS gradients, you need to draw the gradient with Vector Markup Language (VML).

The most important item to note here is the interaction between the bgcolor attribute and the inline background-color value. For email clients that do not support gradients, the placement order is irrelevant; however, if the bgcolor attribute occurs after the inline style, the gradient background will be overridden by the solid color given to bgcolor. Typically, the background gradient does not work in Outlook and other outdated clients, but it does work in Gmail and most modern clients.

The most frequent problem users experience is not receiving the gradient color but a single color when applying it to the gradient portion.

<td width="600" bgcolor="#18b7ea" style="background:linear-gradient(45deg,
 #8e36e0 0%, #164b92 100%); margin:0; width: 600px; max-width: 600px;
 padding: 0;">
  <!--[if gte mso 9]>
  <v:rect xmlns:v="urn:schemas-microsoft-com:vml" fill="true" stroke="false"
   style="mso-width-percent:1000;" fillcolor="#8e36e0">
   <v:fill type="gradient" color2="#164b92" angle="45" />
   <v:textbox style="mso-fit-shape-to-text:true" inset="0,0,0,0">
  <![endif]-->
  <!-- your email content goes here -->
  <!--[if gte mso 9]>
   </v:textbox>
  </v:rect>
  <![endif]-->
</td>





For email clients that do not support gradients, supply a solid fallback color. For Windows 10 Mail and Office 365 desktop clients, use the background-color property with a 6-digit HEX color. Web.de, as its fallback, will use the HTML bgcolor attribute, e.g. bgcolor="#e37b46".


<td width="600" bgcolor="#18b7ea" style="background:linear-gradient(45deg, #8e36e0 0%, #164b92 100%); margin:0; width: 600px; max-width: 600px; padding: 0;">

email template bgcolor

Code pen link https://codepen.io/palak-tal/pen/eYWggYa

This approach renders the gradient for desktop, mobile, and webmail clients. CSS gradients are not supported everywhere, but with these fallbacks a gradient can be included safely in all email clients.

Reference: https://www.campaignmonitor.com/css/color-background/css-gradients/

CSS gradient support

Snippet of the email template

Code pen link https://codepen.io/palak-tal/pen/eYWggYa


Email rendering issue

  • Users frequently encounter spacing, padding, and margins-related issues that behave differently in different email clients

Margins do not work in Outlook: no version of it honors email margins (except Outlook.com). The same can be said for Gmail.

The alternative is to use padding to create the necessary space around content blocks; all versions of Outlook fully support padding. However, always check rendering on mobile devices, as heavy padding might make the content appear excessively narrow. In short, use padding for gaps instead of margins, because margins do not function reliably.
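As a minimal sketch (the pixel values here are arbitrary), spacing that would normally be done with margin can be moved onto the table cell’s padding:

```html
<!-- Spacing via padding: supported by all Outlook versions.
     A margin on an outer div would be ignored by Outlook and Gmail. -->
<table role="presentation" width="100%" cellpadding="0" cellspacing="0" border="0">
  <tr>
    <!-- 20px of space on every side, instead of margin: 20px -->
    <td style="padding: 20px;">
      <p style="margin: 0;">Your content block goes here.</p>
    </td>
  </tr>
</table>
```

Note that the inner paragraph explicitly zeroes its own margin, so all spacing is controlled by the cell’s padding and renders consistently.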


The above examples are tested through https://sendgrid.com/.


Final Thoughts

In 1996, Bill Gates wrote an essay titled “Content is King”. In it, he suggested that content would play a crucial role in generating revenue for companies. Since then, two and a half decades have passed. The Internet has evolved and users are getting bombarded by diverse types of content every minute.

To stand out from this barrage of content, you need to break through the clutter, and well-built Outlook email templates can help. With proper design, they accentuate the impact of your content, help you build responsive campaigns, and support high conversion rates.

It’s high time you give it a serious thought.

Do share your experiences with us on working with email templates and after executing the solutions. Till then, happy coding!

Intuit Wasabi – A Scalable A/B Testing Solution



What is A/B Testing? 

A/B testing, or split testing, is a way to statistically compare two versions of an app or a webpage to identify the better-performing one. It is a type of experimental testing where different page variants are shown to users at random. From the results, you can determine which variant attracts more users or produces more profitable metrics.

A/B testing


Advantages of A/B testing

  1. Strong connect between the company and the customer: A product analysis with A/B testing leads to better customer engagement. It sets the premise for a strong bond between the company and the customer.
  2. Better conversion rate: A campaign based on results attracts a wider customer base, improves conversion rate, and acquires more paying customers.
  3. Flawless product: First, a pilot test and then identifying profitable scenarios could lead to a robust and flawless product.
  4. Simple metrics analysis: Since A/B testing is preferred on atomic or very small-scale components, it’s easy to analyze the data.
  5. Leads to a surge in sales: The measures taken based on the metrics derived from A/B testing eventually increase the product’s sales, leading to more profit.


Problem Statement

While working with a product-based organization, I was always looking for ways to attract more customers. Its product and marketing teams were experimenting with minimal textual changes on atomic components like page headings and buttons.

Initially, they ran those experiments using Google Optimize. However, the results troubled me in the ways mentioned below:

  1. Performance issues: Per-variant changes were applied only after the page had loaded, so the site rendered with flickers. This hurt page performance and could cost us potential customers.
  2. No structural experiments: Only text-variant experiments were feasible; the team was not ready to implement findings from any experiment involving structural changes.

For example, check the different button text variants in the screenshot given below.

A/B Testing result



We had to find a solution to save ourselves from all the above problems. We wanted to conduct experiments involving changes at both the atomic and molecular levels.

Luckily, we found a tool named Intuit Wasabi that solved all our needs. It was scalable and modular enough to carry out experiments on a much larger scale.

Intuit Wasabi

Intuit Wasabi running

Since it is an API-driven tool, one can easily integrate the APIs without impacting page performance.

  1. Page performance was the topmost priority of our application, so we were already building applications in the SSR (Server-Side Rendering) way. We added the logic of executing an experiment at the back-end and directly showed the corresponding variant on the front-end, which saved us from the flicker.
  2. We started with a POC and carried out atomic experiments only like checking the conversion rate of the customers with just button text change. Eventually, we succeeded with those experiments and decided to do the experiments to a greater extent, doing structural changes and finding out the user ratio per variant.
  3. Multiple experiments with multiple variants can be executed at a given point in time. You can then find out which variant of each experiment gives better results.
  4. It is free, open-source, and easily customizable.
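Since the original screenshots are not reproduced here, the sketch below shows what requesting a variant assignment from a Wasabi server might look like. The host, application name, experiment label, and user id are all hypothetical; the path follows Wasabi’s v1 assignment API, but verify it against your deployed version.

```python
# Sketch: building a variant-assignment request for a Wasabi server.
# Host, application name, experiment label, and user id are hypothetical.
def assignment_url(base, app_name, experiment_label, user_id):
    """Build the Wasabi v1 assignment endpoint for a given user."""
    return (f"{base}/api/v1/assignments/applications/{app_name}"
            f"/experiments/{experiment_label}/users/{user_id}")

url = assignment_url("http://wasabi.example.com:8080",
                     "MyStore", "BuyButtonText", "user-42")
# An HTTP GET on this URL returns JSON describing the assigned variant,
# which the back-end can use to render the right variant server-side (SSR),
# avoiding the front-end flicker described earlier.
print(url)
```

Because the assignment happens via a single back-end call before rendering, the page ships with the correct variant already in place.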



With the help of Intuit Wasabi, we can execute various experiments with lots of customizable options. It helps the company identify the customer conversion rate and build a strong product for customers, and finally, earn better revenue. Do try this tool and share your experiences with us. Happy coding!

How To Pick The Right Data Analytics Strategy For Serverless Systems?


A 2018 survey by The New Stack suggests that around 46% of IT decision-makers are either using or evaluating options for going serverless. Organizations of all sizes, be it cloud-native startups or large enterprises, are exploring opportunities in this field. Dig a little, and it becomes clear that companies pursue this strategic move to avoid tech hassles, reduce cost, and focus more on bringing their ideas to market.

Most of these companies are banking on AWS serverless applications. As a user, you may have opted for Lambda functions (the most preferred option) to host the business logic behind APIs, and AWS Aurora Serverless to store and manage data for the web application. You can use this stored data for both reporting and analytics, and apply BI to develop new business strategies based on the insights and patterns observed in the data.

Serverless DB systems


Analyzing A Use Case

Let’s consider a coaching platform designed to improve participants’ current knowledge or skill level. Such programs include various sessions on skills and sub-skills involving learners and coaches. After completion, you can run a survey to find out how much participants have improved, and with analytics you can assess their strengths and weaknesses too. You can go even further and gather feedback on coaches and learning content, because improvement is not one-dimensional. Then, with the collected data, you can plan your strategy.

If you use analytics properly, getting real-time insights regarding engagement would not be a difficult task. Suppose the engagement or the feedback is negative, you can immediately launch corrective measures. The entire process has the potential to improve the program’s efficacy.


Things to do before introducing Analytics

Now, if the deployment of Analytics and a serverless DB is your priority, you need to consider a few factors for AWS Aurora Serverless:

    1. Analytics introduces a higher amount of reads and might involve a lot of computation.
    2. It requires the proper execution of aggregation operations. Frequent or heavy use of analytics can keep your DB busy with heavy reads, causing a bottleneck for the transactional workload.
    3. The serverless DB scales up based on the percentage of utilization or when the maximum number of connections is reached. But the scale-out operation can take up to 2.5 minutes, during which the application may feel noticeably slow to users.


Things to do before introducing serverless DB

Points to be considered for serverless DBs:

    1. One of the major blockers of the AWS Aurora Serverless DB is that it cannot create Read replicas.
    2. AWS Aurora Serverless does not guarantee durability.
    3. The DB instance for an Aurora Serverless DB cluster is created in a single Availability Zone; automatic Multi-AZ failover takes longer than it does for a provisioned cluster.
    4. There are constraints on DB connection pooling if the Data API is not used: AWS Aurora Serverless does not support RDS Proxy for connection pooling.


Building an efficient data pipeline

Although AWS Aurora Serverless handles scalability, high availability, and DB maintenance at the AWS end, you must be aware of its constraints before building a resilient system.

If analytics is required and AWS Aurora Serverless is your primary DB, you can use Elasticsearch, Redis, DynamoDB, or Redshift as the source for analytics data. Build data pipelines that incrementally push the raw or computed data into this secondary storage.

Another option for data pipelines is to have messaging queues. These queues will listen to events and update the secondary storage post computations accordingly.

You can also improve speed with a design based on a denormalized DB, domain aggregation, or a star schema, which can provide near real-time aggregated data for analytics. In this scenario, data is aggregated and stored in a denormalized table either periodically or after listening to events, and this information can be used directly for analytics.
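As an illustrative sketch (not tied to any specific AWS service), the event-driven aggregation described above might look like this: each incoming event folds into a running, denormalized aggregate that analytics queries can read directly, instead of scanning the transactional DB.

```python
# Sketch: incremental aggregation into a denormalized store.
# In production the store could be DynamoDB, Redis, or Elasticsearch;
# here a plain dict stands in for it.
def apply_event(aggregates, event):
    """Fold one survey event into per-skill running aggregates."""
    skill = event["skill"]
    agg = aggregates.setdefault(skill, {"count": 0, "total": 0})
    agg["count"] += 1
    agg["total"] += event["score"]
    agg["average"] = agg["total"] / agg["count"]  # kept ready for analytics reads
    return aggregates

store = {}
for ev in [{"skill": "python", "score": 4},
           {"skill": "python", "score": 2},
           {"skill": "sql", "score": 5}]:
    apply_event(store, ev)
```

A queue consumer (or Lambda trigger) would call apply_event for each message, so the analytics side never issues heavy reads against the primary DB.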


Strategy Comparison

Analytics can introduce heavy reads or higher levels of computations. So, you have to strategize wisely. Please find the comparison between various strategies as mentioned below.

Strategy 1: Aurora Serverless with denormalized tables (using a higher-configuration machine)

  • Speed: Fast
  • Cost: Pay as you use; Aurora Capacity Unit at $0.06 per ACU-hour, i.e. (0.06 * 24 * 31) = $44.64 per month at max
  • Durability: No
  • Scaling: Autoscaling
  • High availability and maintenance: Provided by AWS
  • Analytics usage pattern: Low
  • Latency: Low (need to benchmark)
  • API integration: Yes, the Data API can be used
  • Pros: Scaling and maintenance are taken care of by AWS
  • Cons: Slowness may be observed during scale-out; connection-pooling issues; each scaling step doubles the instance size

Strategy 2: Provisioned DB with read replicas and denormalized tables

  • Speed: Faster
  • Cost: db.t3.medium at $0.065/hour = $48.36 per month; storage at $0.10 per GB-month; I/O at $0.20 per 1 million requests
  • Durability: Yes, as read replicas can be created
  • Scaling: Needs to be configured
  • High availability and maintenance: Need to be managed
  • Analytics usage pattern: High
  • Latency: Low (need to benchmark)
  • API integration: No
  • Pros: No connection-pooling issue; read replication available
  • Cons: Higher cost; scaling and failure handling need to be managed

Strategy 3: Elasticsearch alongside Aurora Serverless MySQL (needs data pipelines/queues to keep the data in sync)

  • Speed: Fastest
  • Cost: t3.medium.elasticsearch at $0.073/hour, i.e. (0.073 * 24 * 31) = $54.31 per month
  • Durability: Yes, if replication is done
  • Scaling: Needs to be configured
  • High availability and maintenance: Need to be managed
  • Analytics usage pattern: High
  • Latency: Lowest
  • API integration: Yes
  • Pros: Granular scaling available
  • Cons: Higher cost

Strategy 4: Aurora Serverless v2 with denormalized tables

  • Speed: Fast (need to benchmark)
  • Cost: Pay as you use; Aurora Capacity Unit at $0.12 per ACU-hour, i.e. (0.12 * 24 * 30) ≈ $86.40 per month at max
  • Durability: Yes
  • Scaling: Autoscaling
  • High availability and maintenance: Provided by AWS
  • Analytics usage pattern: Medium to high
  • Latency: Low (need to benchmark)
  • API integration: No
  • Pros: Scaling up and down is faster than Aurora Serverless v1
  • Cons: New to market; does not support RDS Proxy to solve the connection-pooling issue

After going through the strategy comparison chart, you will be able to plan out a proper strategy. Try this process and share your experience with us. Stay safe and happy coding!






Change Notifications and Named Options using Options pattern in .NET Core


In my previous blog, I explained the process of creating strongly typed access to a group of related settings in .NET Core using the Options pattern. I also talked about the IOptions, IOptionsSnapshot, and IOptionsMonitor interfaces in .NET Core. In this blog, I move a step ahead by diving deep into the additional features offered by the Options pattern in .NET Core.

Here, I am going to build the same example as discussed in my previous blog on IOptions. Before I start the process, kindly go through the blog to understand the basics of the Options pattern in .NET Core.

Quick Recap Of The Application: We have a .NET 5 Web API project with an API to generate a report and send it to some recipients with a customizable subject. So, we have a ReportController with a post API for this. We have a couple of services, ReportService and EmailService, to generate the report and send it as an email, respectively. The EmailService is registered as a Singleton dependency, and it reads the configuration parameters for Subject and Recipients via IOptionsMonitor.

We will go back to having EmailOptions as the private field in EmailService rather than IOptionsMonitor<EmailOptions>. Since we are reading CurrentValue in the constructor and EmailService is a singleton service, this will give rise to the problem where the service won’t read the latest value from the configuration while the application is still running. Now, let’s fix that using the OnChange listener provided by IOptionsMonitor. In the constructor, in addition to setting the EmailOptions, we will also register an OnChange listener for IOptionsMonitor. In that listener, we will reset our field to the latest value provided by IOptionsMonitor.

EmailOptions in EmailService

Register OnChange in EmailService
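Since the screenshots are not reproduced here, a sketch of the resulting EmailService might look like this (class and member names follow the blog’s example):

```csharp
using System;
using Microsoft.Extensions.Options;

public class EmailService : IEmailService
{
    // Cached options; refreshed by the OnChange listener below.
    private EmailOptions emailOptionsVal;

    public EmailService(IOptionsMonitor<EmailOptions> emailOptions)
    {
        emailOptionsVal = emailOptions.CurrentValue;

        // Re-read the options whenever the underlying configuration changes,
        // so this singleton never serves stale values.
        emailOptions.OnChange(updated => emailOptionsVal = updated);
    }

    public void Send(string report)
    {
        Console.WriteLine($"Sending report titled {emailOptionsVal.Subject} " +
                          $"to {emailOptionsVal.Recepient}");
    }
}
```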

Now, let us start the application and see what happens when we change configuration values. As soon as I change the configuration values and save the file, the breakpoint which I had set on the OnChange listener got hit.

Email Options modified in appsettings.json

OnChange in EmailService

Options pattern in .NET Core

Application run with Email Options modified

This is precisely what we wanted. Now, whenever EmailService’s Send method is called, we read the latest values from the configuration; whenever the EmailOptions configuration changes, we capture the newest value in the OnChange listener. Note that the OnChange listener is only available on IOptionsMonitor and not on IOptionsSnapshot; the reason is that IOptionsSnapshot never had this problem in the first place. With IOptionsSnapshot, we were already able to read the latest configuration values in our service.


Exception Scenario

Now, let us consider another scenario. If some exception occurs in our code, we should handle it gracefully and email the administrators to let them know about it. And we want to use the same EmailService to send the admin email. Let us first write some code to handle the exceptions. We will add the UseExceptionHandler middleware in the Configure method of the Startup class and capture the exception thrown from subsequent middleware in the pipeline. We can also inject IEmailService into the Configure method, as it has already been registered as a dependency in ConfigureServices. It is in this code that we will retrieve the exception details and email the administrators.

Please Note: this is just an example, and it is not the standard way notifications are handled for an application.

UseExceptionHandler in Configure
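A minimal sketch of what that handler might look like (the exact wiring in the original screenshot may differ; IEmailService is injected into Configure as described above):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Http;

public void Configure(IApplicationBuilder app, IWebHostEnvironment env,
                      IEmailService emailService)
{
    app.UseExceptionHandler(errorApp =>
    {
        errorApp.Run(async context =>
        {
            // Retrieve the exception captured from downstream middleware.
            var feature = context.Features.Get<IExceptionHandlerPathFeature>();
            if (feature?.Error is not null)
            {
                // Notify the administrators using the same EmailService.
                emailService.SendAdmin(feature.Error.Message);
            }

            context.Response.StatusCode = StatusCodes.Status500InternalServerError;
            await context.Response.WriteAsync("An unexpected error occurred.");
        });
    });

    // ... rest of the middleware pipeline (routing, endpoints, swagger, etc.)
}
```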



Let us now implement the SendAdmin method that we used in the exception handler in our EmailService. Also, let us add another section called AdminEmail to our appsettings.json file. In the SendAdmin method, we want to read from the AdminEmail section rather than the Email section. Let us see how we can implement this with our existing setup.

We will create another class for AdminEmailOptions similar to EmailOptions class and then inject it the same way as EmailOptions into the EmailService.

appsettings.json with AdminEmail section

ConfigureServices in Startup.cs



Now, we are done!

At this point, I am not very excited about this implementation, as there is excessive code duplication. Surely there is a better way, but first let us check that it works. To do this, we need our application to throw some errors so that the exception handler comes into action. Let us throw an exception from ReportService and run the application.

Throwing an exception from ReportService.cs

Options pattern in .NET Core

Application running with exception thrown

As we can see, the AdminEmailOptions has correctly read the AdminEmail section of the configuration. As I mentioned earlier, there is too much code duplication with this approach. Essentially we are using the same structure of Email Options with different values at different places. We should be able to use the same class for both, which we are going to do next. So, first of all, let us get rid of the additional AdminEmailOptions class and add the section name for AdminEmail in the existing EmailOptions class.


Since we no longer have an AdminEmailOptions class, we will remove its configuration in the ConfigureServices method and configure EmailOptions twice: once for the Email section and once for the AdminEmail section. The difference from the earlier version is that now we pass the section name as the first parameter, and this is the crux of the solution. What we have implemented here is called a Named Option. We have registered two instances of EmailOptions with different names, and whenever we want to fetch the email options from configuration, we have to specify the name of the configuration.

Register Named Options in Startup for EmailOptions
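In code, that registration might look like the following sketch (the section-name constants are assumed to live on EmailOptions, as described earlier):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Two named instances of the same options class, one per config section.
    services.Configure<EmailOptions>(EmailOptions.Email,
        Configuration.GetSection(EmailOptions.Email));
    services.Configure<EmailOptions>(EmailOptions.AdminEmail,
        Configuration.GetSection(EmailOptions.AdminEmail));

    services.AddSingleton<IEmailService, EmailService>();
    services.AddScoped<IReportService, ReportService>();
}
```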

Now, let us fix our EmailService. We don’t need AdminEmailOptions anymore; instead, we will change the type of the adminEmailOptionsVal field to EmailOptions. In the constructor, we need just one instance of IOptionsMonitor, and while setting the two fields, we call its Get method and pass the section’s name.
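A sketch of that constructor, using IOptionsMonitor’s Get method with the registered names:

```csharp
using Microsoft.Extensions.Options;

public class EmailService : IEmailService
{
    private EmailOptions emailOptionsVal;
    private EmailOptions adminEmailOptionsVal;

    public EmailService(IOptionsMonitor<EmailOptions> emailOptions)
    {
        // Fetch each named instance by the name it was registered under.
        emailOptionsVal = emailOptions.Get(EmailOptions.Email);
        adminEmailOptionsVal = emailOptions.Get(EmailOptions.AdminEmail);
    }
}
```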


That’s it. If we run the application, we can see that both the Send and SendAdmin methods read the email parameters from their respective sections. Instead of throwing from ReportService, we will throw an exception from EmailService’s Send method, just after it has written to the console, so that both Send and SendAdmin are exercised in the same run.

Exception thrown from Send method of EmailService

And if we run the application now, we can see that both the Send and SendAdmin methods have picked values from their respective configuration section.

Options pattern in .NET Core


So with that, we conclude the demonstration of the Options pattern in .NET Core. Options pattern is a handy feature provided in .NET Core applications, and some of the features that we have covered are:

  • Strongly-typed configurations
  • Reading configuration changes after the application starts
  • Named Options

These features are provided via the IOptions, IOptionsSnapshot, and IOptionsMonitor interfaces, and we should pick the implementation that matches the needs of our application.


I hope you enjoyed reading this. Do try this method and give your reviews. Till then happy coding!

Note: The source code for this application can be found at mmoiyadi/IOptions.NetCore.

Create Strongly Typed Configurations in .NET Core


In this article, I’ll explain how to create strongly typed access to a group of related settings in the .NET Core using the Options pattern.

The Options pattern supports the interface segregation principle (the ‘I’ in SOLID) and separation of concerns. It is a pattern in which a set of related configuration parameters is grouped into a separate class, providing type safety. Instances of these classes are then injected into different parts of the application via an interface, so we don’t need to inject the entire configuration, only the configuration a specific part of the application requires.

We can achieve this via the IOptions, IOptionsSnapshot, and IOptionsMonitor interfaces in .NET Core. Let us create an application demonstrating each of them to understand this better. I have created a .NET Core Web API project from the template, with a controller named ReportController for a resource called report.


Creating a new project

.Net Typed Configuration

Create ASP.NET Core Web API project

The responsibility of this resource is to generate reports based on user input parameters and then send the generated report to a set of users whose email addresses are configured in our appsettings.json file. So we have one service to generate the report, ReportService, and another to email it, EmailService. The EmailService reads the email parameters from the configuration. Both ReportService and EmailService are injected as Scoped dependencies. Of course, the scopes of these dependencies can vary based on application needs; we will go ahead with this and see what happens when the scope of a dependency changes later in the article.

Check out how our ReportController looks like:

public class ReportController : ControllerBase
{
    private readonly IReportService reportService;
    private readonly IEmailService emailService;

    public ReportController(IReportService reportService,
                            IEmailService emailService)
    {
        this.reportService = reportService;
        this.emailService = emailService;
    }

    [HttpPost]
    public IActionResult GenerateAndSendReport(ReportInputModel reportInputModel)
    {
        var report = reportService.GenerateReport(reportInputModel);
        if (report is null)
            return NotFound();
        emailService.Send(report);
        return Ok();
    }
}

There is just one method that takes some input parameters and generates a report by calling the GenerateReport method of ReportService. The generated report is then sent using EmailService’s Send method. Let us have a look at services and how they are injected.

public interface IReportService
{
    string GenerateReport(ReportInputModel reportInputModel);
}

public interface IEmailService
{
    void Send(string report);
}

public class ReportService : IReportService
{
    public string GenerateReport(ReportInputModel reportInputModel)
    {
        return $"Report generated for report id: {reportInputModel}";
    }
}

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    services.AddScoped<IEmailService, EmailService>();
    services.AddScoped<IReportService, ReportService>();

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "Options.NetCore", Version = "v1" });
    });
}

Configure EmailService and ReportService as Scoped dependencies in Startup.cs

Let us now add the configuration parameters in the appsettings.json file

"Email": {
    "Subject": "Options Report",
    "Recepient": "reportuser@someorg.com"


There are two parameters: the subject of the report and the recipient’s email address for the report to be sent to.

Let us first look at how we would implement this without using the Options pattern. In this case, the EmailService depends on IConfiguration to read the report and email-specific parameters.

public class EmailService : IEmailService
{
    private readonly IConfiguration configuration;

    public EmailService(IConfiguration configuration)
    {
        this.configuration = configuration;
    }

    public void Send(string report)
    {
        Console.WriteLine($"Sending report titled {configuration["Email:Subject"]} " +
                          $"to {configuration["Email:Recepient"]}");
    }
}

EmailService using IConfiguration

Let us now run the application to see it in action. We are calling the post endpoint using the Swagger UI, which is provided with the default template for Web API projects in .NET 5.

.Net Typed Configuration

Application run using IConfiguration option

For simplicity, we are just writing the report send to the console. We can see that the configuration parameters are read correctly from appsettings.json. Now, with the application still running, let us change the Subject of the email in appsettings.json to a different value and see what result we get.

"Email": {
    "Subject": "Options Report - Modified",
    "Recepient": "reportuser@someorg.com"

Subject modified in appsettings.json

.Net Typed Configuration

Result after modifying configuration with App still running

Now, our application can read the modified configuration parameters using the IConfiguration approach. All looks good so far.

This approach works well when we have only a couple of configuration parameters (in this case, Subject and Recipient); reading them individually from the configuration object isn’t a problem. But when the number of parameters grows, things get trickier. For example, say we want to add ‘cc’ and ‘bcc’ fields to our email parameters. Each of them would have to be read separately and validated, which does not sit well with the single responsibility principle. Wouldn’t it be great to encapsulate all these related parameters in a single class called EmailOptions and use that class in our EmailService? This is where IOptions comes into the picture.

So, let us create an Options folder and create a class to store these two email parameters. Also, there is a string constant to identify the specific section of the configuration uniquely.

public class EmailOptions
{
    public const string Email = "Email";

    public string Subject { get; set; }

    public string Recepient { get; set; }
}

In order to use this class in our EmailService, we first need to configure it in ConfigureServices method in our Startup class.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<EmailOptions>(Configuration.GetSection(EmailOptions.Email));

    services.AddScoped<IEmailService, EmailService>();
    services.AddScoped<IReportService, ReportService>();

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "Options.NetCore", Version = "v1" });
    });
}

Configure EmailOptions in Startup.cs

After configuring it, let us use it in our EmailService class. We will first add IOptions<EmailOptions> in the constructor to get access to the EmailOptions instance. Now, we can store it as a field and get its value from the Value property of IOptions, and that is it. We are ready to use EmailOptions in our service. Let us change the Send method to use EmailOptions rather than Configuration to read the email parameters. Also, remove the dependency of Configuration from our service. Our EmailService now looks like this

public class EmailService : IEmailService
{
    private readonly EmailOptions emailOptionsVal;

    public EmailService(IOptions<EmailOptions> emailOptions)
    {
        emailOptionsVal = emailOptions.Value;
    }

    public void Send(string report)
    {
        Console.WriteLine($"Sending report titled {emailOptionsVal.Subject} " +
                          $"to {emailOptionsVal.Recepient}");
    }
}

Let us now see it in action. Go back to the original value of configuration params and run the application.

.Net Typed Configuration

Run Application using IOptions

As you can see, we are getting the same result, but our service is not dependent on the entire configuration, only a part of it, though. All related parameters are grouped into a single entity.

However, there is one issue with this approach: what if the application is still running and I change one of the configuration parameters?

"Email": {
    "Subject": "Options Report - Modified",
    "Recepient": "reportuser@someorg.com"
}
appsettings.json modified


Result after modifying appsettings.json with App still running

As you can see, we are still using the old values for Subject and Recipient. That was not the case with our previous approach, where we read from Configuration directly. So how do we fix this?

Instead of using IOptions in our service, let us use IOptionsSnapshot and see what happens.

public class EmailService : IEmailService
{
    private readonly EmailOptions emailOptionsVal;

    public EmailService(IOptionsSnapshot<EmailOptions> emailOptions)
    {
        emailOptionsVal = emailOptions.Value;
    }

    public void Send(string report)
    {
        Console.WriteLine($"Sending report titled {emailOptionsVal.Subject} " +
                          $"to {emailOptionsVal.Recepient}");
    }
}

EmailService.cs using IOptionsSnapshot


Run Application using IOptionsSnapshot

The first line of the output shows the original parameters; the following line shows the modified parameters, with the application still running.

So we achieved the result that we wanted. IOptionsSnapshot provides exactly what its name says: a snapshot of the configuration, taken once per request.

Ok, all seems good so far. Well, not quite! I think our EmailService should be registered as a singleton rather than a scoped dependency: an email service typically does not change between requests, so it makes sense to reuse a single instance. Let us make that change.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<EmailOptions>(Configuration.GetSection(EmailOptions.Email));

    services.AddSingleton<IEmailService, EmailService>();
    services.AddScoped<IReportService, ReportService>();

    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "Options.NetCore", Version = "v1" });
    });
}


Now, let us run the application.


Error when using Singleton scope for EmailService

So what happened here? Well, the inner exception says:

"Some services are not able to be constructed (Error while validating the service descriptor 'ServiceType: Options.NetCore.Services.Interfaces.IEmailService Lifetime: Singleton ImplementationType: Options.NetCore.Services.Implementations.EmailService': Cannot consume scoped service 'Microsoft.Extensions.Options.IOptionsSnapshot`1[Options.NetCore.Options.EmailOptions]' from singleton 'Options.NetCore.Services.Interfaces.IEmailService'.)"

…and here is the problem. As the error message states, IOptionsSnapshot is registered as a scoped dependency and hence cannot be consumed by services registered as singletons, which our EmailService now is. So how do we fix that? IOptionsMonitor is the answer. Let us change from IOptionsSnapshot to IOptionsMonitor in our service, and read from the CurrentValue property instead of the Value property.

public class EmailService : IEmailService
{
    private readonly EmailOptions emailOptionsVal;

    public EmailService(IOptionsMonitor<EmailOptions> emailOptions)
    {
        emailOptionsVal = emailOptions.CurrentValue;
    }

    public void Send(string report)
    {
        Console.WriteLine($"Sending report titled {emailOptionsVal.Subject} " +
                          $"to {emailOptionsVal.Recepient}");
    }
}

EmailService.cs using IOptionsMonitor

Ok, we are good to go. Let us run the application.


Run Application with IOptionsMonitor

And with that, we seem to have resolved the issue. Note, however, that if we now change the config parameters for the Email section while the app is still running, we will still read the values captured when the service was constructed. The reason is that our EmailService is a singleton: the same instance, holding the values it read in its constructor, is reused for all subsequent requests. To solve this, let us change EmailService to hold IOptionsMonitor<EmailOptions> as its field instead of EmailOptions, and read the configuration from its CurrentValue property wherever the values are needed. Our EmailService now looks like below:

public class EmailService : IEmailService
{
    // private readonly EmailOptions emailOptionsVal;
    private readonly IOptionsMonitor<EmailOptions> emailOptionsVal;

    public EmailService(IOptionsMonitor<EmailOptions> emailOptions)
    {
        emailOptionsVal = emailOptions;
    }

    public void Send(string report)
    {
        Console.WriteLine($"Sending report titled {emailOptionsVal.CurrentValue.Subject} " +
                          $"to {emailOptionsVal.CurrentValue.Recepient}");
    }
}

EmailService with IOptionsMonitor as field

If we run the application and change the configuration while it is still running, we see the modified values picked up by the application.


Run Application with IOptionsMonitor as field
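As a brief aside, beyond polling CurrentValue, IOptionsMonitor can also push change notifications through its OnChange callback. Here is a minimal sketch of how that could look; the EmailOptionsWatcher class and its log message are hypothetical, not part of the walkthrough above:

```csharp
using System;
using Microsoft.Extensions.Options;

// Hypothetical helper: logs a message whenever the Email section reloads.
public class EmailOptionsWatcher : IDisposable
{
    private readonly IDisposable _subscription;

    public EmailOptionsWatcher(IOptionsMonitor<EmailOptions> monitor)
    {
        // OnChange fires each time the underlying configuration source reloads,
        // e.g. when appsettings.json is saved while the app is running.
        _subscription = monitor.OnChange(options =>
            Console.WriteLine($"Email options changed: subject is now '{options.Subject}'"));
    }

    public void Dispose() => _subscription?.Dispose();
}
```

Registering such a watcher as a singleton would let the application react to configuration changes eagerly, instead of only noticing them on the next call to Send.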


As we saw, there are multiple ways of using Options in a .NET Core application, and which one is best depends on the use case. IOptions offers other features beyond what I have demonstrated in this article; I will cover them in a future blog. Try this approach, and then share your experience with us. I hope you enjoyed reading this. Till then, stay safe and happy coding!