
Manish Kumar Narang (Sr. Project Manager - Technology)

Experience: 9+ yrs

Manish is an experienced backend developer with several years of industry experience in the IT field. He possesses a wide range of skills, including expertise in backend technologies such as Core Java, J2EE, Hibernate, Spring/Spring Boot, and Python, and is proficient in relational databases such as MySQL, PostgreSQL, and Oracle. He has hands-on experience in API implementations, web services development, testing, and deployments. Manish has contributed to various internal and client projects, including PMO, Catalyst, Communication-Scaffold, Oodles-Dashboard, and DevOps Support, delivering significant business value. He is known for his innovative mindset and excellent problem-solving abilities, and he keeps himself updated on new technologies. He is skilled at collaborating closely with clients to define project scope and requirements, establish project timelines and milestones, and manage expectations. Manish conducts regular project status meetings, keeping clients and stakeholders updated on project progress, risks, and issues. Additionally, he serves as a mentor and coach to junior developers, offering guidance on project management best practices and fostering their skills development.


Languages

English - Fluent
Hindi - Fluent

Skills

Java - 80%
Technical Project Management - 100%
Work Experience / Trainings / Internship

Jan 2017 - Present
Sr. Project Manager - Technology
Oodles Technologies, Gurugram

Education

2004 - 2008
Delhi University
BE, Computer Engineering

Top Blog Posts
Basic Components of a Trading Platform

1) User Interface - how your traders see your exchange. It must be user-friendly and intuitive, and the UI needs to be mobile friendly too. It should have the following functionality:
- Register and access an account (modify user details)
- View current orders, past transactions, balance, statistics, etc.
- Analytics (optional)
- Place buy and sell orders
- Payment (fiat, BTC, ETH)
- Wallet management
- Access the support mechanism (communication with the admin)

2) Admin Interface - to control and manage the exchange. It should have the following functionality:
- Editing the trading fees (in percentage)
- Approving user accounts for trading (user management view: active/inactive users, send-email functionality)
- Approving transactions (deposit requests only)
- Addressing the support requests raised by users (optional)

3) Trading / Transaction Management - accesses the order book, matches buy/sell orders, executes transactions, and calculates balances.

4) Wallet - a multi-currency wallet with minimum basic functionality.

Given the manner in which these components are defined above, one can easily picture a microservice architecture for development. The development process should take between 3 and 6 months, depending on the complexity of the trading platform.
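The matching component (3) can be sketched in a few lines. The following is a hypothetical, heavily simplified price-priority matcher in plain Java; all names and the `[price, quantity]` representation are illustrative, not taken from any real exchange:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Toy matching engine: asks are kept lowest-price-first; a buy order is
// filled against the cheapest asks whose price does not exceed its limit.
class MatchingEngine {
    // Each entry is a [price, quantity] pair.
    private final PriorityQueue<double[]> asks =
            new PriorityQueue<>(Comparator.comparingDouble((double[] o) -> o[0]));

    void placeSell(double price, double qty) {
        asks.add(new double[]{price, qty});
    }

    // Returns the quantity actually executed.
    double placeBuy(double limitPrice, double qty) {
        double filled = 0;
        while (qty > 0 && !asks.isEmpty() && asks.peek()[0] <= limitPrice) {
            double[] best = asks.poll();
            double traded = Math.min(qty, best[1]);
            filled += traded;
            qty -= traded;
            if (best[1] > traded) { // partially filled ask goes back on the book
                asks.add(new double[]{best[0], best[1] - traded});
            }
        }
        return filled;
    }
}
```

A production engine would also track order owners, timestamps (for price-time priority), and emit executed transactions for balance calculation.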
Basic Schema

- User: id, userName, email, phoneNo, status (active/inactive), country, createdOn, passwd
- Role: authority (String) {admin, user, manager}
- Transaction: currency {fiat, crypto}, exchangeRate (null for fiat), fee (in percentage), amount, buyer (null for fiat) (user), seller (null for fiat) (user), status {success, failure, pending, cancelled}, grossAmount, transactionId
- AuthToken: authType, token, userId
- Message: subject, to, from, body, dateCreated, status, isChecked
- Wallet: id, userId, balance, shadow_balance, walletType
- Currency: type, name, symbol, fee
- Order: amount, fee, currencyId, orderType, dateCreated, status
- TraderAd: minTrans, maxTrans, totalSupply, currencyId, traderName

Relations: a User has many Roles, many Transactions, and many Wallets.
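As an illustration only, the core of this schema could be rendered as plain Java types. The enum values and field types below are assumptions based on the outline above, not a specification:

```java
import java.math.BigDecimal;

// Illustrative sketch of the core entities; enum names and types are assumed.
enum TxStatus { SUCCESS, FAILURE, PENDING, CANCELLED }
enum CurrencyKind { FIAT, CRYPTO }

record User(long id, String userName, String email, String phoneNo,
            boolean active, String country) {}

record Transaction(long transactionId, CurrencyKind currency,
                   BigDecimal exchangeRate,  // null for fiat
                   BigDecimal fee,           // percentage
                   BigDecimal amount,
                   Long buyerId,             // null for fiat
                   Long sellerId,            // null for fiat
                   TxStatus status,
                   BigDecimal grossAmount) {}

record Wallet(long id, long userId, BigDecimal balance,
              BigDecimal shadowBalance, String walletType) {}
```

In a real Spring application these would be JPA entities with relation mappings rather than plain records.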
Setting Up A Eureka Server And Registering A Microservice

In this blog, we shall see how to make a Spring Boot application act as a Eureka server and how to register another Spring Boot application in it as a Eureka client, which will in turn act as a microservice. First we set up the Eureka server:

```java
package com.example.eurekasetup;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@EnableZuulProxy
@EnableEurekaServer
@SpringBootApplication
public class EurekaServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServiceApplication.class, args);
    }
}
```

In application.yml we configure the port number and other details. Observe that the eureka.client parameters have been disabled, since this Spring Boot application acts as the Eureka server:

```yaml
spring:
  application:
    name: eureka-server
server:
  port: 8302
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
  server:
    waitTimeInMsWhenSyncEmpty: 0
zuul:
  # Services will be mapped under the /api URI
  prefix: /api
  routes:
    service1:
      path: /service1/**
      url: http://localhost:8300
```

Now we create a Spring Boot application which will act as a Eureka client, and we register it with the Eureka server.
```java
package com.example.service1;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

@EnableEurekaClient
@SpringBootApplication
public class DbServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(DbServiceApplication.class, args);
    }
}
```

In application.properties:

```properties
spring.application.name=service1
server.port=8300
spring.datasource.url=jdbc:mysql://localhost:3306/test
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.tomcat.testWhileIdle=true
spring.datasource.tomcat.validationQuery=SELECT 1
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.hibernate.naming-strategy=org.hibernate.cfg.ImprovedNamingStrategy
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL5Dialect
```

In application.yml, configure this application as a Eureka client by enabling the eureka.client parameters and pointing to the URL of the Eureka server:

```yaml
eureka:
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://localhost:8302/eureka/
  instance:
    hostname: localhost
```
How To Execute Cron Jobs In The Testing Environment

I have been using Grails for quite some time. People coming from a Java background will find it very familiar; in fact, you can write Java code in Grails and it will work perfectly. The world of Grails is more uncluttered, built on the idea of convention over configuration. One of my tasks was to schedule a cron job. To understand the basics, this tutorial is quite sufficient. It was quite easy to schedule the cron job using the Quartz plugin (we used quartz 1.0.2):

```groovy
class SampleJob {
    static triggers = {
        cron cronExpression: "0 30 22 * * ? *" // execute job every day at 10:30 pm
    }

    def execute() {}
}
```

The above code defines a cron job named SampleJob. The time at which this cron job is triggered is determined by the cron expression, and the execute() method contains the code that will be executed. The fields of the cron expression can be understood as follows:

```
cronExpression: "s m h D M W Y"
                 | | | | | | `- Year [optional]
                 | | | | | `- Day of Week
                 | | | | `- Month
                 | | | `- Day of Month
                 | | `- Hour
                 | `- Minute
                 `- Second
```

But the stumbling block for our team was that the code wouldn't execute in the testing environment. We went through the plugin documentation and found the following: "By default, jobs will not be executed when running under the test environment." But the documentation doesn't clearly state how to execute them under the test environment if one wants to. In fact, it turns out this can be done quite simply by enabling it in the config file of the Quartz plugin, DefaultQuartzConfig.groovy, as follows:

```groovy
quartz {
    autoStartup = true
    jdbcStore = false
    waitForJobsToCompleteOnShutdown = true
    exposeSchedulerInRepository = false
    props {
        scheduler.skipUpdateCheck = true
    }
}

environments {
    test {
        quartz {
            autoStartup = true
        }
    }
}
```

As can be observed, jobs can be enabled or disabled for other environments in the same way.
A Brief Introduction To The Lagom Framework

These days I am exploring various microservice frameworks, since we want to move our monolithic application to a microservice architecture. The most popular microservice framework is provided by Spring Cloud, but I got to know that other frameworks are available too. One such framework is Lagom. Lagom fits very well with a Domain-Driven Design focused mindset, which works well in a microservice architecture.

The simplest program one can try is a single service method with a single call. One can download a sample project from the Lagom website; you would need at least Java 8 and Maven. Clone a project (for example, the hello world project) into a directory and start the Lagom development environment using the command:

```shell
mvn lagom:runAll
```

Wait until you see output like this:

```
17:38:37.264 [info] play.api.Play [] - Application started (Dev)
[INFO] Service hello-impl listening for HTTP on 0:0:0:0:0:0:0:0:57797
[INFO] (Service started, press enter to stop and go back to the console...)
```

Now you may open a new terminal and use the following commands to verify the service method:

```shell
$ curl http://localhost:9000/hello/Manish
Hello, Manish!
$ curl http://localhost:9000/hello/Kumar
Hello, Kumar!
$ curl http://localhost:9000/hello/Narang
Hello, Narang!
```

You may also test it in a browser by hitting the respective URLs. This one command starts the Lagom development environment and runs:

- Cassandra
- Kafka
- A service locator
- A service gateway
- Your Lagom service

Comparing the two microservice frameworks, Spring and Lagom, one quickly observes that Spring has much greater support available than Lagom. This is quite natural, because the Spring framework has been developing microservice support for a while, and Lagom is limited to the reactive principle, which is now also included in the new release of Spring Boot 2.

It will be interesting to keep an eye on the future development of these two frameworks for microservice architecture.
Authentication With JWT In Microservice Architecture

In the previous blog, we discussed the API Gateway in the microservice architecture and came to the point where we need to focus our attention on security management between sets of microservices. In a monolithic architecture, security is managed by the application server: all the services are deployed on one application server, and a centralized authentication service uses the session management features of that server. Once a user logs in, a session is maintained and it is not necessary for every service to authenticate the user again.

In a microservice architecture, however, authentication and authorization become more challenging. Since each microservice may be deployed remotely (not locally), with most communication happening through HTTP calls, it is not obvious how to authenticate the user and pass that information to all microservices. Here we discuss a method based on JWT to secure communication between microservices.

JWT stands for JSON Web Token. It exists in the form of either a JWS (JSON Web Signature) or a JWE (JSON Web Encryption); JWS and JWE are concrete implementations of JWT, which is like an abstract class. The whole process of generating a JWT is depicted in the picture below. When a request is made by the client, it first communicates with the authorization server and gets an access token. The request, along with the access token, is sent to the API Gateway. At this point the access token is decrypted and sent back to the authorization server to obtain the JWT (after validation). The JWT contains the user identity along with the scopes granted for the microservices. Each microservice validates the JWT and generates its own JWT to communicate with other microservices according to scope rules. This is possible only if we have the mechanism to validate these JWTs at each microservice. Sometimes a nested JWT is also used, in which the previous JWT is sent along with the new one.
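To make the token mechanics concrete, here is a minimal sketch of HMAC-SHA256 signing and verification (the JWS case) in plain Java with only the standard library. It only illustrates what a microservice checks when it "validates the JWT"; a real service should use a vetted JWT library rather than hand-rolled code:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// A JWS is base64url(header) + "." + base64url(payload) + "." + signature,
// where the signature is HMAC-SHA256 over the first two parts.
class JwtSketch {
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    static String sign(String headerJson, String payloadJson, byte[] key) {
        String signingInput = B64.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + B64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        return signingInput + "." + hmac(signingInput, key);
    }

    // A microservice trusts the token only if the recomputed signature matches.
    static boolean verify(String jwt, byte[] key) {
        int lastDot = jwt.lastIndexOf('.');
        if (lastDot < 0) return false;
        String signingInput = jwt.substring(0, lastDot);
        return hmac(signingInput, key).equals(jwt.substring(lastDot + 1));
    }

    private static String hmac(String data, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            return B64.encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The key point for the architecture above: each service that holds the shared key (or, for RSA-signed tokens, the public key) can verify the token locally, without a round trip to the authorization server on every call.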
One must also be wary of the cost involved in JWT validation at the microservice level.
API Gateway In Microservice-Oriented Architecture

When one chooses to build an application on a microservice architecture, one needs to decide how communication shall take place between the client side and the set of microservices. In a microservice architecture, the API gateway acts as a single entry endpoint for API calls from the client side. It may also be the place where authentication, security actions, and load balancing are processed. Since the services in a microservice architecture can change, they should be hidden from the client side. Essentially, an API gateway takes care of the internal system architecture and exposes only the encapsulated APIs, tailored to suit each client's requirements. It also helps simplify client-side development.

The API gateway can be thought of as the director of an orchestra, with each player in the orchestra being a microservice. Various requests are received by the API gateway and then routed to the respective microservice. To do this, it needs to maintain some kind of internal mapping, a registry of services. API calls are thus dynamically routed to the respective microservice, but then the question arises as to how this registry is maintained. If it has to be updated manually, the purpose is defeated. For this, we have a service discovery server: when a service is requested the first time, an entry is made in the service registry and thereafter updated regularly.

The API gateway helps in smooth transitioning from a monolithic to a microservice architecture. One can start by developing the API gateway, putting it in front of the monolithic application, decomposing the services into a set of microservices, and keeping the client application working through the API gateway. In time, all the services can be decomposed without any disruption or appreciable effect on the client side.
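The internal mapping mentioned above can be pictured as a tiny routing registry. This is a toy sketch in plain Java, with made-up path prefixes and service URLs; real gateways (Zuul, Spring Cloud Gateway) add pattern matching, filters, and load balancing on top:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

// Toy service registry: maps a path prefix to a backend service base URL.
class ServiceRegistry {
    private final Map<String, String> routes = new LinkedHashMap<>();

    // Called when a service is registered or discovered the first time.
    void register(String pathPrefix, String baseUrl) {
        routes.put(pathPrefix, baseUrl);
    }

    // The gateway resolves an incoming request path to a backend URL.
    Optional<String> route(String path) {
        for (Map.Entry<String, String> e : routes.entrySet()) {
            if (path.startsWith(e.getKey())) {
                return Optional.of(e.getValue() + path.substring(e.getKey().length()));
            }
        }
        return Optional.empty();
    }
}
```

A service discovery server automates the `register` step, so the mapping never has to be maintained by hand.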
Once the API gateway is running, it can accept requests and respond to them, but we still need to take care of the internal communication between different services (credentials, security, traffic control, etc.). I shall discuss this in the next blog.
Monolithic Vs Microservice Architecture

Lately, we felt a need to migrate from a monolithic to a microservice architecture. In this post, we will see the pros and cons of both architectural styles.

Monolithic architecture is what we are all quite familiar with: there is basically one storage unit (a database) which all the application services use for storing and retrieving data. The pros of this architecture are quite straightforward. Since IDEs are designed to support a single application, monoliths are easy to develop and deploy on a server, and they can be scaled easily by running many copies behind a load balancer. In the early phases of development, monolithic architecture works well, but as the size of the application grows and various modules pile up, the limitations of this architecture show up. The main drawback is that if the application grows appreciably, it becomes very hard for a single developer to understand it. It is also very difficult to adopt innovations in the form of new frameworks, languages, etc.; one gets stuck with the technologies and frameworks that were used at the beginning of the project. And the whole application needs to be deployed to production every time, even for minor code changes.

To briefly summarize monolithic architecture:

Pros:
- Simplicity, for small codebases
- Faster early development speed
- Easy testing
- IDE support

Cons:
- Not ideal for growing codebases
- Slowing iterations in the long term
- Harder to innovate
- Steep code learning curve

One of the main motivations to research microservice architecture was that we wanted to make use of newer frameworks and languages as they became available, rather than staying stuck with the technologies we used at the start of development. We wanted to get rid of technologies that have become obsolete, so that freshly hired people can work on the latest technologies and be more productive. A few companies like Amazon, eBay, and Netflix have already adopted microservice architecture.

The essential idea is to have many interconnected small services instead of one huge application. Though the services are interconnected, they function independently, so different services can be developed in different frameworks and languages. Each service has its own designated database and a well-defined boundary in the form of a message-driven API. Due to such decomposition, each service is simple to develop and easy for a single developer to understand. Moreover, unlike a monolithic architecture, if one service goes down, the other services keep the application running. Each service can therefore be scaled differently.

To briefly summarize microservice architecture:

Pros:
- Better architecture for large applications
- Better agility in the long term
- Individual microservices are easy to learn
- Isolation for scalability and damage control

Cons:
- More moving parts
- Complex infrastructure requirements
- Consistency and availability trade-offs
- Harder to test
HTTP GET Request In Groovy/Grails

Recently, while working on a project, I had to make an HTTP GET call with parameters to a different server from the backend side. There is not much data available online to guide you on this issue, but after some searching I got to know about the HTTPBuilder library, through which these actions can be performed.

To begin with, we need to install the HTTPBuilder library. This itself became a task because I had some trouble installing it. There are various modules available online, but the one that worked for me is 'org.codehaus.groovy.modules.http-builder:http-builder:0.7'. It is important to mention that I am working with Grails 2.2.4; different modules may work for different versions. To install the HTTPBuilder library, we update the BuildConfig.groovy file:

```groovy
dependencies {
    compile "org.codehaus.groovy.modules.http-builder:http-builder:0.7"
}
```

Now suppose I want to make this call:

http://myApp.com/getData?param1=something&param2=something

We can accomplish this from the server side with this simple piece of code:

```groovy
import groovyx.net.http.HTTPBuilder

try {
    def http = new HTTPBuilder('http://www.myApp.com')
    http.get(path: '/getData', query: [param1: something, param2: something]) { resp ->
        jsonResp = resp.entity.content.text
        println jsonResp
    }
} catch (groovyx.net.http.HttpResponseException e) {
    println e.toString()
}
```

Please note how the query parameters are passed. 'resp.entity.content.text' gives you the raw response. If the response is JSON, the closure directly gives you a map of it, which can be used directly.
Challenges In Setting Up And Running A Grails Project

My project uses the Grails framework, and since we also train people in Grails through our project, the project needs to be installed many times on different machines. While installing, there are several issues one encounters, and if one is not familiar with the solutions, the installation time may increase by at least a couple of hours. I thought I should add my two pence worth on this topic and enumerate the common issues and their solutions here. We use Grails 2.2.4, GGTS bundle 3.6.4, and JDK 1.8.31. It is better to remove Java from the machine if it is not 1.8.31 and replace it with JDK 1.8.31; some Java versions give compatibility issues with Grails 2.2.4.

Problem 1: Grails requires a JDK, and you may get an error message to that effect.

Solution 1: GGTS does not consult the JAVA_HOME environment variable. The issue can be resolved by using the -vm argument and manually giving the path to the JVM. Add the following at the very beginning of the ggts.ini file, making sure there are no spaces before the -vm argument:

-vm
<path to jdk>/bin/java

Problem 2: You may get the following error message while running the project:

| Error 2017-06-18 16:54:58,702 [Thread-38] ERROR plugins.AbstractGrailsPluginManager - Plugin [i18n:2.0.1] could not reload changes to file [/home/project/grails-app/i18n/messages.properties]: Error starting Sun's native2ascii:

Solution 2:
1) cd to the JDK path
2) cp lib/tools.jar jre/lib/ext/tools.jar

Problem 3: You may get an error message like this while running the project:

Mar 05, 2017 3:51:31 PM org.springsource.loaded.jvm.JVM copyMethod
SEVERE: Problems copying method. Incompatible JVM?
java.lang.reflect.InvocationTargetException

Solution 3: Get hold of springloaded-1.2.5.RELEASE.jar and put it in the grails-2.4.4/lib/org.springframework/springloaded/jars/ folder.
When you install GGTS, the grails-2.4.4 folder sits inside the ggts-bundle folder, so put the file in the above location within the ggts-bundle folder. Then re-launch GGTS.
Multiple Email Addresses As Override Addresses In Grails

In one of my previous blogs, I talked about the mailing functionality in the Grails framework. One may read it here. In that blog I also mentioned how the parameters grails.mail.overrideAddress, grails.mail.default.from, and grails.mail.default.to can only take one email address. But there may be scenarios where we want these parameters to contain multiple addresses. For example, if the project is tested by multiple quality analysts, we should be able to have multiple addresses overriding the email addresses in the testing environment. Another scenario we encountered was wanting to test email functionality for multiple addresses: since the email addresses are overridden by only one address, the functionality seems to work even when there are multiple email addresses in to, from, cc, and bcc, but that may not actually be so.

So let me discuss how we can add multiple addresses to the above-mentioned parameters. For that, we need to look into the inner workings of the mail plugin, specifically at the message builder factory. Let's look at the MailMessageBuilder.groovy file. There are two functions of interest to us. The first, the MailMessageBuilder constructor, makes copies of the parameters defined in the Config.groovy file; the local copies of overrideAddress, defaultFrom, and defaultTo are of String type. These variables are then used for overriding the email addresses in the second method, so if we modify the second method according to our requirements, we are done.
```groovy
MailMessageBuilder(MailSender mailSender, ConfigObject config, MailMessageContentRenderer mailMessageContentRenderer = null) {
    this.mailSender = mailSender
    this.mailMessageContentRenderer = mailMessageContentRenderer

    this.overrideAddress = config.overrideAddress ?: null
    this.defaultFrom = overrideAddress ?: (config.default.from ?: null)
    this.defaultTo = overrideAddress ?: (config.default.to ?: null)
}

protected String[] toDestinationAddresses(addresses) {
    if (overrideAddress) {
        addresses = addresses.collect { overrideAddress }
    }
    addresses.collect { it?.toString() } as String[]
}
```

Since toDestinationAddresses takes the email addresses as its argument and returns the array of overridden email addresses, changing the returned array does the job. Suppose I wanted to add email1 and email2 as overridden email addresses; I did it as follows (note the addresses go in a list, not a closure):

```groovy
protected String[] toDestinationAddresses(addresses) {
    def overriddenAddresses = [email1, email2]
    def tempAddresses = addresses
    addresses = []
    if (overrideAddress) {
        overriddenAddresses.each { String overriddenAddress ->
            tempAddresses = tempAddresses.collect { overriddenAddress }
            addresses.addAll(tempAddresses)
        }
    }
    addresses.collect { it?.toString() } as String[]
}
```