
Hemant Samriya (Backend-Lead Development)

Experience: 4+ yrs

Hemant is an experienced backend developer specializing in Java. He is proficient in Java (up to Java 9), MongoDB, and MySQL, and in tools such as Postman, Azure, the AWS Dashboard, and Searchkit. He is well-versed in IDEs such as IntelliJ IDEA (primary), Eclipse (STS), and VS Code, and experienced with web technologies including JavaScript, HTML, CSS, and JSON. Among frameworks, he has expertise in Spring Boot (JPA, Data, MVC, Security) and Hibernate, and hands-on experience with AWS services such as Lambda, EC2, S3, and CloudWatch. He is also familiar with Elasticsearch/OpenSearch as a search engine and uses GitHub for version control. He has contributed to several projects, including Konfer, HP1T, KRB, and many others.


Languages

English: Conversational
Hindi: Fluent

Skills

Java: 100%
JavaScript: 100%
NoSQL/MongoDB: 60%
Prompt: 80%
Spring Boot: 80%
Pinecone: 80%
Kafka: 80%

Work Experience / Trainings / Internship

Jan 2021 - Present
Senior Java Developer, Oodles Technologies, Gurgaon

Jan 2018 - Apr 2018
Project Trainee, Global Softech, Bangalore

Education

2015-2018
Lachoo Memorial College of Science & Technology
Master's in Computer Application (Computer Science)

2012-2015
Lachoo Memorial College of Science & Technology
Bachelor's in Computer Application (Computer Science)

Certifications

Full Stack Developer, CourseCube
Issued: Aug 2018

Top Blog Posts
Let's understand MongoDB Aggregations

Definition: Through the aggregation feature, MongoDB lets you process multiple documents and perform operations on them. You can group the values of multiple documents together, perform operations on the grouped data to return meaningful results, and analyze the data.

There are 3 ways to perform aggregations:
    1. Aggregation Pipeline
    2. Single Purpose aggregation methods.
    3. Map-Reduce Operation [Deprecated from MongoDB 5.0]

  • Aggregation Pipeline:
    • We use the aggregate() method to implement this technique; we pass it an array of stages. Execution starts from the first stage, and the output of each stage becomes the input of the next. This continues until the last stage, so the stages work as a pipeline.
              syntax: db.collectionName.aggregate([
                                { $match : { … } },
                                { $group : { … } },
                                { $sort : { … } },
                                ...
                              ], options)

              What are options here? For example, each aggregation stage can use up to 100 MB of RAM; if a stage exceeds this limit, it will throw an error. To resolve this issue we can use the allowDiskUse option here.
                  ex: db.collectionName.aggregate(pipeline, { allowDiskUse : true })
    • The aggregate() function works with these 3 building blocks:
      • stage:
                        i. $match:
        It filters documents on the basis of the given condition and reduces the number of documents passed to the next stage.
                         syntax: { $match: { <condition> } }
                         
                        ii. $project:
        Selects fields from the documents; you can include, exclude, or reshape fields according to your requirement.
                        syntax:
        { $project: { <requirement(s)> } }
                        
                        iii. $group:
        It groups the documents on the basis of the values in the documents.
                        syntax: {
                              $group:
                                {
                                  _id: <expression>,
                                  <field1>: { <accumulator> : <expression> },
                                  <fieldN>: { <accumulator> : <expression> },
                                  ...
                                }
                             }
                        iv. $sort:
        It sorts the documents.
                        syntax: { $sort: { <field1>: <sorting order>, <fieldN>: <sorting order> ... } }
                        
                        v. $skip:
        It skips the first N documents and returns the remaining documents.
                        syntax: { $skip: <integer value> }
                        
                        vi. $limit:
        It limits the output to the first N documents and returns only those documents.
                        syntax:
        { $limit: <integer value> }
                        
                        vii. $unwind:
        It deconstructs an array field in the documents and returns one output document for each array element.
                        syntax:
        { $unwind: <field path> }
                        
                        viii. $out:
        It writes the results to a new collection and must be the last stage of the pipeline.
                        syntax: { $out: { db: "<db-name>", coll: "<new-collection-name>" } }
      • Expression: an expression refers to a field of the incoming documents, written with a $ prefix (for example "$fare").
      • Accumulator: these are mainly used in the $group stage.
                        i. $sum: returns the sum of numeric values.
                        ii. $count: returns the count of the total number of documents in the group.
                        iii. $avg: returns the average of the given values.
                        iv. $min: returns the minimum value from the documents.
                        v. $max: returns the maximum value from the documents.
                        vi. $first: returns the first value from each group.
                        vii. $last: returns the last value from each group.
                    
                    ex:
        db.collectionName.aggregate([{$group:{
                                    _id: "$id", "total": {$sum:"$fare"}
                                    }}])
                        here, $group is a stage, $id and $fare are expressions (fields of the document), and $sum is the accumulator.
                
  • Single Purpose aggregation methods:
    • These methods operate directly on a single collection to perform an operation or calculate a result. This is quite a simple approach, but it lacks the capabilities of the aggregation pipeline.
      • ex: count(), distinct() etc.

                      

  • Map-Reduce Operations:
    • It is deprecated since MongoDB 5.0. This approach is used for bulk data, i.e. large data sets, and returns a computed aggregate result. We use the mapReduce() function to perform this operation; it takes four parameters:
          a. Map function: it maps all the data into key-value pairs.
          b. Reduce function: it performs an operation on the paired data.
          c. Query: can be used to filter the documents.
          d. Out: generates a new collection for the computed results.
          Each step runs separately, which makes this approach effective on large data sets.
    • syntax: db.collectionName.mapReduce(
                  function() { emit(this.key, this.value); },
                  function(key, value) { return <calculated_result>; },
                  {
                      query: { <condition> },
                      out: "<coll_name>"
                  }
              )
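
For reference, here is a minimal sketch of how the $group example above might look from Java, assuming the MongoDB Java driver (mongodb-driver-sync 4.x) on the classpath and a hypothetical "trips" collection in a "test" database:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Accumulators;
    import com.mongodb.client.model.Aggregates;
    import org.bson.Document;
    import java.util.Arrays;

    public class AggregationSketch {
        public static void main(String[] args) {
            try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
                MongoCollection<Document> trips = client.getDatabase("test").getCollection("trips");
                // Equivalent of: db.trips.aggregate([{ $group: { _id: "$id", total: { $sum: "$fare" } } }])
                trips.aggregate(Arrays.asList(
                        Aggregates.group("$id", Accumulators.sum("total", "$fare"))
                )).forEach(doc -> System.out.println(doc.toJson()));
            }
        }
    }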

 

New Features in Stream API in Java 9

The Stream concept was introduced in Java 8, and its main objective is to process the contents of a Collection in a functional style (with lambda expressions).


Q. How to create Stream Objects?
We can create a Stream object from a collection by using the stream() method of the Collection interface. stream() is a default method, so it is available on all Collections from version 1.8 onwards.


Q. How can we process Objects of Collection by using Stream?
Once we have the stream, we can use it to process the objects of that collection, typically with the filter() or map() method (a short sketch follows the list below).

  • filter() method: filters the contents of the collection based on some boolean condition.
  • map() method: creates a new object for each element present in the collection, based on our requirement.
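
A minimal sketch of both methods on a small list of integers (the values are only illustrative, and the usual java.util and java.util.stream imports are assumed):

    List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
    List<Integer> evenSquares = numbers.stream()
            .filter(n -> n % 2 == 0)       // keep only the even numbers
            .map(n -> n * n)               // replace each remaining element with its square
            .collect(Collectors.toList()); // result: [4, 16]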

1. takeWhile():
Syntax: default Stream<T> takeWhile (Predicate<? super T> predicate);
It takes elements from the Stream as long as the Predicate returns true; as soon as the Predicate returns false, it stops processing the rest of the stream from that point onwards. So there is no guarantee that it will process every element of the Stream.
example:

List<Integer> listOfInteger = Arrays.asList(2, 4, 3, 9, 5, 8);
List<Integer> listOfTakeWhile = listOfInteger.stream().takeWhile(i -> i % 2 == 0).collect(Collectors.toList()); // result: [2, 4]

2. dropWhile():
Syntax: default Stream<T> dropWhile (Predicate<? super T> predicate);
It is the opposite of takeWhile(): it drops elements instead of taking them as long as the Predicate returns true; once the Predicate returns false, the rest of the Stream is returned.
example:

List<Integer> listOfInteger = Arrays.asList(2, 4, 3, 9, 5, 8);
List<Integer> listOfDropWhile = listOfInteger.stream().dropWhile(i -> i % 2 == 0).collect(Collectors.toList()); // result: [3, 9, 5, 8]

3. Stream.iterate():
i. With 2 args: it takes an initial value and a function that provides the next value.
Syntax: static <T> Stream<T> iterate (T seed, UnaryOperator<T> f);
example:

i) Stream.iterate(1, x -> x + 1).forEach(System.out::println); // infinite loop
ii) Stream.iterate(1, x -> x + 1).limit(5).forEach(System.out::println); // limits the loop to 5 iterations

ii. With 3 args:
Syntax: static <T> Stream<T> iterate(T seed, Predicate<T> hasNext, UnaryOperator<T> next);
The main issue with the 2-args iterate() method is that it can run into an infinite loop. To avoid this, this overload takes an initial value, a terminating Predicate (hasNext), and a function that provides the next value.
example:

Stream.iterate(1, x -> x < 5, x -> x + 1).forEach(System.out::println); // prints 1 2 3 4

4. ofNullable():
Syntax: static <T> Stream<T> ofNullable (T t);
This method checks whether the given element is null. If it is not null, it returns a single-element Stream containing that element; if it is null, it returns an empty Stream. The main merit of this method is that we can avoid a NullPointerException and do not need to implement the null check everywhere.
example:

List<String> listOfStrings = Arrays.asList("A", "B", null, "E", "G", null);
List<String> listOfNonNullStrings = listOfStrings.stream().flatMap(str -> Stream.ofNullable(str)).collect(Collectors.toList());
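
To see all four Java 9 additions together with the required imports, here is a small self-contained sketch (class and variable names are just illustrative; expected results are noted in the comments):

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.Stream;

    class Java9StreamDemo {
        public static void main(String[] args) {
            List<Integer> nums = Arrays.asList(2, 4, 3, 9, 5, 8);

            // takeWhile(): stops at the first odd number -> [2, 4]
            System.out.println(nums.stream().takeWhile(i -> i % 2 == 0).collect(Collectors.toList()));

            // dropWhile(): drops the leading even numbers -> [3, 9, 5, 8]
            System.out.println(nums.stream().dropWhile(i -> i % 2 == 0).collect(Collectors.toList()));

            // iterate() with a terminating predicate -> prints 1 2 3 4
            Stream.iterate(1, x -> x < 5, x -> x + 1).forEach(System.out::println);

            // ofNullable(): null elements become empty streams -> [A, B, E, G]
            List<String> letters = Arrays.asList("A", "B", null, "E", "G", null);
            System.out.println(letters.stream().flatMap(Stream::ofNullable).collect(Collectors.toList()));
        }
    }
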
Let's Create Unmodifiable Collections with Java 9

Java 9 provides some factory methods for creating unmodifiable Collections. Before that, in brief:
    > List :
        It is an indexed collection of elements where duplicate elements are allowed and insertion order is preserved.
    > Set :
        It is an unordered collection of elements where duplicate elements are not allowed and insertion order is not preserved.
    > Map :
        This collection contains Key-Value pairs, each called an Entry (an inner interface inside Map); Keys must be unique and Values can be duplicated.

It is very common to use immutable collection objects in Programming requirements to improve Memory Utilization and Performance.

In Java 9, 
    1. static <E> List<E> of() //factory method of unmodifiable List 
    2. static <E> Set<E> of() //factory method of unmodifiable Set

These factory methods are overloaded into 12 methods: 1 with no parameters, 10 that take 1 to 10 parameters, and 1 var-arg method. For up to 10 elements the matching fixed-parameter method is executed; for more than ten elements, the internal var-arg method is invoked.
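
For example (a minimal sketch; the element values are only illustrative):

    List<String> langs = List.of("Java", "Python", "C");
    Set<Integer> primes = Set.of(2, 3, 5, 7);
    // langs.add("Ruby"); // would throw UnsupportedOperationException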

    3. static <E> Map<K,V> of() //factory method of unmodifiable Map

In the Map case, we need to pass key-value pairs instead of elements alone.
    ex: Map<String, String> map = Map.of("1", "Java", "2", "Spring");
    
    > We have another way in Map to create an unmodifiable Map object:
        Map.Entry<String, String> e1 = Map.entry("1", "Java");
        Map.Entry<String, String> e2 = Map.entry("2", "Spring");
    The Entry object is immutable and cannot be modified; if we try to change its content we will get an UnsupportedOperationException. By using these Entry objects we can create an unmodifiable Map object with the Map.ofEntries() method.
    Map<String, String> m = Map.ofEntries(e1, e2);

A shorter way:
    import static java.util.Map.entry;
    Map<String, String> map = Map.ofEntries(entry("1","Java"), entry("2", "Spring"));

For the Map collection, keep in mind: "Up to 10 entries, it is recommended to use the of() methods; for more than 10 entries we should use the ofEntries() method".

There are some exceptions we can get within this feature:
    1. NullPointerException
    2. UnsupportedOperationException
    3. IllegalArgumentException

1. NullPointerException with Unmodifiable Collection Objects:
    ex: List<String> lang = List.of("Java", "Python", "C", null);//NullPointerException
    /*while using these factory methods, if any element is null then we will get this exception.*/
    
2. UnsupportedOperationException with Unmodifiable Collection Objects:
    ex: 
        Set<String> lang = Set.of("Java", "Python", "C");
        lang.add("Ruby"); //UnsupportedOperationException
        lang.remove("Python"); //UnsupportedOperationException
        /*after creating Unmodifiable objects if we try to change the content(add/remove) then we will face this exception.*/
3. IllegalArgumentException with Unmodifiable Collection Objects:
    ex-1: 
        Map<String, String> langMap = Map.of("Lang-1","Java", "Lang-2","Python", "Lang-3","C", "Lang-1","Ruby");//IllegalArgumentException
        /*While using these factory methods, if we try to add duplicate keys then we will get this exception.*/
    ex-2:
        Set<String> lang = Set.of("Java", "Python", "C", "Java");//IllegalArgumentException
        /*While using these factory methods, if we try to add duplicate elements in the Set, then we will get this exception.*/

Anonymous Inner Class vs Lambda Function

 

* Anonymous Inner Class:

 

> It is a class without a name, which can implement an interface containing any number of abstract methods. It can also extend a concrete class or an abstract class.

 

> Inside an anonymous inner class, we can declare instance variables, and we can use the 'this' keyword, which points to the current inner class object, not to the outer class object.

 

Example:

 

class Test{
	public static void main(String[] args){
		/*Implementing thread with Anonymous class*/
		Thread thread = new Thread(new Runnable(){
			public void run(){
				for(int i=0; i<10; i++){
					System.out.println("Child-Thread-Anonymous-Class "+ i);
				}			
			}		
		});
		thread.start();
		for(int i=0; i<10; i++){
			System.out.println("Main-Thread-Anonymous-Class "+ i);
		}
	}
}

 

* Lambda Functions :

 

> It is a method without a name, which can implement only those interfaces that contain exactly one abstract method, i.e. only a Functional Interface.

 

> Inside a Lambda Function, we are not able to declare instance variables; any variable we declare inside the lambda acts like a local variable. Local variables from the enclosing scope that are used inside the lambda must be final or effectively final, so we can't re-assign them.

 

> The 'this' keyword can be used in a lambda function, where it points to the current outer class object.

 

Example:

 

class Test{
	public static void main(String[] args){
		/*Implementing thread with Lambda Function*/
		Thread thread = new Thread(()->{
				for(int i=0; i<10; i++){
					System.out.println("Child-Thread-Lambda-Function "+ i);
				}			
		});
		thread.start();
		for(int i=0; i<10; i++){
			System.out.println("Main-Thread-Lambda-Function "+ i);
		}
	}
}

 
> Here is a list of places where we commonly use lambda functions:


1. Comparator
2. Predicate (Java 8)
3. Supplier (Java 8)
4. Consumer (Java 8)
5. Function (Java 8)
6. Runnable
7. Collections.sort()
8. TreeMap
9. TreeSet
   and so on.
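
For instance, here is a minimal sketch of Comparator with Collections.sort() implemented as a lambda (the class name and list contents are just illustrative):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

class SortDemo{
	public static void main(String[] args){
		List<String> names = new ArrayList<>(Arrays.asList("Charlie", "alice", "Bob"));
		/*Comparator implemented as a lambda: case-insensitive ordering*/
		Comparator<String> byNameIgnoreCase = (a, b) -> a.compareToIgnoreCase(b);
		Collections.sort(names, byNameIgnoreCase);
		System.out.println(names); // prints [alice, Bob, Charlie]
	}
}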


Note:

 

* An Anonymous Inner Class is not equal to a Lambda Function, because an Anonymous Inner Class has more powerful features (the 'this' keyword, instance variables, etc.) than a Lambda Function. We can replace an Anonymous Inner Class with a Lambda Function only when the interface has a single abstract method (a Functional Interface). When there are multiple abstract methods, an Anonymous Inner Class is the way to implement them.

 

* Hence we can say Anonymous Inner Class != Lambda Functions.


References:

* https://docs.oracle.com/javase/8/docs/api/java/util/function/package-summary.html
* https://docs.oracle.com/javase/8/docs/api/java/lang/FunctionalInterface.html
* journaldev.com
* howtodoinjava.com
