I believe my abilities are a perfect fit for your venture, and I can finish the job within the required time frame. I am a professional Hadoop Developer with 7+ years of experience in this field.
My Skills and Expertise:
• Working on microservices with Spring Boot applications
• Working on Azure Databricks, ADF, and Data Lake Storage
• Neo4j and MongoDB database design and modeling
• Creating data pipelines in Azure to integrate Spark notebooks
• Data migration from the source (SQL Server) to Azure Storage
• Installation and configuration of Hadoop technologies on AWS or Azure
• Worked on Cloudera MapReduce, Spark, Scala, Sqoop, Pig, Hive, HBase, Oozie, etc.
• Worked on R, Tableau, Weka, and similar tools
• Oracle, MySQL
• Java, J2EE, Python, HTML5, PHP, CSS
I have good experience in HDFS, Hive, Java, Scala, Sqoop, PySpark, Kafka, SQL, HQL (HiveQL), Spark, Databricks, Azure, AWS, etc.
• Worked with the Kotlin language
• Worked on data migration projects from SQL Server and Oracle to AWS S3/Azure Storage
• Designed and deployed dynamically scalable, highly available, fault-tolerant, and reliable applications on the AWS Cloud
• Performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from SQL into HDFS using Sqoop
• Worked on GCP, Azure Data Factory, Azure Data Lake, Azure Databricks, Python, Spark, PySpark, etc.
• Performed big data processing using Hadoop, MapReduce, and HDFS
• Data ingestion and ETL on the Hadoop platform using HDFS commands, Sqoop, and Spark
• Worked on big data integration & analytics based on Hadoop and web upload methods
• Worked with onsite clients to track status and issues in an offshore development model
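The SQL-to-cloud migration and ETL work above follows a simple extract-transform-load shape. A minimal sketch of that pattern, using the stdlib sqlite3 module as a stand-in for SQL Server and a local CSV file as a stand-in for Azure/S3 storage (the table, columns, and transformation here are hypothetical examples, not from any actual project):

```python
import csv
import sqlite3

# Stand-in source database (a real migration would read from SQL Server).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 120.0, "EU"), (2, 80.0, "US"), (3, 200.0, "EU")])

# Extract: pull rows from the source.
rows = con.execute("SELECT id, amount, region FROM orders").fetchall()

# Transform: keep only EU orders and add a computed column.
transformed = [(oid, amount, region, amount * 1.1)
               for oid, amount, region in rows if region == "EU"]

# Load: write to the target store (a CSV file standing in for cloud storage).
with open("orders_eu.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "amount", "region", "amount_with_vat"])
    writer.writerows(transformed)
```

In practice the extract step would typically be a Sqoop import or an ADF copy activity, and the transform would run in Spark rather than a list comprehension.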
Data Mining:
o Worked on Weka and the R tool
o Implemented Naive Bayesian, Decision Tree, Random Forest, and other algorithms
o Implemented normalization, discretization, fuzzification, and numeric-to-binary preprocessing methods
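The preprocessing steps listed above can be sketched in plain Python. A minimal, illustrative version of min-max normalization, equal-width discretization, and numeric-to-binary conversion (the sample data, bin count, and threshold are arbitrary choices for the example):

```python
def normalize(values):
    """Min-max normalization: rescale values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def discretize(values, bins=3):
    """Equal-width discretization: map each value to a bin index 0..bins-1."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    return [min(int((v - lo) / width), bins - 1) for v in values]

def numeric_to_binary(values, threshold):
    """Numeric-to-binary: 1 if the value exceeds the threshold, else 0."""
    return [1 if v > threshold else 0 for v in values]

data = [2.0, 4.0, 6.0, 8.0, 10.0]
print(normalize(data))               # [0.0, 0.25, 0.5, 0.75, 1.0]
print(discretize(data))              # [0, 0, 1, 2, 2]
print(numeric_to_binary(data, 5.0))  # [0, 0, 1, 1, 1]
```

Tools like Weka apply the same transformations through their filter classes; this sketch only shows the underlying arithmetic.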
• Working on Azure Databricks to write PySpark code and store data in ADLS
• Working on Azure Event Hubs to create producers and consumers for stream-based data
• Neo4j and MongoDB database design and modeling
• Creating Neo4j graphs on stream-based data
• Creating data pipelines in Azure to integrate Spark notebooks
• Data migration from the source (SQL Server) to Azure Storage
• Implemented Kafka producer and consumer applications on a Kafka cluster
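The Kafka and Event Hubs work above follows the standard producer/consumer pattern. A minimal in-process sketch of that pattern, using a stdlib queue as a stand-in for a topic (a real deployment would use a Kafka client library against a running cluster; nothing here is Kafka-specific code):

```python
import queue
import threading

# A stdlib queue stands in for a Kafka topic partition.
topic = queue.Queue()
SENTINEL = object()  # signals end of stream

def producer(n):
    """Publish n messages to the 'topic', then signal completion."""
    for i in range(n):
        topic.put(f"event-{i}")
    topic.put(SENTINEL)

def consumer(out):
    """Poll the 'topic' and process messages until the stream ends."""
    while True:
        msg = topic.get()
        if msg is SENTINEL:
            break
        out.append(msg)

received = []
t_prod = threading.Thread(target=producer, args=(5,))
t_cons = threading.Thread(target=consumer, args=(received,))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(received)  # five messages, in publish order
```

With a real broker, the producer/consumer bodies become client-library send and poll calls, but the threading and hand-off shape is the same.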
Nov 2021 - Present
Senior Software Engineer
Jul 2019 - Nov 2021
The Lean Apps Pvt. Ltd.
• Working on HDFS, Hive, Java, Scala, PySpark, Kafka, SQL, HQL (HiveQL), Spark, Databricks, Azure, AWS, etc.
• Working on AWS EMR clusters to run Hadoop ecosystem components such as HDFS, Spark, and Hive
• Worked on AWS DynamoDB for storing data and on ElastiCache for caching
• Worked on data migration projects from SQL Server and Oracle to AWS S3/Azure Storage
• Designed and deployed dynamically scalable, highly available, fault-tolerant, and reliable applications on the AWS Cloud
Associate Software Engineer
Sep 2019 - Jul 2020
British Telecom
o Hadoop HDFS
o Hadoop MapReduce programming
o Implemented data mining algorithms on a Hadoop cluster
o Worked on Pig, Hive, Spark, Storm, Sqoop, etc.
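The MapReduce programming listed above can be illustrated in miniature. A pure-Python sketch of the map, shuffle, and reduce phases for the classic word-count job (Hadoop distributes these phases across a cluster; here they run in a single process on made-up input):

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in every input line."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group values by key, as Hadoop does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big cluster", "data pipeline"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'cluster': 1, 'pipeline': 1}
```

In Hadoop the same logic is expressed as Mapper and Reducer classes and the framework handles the shuffle; the data flow is identical.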
Education
University of Pune
2012 - 2015
B.E (Information Technology)
India
Certifications
IBM DB2 and J2EE Certified
2014
IBM
This certificate covers RAD, DB2, and J2EE technologies.
Papers
Detection of DoS/DDoS attack against HTTP Servers using Naïve Bayesian
IEEE(Institute of Electrical and Electronics Engineers)
The First International Conference on Computing, Communication, Control and Automation (ICCUBEA-2015) provides an international open forum for researchers and technocrats from academia and industry around the world to interact, exchange concepts, prototypes, and innovative research ideas, and to share research outcomes that can contribute to the academic arena and benefit the business and industrial community.