    2,000 clojure hadoop jobs found

    ...efficiency. Requirements: ✅ Data collection from real or synthetic datasets (Kaggle, HuggingFace, or generated data). ✅ Big Data technologies (Hadoop, Apache Spark) for real-time processing. ✅ Machine Learning models (e.g., Scikit-learn, TensorFlow) for failure prediction. ✅ Dashboard for visualization (e.g., Streamlit, Grafana). ✅ Clean and well-documented code (preferably in Python). Deliverables: - Source code (GitHub or ZIP file). - Short guide on how to run the system. - Results with key metrics (accuracy, downtime reduction, etc.). - Visualization of predictions (charts, tables). Preferred Skills: ✔ Experience with Big Data frameworks (Hadoop, Spark). ✔ Strong knowledge of ML models for predictive analytics. ✔ Familiarity with data visualization tools. ✔ Previous ...

    $284 Average bid
    35 bids

    ...a related field. Proven experience developing ETL pipelines and data processing workflows. Hands-on experience with PySpark, Pandas, and SQL. Experience working with big data technologies such as Apache Spark, Hadoop, or Kafka (preferred). Familiarity with cloud data solutions (AWS, GCP, or Azure). Required Skills: Programming: Strong proficiency in Python (PySpark, Pandas) or Scala. Data Modeling & Storage: Experience with relational databases (PostgreSQL, MySQL, SQL Server) and NoSQL databases (MongoDB, Cassandra). Big Data & Distributed Computing: Knowledge of Apache Spark, Hadoop, or Kafka. ETL & Data Integration: Ability to develop efficient ETL processes and manage data pipelines. Cloud Computing: Experience with AWS (S3, Redshift, Glue), GCP (BigQuery),...

    $29 / hr Average bid
    16 bids

    I'm seeking a dedicated Clojure Developer to help maintain and enhance our health application. The primary responsibilities will involve development, with a focus on improving the user interface, backend functionalities, and integration with third-party services. Key Responsibilities: - Collaborate with the team to enhance the health application - Improve the user interface, backend functionalities, and third-party integrations - Utilize Clojure to implement necessary updates and modifications Ideal Candidate: - 3+ years of proven experience with React Native development - Strong proficiency in JavaScript and the React ecosystem - Expertise with Redux for state management - Familiarity with native APIs and third-party integrations - Knowledge of performance optimizatio...

    $7 - $13 / hr
    Sealed
    14 bids

    I am seeking a seasoned Data Engineer with over 4 years of experience. The ideal candidate should possess strong skills in ETL processes, Data warehousing, and Big data technologies. Proficiency in tools like Apache Spark, Hadoop, and SQL is a must. Key Requirements: - Extensive experience in ETL processes, Data warehousing and Big data technologies - Proficient in Apache Spark, Hadoop and SQL - Seeking full-time employment Skills and Experience: - 4+ years in Data Engineering - Mastery of relevant tools and technologies - Ready for full-time commitment If you're a dedicated professional with the required skills and experience, I look forward to your application.

    $80 / hr Average bid
    26 bids

    Build & Secure a Scalable 5-Node Hadoop Cluster on AWS/Azure Job Description: We are looking for an experienced freelancer to design, deploy, and secure a scalable 5-node Hadoop cluster on either AWS or Azure. This project requires expertise in Hadoop architecture, cloud infrastructure, and implementing best practices for performance and security. Key Responsibilities: Cluster Deployment: Set up a Hadoop cluster with 1 master node and 4 worker nodes. Install and configure Hadoop 3.x, including HDFS, YARN, and MapReduce. Integrate cloud storage (e.g., S3 or Azure Blob Storage) with the cluster. Scalability & Optimization: Configure the cluster to scale seamlessly with auto-scaling for worker nodes. Optimize Hadoop performance for data ...

    $926 Average bid
    50 bids
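    The deployment this posting describes hinges on a handful of Hadoop configuration files. As a hedged illustration only (the hostname and replication factor below are assumptions, not taken from the posting), the core-site.xml and hdfs-site.xml fragments for a 1-master/4-worker HDFS setup might look like:

    ```xml
    <!-- core-site.xml: point every node at the master's NameNode (hostname assumed) -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master-node:9000</value>
      </property>
    </configuration>

    <!-- hdfs-site.xml: a replication factor of 3 fits a 4-worker cluster (value assumed) -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>
    ```

    The same files would also carry the YARN and S3/Azure Blob connector settings the posting mentions; those depend on the chosen cloud and are omitted here.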

    Clojure language · 7+ years · Remote · 3 to 6 months · Compensation as per the market standard. If you have a strong background in any cloud (AWS, Azure, GCP), Java, JavaScript/TypeScript, or Python as well, that's a major plus. JOB DESCRIPTION AND RESPONSIBILITIES: Develop, maintain, and improve Clojure-based applications in a team environment. Engage in the entire development lifecycle, including concept, design, testing, and deployment. Write clean, efficient, and testable code. Conduct code reviews and collaborate with peers to ensure high-quality software standards. Assist in diagnosing and resolving application issues. Stay up-to-date with emerging trends in Clojure development and related technologies. Proactively identify and communicate improvements that can be implemented in...

    $2274 Average bid
    12 bids

    I'm seeking a Data Analyst proficient in Hadoop, Hive, and Flume to analyze Movielens data. The primary objective is to uncover trends and patterns within the dataset. Key Requirements: - Experience with Big Data tools, specifically Hadoop, Hive, and Flume - Strong data analysis skills - Ability to identify trends and patterns from complex data sets - Previous experience with Movielens data is a plus The ideal freelancer for this project will have a solid understanding of the Movielens dataset, with the ability to use Big Data tools to generate significant insights. Please provide examples of similar projects you've completed in your proposal.

    $32 Average bid
    14 bids

    ...Senior Data Engineer to join our team. The ideal candidate should have a strong background in data engineering, particularly with the following skills and experiences: - 5+ years in data engineering or related roles. - Master’s degree in Computer Science, Engineering, or a related field is preferred. - Proficiency in Apache Airflow for workflow scheduling and management. - Strong experience with Hadoop ecosystems, including HDFS, MapReduce, and Hive. - Expertise in Apache Spark/Scala for large-scale data processing. - Proficient in Python. - Advanced SQL skills for data analysis and reporting. - Experience with AWS cloud platform is a must. The selected candidate will be responsible for developing data pipelines, managing data warehousing and performing data analysis and...

    $1972 Average bid
    29 bids

    I'm looking for a Big Data expert well-versed in Hadoop, Hive, and Flume to assist with some fundamental-level problems. Your tasks will include: - Data ingestion, storage, and processing - Solving queries over a dataset for analysis - Performing basic-level data analysis Skills and experience that would be ideal for this job include: - Proficiency in Hadoop, Hive, and Flume - Experience with data ingestion, storage, and processing - Strong problem-solving skills for basic data analysis - Ability to troubleshoot errors and optimize performance

    $47 Average bid
    21 bids

    I'm looking for an expert-level interviewer who can assess a candidate's technical skills in the Big Data domain. The ideal interviewer should have a deep understanding of various Big Data technologies and tools. Key Areas of Focus: - Assessing the candidate's proficiency in key Big Data technologies, including Hadoop, Spark, Hive, HBase, and Kafka. - Evaluating the candidate's understanding and ability to work with data processing techniques, distributed computing, and Big Data architecture and design patterns. - The interview is solely focused on technical skill assessment. Ideal Skills: - Extensive experience with data processing, storage, and distributed computing. - In-depth understanding of Big Data best practices, architecture, and design patterns. Pl...

    $6 / hr Average bid
    3 bids

    I'm looking for an expert-level interviewer who can assess a candidate's technical skills in the Big Data domain. The ideal interviewer should have a deep understanding of various Big Data technologies and tools. Key Areas of Focus: - Assessing the candidate's proficiency in key Big Data technologies, including Hadoop, Spark, Hive, HBase, and Kafka. - Evaluating the candidate's understanding and ability to work with data processing techniques, distributed computing, and Big Data architecture and design patterns. - The interview is solely focused on technical skill assessment. Ideal Skills: - Extensive experience with data processing, storage, and distributed computing. - In-depth understanding of Big Data best practices, architecture, and design patterns. Pl...

    $9 / hr Average bid
    3 bids

    I'm looking for an expert-level interviewer who can assess a candidate's technical skills in the Big Data domain. The ideal interviewer should have a deep understanding of various Big Data technologies and tools. Key Areas of Focus: - Assessing the candidate's proficiency in key Big Data technologies, including Hadoop, Spark, Hive, HBase, and Kafka. - Evaluating the candidate's understanding and ability to work with data processing techniques, distributed computing, and Big Data architecture and design patterns. - The interview is solely focused on technical skill assessment. Ideal Skills: - Extensive experience with data processing, storage, and distributed computing. - In-depth understanding of Big Data best practices, architecture, and design patterns. Pl...

    $419 Average bid
    6 bids

    ...settings for high throughput and low latency. 5. Security: Setting up user authentication (e.g., LDAP, Kerberos, or NiFi user authentication). Implementing role-based access controls (RBAC). Configuring secure connections (HTTPS, secure processors, etc.). Additional Skills (Preferred): Familiarity with cloud platforms (AWS, Azure, GCP) for deploying NiFi. Knowledge of complementary tools like Kafka, Hadoop, or Spark. Proficiency in scripting (e.g., Python, Bash) for automation and data handling. Experience with data serialization formats like JSON, XML, Avro, or Parquet. Steps for Setting Up Apache NiFi for Managing and Distributing Data Feed: Here’s how a specialist would approach the task: 1. Planning and Design: Understand the data sources, volume, and frequency. Ident...

    $509 Average bid
    3 bids

    I'm in need of a seasoned data engineer who has extensive experience with Azure, AWS, and Databricks. The primary focus of this project is data pipeline development. Key Requirements: - Proficiency in Python, Spark, and SQL - In-depth knowledge of data engineering tools and platforms such as Databricks, Azure, and AWS - Experience in developing robust data pipelines - Experience with ETL processes for data extraction, transformation, and loading - Experience in data migration between databases and data warehouses - Expertise in SQL query optimization for performance improvements Ideal Skills: - Data analysis - Familiarity with Hadoop - Data cleaning and preprocessing expertise Please provide your credentials and examples of similar projects you've co...

    $84 / hr Average bid
    14 bids

    I'm in need of a seasoned data engineer who has extensive experience with Azure, AWS, and Databricks. The primary focus of this project is data pipeline development. Key Requirements: - Proficiency in Python, Spark, and SQL - In-depth knowledge of data engineering tools and platforms such as Databricks, Azure, and AWS - Experience in developing robust data pipelines Ideal Skills: - Data analysis - Familiarity with Hadoop - Data cleaning and preprocessing expertise Please provide your credentials and examples of similar projects you've co...

    $29 / hr Average bid
    14 bids

    To build a pipeline where we obtain data from a table on a Hadoop server and run quality checks before writing that data to a Postgres table. After that, we need to filter the Postgres data, run an upsert command, and write that particular data into an S3 bucket. The data pipeline involves multiple data sources and should include data validation to ensure accuracy and consistency. Ensure data quality checks such as checking row counts, comparing the data before and after, and collecting the newly inserted data before updating the Postgres table. Please use Apache Airflow for orchestration. The pipeline should run daily and include advanced data validation such as integrity checks and statistical analysis.

    $43 Average bid
    3 bids
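    The validation step this posting asks for (row counts, before/after comparison, collecting newly inserted rows) can be sketched in plain Python. The function and field names below are illustrative assumptions, with Python lists standing in for the Hadoop source table and the Postgres target table:

    ```python
    # A minimal, illustrative sketch of the quality checks described in the
    # posting. In the real pipeline these would run inside an Airflow task,
    # with the rows fetched from Hadoop and Postgres; names here are assumed.

    def quality_check(source_rows, target_rows, key="id"):
        """Compare row counts and collect rows present in the source but not
        yet in the target -- i.e. the rows a daily upsert would insert."""
        target_keys = {row[key] for row in target_rows}
        return {
            "source_count": len(source_rows),
            "target_count": len(target_rows),
            "new_rows": [row for row in source_rows if row[key] not in target_keys],
        }

    report = quality_check([{"id": 1}, {"id": 2}, {"id": 3}], [{"id": 1}, {"id": 2}])
    print(report)  # counts plus the single new row with id 3
    ```

    A real implementation would also persist the report so that the before/after comparison survives across daily runs.
    
    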

    I'm seeking a seasoned interviewer with a strong focus on Big Data technologies. The purpose of this role is to assess candidates for mid-level Big Data Developer positions. Your primary tasks will include: - Conducting interviews with potential candidates - Evaluating their skills in Hadoop, Hive, Spark, Scala, and Cloud Technologies - Providing detailed feedback on each candidate Urgency is key, as I need someone who can commence interviews immediately. Ideal candidates for this role should possess: - Extensive experience in Big Data technologies - A strong background in interviewing for tech roles - Ability to assess mid-level candidates If you meet these criteria and are interested in a consistent, long-term collaboration, please get in touch.

    $6 / hr Average bid
    4 bids

    ...optimize servers running CentOS, Ubuntu, and Debian. -Work with distributed systems, including HBase, Hadoop, and Storm. -Configure and maintain a high-availability ArangoDB cluster with redundancy features. -Conduct comprehensive benchmarking and high-availability testing to evaluate system performance and scalability under various conditions. -Set up and integrate monitoring interfaces such as Prometheus to ensure system health and performance monitoring. -Independently review and test the effectiveness of DevOps work and provide constructive feedback. Required Skills & Expertise: -Strong knowledge of server configuration for CentOS, Ubuntu, and Debian. -Hands-on experience with HBase, Hadoop, and Storm. -Proficiency in configuring and managing high-availability Ar...

    $54 / hr Average bid
    33 bids

    I am encountering logic errors in my Clojure code specifically related to the implementation of recursive functions. I need a skilled developer with expertise in Clojure and debugging, who can help me identify and fix these issues. Ideal skills and experience: - Proficiency in Clojure - Extensive debugging experience - Deep understanding of recursive functions - Ability to identify and rectify logic errors

    $188 Average bid
    4 bids

    I'm seeking help with a programming assignment, specifically focused on Clojure and Convex Lisp. It's crucial that the freelancer has substantial experience and expertise with these languages.

    $852 Average bid
    Urgent
    21 bids

    ...need a highly skilled Hadoop Stack Installation Specialist to set up and configure a 5-node Hadoop cluster using Apache Ambari on Ubuntu 22.04. The successful candidate will have hands-on experience with Hadoop ecosystem tools, including Apache Hive, Apache Phoenix, and Apache Ranger. The project will focus on: - Data Storage and Management: The primary purpose of this Hadoop cluster will be managing our data. - Tool Integration: Ensuring seamless integration and configuration of Apache Hive, Apache Phoenix, and Apache Ranger across the cluster. I do not need any data migration for this project. Ideal skills for the job include deep familiarity with the Hadoop ecosystem, exceptional knowledge of Ubuntu 22.04, and proven experience in setting up and...

    $422 Average bid
    4 bids

    I need an expert to help troubleshoot and optimize my Spark Streaming setup. The goal is to have Spark continuously monitor data from Kafka, perform minor transformations on each message, and save the processed data to a Hadoop Hive table. Key Tasks: - Continuously monitoring and processing data from Kafka using Spark. - Storing the results into a Hadoop Hive table. Skills in data engineering, specifically with Spark, Kafka, and Hive, are highly desirable. The ideal freelancer will have a strong background in troubleshooting and optimizing data pipelines, ensuring data integrity and efficiency in processing.

    $109 Average bid
    9 bids
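    The "minor transformations on each message" this posting mentions would normally live inside the Spark job itself; as a standalone sketch (the message schema and the enrichment are assumptions, not from the posting), the per-message step might look like:

    ```python
    import json

    # Hypothetical per-message transformation of the kind a Spark Streaming
    # job might apply between reading from Kafka and writing to a Hive table.
    # The "value" field and the derived column below are assumptions.

    def transform(raw: bytes) -> dict:
        record = json.loads(raw)
        record["value_squared"] = record["value"] ** 2
        return record

    messages = [b'{"value": 2}', b'{"value": 3}']
    rows = [transform(m) for m in messages]
    print(rows)
    ```

    In the actual pipeline the same function would be applied per micro-batch, with the resulting rows appended to the Hive table.
    
    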

    We are looking for a senior data and AI Specialist, to assist us in building a statistical model in python for clustering, which will allow the generation of synthesized data...foundation in statistics.  * Knowledge in data visualization tools. * Knowledge in the application of Machine Learning Models such as clustering, classification, regression and reinforcement learning. * Basic knowledge in structured and unstructured database management languages. * Proficiency in Deep Learning frameworks such as TensorFlow or PyTorch. * Experience with Big Data frameworks like Hadoop or Spark. * Strong skills in data preprocessing and cleansing methods. * Understanding of Natural Language Processing (NLP) techniques. Duration: 6 weeks, starting from the time of hire. 4 hours per day, a...

    $49 / hr Average bid
    97 bids

    ...future reference. Experience: 3-5 years of hands-on experience as a Data Engineer or in a similar role. Proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or similar. Understanding of data warehousing principles and best practices. Familiarity with data pipeline orchestration tools like Airflow or similar. Experience working with big data technologies such as Hadoop, Spark, or similar frameworks. Solid Software Engineering Skills Familiarity with cloud platforms (AWS, Azure, GCP). Experience with Databricks is a plus but not required. Skills: Strong programming skills in Python. Excellent problem-solving and analytical skills with a focus on data. Effective communication and collaboration skills Empathetic and able to understand client’s...

    $54873 Average bid
    Featured Urgent NDA
    15 bids

    I'm looking for a writer to produce the following articles: === GEEK/TECH ENTHUSIAST - Best tech products (see website) === BASIC - Basic introduction to Python, with exercises (see website) - Basic SQL and sample queries (see website) === INTERMEDIATE - Big Data fundamentals - Big Data: Hadoop, Apache Hive, Spark, etc. - AI This is perfect for freelance technical programmers or advanced students. Thanks,

    $60 Average bid
    15 bids

    This position is remote work • The candidates should have prior experience with Hadoop administration, Cloudera upgrades and administration, Spark, NiFi, Splunk • Implementing, managing and administering the overall Spark logs to the Splunk dashboard via NiFi • If working with the open-source Apache distribution, Hadoop admins have to manually set up all the configurations: core-site, hdfs-site, yarn-site and mapred-site. • However, when working with a Hadoop distribution like Cloudera, the configuration files are set up on start-up and the Hadoop admin need not configure them manually. • Ensure that the Splunk dashboard is up and logging the Spark job logs. • Monitoring the cluster connectivity an...

    $32 / hr Average bid
    2 bids

    I'm looking for a data expert who can help me with reporting and visualization from SQL databases. Key Responsibilities: - Utilize SQL databases for data extraction and analysis - Create insightful graphs and charts using Excel and Hadoop - Transform complex data into easy-to-understand visual representations Ideal Skills: - Proficiency in SQL and Excel - Experience with Hadoop - Strong skills in data visualization and creating graphs and charts - Ability to simplify complex data for clear understanding Your task will be to uncover trends, patterns and insights from our data, and present these in a visually appealing and easy to understand format. Your expertise in data analysis will be crucial to our project's success.

    $38 Average bid
    13 bids

    More details: Is this project for business or personal use? For an existing business What information should successful freelancers include in their application? Past work How soon do you need your project completed? ASAP Skills: SQL, HBase, HDFS

    $19 / hr Average bid
    12 bids

    Need a Data Engineer with Hadoop, GCP, SQL, Kafka, Python and Scala experience. It's a part-time opportunity: 2 hrs per day; payment will be 25-30k per month accordingly. 1. The project is to migrate out of Talend & Teradata/Hadoop to GCP

    $455 Average bid
    11 bids

    We are looking for a skilled backend/full-stack Clojure/Script expert to assist us in developing a credit and cashback wallet system. Some frontend work is also part of the project, so experience with ClojureScript would be a strong advantage. This project is expected to take 2-3 months to complete. If the collaboration goes smoothly, there may be opportunities to work on additional projects outlined in our roadmap. NOTE: This project is strictly for Clojure backend developers. Please do not apply if you don't have any experience with Clojure language.

    $3832 Average bid
    50 bids

    Currently seeking skilled IT Trainers for a vast range of disciplines. This includes, but isn't limited to: - Software development - Networking - Cybersecurity - C, C++ - Data Structures - Python - AI - Data Sciences - Hadoop Big Data etc. Requirements: - Expert knowledge and experience in the selected discipline. I am looking to hire a total of 15 instructors, each specialising in the above-mentioned disciplines. The training courses are short term, with a duration of 1 month. Please state clearly in your proposal your area of expert knowledge and prior experience in training. Reach out if you think you can handle this task, have a passion for IT, and love sharing your knowledge!

    $696 Average bid
    8 bids

    I require an expert in Hadoop to carry out tasks ranging from installation and configuration to monitoring and troubleshooting. The ideal candidate should also be adept at data backup and recovery. Key Responsibilities: - Installation and configuration of Hadoop - Monitoring and troubleshooting of Hadoop cluster - Data backup and recovery Security Requirements: - User authentication and authorization - Data encryption - Network security Data Size: The data to be processed is on the smaller side, totalling up to 1 TB. Ideal Experience and Skills: - Proven experience as a Hadoop Administrator - Expertise in installation, configuration, monitoring, troubleshooting, and data backup and recovery - Strong understanding of user authentication, data encryption, and...

    $33 / hr Average bid
    11 bids

    ...mandatory trainings completed Knowledge Examples: - Programming languages – Java/ Python/ R / Scala - Web Services - Redshift S3 Spark DigitalOcean etc. - Statistical and data mining techniques: GLM/Regression Random Forest Boosting Trees text mining social network analysis etc. - Google Analytics Site Catalyst Coremetrics Adwords Crimson Hexagon Facebook Insights etc. - Computing Tools - Map/Reduce Hadoop Hive Spark Gurobi MySQL etc. - Database languages such as SQL NoSQL - Analytical tools and languages such as SAS & Mahout. - Practical experience with ETL data processing etc. - Proficiency in MATLAB. - Data visualization software such as Tableau or Qlik. - Proficient in mathematics and calculations. - Utilization of spreadsheet tools such as Microsoft Excel or Googl...

    $29 - $44 / hr
    Sealed NDA
    0 bids

    ...innovative environments. 7-8 years of professional experience in data engineering. Strong expertise in Microsoft Azure data services, particularly Azure Data Factory (ADF) and PySpark. Experience with data pipeline design, development, and maintenance. Proficiency in SQL and experience with NoSQL databases. Knowledge of ETL/ELT processes and tools. Familiarity with big data technologies such as Hadoop and Spark. Experience with Azure Databricks, Data Lake, SQL Database, and other Azure data services. Production experience delivering CI/CD pipelines across Azure and vendor products. Knowledge of architectural patterns across code and infrastructure development. Fully conversant with big-data processing approaches and “schema-on-read” methodologies, with a preference fo...

    $14838 Average bid
    15 bids

    I'm seeking a skilled professional who can assist me in setting up a High-Availability (HA) Hadoop cluster on AWS virtual machines. This cluster is primarily intended for big data storage, with the following specifics: - Cluster Size: The ideal setup will involve 6-10 nodes to accommodate substantial data processing and storage. - Additional Services: Beyond the base Hadoop setup, I require expertise in configuring HDFS High Availability and YARN Resource Management. Your role would be in ensuring these services are integrated seamlessly into the cluster for optimal performance and reliability. Your experience in Hadoop cluster administration, AWS setup, and knowledge of additional services like HDFS and YARN will be critical in delivering a successful project....

    $230 Average bid
    4 bids

    ...**Reinforcement Learning**: Training models through rewards and penalties based on actions. 5. **Data Visualization**: - Creating graphical representations of data to communicate findings effectively. - Tools: Tableau, Power BI, Python (Plotly, Bokeh), and R (Shiny). 6. **Big Data Technologies**: - Managing and processing large-scale data using distributed computing. - Technologies: Hadoop, Spark, Hive, and NoSQL databases (MongoDB, Cassandra). 7. **Data Engineering**: - Building and maintaining the infrastructure and pipelines for data collection, storage, and processing. - Tools: SQL, ETL (Extract, Transform, Load) processes, Apache Kafka, and Airflow. ### Applications of Data Science 1. **Business Intelligence and Analytics**: - Improving decis...

    $16 / hr Average bid
    23 bids

    ...**Reinforcement Learning**: Training models through rewards and penalties based on actions. 5. **Data Visualization**: - Creating graphical representations of data to communicate findings effectively. - Tools: Tableau, Power BI, Python (Plotly, Bokeh), and R (Shiny). 6. **Big Data Technologies**: - Managing and processing large-scale data using distributed computing. - Technologies: Hadoop, Spark, Hive, and NoSQL databases (MongoDB, Cassandra). 7. **Data Engineering**: - Building and maintaining the infrastructure and pipelines for data collection, storage, and processing. - Tools: SQL, ETL (Extract, Transform, Load) processes, Apache Kafka, and Airflow. ### Applications of Data Science 1. **Business Intelligence and Analytics**: - Improving decis...

    $19 / hr Average bid
    13 bids

    ...who can help me with a two-fold project. The first part involves using a Virtual machine to perform Hadoop MapReduce and WordCount analysis. The second part is more focused on data collection, analytics and visualization using Databricks Notebook. Key Tasks: 1. **Hadoop MapReduce & WordCount Analysis:** - Utilize a Virtual machine to perform Hadoop MapReduce - Implement a WordCount analysis on the data 2. **Data Collection & Analytics:** - The data to be collected is unstructured in nature - Use Databricks Notebook for analytics 3. **Data Visualization:** - Create data visualizations using the Databricks platform Ideal skills for this job include: - Proficiency in using Hadoop ecosystem for MapReduce tasks - Strong experience with ...

    $207 Average bid
    10 bids

    ...looking for a skilled developer with experience in Hadoop to help me create a real-time data analysis application. Requirements: - The primary focus of this project is data analysis. You should have a strong background in analyzing data and be familiar with common data analysis techniques, tools and algorithms. - The system will need to integrate with various public datasets. Experience with dealing with such data sources will be essential. - The system should support real-time data analysis. So, expertise in real-time data processing and analysis is a must. Ideal Skills for the job: - Strong background in data analysis - Experience working with public datasets - Proficient in real-time data processing and analysis - Familiarity with Hadoop and its ecosystem, such as H...

    $60 Average bid
    12 bids

    Senior Python (Full Stack) Engineer Timezone: 1:30 PM to 10 PM IST What we expect: Strong knowledge of Python Experience with one of the backend frameworks (Flask/Django/FastAPI/Aiohttp) Experience with one of the modern frameworks (React, Angular, Vue.js) Experience with AWS Cloud database related experience (NoSQL, relational DBs) Good understanding of application architecture principles Good written and verbal skills in English (upper-intermediate or higher) Nice to have: Knowledge of and experience in working with Kubernetes Experience with Data Engineering / ETL Pipelines (Apache Airflow, Pandas, PySpark, Hadoop, etc.) Experience with CI/CD systems Experience with Linux/Unix Experience in working with cloud automation and IaC provisioning tools (Terraform, CloudFormation, et...

    $1188 Average bid
    24 bids

    I have a substantial dataset of 16,000 lines in a CSV file that requires in-depth analysis, and I'm looking for a skilled professional in Hadoop, MapReduce, and Java to take on this project. Specifically, I need: - A comprehensive and detailed analysis of the data using Hadoop and MapReduce - Your expertise in Java to create the necessary codes for this task - Answers to specific questions derived from the dataset - The completion of this project as soon as possible Please provide me with: - Your experience in big data analysis with Hadoop and MapReduce - Your proficiency in Java - Any previous work or examples that demonstrate your skills in this area Experience in statistical analysis, particularly in the context of big data, would be highly beneficial. The...

    $48 Average bid
    3 bids
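    The posting above asks for MapReduce analysis in Java against the Hadoop API; as a language-neutral sketch of the same map/shuffle/reduce phases (written in Python here purely for brevity, with made-up input lines), the WordCount logic is:

    ```python
    from collections import defaultdict
    from itertools import chain

    # A local, pure-Python sketch of the MapReduce WordCount pattern.
    # A real Hadoop job would implement these phases as a Mapper and a
    # Reducer class; the phases themselves are the same.

    def map_phase(line):
        # emit a (word, 1) pair for every token in the line
        return [(word.lower(), 1) for word in line.split()]

    def shuffle(pairs):
        # group all counts by word, as the framework does between phases
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # sum the grouped counts per word
        return {key: sum(values) for key, values in groups.items()}

    lines = ["big data big ideas", "data pipelines"]
    counts = reduce_phase(shuffle(chain.from_iterable(map_phase(l) for l in lines)))
    print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'pipelines': 1}
    ```

    For a 16,000-line CSV the same structure applies; only the mapper changes, parsing CSV fields instead of splitting on whitespace.
    
    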

    I have an Ubuntu 22.04 server, currently with Spark running perfectly. I want, however, to integrate Spark with a Hive warehouse. In this project, you're required to install Hadoop and Hive and, after that, update the Spark settings so it can connect to the Hive warehouse properly. You'll be provided with remote access via TeamViewer or AnyDesk.

    $289 Average bid
    12 bids

    Looking for an Android developer: Java, Kotlin, JSP, Servlets, MQ Series, Hibernate, Python, Microservices, Swing, WebSphere Portals, HTML5, Bootstrap, JavaScript, Big Data, Linux. Frameworks: Android mobile app development, Spring Boot, Struts, Hadoop. This will be ongoing.

    $89 / hr Average bid
    55 bids

    ...ideal freelancer for this project should have proven experience in time series forecasting, Python programming, and developing interactive dashboards. A solid grasp of data visualization principles would be a plus. The output should be in the form of a Python file with the 4 models above and related dynamic visualizations. Text preprocessing is required and can be done in any necessary technologies (Hadoop/Spark, NoSQL/SQL databases) - screenshots of the bash code are required....

    $48 Average bid
    13 bids

    I'm seeking intermediate level big data engineers who can tackle essential tasks within the d...capable of combining data from various sources to create a cohesive dataset. - Develop data visualization: The engineers will need to create meaningful and understandable visual representations of the analyzed data. The necessary skills for this role are: - Proficient in Python: The candidate should be well-versed in Python programming language to work with our system. - Knowledge of Hadoop: Experience with Hadoop is a must for this position to handle the data integration and data storage tasks. - Knowledge of Kafka Prior experience in big data engineering and a good understanding of data engineering principles will be vital. Please note that this project suits an intermedia...

    $22 / hr Average bid
    24 bids

    ...expert in the Clojure language to assist with several aspects of a project I'm undertaking. My primary goal is to develop a scalable system from the ground up. Here are the areas I need help with: 1. **Syntax and Basic Usage**: I need guidance on how Clojure syntax operates, as well as its basic use. 2. **Functional Programming Concepts**: I'm looking for a deeper understanding of Clojure's functional programming concepts. 3. **Clojure Libraries and Frameworks**: Knowledge about various Clojure libraries and frameworks to utilize in my project is required. The project will be developed in a local development environment. Therefore, experience working within this type of environment and setting it up is essential. Ideal freelancers should...

    $208 Average bid
    20 bids

    Seeking a skilled developer to optimize and enhance the architecture of our existing web scraper application. The application is currently built using NestJS and PostgreDB, and we are looking to scale it up and leverage cloud functionality for improved p...error handling, rate limiting, and IP rotation. - Strong problem-solving skills and ability to optimize application performance. - Excellent communication and collaboration skills. Nice to have: - Experience with PostgreDB and database optimization techniques. - Knowledge of additional programming languages like Python or Java. - Familiarity with data processing frameworks like Apache Spark or Hadoop. - Experience with data visualization and reporting tools. Potential for ongoing collaboration based on performance and future req...

    $797 Average bid
    76 bids

    I need someone to create the DAG and trigger it. I am working on a migration project from Hadoop to BigQuery; more details will be shared via chat.

    $25 Average bid
    5 bids

    I am seeking a skilled data engineering trainer; speed in using Hadoop, Apache Spark, and SQL is paramount. Your expertise will guide me through nuanced uses of these technologies, with a particular focus on data migration. Key Requirements: - Proficiency in Hadoop, Apache Spark, and SQL - More than 10 hours of availability weekly - Proven experience in real-world data migration projects Ideal candidates should have a flair for explaining complex concepts in simple language. This engagement will focus on moving data from diverse sources into a data warehouse, thereby making it readily available for business intelligence functions.

    $12 / hr Average bid
    6 bids

    I'm in need of a proficient professional versed in Java Hadoop cluster. Please place your bids immediately. $20 for this project

    $6 / hr Average bid
    2 bids