Clojure Hadoop jobs
...efficiency. Requirements: ✅ Data collection from real or synthetic datasets (Kaggle, HuggingFace, or generated data). ✅ Big Data technologies (Hadoop, Apache Spark) for real-time processing. ✅ Machine Learning models (e.g., Scikit-learn, TensorFlow) for failure prediction. ✅ Dashboard for visualization (e.g., Streamlit, Grafana). ✅ Clean and well-documented code (preferably in Python). Deliverables: - Source code (GitHub or ZIP file). - Short guide on how to run the system. - Results with key metrics (accuracy, downtime reduction, etc.). - Visualization of predictions (charts, tables). Preferred Skills: ✔ Experience with Big Data frameworks (Hadoop, Spark). ✔ Strong knowledge of ML models for predictive analytics. ✔ Familiarity with data visualization tools. ✔ Previous ...
...a related field. Proven experience developing ETL pipelines and data processing workflows. Hands-on experience with PySpark, Pandas, and SQL. Experience working with big data technologies such as Apache Spark, Hadoop, or Kafka (preferred). Familiarity with cloud data solutions (AWS, GCP, or Azure). Required Skills: Programming: Strong proficiency in Python (PySpark, Pandas) or Scala. Data Modeling & Storage: Experience with relational databases (PostgreSQL, MySQL, SQL Server) and NoSQL databases (MongoDB, Cassandra). Big Data & Distributed Computing: Knowledge of Apache Spark, Hadoop, or Kafka. ETL & Data Integration: Ability to develop efficient ETL processes and manage data pipelines. Cloud Computing: Experience with AWS (S3, Redshift, Glue), GCP (BigQuery),...
I'm seeking a dedicated Clojure Developer to help maintain and enhance our health application. The primary responsibilities will involve development, with a focus on improving the user interface, backend functionalities, and integration with third-party services. Key Responsibilities: - Collaborate with the team to enhance the health application - Improve the user interface, backend functionalities, and third-party integrations - Utilize Clojure to implement necessary updates and modifications Ideal Candidate: - 3+ years of proven experience with React Native development - Strong proficiency in JavaScript and the React ecosystem - Expertise with Redux for state management - Familiarity with native APIs and third-party integrations - Knowledge of performance optimizatio...
I am seeking a seasoned Data Engineer with over 4 years of experience. The ideal candidate should possess strong skills in ETL processes, data warehousing, and Big Data technologies. Proficiency in tools like Apache Spark, Hadoop, and SQL is a must. Key Requirements: - Extensive experience in ETL processes, data warehousing, and Big Data technologies - Proficient in Apache Spark, Hadoop, and SQL - Seeking full-time employment Skills and Experience: - 4+ years in Data Engineering - Mastery of relevant tools and technologies - Ready for a full-time commitment If you're a dedicated professional with the required skills and experience, I look forward to your application.
Build & Secure a Scalable 5-Node Hadoop Cluster on AWS/Azure Job Description: We are looking for an experienced freelancer to design, deploy, and secure a scalable 5-node Hadoop cluster on either AWS or Azure. This project requires expertise in Hadoop architecture, cloud infrastructure, and implementing best practices for performance and security. Key Responsibilities: Cluster Deployment: Set up a Hadoop cluster with 1 master node and 4 worker nodes. Install and configure Hadoop 3.x, including HDFS, YARN, and MapReduce. Integrate cloud storage (e.g., S3 or Azure Blob Storage) with the cluster. Scalability & Optimization: Configure the cluster to scale seamlessly with auto-scaling for worker nodes. Optimize Hadoop performance for data ...
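The cloud-storage integration step in the posting above (S3 or Azure Blob) usually comes down to Hadoop configuration. A minimal sketch for the AWS/S3A case, assuming Hadoop 3.x and placeholder host names and credentials (in production, IAM instance roles are preferred over keys in config files):

```
<!-- core-site.xml (sketch; "master" and the credentials are placeholders) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <!-- S3A connector credentials, so jobs can read s3a:// paths -->
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```

With this in place, MapReduce and Spark jobs on the cluster can address bucket data directly via `s3a://bucket/path` alongside HDFS.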
Clojure language 7+ Years Remote 3 to 6 Months As per the market standard If you have a strong background in Any Cloud (AWS, Azure, GCP), JAVA, JavaScript/TypeScript or Python as well, that's a major plus. JOB DESCRIPTION and RESPONSIBILITIES Develop, maintain, and improve Clojure-based applications in a team environment. Engage in the entire development lifecycle, including concept, design, testing, and deployment. Write clean, efficient, and testable code. Conduct code reviews and collaborate with peers to ensure high-quality software standards. Assist in diagnosing and resolving application issues. Stay up-to-date with emerging trends in Clojure development and related technologies. Proactively identify and communicate improvements that can be implemented in...
I'm seeking a Data Analyst proficient in Hadoop, Hive, and Flume to analyze Movielens data. The primary objective is to uncover trends and patterns within the dataset. Key Requirements: - Experience with Big Data tools, specifically Hadoop, Hive, and Flume - Strong data analysis skills - Ability to identify trends and patterns from complex data sets - Previous experience with Movielens data is a plus The ideal freelancer for this project will have a solid understanding of the Movielens dataset, with the ability to use Big Data tools to generate significant insights. Please provide examples of similar projects you've completed in your proposal.
...Senior Data Engineer to join our team. The ideal candidate should have a strong background in data engineering, particularly with the following skills and experiences: - 5+ years in data engineering or related roles. - Master’s degree in Computer Science, Engineering, or a related field is preferred. - Proficiency in Apache Airflow for workflow scheduling and management. - Strong experience with Hadoop ecosystems, including HDFS, MapReduce, and Hive. - Expertise in Apache Spark/Scala for large-scale data processing. - Proficient in Python. - Advanced SQL skills for data analysis and reporting. - Experience with AWS cloud platform is a must. The selected candidate will be responsible for developing data pipelines, managing data warehousing and performing data analysis and...
I'm looking for a Big Data expert well-versed in Hadoop, Hive, and Flume to assist with some fundamental-level problems. Your tasks will include: - Data ingestion, storage, and processing - Solving queries over a dataset for analysis - Performing basic-level data analysis Skills and experience that would be ideal for this job include: - Proficiency in Hadoop, Hive, and Flume - Experience with data ingestion, storage, and processing - Strong problem-solving skills for basic data analysis - Ability to troubleshoot errors and optimize performance
I'm looking for an expert-level interviewer who can assess a candidate's technical skills in the Big Data domain. The ideal interviewer should have a deep understanding of various Big Data technologies and tools. Key Areas of Focus: - Assessing the candidate's proficiency in key Big Data technologies, including Hadoop, Spark, Hive, HBase, and Kafka. - Evaluating the candidate's understanding and ability to work with data processing techniques, distributed computing, and Big Data architecture and design patterns. - The interview is solely focused on technical skill assessment. Ideal Skills: - Extensive experience with data processing, storage, and distributed computing. - In-depth understanding of Big Data best practices, architecture, and design patterns. Pl...
...settings for high throughput and low latency. 5. Security: Setting up user authentication (e.g., LDAP, Kerberos, or NiFi user authentication). Implementing role-based access controls (RBAC). Configuring secure connections (HTTPS, secure processors, etc.). Additional Skills (Preferred): Familiarity with cloud platforms (AWS, Azure, GCP) for deploying NiFi. Knowledge of complementary tools like Kafka, Hadoop, or Spark. Proficiency in scripting (e.g., Python, Bash) for automation and data handling. Experience with data serialization formats like JSON, XML, Avro, or Parquet. Steps for Setting Up Apache NiFi for Managing and Distributing Data Feed: Here’s how a specialist would approach the task: 1. Planning and Design: Understand the data sources, volume, and frequency. Ident...
I'm in need of a seasoned data engineer who has extensive experience with Azure, AWS, and Databricks. The primary focus of this project is data pipeline development. Key Requirements: - Proficiency in Python, Spark, and SQL - In-depth knowledge of data engineering tools and platforms such as Databricks, Azure, and AWS - Experience in developing robust data pipelines - Experience with ETL processes for data extraction, transformation, and loading - Experience in data migration between databases and data warehouses - Expertise in SQL query optimization for performance improvements Ideal Skills: - Data analysis - Familiarity with Hadoop - Data cleaning and preprocessing expertise Please provide your credentials and examples of similar projects you've co...
To build a pipeline where we obtain data from a table on a Hadoop server and run quality checks before loading that data into a Postgres table. After that, we need to filter the Postgres data, run an upsert command, and write that particular data into an S3 bucket. The data pipeline involves multiple data sources. The pipeline should include data validation to ensure accuracy and consistency. Ensure data quality checks such as verifying row counts, comparing the data before and after, and collecting the newly inserted rows before updating the data in the Postgres table. Please use Apache Airflow for orchestration. The pipeline should run daily. Ensure the pipeline includes advanced data validation like integrity checks and statistical analysis.
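The quality checks this posting asks for (row counts, before/after comparison, collecting newly inserted rows) can be sketched in plain Python, independent of the Airflow operators that would wrap them; the table snapshots and the `id` key here are hypothetical:

```python
def quality_check(before_rows, after_rows, key="id"):
    """Compare snapshots of a table before and after a load.

    Returns a dict with both row counts, the newly inserted rows,
    and a pass/fail flag (a load must not lose rows).
    """
    before_keys = {row[key] for row in before_rows}
    new_rows = [row for row in after_rows if row[key] not in before_keys]
    return {
        "count_before": len(before_rows),
        "count_after": len(after_rows),
        "new_rows": new_rows,
        "passed": len(after_rows) >= len(before_rows),
    }

# In an Airflow DAG this logic would run inside a task between the
# Hadoop extract and the Postgres upsert; here we call it directly.
before = [{"id": 1}, {"id": 2}]
after = [{"id": 1}, {"id": 2}, {"id": 3}]
report = quality_check(before, after)
print(report["passed"], [r["id"] for r in report["new_rows"]])  # True [3]
```

The statistical checks the posting mentions (distribution drift, null rates) would slot in as additional fields of the same report dict.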
...Google Cloud (SageMaker, Vertex AI, etc.) API integration: REST, gRPC Big Data tools: Hadoop, Spark, Kafka Experience: Several years of experience developing and implementing machine-learning algorithms. Knowledge of Natural Language Processing (NLP), Computer Vision, or Reinforcement Learning is a plus. Experience optimizing and scaling models for production use. Good knowledge of data preprocessing, feature engineering, and model validation. Soft skills: Ability to communicate complex AI approaches clearly. Agile mindset (Scrum, Kanban). Project language: German Summary of the skills sought: Python, TensorFlow, PyTorch, scikit-learn, SQL, MongoDB, Hadoop, Spark, REST, AWS/Azure/Google ...
I'm seeking a seasoned interviewer with a strong focus on Big Data technologies. The purpose of this role is to assess candidates for mid-level Big Data Developer positions. Your primary tasks will include: - Conducting interviews with potential candidates - Evaluating their skills in Hadoop, Hive, Spark, Scala, and Cloud Technologies - Providing detailed feedback on each candidate Urgency is key, as I need someone who can commence interviews immediately. Ideal candidates for this role should possess: - Extensive experience in Big Data technologies - A strong background in interviewing for tech roles - Ability to assess mid-level candidates If you meet these criteria and are interested in a consistent, long-term collaboration, please get in touch.
...optimize servers running CentOS, Ubuntu, and Debian. -Work with distributed systems, including HBase, Hadoop, and Storm. -Configure and maintain a high-availability ArangoDB cluster with redundancy features. -Conduct comprehensive benchmarking and high-availability testing to evaluate system performance and scalability under various conditions. -Set up and integrate monitoring interfaces such as Prometheus to ensure system health and performance monitoring. -Independently review and test the effectiveness of DevOps work and provide constructive feedback. Required Skills & Expertise: -Strong knowledge of server configuration for CentOS, Ubuntu, and Debian. -Hands-on experience with HBase, Hadoop, and Storm. -Proficiency in configuring and managing high-availability Ar...
I am encountering logic errors in my Clojure code specifically related to the implementation of recursive functions. I need a skilled developer with expertise in Clojure and debugging, who can help me identify and fix these issues. Ideal skills and experience: - Proficiency in Clojure - Extensive debugging experience - Deep understanding of recursive functions - Ability to identify and rectify logic errors
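The posting concerns Clojure, but the class of logic error it describes is language-agnostic. A minimal illustration (sketched in Python for brevity) of one common recursive bug, an accumulator that is updated locally but never passed to the recursive call, and its fix, which mirrors Clojure's `loop`/`recur` accumulator pattern:

```python
def sum_list_buggy(xs, acc=0):
    # Bug: acc is updated locally, but the recursive call omits it,
    # so every call restarts from the default accumulator of 0.
    if not xs:
        return acc
    acc += xs[0]
    return sum_list_buggy(xs[1:])  # forgot to pass acc

def sum_list_fixed(xs, acc=0):
    # Fix: thread the accumulator through every recursive call.
    if not xs:
        return acc
    return sum_list_fixed(xs[1:], acc + xs[0])

print(sum_list_buggy([1, 2, 3]))  # 0 -- the running total is lost
print(sum_list_fixed([1, 2, 3]))  # 6
```

Diagnosing such errors usually means tracing one small input by hand and checking that every value computed before the recursive call is actually carried into it.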
I'm seeking help with a programming assignment, specifically focused on Clojure and Convex Lisp. It's crucial that the freelancer has substantial experience and expertise with these languages.
...need a highly skilled Hadoop Stack Installation Specialist to set up and configure a 5-node Hadoop cluster using Apache Ambari on Ubuntu 22.04. The successful candidate will have hands-on experience with Hadoop ecosystem tools, including Apache Hive, Apache Phoenix, and Apache Ranger. The project will focus on: - Data Storage and Management: The primary purpose of this Hadoop cluster will be managing our data. - Tool Integration: Ensuring seamless integration and configuration of Apache Hive, Apache Phoenix, and Apache Ranger across the cluster. I do not need any data migration for this project. Ideal skills for the job include deep familiarity with the Hadoop ecosystem, exceptional knowledge of Ubuntu 22.04, and proven experience in setting up and...
I need an expert to help troubleshoot and optimize my Spark Streaming setup. The goal is to have Spark continuously monitor data from Kafka, perform minor transformations on each message, and save the processed data to a Hadoop Hive table. Key Tasks: - Continuously monitoring and processing data from Kafka using Spark. - Storing the results into a Hadoop Hive table. Skills in data engineering, specifically with Spark, Kafka, and Hive, are highly desirable. The ideal freelancer will have a strong background in troubleshooting and optimizing data pipelines, ensuring data integrity and efficiency in processing.
We are looking for a senior data and AI Specialist, to assist us in building a statistical model in python for clustering, which will allow the generation of synthesized data...foundation in statistics. * Knowledge in data visualization tools. * Knowledge in the application of Machine Learning Models such as clustering, classification, regression and reinforcement learning. * Basic knowledge in structured and unstructured database management languages. * Proficiency in Deep Learning frameworks such as TensorFlow or PyTorch. * Experience with Big Data frameworks like Hadoop or Spark. * Strong skills in data preprocessing and cleansing methods. * Understanding of Natural Language Processing (NLP) techniques. Duration: 6 weeks, starting from the time of hire. 4 hours per day, a...
...future reference. Experience: 3-5 years of hands-on experience as a Data Engineer or in a similar role. Proficiency in SQL and experience with relational databases such as MySQL, PostgreSQL, or similar. Understanding of data warehousing principles and best practices. Familiarity with data pipeline orchestration tools like Airflow or similar. Experience working with big data technologies such as Hadoop, Spark, or similar frameworks. Solid Software Engineering Skills Familiarity with cloud platforms (AWS, Azure, GCP). Experience with Databricks is a plus but not required. Skills: Strong programming skills in Python. Excellent problem-solving and analytical skills with a focus on data. Effective communication and collaboration skills Empathetic and able to understand client’s...
I'm looking for a writer to produce the following articles: === GEEK/TECH ENTHUSIAST - Best tech products (see web) === BASIC - Introduction to basic Python with exercises (see web) - Basic SQL and sample queries (see web) === INTERMEDIATE - Big Data fundamentals - Big Data: Hadoop, Apache Hive, Spark, etc. - AI This is perfect for freelance technical programmers or advanced students. Thanks,
This position is remote work. • The candidates should also have prior experience with Hadoop administration, Cloudera upgrades and administration, Spark, NiFi, and Splunk • Implementing, managing, and administering the shipping of Spark logs to a Splunk dashboard via NiFi • If working with the open-source Apache distribution, Hadoop admins have to manually set up all the configuration files: core-site.xml, hdfs-site.xml, yarn-site.xml, and mapred-site.xml. • However, when working with a Hadoop distribution like Cloudera, the configuration files are set up at start-up and the Hadoop admin need not configure them manually. • Ensure that the Splunk dashboard is up and logging the Spark job logs. • Monitoring the cluster connectivity an...
...the intelligent integration of business, daily life, security, taxation, and more. This project will be built on Java with a distributed microservice architecture to ensure high availability, scalability, and flexibility. Technical requirements: 1. Tech stack: Primary language: Java (including but not limited to Spring Boot, Spring Cloud, and similar frameworks). Distributed architecture: containerized deployment on Kubernetes or Docker, enabling loose coupling and efficient scheduling between services. Microservices: the system is split into independent services (user service, payment service, data analysis service, etc.), communicating and managed through an API Gateway. Databases: relational databases such as MySQL/PostgreSQL for structured data, combined with NoSQL stores such as Redis and MongoDB for high-concurrency access and caching. Message queues: Kafka/RabbitMQ to decouple system components and improve responsiveness and fault tolerance. Big data processing: Hadoop/Spark for storing and processing massive data volumes, combined with Elasticsearch for fast search and analytics. AI and machine learning: integration of deep learning frameworks such as TensorFlow/PyTorch for image recognition, natural language processing, and other AI capabilities. 2. Functional requirements: AI digital business cards: generate personalized digital business cards for people in the park via facial recognition and AI analysis, supporting quick identification and social interaction. Smart park: integrate IoT technology to monitor the park environment in real time (temperature, humidity, air quality), energy usage, and equipment status for intelligent management. Smart mall + cross-border e-commerce: build an online mall platform supporting product display, online transactions, and logistics tracking, with cross-border e-commerce features to broaden sales channels. AI media matrix: use AI algorithms to analyze user interests and push personalized content, while supporting the publishing and management of multimedia content within the park. Smart living: integrate smart-home control, online bill payment, community event booking, and other features to improve resi...
I'm looking for a data expert who can help me with reporting and visualization from SQL databases. Key Responsibilities: - Utilize SQL databases for data extraction and analysis - Create insightful graphs and charts using Excel and Hadoop - Transform complex data into easy-to-understand visual representations Ideal Skills: - Proficiency in SQL and Excel - Experience with Hadoop - Strong skills in data visualization and creating graphs and charts - Ability to simplify complex data for clear understanding Your task will be to uncover trends, patterns and insights from our data, and present these in a visually appealing and easy to understand format. Your expertise in data analysis will be crucial to our project's success.
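The extract-then-visualize flow described above can be sketched with SQLite as a stand-in for the production SQL database (the `sales` table and its columns are hypothetical); the aggregated result set is what would then be handed to Excel, or to Hive/Hadoop at larger scale, for charting:

```python
import sqlite3

# In-memory stand-in for the production SQL database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 120.0), ("south", 80.0), ("north", 30.0)],
)

# Extraction + aggregation: one row per region, ready for a bar chart.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 150.0), ('south', 80.0)]
```

Keeping the aggregation in SQL (rather than pulling raw rows into the spreadsheet) is usually the simplest way to make the visualization step fast and repeatable.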
More details: Is this project for business or personal use? For an existing business. What information should successful freelancers include in their application? Past work. How soon do you need your project completed? ASAP. Skills: SQL, HBase, HDFS
Need a Data Engineer with Hadoop, GCP, SQL, Kafka, Python, and Scala experience. It's a part-time opportunity of 2 hrs per day; payment will be 25-30k per month accordingly. 1. The project is to migrate out of Talend and Teradata/Hadoop to GCP
We are looking for a skilled backend/full-stack Clojure/Script expert to assist us in developing a credit and cashback wallet system. Some frontend work is also part of the project, so experience with ClojureScript would be a strong advantage. This project is expected to take 2-3 months to complete. If the collaboration goes smoothly, there may be opportunities to work on additional projects outlined in our roadmap. NOTE: This project is strictly for Clojure backend developers. Please do not apply if you don't have any experience with Clojure language.
Currently seeking skilled IT Trainers for a vast range of disciplines. This includes, but isn't limited to: - Software development - Networking - Cybersecurity - C, C++ - Data Structures - Python - AI - Data Sciences - Hadoop Big Data etc. Requirements: - Expert knowledge and experience in the selected discipline. I am looking to hire a total of 15 instructors, each specialising in the above-mentioned disciplines. The training courses are short term, with a duration of 1 month. Please state clearly in your proposal your area of expert knowledge and prior experience in training. Reach out if you think you can handle this task, have a passion for IT, and love sharing your knowledge!
I require an expert in Hadoop to carry out tasks ranging from installation and configuration to monitoring and troubleshooting. The ideal candidate should also be adept at data backup and recovery. Key Responsibilities: - Installation and configuration of Hadoop - Monitoring and troubleshooting of Hadoop cluster - Data backup and recovery Security Requirements: - User authentication and authorization - Data encryption - Network security Data Size: The data to be processed is on the smaller side, totalling up to 1 TB. Ideal Experience and Skills: - Proven experience as a Hadoop Administrator - Expertise in installation, configuration, monitoring, troubleshooting, and data backup and recovery - Strong understanding of user authentication, data encryption, and...
...mandatory trainings completed Knowledge Examples: - Programming languages – Java/ Python/ R / Scala - Web Services - Redshift S3 Spark DigitalOcean etc. - Statistical and data mining techniques: GLM/Regression Random Forest Boosting Trees text mining social network analysis etc. - Google Analytics Site Catalyst Coremetrics Adwords Crimson Hexagon Facebook Insights etc. - Computing Tools - Map/Reduce Hadoop Hive Spark Gurobi MySQL etc. - Database languages such as SQL NoSQL - Analytical tools and languages such as SAS & Mahout. - Practical experience with ETL data processing etc. - Proficiency in MATLAB. - Data visualization software such as Tableau or Qlik. - Proficient in mathematics and calculations. - Utilization of spreadsheet tools such as Microsoft Excel or Googl...
...innovative environments. 7-8 years of professional experience in data engineering. Strong expertise in Microsoft Azure data services, particularly Azure Data Factory (ADF) and PySpark. Experience with data pipeline design, development, and maintenance. Proficiency in SQL and experience with NoSQL databases. Knowledge of ETL/ELT processes and tools. Familiarity with big data technologies such as Hadoop and Spark. Experience with Azure Databricks, Data Lake, SQL Database, and other Azure data services. Production experience delivering CI/CD pipelines across Azure and vendor products. Knowledge of architectural patterns across code and infrastructure development. Fully conversant with big-data processing approaches and “schema-on-read” methodologies, with a preference fo...
I'm seeking a skilled professional who can assist me in setting up a High-Availability (HA) Hadoop cluster on AWS virtual machines. This cluster is primarily intended for big data storage, with the following specifics: - Cluster Size: The ideal setup will involve 6-10 nodes to accommodate substantial data processing and storage. - Additional Services: Beyond the base Hadoop setup, I require expertise in configuring HDFS High Availability and YARN Resource Management. Your role would be in ensuring these services are integrated seamlessly into the cluster for optimal performance and reliability. Your experience in Hadoop cluster administration, AWS setup, and knowledge of additional services like HDFS and YARN will be critical in delivering a successful project....
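The HDFS High Availability piece of this setup is driven by a handful of hdfs-site.xml properties. A hedged sketch, assuming a nameservice called `mycluster` with two NameNodes on placeholder hosts (a real deployment also needs the JournalNode quorum and fencing configuration):

```
<!-- hdfs-site.xml excerpt (sketch; hostnames are placeholders) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nn1-host:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nn2-host:8020</value>
</property>
<!-- Clients use this provider to fail over between the two NameNodes -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

Automatic failover additionally relies on ZooKeeper (the ZKFC processes), which is why HA clusters in this size range typically run a small ZooKeeper ensemble alongside Hadoop.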
...**Reinforcement Learning**: Training models through rewards and penalties based on actions. 5. **Data Visualization**: - Creating graphical representations of data to communicate findings effectively. - Tools: Tableau, Power BI, Python (Plotly, Bokeh), and R (Shiny). 6. **Big Data Technologies**: - Managing and processing large-scale data using distributed computing. - Technologies: Hadoop, Spark, Hive, and NoSQL databases (MongoDB, Cassandra). 7. **Data Engineering**: - Building and maintaining the infrastructure and pipelines for data collection, storage, and processing. - Tools: SQL, ETL (Extract, Transform, Load) processes, Apache Kafka, and Airflow. ### Applications of Data Science 1. **Business Intelligence and Analytics**: - Improving decis...
...who can help me with a two-fold project. The first part involves using a Virtual machine to perform Hadoop MapReduce and WordCount analysis. The second part is more focused on data collection, analytics and visualization using Databricks Notebook. Key Tasks: 1. **Hadoop MapReduce & WordCount Analysis:** - Utilize a Virtual machine to perform Hadoop MapReduce - Implement a WordCount analysis on the data 2. **Data Collection & Analytics:** - The data to be collected is unstructured in nature - Use Databricks Notebook for analytics 3. **Data Visualization:** - Create data visualizations using the Databricks platform Ideal skills for this job include: - Proficiency in using Hadoop ecosystem for MapReduce tasks - Strong experience with ...
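The WordCount task above can be sketched in plain Python to show the map → shuffle → reduce shape that the Hadoop job implements at scale; the `mapper` and `reducer` bodies are the parts that would run under Hadoop Streaming, while the in-memory shuffle here stands in for what Hadoop does between the two phases:

```python
from collections import defaultdict

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in a line.
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    # Reduce phase: sum all counts seen for one word.
    return word, sum(counts)

def wordcount(lines):
    # Shuffle: group mapper output by key, as Hadoop does
    # between the map and reduce phases.
    grouped = defaultdict(list)
    for line in lines:
        for word, one in mapper(line):
            grouped[word].append(one)
    return dict(reducer(w, c) for w, c in grouped.items())

print(wordcount(["to be or", "not to be"]))
# {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

On the cluster, the same mapper/reducer pair would read lines from stdin and write tab-separated pairs to stdout, with HDFS and YARN handling partitioning and parallelism.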
...looking for a skilled developer with experience in Hadoop to help me create a real-time data analysis application. Requirements: - The primary focus of this project is data analysis. You should have a strong background in analyzing data and be familiar with common data analysis techniques, tools and algorithms. - The system will need to integrate with various public datasets. Experience with dealing with such data sources will be essential. - The system should support real-time data analysis. So, expertise in real-time data processing and analysis is a must. Ideal Skills for the job: - Strong background in data analysis - Experience working with public datasets - Proficient in real-time data processing and analysis - Familiarity with Hadoop and its ecosystem, such as H...
Senior Python (Full Stack) Engineer Timezone: 1:30 PM to 10 PM IST What we expect: Strong knowledge of Python Experience with one of backend frameworks (Flask/Django/FastAPI/Aiohttp) Experience with one of the modern ...frameworks (React, Angular, Vue.js) Experience with AWS Cloud database related experience (NoSQL, relational DBs) Good understanding of application architecture principles Good written and verbal skills in English (upper-intermediate or higher) Nice to have: Knowledge of and experience in working with Kubernetes Experience with Data Engineering / ETL Pipelines (Apache Airflow, Pandas, PySpark, Hadoop, etc.) Experience with CI/CD systems Experience with Linux/Unix Experience in working with cloud automation and IaC provisioning tools (Terraform, CloudFormation, et...
I have a substantial dataset of 16,000 lines in a CSV file that requires in-depth analysis, and I'm looking for a skilled professional in Hadoop, MapReduce, and Java to take on this project. Specifically, I need: - A comprehensive and detailed analysis of the data using Hadoop and MapReduce - Your expertise in Java to create the necessary codes for this task - Answers to specific questions derived from the dataset - The completion of this project as soon as possible Please provide me with: - Your experience in big data analysis with Hadoop and MapReduce - Your proficiency in Java - Any previous work or examples that demonstrate your skills in this area Experience in statistical analysis, particularly in the context of big data, would be highly beneficial. The...
I have an Ubuntu 22.04 server with Spark currently running perfectly. However, I want to integrate Spark with a Hive warehouse. In this project, you're required to install Hadoop and Hive and, after that, update the settings in Spark so it can connect to the Hive warehouse properly. You'll be provided with remote access via TeamViewer or AnyDesk.
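A sketch of the kind of Spark settings involved in the Hive integration step (the warehouse path and metastore URI are placeholders; in practice one also copies Hive's hive-site.xml into Spark's conf directory so both agree on the metastore):

```
# spark-defaults.conf (sketch; host/port values are placeholders)
spark.sql.catalogImplementation   hive
spark.sql.warehouse.dir           hdfs://localhost:9000/user/hive/warehouse
# Pass the Hive metastore location through to Spark's Hadoop config:
spark.hadoop.hive.metastore.uris  thrift://localhost:9083
```

With these in place, a SparkSession created with Hive support enabled can read and write Hive tables directly through `spark.sql(...)`.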
Looking for an Android developer: Java, Kotlin, JSP, Servlets, MQ Series, Hibernate, Python, Microservices, Swing, WebSphere Portal, HTML5, Bootstrap, JavaScript, Big Data, Linux. Frameworks: Android mobile app development, Spring Boot, Struts, Hadoop. This will be ongoing
...ideal freelancer for this project should have proven experience in time series forecasting, Python programming, and developing interactive dashboards. A solid grasp of data visualization principles would be a plus. The output should be in the form of a Python file with the 4 models above and related dynamic visualizations. Text preprocessing is required and can be done with any necessary technologies (Hadoop/Spark, NoSQL/SQL databases) - screenshots of the bash code are required....
I'm seeking intermediate level big data engineers who can tackle essential tasks within the d...capable of combining data from various sources to create a cohesive dataset. - Develop data visualization: The engineers will need to create meaningful and understandable visual representations of the analyzed data. The necessary skills for this role are: - Proficient in Python: The candidate should be well-versed in Python programming language to work with our system. - Knowledge of Hadoop: Experience with Hadoop is a must for this position to handle the data integration and data storage tasks. - Knowledge of Kafka Prior experience in big data engineering and a good understanding of data engineering principles will be vital. Please note that this project suits an intermedia...
...expert in the Clojure language to assist with several aspects of a project I'm undertaking. My primary goal is to develop a scalable system from the ground up. Here are the areas I need help with: 1. **Syntax and Basic Usage**: I need guidance on how Clojure syntax operates, as well as its basic use. 2. **Functional Programming Concepts**: I'm looking for a deeper understanding of Clojure's functional programming concepts. 3. **Clojure Libraries and Frameworks**: Knowledge about various Clojure libraries and frameworks to utilize in my project is required. The project will be developed in a local development environment. Therefore, experience working within this type of environment and setting it up is essential. Ideal freelancers should...
Seeking a skilled developer to optimize and enhance the architecture of our existing web scraper application. The application is currently built using NestJS and PostgreSQL, and we are looking to scale it up and leverage cloud functionality for improved p...error handling, rate limiting, and IP rotation. - Strong problem-solving skills and ability to optimize application performance. - Excellent communication and collaboration skills. Nice to have: - Experience with PostgreSQL and database optimization techniques. - Knowledge of additional programming languages like Python or Java. - Familiarity with data processing frameworks like Apache Spark or Hadoop. - Experience with data visualization and reporting tools. Potential for ongoing collaboration based on performance and future req...
I need someone to create the DAG and trigger it. I am working on a migration project from Hadoop to BigQuery. More details will be shared via chat.