Erlang web crawler projects
Hi. I have an unfinished website; it's about 70% complete. It's built with Laravel and it's a dating site. I have a demo so you can see what it's about. I want to finish it: I need to add the terms and conditions, implement the Paysafecard payment module, add a help and contact section for users, make a few small final touch-ups, and run a final check with a crawler and indexing test, and that's about it.
I have a system that needs to be put online, and it will probably need some modifications. It is a general-knowledge quiz game. The system consists of: - a PostgreSQL database - a data access layer in C# (runs on both .NET and Mono) - a game server and web server in Erlang - a UI written in JavaScript
Hi, if you have time I would like to discuss a bit.
I'm looking for an AJAX/PHP programmer; if I don't receive an offer from someone who knows both AJAX and PHP, I will consider PHP-only offers. I ask interested PHP developers to contact me. The project is a crawler that extracts data from a website and inserts it into a MySQL database. The data stored in the database must then be retrieved and displayed by an AJAX script. What needs to be done? 1. Write the crawler. 2. Create the MySQL script (insert the collected data into the database). 3. Create an AJAX script that displays the data from the database. P.S. It is not a big project!
I need a crawler that fetches content from 5 sites using cURL and adds that content to a PHP page that refreshes every 5 minutes. PM me for more details and examples.
...crawling GitHub repositories. This incomplete data has hindered my ability to fully analyze and evaluate the codebase of certain repositories. I need an experienced programmer with a solid understanding of Python, Selenium, and web scraping. You should be able to troubleshoot and fix the script so it can accurately crawl and collect all necessary data from GitHub, including files, directories, and their content. Please include in your proposal any relevant experience you have, particularly with crawling data from GitHub or similar platforms using Selenium or other web scraping tools. The script is designed to: Automate Code Analysis: Analyze a GitHub repository's codebase without manual intervention. Generate Evaluation Metrics: Use AI to generate meaningful eva...
I need a web crawler that can capture video stream data to trigger real-time notifications. The specific types of notifications and data formats for saving the crawled data will be discussed later. Ideal Skills: - Proficiency in web scraping - Experience with video stream data - Programming skills in Python, Java or similar languages Please note, the specifics of the project may evolve as we progress.
...background 3. Brand Values - Website must convey a feeling of premium, sophistication, exclusivity, cleanliness, health and innovation. 4. Lots of visuals like static illustrations, multiple small animated gifs, infographics and interactive content. 5. Text only to be provided by SSCG, vendor to create visuals based on text to convey information 6. Product images to be provided by SSCG 7. Web Crawler software to be incorporated. The purpose is to have a section that would display air pollution related news for India 8. AQI display meter integration from various sources 9. Chatbot integration 10. About 14-15 total pages plus a BLOG page 11. The website has to tell a story. 12. The product is related to the health industry, specifically related to air pollution. It's ...
I'm looking for a developer to create a single-player, hand-drawn style, dungeon crawler survival mobile game. Key Features: - Character Leveling and Progression: Players should be able to level up their characters, unlocking new abilities and making strategic decisions about character development. - Resource Gathering and Management: The game should involve collecting resources from the environment, which players will need to manage wisely to survive and progress. - Combat and Tactics: The game should feature engaging and challenging combat scenarios, requiring players to use tactics and strategy to overcome their foes. The ideal candidate for this project would have a strong background in game development, particularly with mobile games. Experience with creating hand-drawn ...
I want to track changes in holder numbers, bio, etc. over time on one website, in two different sections: and . There is a list of agents on each of them. Once you click on one, there is plenty of info. Collect the data and put it in JSON files (2 JSON files, one for each section). Important: the deliverable is the Python script. I want to be able to update the data daily.
*Job Title:* Developer Needed for Web Scraping & Interactive Link Management Application *Description:* I’m looking for a skilled developer to create an application that extracts and organizes links from *IFRAME HTML content*. The application should: 1. Gather all information from IFRAMES using *JavaScript* or *Python*. 2. Extract all links from the HTML content within the IFRAMES and compile them into a list. 3. From this list, generate a *unique list of links* (remove duplicates). 4. Associate each link with an image so that clicking the image will redirect to the corresponding URL. 5. Host the application on my domain, ensuring it's easily accessible online. I believe this project may require a *web crawler* or similar scr...
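A minimal Python sketch of steps 1-3 above (gathering IFRAME content, extracting the links, and de-duplicating them), assuming the frames are ordinary HTML documents reachable by URL; the page address and markup below are placeholders, not the actual site:

```python
# Hypothetical sketch: collect links from IFRAME sources and de-duplicate them.
# The page URL and CSS structure are assumptions; the real markup will differ.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def collect_iframe_links(page_url: str) -> list[str]:
    """Fetch a page, follow each <iframe src>, and return a unique list of links."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    links = set()
    for iframe in soup.find_all("iframe", src=True):
        frame_url = urljoin(page_url, iframe["src"])
        frame_html = requests.get(frame_url, timeout=30).text
        frame_soup = BeautifulSoup(frame_html, "html.parser")
        for a in frame_soup.find_all("a", href=True):
            links.add(urljoin(frame_url, a["href"]))  # absolute URL, de-duplicated via set

    return sorted(links)

if __name__ == "__main__":
    for link in collect_iframe_links("https://example.com/page-with-iframes"):  # placeholder URL
        print(link)
```

Steps 4 and 5 (linking each URL to an image and hosting the result) would sit on top of this list, for example by rendering it into an HTML template on the target domain.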
Seeking an AWS Glue expert to assist with fetching analytical data from Go...The job includes setting up an incremental data extraction process that runs daily. Current Status: Query: Already prepared. Connection: The connection to BigQuery is set up and ready from within Glue Studio. Challenge: 1- I need assistance configuring Glue ( Glue Notebook or visual Job) to handle date-partitioned tables in BigQuery and load data from there incrementally. 2- configure S3 crawler to scan the bucket and push new data to DB Background: I previously implemented this workflow using QlikView script, but I am now transitioning to AWS. Looking for guidance on best practices specific to AWS. Ideal candidates should have: - Extensive experience with AWS Glue, AWS Glue NoteBook (PySpark) and S...
I'm looking for an experienced developer who can create AI web crawlers. Key Requirements: - Vector DB knowledge (Pinecone) - Deep learning
We require someone to install Sphider (), the PHP web crawler, on a cPanel VPS. The budget for this task is $10. The winner will be given cPanel access to set it up and install it.
I need a skilled Python developer to create a web crawler script for me. The script should be capable of extracting text data from various tender websites. Key requirements: - Proficient in Python and its web scraping libraries (BeautifulSoup, Scrapy, etc.) - Experience with creating web crawlers - Understanding of how to extract and handle text data - Able to target specific types of websites (in this case, tender sites) - Knowledge of best practices for web crawling (to avoid IP bans, etc.) The ideal freelancer for this project should be able to demonstrate previous experience of similar projects, and have a good understanding of the intricacies involved in crawling and scraping data from websites.
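As a rough illustration only (not the deliverable itself), a polite requests/BeautifulSoup fetch with a crawl delay is the usual starting point; the start URL, user agent and delay below are placeholder assumptions, and JavaScript-heavy tender portals would need Scrapy or a headless browser instead:

```python
# Minimal sketch of a polite text crawler, assuming static HTML pages.
import time
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "tender-crawler/0.1 (contact@example.com)"}  # placeholder identity
CRAWL_DELAY = 2.0  # seconds between requests, to reduce the risk of IP bans

def extract_text(url: str) -> str:
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return soup.get_text(separator="\n", strip=True)

if __name__ == "__main__":
    start_urls = ["https://example-tenders.example/notices"]  # placeholder tender site
    for url in start_urls:
        print(extract_text(url)[:500])
        time.sleep(CRAWL_DELAY)  # simple politeness; robots.txt should also be respected
```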
Part 2 of Profit Crawler is completed.
...developer with expertise in web scraping to enhance an existing script designed to extract product data from Walmart's online store. The current script retrieves only 10 items at a time, while the website displays 40 items per page. The task involves addressing the infinite scroll or JavaScript loading mechanisms that are limiting data extraction. Key Responsibilities: - Modify the existing Python script to enable it to scrape all 40 items per page. - Implement solutions to effectively handle infinite scrolling or dynamic content loading. - Ensure the script is efficient and adheres to best practices in web scraping. - Conduct thorough testing of the updated script to confirm accurate data retrieval. Requirements: - Proficiency in Python, particularly with web...
I need a Python/Django based web interface that can take 50 keywords and automate a news crawler, a URL parser, and an interaction with ChatGPT's web interface. Requirements: - Step 1: The web interface should allow me to input 50 keywords which will be stored in a database. - Step 2: The crawler should be able to search for each of the preset 50 keywords on , extract the actual news URLs after Google News redirection and save them back to the database. - Step 3: The parser will validate the URL's accessibility and extract meaningful details from each news article including the Title, publication date, and the original URL. - Step 4: Automate interaction with ChatGPT using Selenium to find the 5W 1H (What, Who, Where, When, Why and How) from e...
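For step 2, one hedged option is to query the Google News RSS search endpoint and follow redirects back to the publisher URL; the endpoint, parameters and redirect behaviour are assumptions (some Google News links redirect via JavaScript rather than HTTP and would need extra handling):

```python
# Hedged sketch of step 2: search Google News for a keyword and resolve
# redirect links to the publisher URL where an HTTP redirect is used.
import requests
import xml.etree.ElementTree as ET

def news_urls_for_keyword(keyword: str) -> list[str]:
    feed = requests.get(
        "https://news.google.com/rss/search",  # assumed endpoint
        params={"q": keyword, "hl": "en"},
        timeout=30,
    )
    root = ET.fromstring(feed.content)
    urls = []
    for item in root.iter("item"):
        link = item.findtext("link")
        if not link:
            continue
        # Follow HTTP redirects to reach the original article URL where possible.
        resolved = requests.get(link, timeout=30, allow_redirects=True).url
        urls.append(resolved)
    return urls
```

In the described workflow, the returned URLs would be saved to the Django database before the parser (step 3) and the ChatGPT automation (step 4) run.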
I have issues where Kazoo doesn't always detect a busy signal properly coming from the B leg on calls to mobile devices. Ideal Skills: - Strong understanding of the SIP protocol and its default behaviors. - Experience with the 2600Hz Kazoo project - Erlang and PHP languages
...titles, meta descriptions, and headers for targeted keywords. - Optimize internal linking structure to improve navigation and SEO. - Monitor and resolve any errors reported in Google Search Console. - Implement structured data (schema markup) to enhance SERP appearance. - Implement Accelerated Mobile Pages (AMP) for faster mobile load times. - Review and optimize the file to manage crawler access effectively. - Regularly conduct technical SEO audits to identify and fix issues. - Implement and ensure compliance with HTTPS and other security standards. Ideal Skills: - Proven experience with technical SEO, particularly for news sites. - Deep understanding of Google News policies. - Strong skills in improving site crawlability, mobile optimization, and page load times. This proje...
My existing Python web crawling program has stopped working. I need an engineer to either fix it or create a new one. The program retrieves book data from based on ISBNs. Key Tasks: - Analyze the current Python web crawling program to identify issues - Fix the program or create an entirely new one - Retrieve specific book data from the target site - Test the program using provided ISBNs Data to be Retrieved: - Book data based on ISBNs - Translator information - Additional data as outlined in provided documentation Ideal Skills: - Proficient in Python - Experience with web crawling and data retrieval - Ability to troubleshoot and solve problems effectively - Familiarity with book data and ISBNs is a plus The project needs to be completed within 1 week. The new or f...
Milestone 1: Collect data by running your web crawler. Budget: $100.00. Time limit: 3 days; a daily update is required. You will use the Excel sheet provided by me. You must follow all the instructions in the requirements document in the link below. You are doing Milestones 1a and 1b only.
I'm seeking a skilled developer or team to create a Wallet Crawler Bot. This bot will navigate through Ethereum and Binance Smart Chain networks, investigating wallet interactions and providing insights into transaction flows, wallet interconnections, and activities. Deadline: 5 days Key Features: - Advanced analytics: The bot should be capable of conducting in-depth analysis of transactional flows and wallet interconnections. - Real-time Monitoring: The bot needs to provide real-time insights and updates. Ideal skills and experience include: - Proficiency in blockchain development, particularly in Ethereum and Binance Smart Chain. - Experience in constructing data analysis and monitoring bots. - Familiarity with creating tools for visualizing complex data sets. - Previous wo...
I'm in need of a web scraper and crawler that can extract specific data from the CheerTheory programs page and its subpages, up to 102 pages deep. The data needs to be saved in a CSV file and should include: - Business Name - Business Info (Address Location) - Contact Info (Website URL, Phone Number) - Category Type The scraper should handle pagination automatically, traversing each program on the main page and each of the underlying 102 pages (from to ). Please note that this task is a one-time job. Ideal skills and experience for the job include: - Proficiency in web scraping and crawling - Familiarity with data extraction and CSV formatting - Ability to create a scraper that handles pagination automatically
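A minimal sketch of the pagination-plus-CSV flow, assuming the page number is a simple query parameter and each listing can be read with CSS selectors; the URL pattern, selectors and column names below are placeholders for whatever the CheerTheory pages actually use:

```python
# Placeholder pagination scraper: iterate numbered pages and write rows to CSV.
import csv
import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/programs?page={page}"  # placeholder URL pattern

def scrape_all(pages: int = 102, out_path: str = "programs.csv") -> None:
    with open(out_path, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(["Business Name", "Address", "Website", "Phone", "Category"])
        for page in range(1, pages + 1):
            html = requests.get(BASE_URL.format(page=page), timeout=30).text
            soup = BeautifulSoup(html, "html.parser")
            for card in soup.select(".program-card"):  # placeholder selector
                writer.writerow([
                    card.select_one(".name").get_text(strip=True),
                    card.select_one(".address").get_text(strip=True),
                    card.select_one("a.website")["href"],
                    card.select_one(".phone").get_text(strip=True),
                    card.select_one(".category").get_text(strip=True),
                ])
```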
I'm looking for a skilled web scraper or crawler to extract specific user profile data from LinkedIn. Key Requirements: - Extract user profiles from LinkedIn - Focus on specific pieces of data: contact information, current job position, current company, email address - Target profiles located in Switzerland, Germany, and Austria, in sectors such as banking, insurance, manufacturing, medical devices and others Ideal Skills: - Proficient in web scraping and data extraction - Familiarity with LinkedIn's structure and data points - Ability to filter data based on geographic location - Experience with ethical and legal considerations of web scraping Please note that the data extraction should be limited to profiles from Switzerland, Germany, and Austria. Please provide the ...
I'm looking for a skilled web developer to write a web crawler script to collect data from social media platforms. The project involves gathering article titles, publication dates, and authors' bios as well as Collect Millions of Rows of Data From Thousands of Websites by Writing Your Own Web Crawler Script (Code) then running it through VPN. There are 6 milestones. You will be paid for one milestone at a time, but with my approval you may start multiple milestones simultaneously. $70.00 x 6 milestones = $420.00 Deliverables: 2 days for each milestone. Daily updated data in Excel sheet is required. I will repost each milestone as separate job. You will bid $70.00 for each milestone/job. I will then award you one milestone/job at a time. Vi...
We are seeking an experienced Erlang developer to help with specific updates and enhancements to an existing project. The work involves using pattern matching to complete tasks outlined in the provided instructions. Key Tasks: - Implement specified updates based on the provided guidelines. - Ensure robust functionality across various scenarios. - Deliver a single .erl file with the required structure, including module and export statements. Top Skills Required: - Erlang programming - Functional programming - Code optimization and debugging - Clean code practices Deliverables: - A complete .erl file with all updates implemented. - Well-structured, maintainable code with basic comments for clarity. - Testing to confirm functionality and reliability.
I'm looking for a professional web developer with experience in creating web crawlers and databases. Key Responsibilities: - Develop a web crawler for 1-5 e-commerce sites - The crawler should extract product details, price changes, and availability - The data should be stored in an intermediary database that updates daily Ideal Skills and Experience: - Proven experience in web crawling and database creation - Familiarity with e-commerce sites - Ability to ensure the crawler operates smoothly and the database is accurate and up-to-date - Familiarity with Python and WooCommerce Please provide examples of similar projects you've completed. Crawl 3rd party supplier sites > Update database > make updates to our WooCommerce s...
I have a finished Python Flask crawler project that needs to be hosted on a live server. The application is already installed on the server, however, the HTML pages are not displaying. The website works perfectly on the local server, but fails to render content on the live server. Your task will be to diagnose the issue and get the site up and running. Key Requirements: - Proficiency in Python and Flask is crucial. - Experience with deploying web applications on live servers. - Familiarity with troubleshooting server-related issues. - Ability to work with databases as the Flask application employs one. - Knowledge of server types (Apache, Nginx, etc.) is beneficial. The server is running Python v3.11.10 and currently, there are no server error messages - just a blank page...
- Website environment must display product images and descriptions (must allow for future expansion into other retail products: furniture, shoes, etc.) - Must allow users to create an account for storing grocery lists - Must allow users to select from multiple stores - Must use geolocation to display local stores to choose from - Must use a web crawler to extract prices from selected websites - Web crawler must evade website barriers such as , dynamic content, IP bans, CAPTCHA, reCAPTCHA and honeypots - Must recover pricing for the user's grocery list from the selected stores - Must compare pricing and display results in order from best single-store overall savings to second and third best savings - Must compare pricing and display results in order from the best c...
No placeholder bids! We are looking for an experienced developer to build a crawler that checks certain criteria on websites. The goal is to identify potential clients for our agency that need optimisation in the following areas. Existing tools may be used or combined for this purpose. Crawler requirements: Check for a search term in Google and crawl all pages found. Check the CMS (content management system) for severely outdated versions (ignore minor updates). Check the ‘Last Modified’ date: if older than 4 years, include the URL in the results list. Detect missing responsive design. Check for use of Google Tag Manager or similar third-party providers without a cookie banner (direct loading on page view). Detect incorrect certificates (expired or non-existent SSL cert...
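Two of the listed checks can be sketched in a few lines of Python; this assumes the target sites send a Last-Modified header and standard TLS, which many do not, so a production crawler would need fallbacks (sitemap dates, meta tags, etc.):

```python
# Sketch of the "Last Modified" and SSL certificate checks; hostnames are placeholders.
import socket
import ssl
from datetime import datetime, timezone

import requests

def last_modified(url: str):
    """Return the Last-Modified header as a datetime, or None if the site omits it."""
    resp = requests.head(url, timeout=15, allow_redirects=True)
    value = resp.headers.get("Last-Modified")
    return datetime.strptime(value, "%a, %d %b %Y %H:%M:%S %Z") if value else None

def ssl_expiry(hostname: str) -> datetime:
    """Return the certificate expiry date; an expired or invalid cert raises SSLError."""
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((hostname, 443), timeout=15),
                         server_hostname=hostname) as sock:
        cert = sock.getpeercert()
    return datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]),
                                  tz=timezone.utc)

if __name__ == "__main__":
    host = "example.com"  # placeholder target
    print("Last-Modified:", last_modified(f"https://{host}/"))
    print("Certificate expires:", ssl_expiry(host))
```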
I am seeking someone to sort out the SEO issues on my webpage and optimise SEO. - Work through crawler issues, noindex problems, redirects and metadata issues. - Optimise SEO for first-page ranking (Google), including backlinks, blogs and blog creation as applicable - Willing to meet with me and discuss keyword strategy - Create a Google Business page - Guarantee a first-page result - This is a niche business and does not require monthly SEO; rather, it will be optimised at intervals through ongoing projects - Strategies that are in line with Google policies only
I need an experienced data crawler to help me. Key Requirements: - Crawl specific text data from various websites. - Exporting the crawled data into a CSV format. - Customizing the CSV output with specific column headers. - Complying with a set of specific requirements for the CSV output. Ideal Skills: - Proficient in coding for data crawling. - Experienced in exporting data into CSV format. - Able to customize CSV outputs as per requirements. Please note that this is a one-time project, and the data needs to be crawled from websites. Your ability to adhere to specific requirements and to deliver a high-quality output will be key to your success in this project.
I'm looking for a freelancer who can help me scrape a specific website for contact details. The scraped data will need to be compiled into a spreadsheet. Ideal skills and experience for the job include: - Proficiency in web scraping tools and techniques - Experience with data organization and spreadsheet software - Attention to detail to ensure accurate data collection and entry. Please review the website: I need someone who can write a crawler to click each company and collect the details on the inner page. Here are some of the links: I need to be able to collect the company name, company description, all contacts listed, and country or address.
I'm in need of a robust Python-based web crawler that can efficiently scrape and aggregate data from multiple news sites. The crawler will need to extract various types of data including article content, author information, publication dates, tags, and cover thumbnails. Key Features: - Proxy Support: The crawler should be equipped with proxy support to bypass geographic and IP restrictions, ensuring uninterrupted access to these news sites. - Configurable: It should be configurable via environment variables, dynamically adjusting proxy usage and frequency to comply with site terms and avoid bans. - Resume Feature: The crawler should include a resume feature, allowing it to track and stop at the last crawled position, making subsequent crawls faster...
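A hedged sketch of the proxy-from-environment and resume ideas described above; the state-file name and structure are placeholder assumptions, not the project's actual design:

```python
# Sketch: toggle proxies via environment variables and resume from the last crawled URL.
import json
import os
import requests

STATE_FILE = "crawl_state.json"  # hypothetical resume file

def load_state() -> dict:
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, encoding="utf-8") as fh:
            return json.load(fh)
    return {"done": []}

def save_state(state: dict) -> None:
    with open(STATE_FILE, "w", encoding="utf-8") as fh:
        json.dump(state, fh)

def crawl(urls: list[str]) -> None:
    # requests honours HTTP_PROXY / HTTPS_PROXY from the environment automatically,
    # so proxy usage can be switched on or off purely via environment variables.
    state = load_state()
    for url in urls:
        if url in state["done"]:
            continue  # resume: skip anything already crawled
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        # ... parse article content, author, date, tags and thumbnail here ...
        state["done"].append(url)
        save_state(state)  # persist position after every page
```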
I'm looking for a skilled data crawler to extract case details from one website. The primary purpose of this project is to gather public information. Key Requirements: - Proficiency in data crawling techniques. - Experience with web scraping tools and software. - Ability to filter and organize data effectively. - Experience in handling CAPTCHA challenges during web scraping to ensure uninterrupted data collection. - Knowledge in managing proxies to avoid IP bans while crawling large amounts of data. - Experience in scheduling and automating scraping tasks for regular data updates. - Understanding of and compliance with legal standards and guidelines for data scraping. Ideal Skills: - Web Scraping - Data Analysis - Python or similar programming lan...
I need a new architectural design page added to my existing WordPress site. This page will primarily showcase a 'lake house design', with all content (text and images) already formatted and placed in the Elementor template. Add...template. Address errors (6 reasons from Google) for pages not indexed - refer to attached Google page indexing report for Ideal skills and experience for the job: - Proficient in WordPress, particularly with Elementor - Experience in web design, specifically for showcasing architectural projects - Attention to detail to ensure all content is accurately placed and hyperlinked - Ability to meet deadlines and communicate effectively - Create Google API key so page is ranked with Google bot crawler - Add page link to top drop down menu
I'm in need of an experienced Erlang programmer to assist in the development of a new mobile application. The primary focus of this application will be real-time processing capabilities. Key aspects of the project: - Development of a new mobile application using Erlang - Implementation of real-time processing features - Ensuring the application is optimally designed for mobile use Ideal skills for the job: - Proficient in Erlang programming - Experience in mobile app development - Knowledgeable in real-time processing systems - Previous work in developing communication systems will be a plus Looking forward to your proposals.
Job Responsibilities: * Develop and deploy custom web crawlers to collect data from specified websites, databases, and APIs. * Utilize advanced AI tools like OpenAI GPT-4, Scrapy, BeautifulSoup, TensorFlow, and PyTorch to optimize data gathering and processing. * Ensure data accuracy and compliance with data protection regulations (e.g., GDPR). * Customize crawling scripts to target specific data based on our needs. * Collaborate with us to ensure all deliverables meet our objectives. Requirements: * Proven experience in web crawling, data scraping, and automation. * Proficiency with AI-powered tools (e.g., GPT models, TensorFlow, PyTorch). * Strong English communication skills (written and spoken). * Knowledge of online databases, APIs, and data compliance standards. * Abili...
...Specific Tasks: Update/Replace Selector Logo: The current selector logo needs to be replaced with a new image. The logo is visible in the frontend but we are unable to locate its specific placement in the backend of Joomla for editing. It may be linked to a plugin or module, but we are unsure which one. Image Slider Review: We noticed two different image components potentially in use: the Image Crawler and possibly a previously used DJ Image Slider plugin. The DJ Image Slider does not appear in the components section, but the slider is still functioning on the frontend. We need assistance in identifying and confirming which plugin/module is controlling the image slider, particularly in regard to the two side images, to ensure everything is functioning correctly and can be easil...
I'm looking for a developer proficient in web scraping and dashboard management to help me with my admin dashboard. Tasks: (1) crawl data in the dashboard panel from the deposit menu with "In Progress" status; (2) cross-check the crawled data (Name, Bank, and Date) against another API; (3) when the data matches, the script should auto-trigger the approval button (label: Auto) in the "In Progress" table after login. Ideal candidates should have: - Proven experience with web scraping, especially from admin-type dashboards - Strong skills in dashboard management and data integration - Ability to create auto-trigger functions for dashboard updates
...price by 15%; if the price is not mentioned, there is no need to enter a price :// Increase the price by 15%; if the price is not mentioned, no need :// No need to enter a price. Carefully add the cranes; each crane has a different type, such as rough terrain, all terrain or crawler crane. 4. Each crane has a different type, such as rough terrain, all terrain or crawler crane. 5. Upload all forklifts; no need to enter a price, but if you do, increase the price by 15% and change the price to JPY. 6. Can you upload everything by Monday the 14th? 7. Would you upload them manually, one by one? 8. It is easy to upload; on our website, first enter the data
I'm seeking a proficient .NET Core developer to create a web crawler for me. This crawler should efficiently extract relevant information from a specified website and send this data to my API in JSON format. Key Requirements: - Expertise in .NET Core is a must. - Proven experience in developing web crawlers. - Familiarity with API integration. - Ability to structure data into JSON format correctly. Please note, specific details regarding the type of information to be crawled will be provided upon project commencement. The ideal candidate will have a keen eye for detail and commitment to delivering high-quality work on time.
I need a crawler that can extract data from a Chinese app. The main purpose of this crawler is data extraction. Key Data Types: - Text Content - Images - Metadata The crawler should run daily to ensure consistent and up-to-date data collection. Ideal Skills and Experience: - Proficiency in web scraping and data extraction - Familiarity with crawling tools and software - Previous experience with Chinese apps is a plus - Ability to handle and extract different types of data (text, images, metadata) - Experience in setting up daily crawling schedules
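For the daily run, one option (sketched below with placeholder logic) is the third-party `schedule` package; a cron job or systemd timer on the crawling host would work equally well:

```python
# Run a placeholder crawl function once a day using the `schedule` package
# (pip install schedule). The crawl_app body stands in for the real extraction logic.
import time

import schedule

def crawl_app() -> None:
    # Fetch text, images and metadata from the app's endpoints here,
    # then write the results to storage.
    print("daily crawl started")

schedule.every().day.at("03:00").do(crawl_app)  # placeholder time, server-local

while True:
    schedule.run_pending()
    time.sleep(60)
```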
I need a web scraping expert to extract text data from a directory website. The task involves collecting business listing information: approximately 3,000 listings. Ideal Skills and Experience: - Proficient in web scraping tools and techniques - Previous experience scraping directory sites - Ability to filter and organize data effectively - Attention to detail to ensure data accuracy Any tools may be used, such as a web crawler, Octoparse, etc.
I'm in need of an experienced developer who can create an AI bot for the Kleinanzeigen chat platform. The Kleinanzeigen platform is a marketplace where private people sell used items. Every Us...the bot should send requests every 2-4 minutes during specific daily hours. I have planned a fixed budget of €250.00 for this project. Write to me, then we can discuss the task. Payment is made once the bot works completely, as discussed. Here is the link to the website : This is a web crawler I could find on the Internet. Important to know: the platform has a new owner. It used to be eBay Kleinanzeigen; now it's just Kleinanzeigen.
I'm in need of a web crawler that will gather comprehensive information from the real estate website: The primary focus is on prices of residential properties, but I want the crawler to pull all relevant data such as regions, floor, upload time, etc. Essentially, I need every piece of data associated with each residential property for sale. Ideal Skills: - Proficiency in web scraping - Experience with Python or similar programming languages - Familiarity with real estate data collection Please note, I don't require price conversion into different currencies. The aim is to accumulate as much data as possible for analysis and market understanding.
I'm looking for a professional data engineer with experience in web scraping and data extraction. The goal is to create a crawler that can download historical and aggregated data from Google Trends, NSE, and Yahoo Finance. Key Requirements: - Develop a reliable data crawler. - Data to be extracted: Historical data and Aggregated data. - Format: All extracted data should be neatly organized and stored as Database entries. Ideal Skills: - Proficiency in Python or similar programming languages. - Experience with web scraping tools (like BeautifulSoup, Scrapy, Selenium). - Knowledge of databases (SQL, MongoDB etc). - Prior experience with data extraction from Google Trends, NSE, and Yahoo Finance is a plus.
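As a hedged sketch only: the community libraries yfinance and pytrends can pull historical prices and Google Trends interest and write them into database tables. The ticker, keyword and SQLite target below are placeholders, NSE data is fetched here via Yahoo Finance's ".NS" suffix rather than a dedicated NSE API, and both libraries rely on unofficial endpoints that can change or be rate-limited:

```python
# Placeholder data pull: Yahoo Finance (NSE ticker) and Google Trends into SQLite tables.
import sqlite3

import yfinance as yf
from pytrends.request import TrendReq

conn = sqlite3.connect("market_data.db")  # placeholder database target

# Historical NSE prices via Yahoo Finance (".NS" suffix for NSE-listed symbols)
prices = yf.Ticker("RELIANCE.NS").history(period="1y")
prices.index = prices.index.tz_localize(None)  # SQLite has no timezone-aware type
prices.to_sql("nse_prices", conn, if_exists="replace")

# Google Trends interest over time for a placeholder keyword
pytrends = TrendReq()
pytrends.build_payload(["nifty 50"], timeframe="today 12-m")
pytrends.interest_over_time().to_sql("google_trends", conn, if_exists="replace")

conn.close()
```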
Simple web crawler task to extract specific text data from the website into CSV and JSON files. The filters for the data extraction will be provided. Ideal Skills and Experience: - Proficient in web crawling and data extraction. - Familiar with handling and processing text data. - Experience in delivering data in CSV format. Please only apply if you can work with the provided filters and deliver the extracted data in the specified format.